MongoDB Performance Issues – Fact or Fiction

I see posts and hear conversations about MongoDB performance issues quite frequently. They are a hot topic on sites like Quora, Hacker News, and Reddit. Many of these “hits” against MongoDB, however, are based on outdated information and older versions of the product.

There is a scene in the late-1980s movie Crocodile Dundee II where Mick “Crocodile” Dundee, fresh from the Australian Outback, visits a New York City hotel for the first time. Someone shows him that the room has a television. He turns it on, sees an old episode of I Love Lucy, and shuts it off, claiming he has already experienced television.

Similarly, many complaints about older versions of MongoDB still linger. Someone who had a bad experience with an old version will answer a thread somewhere and claim, “I used it once, didn’t like it, it’s garbage.” Much like Mick Dundee, they are basing their entire opinion on outdated knowledge.

Let’s take a look at some performance issues that are often raised and where things sit now with the latest version of MongoDB, 3.4.6. I raised some of these aspects in a previous post, but let’s take a deeper dive.

The Jepsen Test & Performance Issues of old

From a “documented issue” standpoint, many of the performance issues that plague MongoDB in social reviews are covered in a Jepsen test result post from 20 April 2015, which was based on version 2.4.3, or in an even older article from 18 May 2013. Clearly, there were some issues with data scalability and data concurrency in those earlier versions.

In fact, Jepsen has done extensive testing of MongoDB covering lost updates and dirty and stale reads. Without getting too deep into the hows and whys of what was happening to the data, there were issues with writes being lost when a primary went down, and with read and write consistency. These issues have been addressed as of version 3.4.1.
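Those guarantees are surfaced to applications through write and read concerns. As a minimal sketch (my example, using a hypothetical orders collection, not code from the Jepsen write-ups), requesting majority-acknowledged writes and linearizable reads in the Mongo Shell looks like this:

// With w: "majority", the write is acknowledged only once a majority of
// replica set members have it, so it cannot be rolled back if the
// primary steps down.
db.orders.insertOne(
    { item: "ticket", qty: 1 },
    { writeConcern: { w: "majority", wtimeout: 5000 } }
)

// Linearizable read concern (new in 3.4) guards against stale reads from
// a primary that has been isolated by a network partition.
db.orders.find({ item: "ticket" }).readConcern("linearizable")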

Product Enhancements

With these safety enhancements, MongoDB version 3.4.1 passed all of the Jepsen tests. Kyle Kingsbury, the creator of Jepsen, offered the following conclusions:

MongoDB has devoted significant resources to improved safety in the past two years, and much of that ground-work is paying off in 3.2 and 3.4.

MongoDB 3.4.1 (and the current development release, 3.5.1) currently pass all MongoDB Jepsen tests….These results hold during general network partitions, and the isolated & clock-skewed primary scenario.

You can read more about his conclusions in his published results.

Beyond data safety, customers are finding huge performance benefits in the more current releases of MongoDB. Improvements to, or the introduction of, technologies such as replication compression, the WiredTiger storage engine, and the in-memory cache, along with performance enhancements to sharding and replica sets, have been a win for users.
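As a concrete illustration of a couple of those technologies (a sketch of my own, not configuration from any company mentioned here), the Mongo Shell can show which storage engine is running and can override compression per collection; the trips collection name is hypothetical:

// Confirm the storage engine in use (WiredTiger is the default as of 3.2):
db.serverStatus().storageEngine

// Per-collection compression override; server-wide defaults live in the
// mongod config file under storage.wiredTiger.collectionConfig.blockCompressor.
db.createCollection("trips", {
    storageEngine: { wiredTiger: { configString: "block_compressor=zlib" } }
})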

WiredTiger Case Study

A friend who works at Wanderu.com, a MongoDB user, was very generous and forthcoming with information about their MongoDB experience. When choosing a database, they felt that NoSQL, and MongoDB specifically, fit their business and data model better than a relational model would. They process a very diverse set of data for their bus and train travel application.

They take information from a vast assortment of bus and train vendors, which arrives in XML, JSON, PDF, CSV, and other formats. The data is then ingested and transformed so that everything works with price checking and booking calls in vendor-specific formats. They determined that implementing such a data model in a relational database would be incredibly complex and fragile.

In May 2017, Wanderu migrated to the WiredTiger storage engine in MongoDB 3.4. They took screenshots of some of their performance graphs, covering a ten-day period: five days before and five days after their migration on 5/5. They were kind enough to share these images with me and approved their use in this article.

Wanderu Charts

MongoDB Active Reads/Writes: before WiredTiger, the write load had a very limited maximum; after the migration, writes spiked as necessary.
Queued Reads and Writes: writes stayed fairly constant while queued reads fell dramatically.
MongoDB Index Size: index size decreased dramatically as well.
MongoDB Memory Usage: not surprisingly, memory usage dropped too.
MongoDB Page Faults: page faults improved noticeably.
Replication Lag: if there was any doubt left, replication lag improved as well.

In the four years since Wanderu launched, it has relied heavily on MongoDB, storing the station and trip information for each local, regional, and national carrier. With the new $graphLookup capability in MongoDB version 3.4, they are also looking at the possibility of using it for their graph-traversal needs.
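To give a flavor of what $graphLookup does (a sketch with hypothetical stations and routes collections, not Wanderu's actual schema), it recursively follows connections between documents inside a single aggregation stage:

// Find every station reachable from "PDX" in at most three hops, assuming
// route documents shaped like { from: "PDX", to: "SEA" }.
db.stations.aggregate([
    { $match: { code: "PDX" } },
    { $graphLookup: {
        from: "routes",            // collection to walk
        startWith: "$code",        // initial value(s) to match
        connectFromField: "to",    // follow this field on each matched route
        connectToField: "from",    // ...into this field on the next route
        maxDepth: 2,               // zero-based, so three hops in total
        as: "reachableRoutes"
    } }
])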

Further Industry Thoughts

MongoDB is a widely used NoSQL database. It is used by companies large and small, for a variety of reasons. I reached out to a few other known MongoDB users to get some real-world feedback and product experiences.

CARFAX

CARFAX, for example, has been using MongoDB in production since version 1.8. They load over a billion documents a year and generate over 20,000 reports per second. Jai Hirsch, a Senior Systems Architect at CARFAX, wrote a nice write-up on why they decided on MongoDB. They have achieved some tremendous performance benefits from compressed replication.

GHX

GHX switched from MMAPv1 to WiredTiger with the 3.2 release of MongoDB. Jeff Sherard, their Database Engineering Manager, had another very positive experience.

Definitely the switch to WiredTiger in 3.2 was a huge boost. Especially on the compression side – we experience about 50% compression. Document level locking vs. Collection level locking also improved performance for us significantly.

He also saw benefits with sharding and replica sets after an upgrade to 3.4.4.

We recently upgraded to 3.4.4 and are particularly pleased with the improvements in balancing on shards (the parallelism makes balancing really fast). And the initial sync improvements in replica sets [sic] have been really useful too.

Tinkoff Bank

Tinkoff Bank landed on MongoDB instead of Oracle based on their finding that Oracle's CLOBs were slower for their workload and not searchable. They are able to process approximately 1,500 requests per second using their three-node replica set, and these queries put a load of only 5-10% on the CPU of the primary node.

Wrap Up

I’m sure the SQL vs. NoSQL debate will live on, much the same as Windows vs. Mac, or cats vs. dogs. I hope, however, that based on the information and testimonials provided here we can lay to rest the notion that MongoDB isn’t “enterprise ready.” If we are going to argue the virtues of MongoDB, we should at least be talking about the most current version. Mick Dundee comes across looking foolish in that scene precisely because he bases his entire view of something on an experience from years ago.

There are several MongoDB-specific terms in this post. I created a MongoDB Dictionary skill for the Amazon Echo line of products. Check it out, and you can say, “Alexa, ask MongoDB what is a document?” and get a helpful response.


Follow me on Twitter @kenwalger to get the latest updates on my postings.


Indexing in MongoDB

I get asked about, and see a lot of posts and comments on the internet about, MongoDB not being as quick on query reads as people think it should be. These questions and comments are often followed by a panning of MongoDB itself, frequently based on the user's experience in that one situation. My first question in these situations is typically, “What indexes are set up on your collection that relate to your queries?” More often than not I get a deer-in-headlights look back. After some stammering, the answer is typically “I don't know,” “Is indexing important?,” “Whatever is standard,” or, the most popular, “What's an index?”

Indexing Overview

In this blog post, I’d like to touch briefly on what indexes are in MongoDB and how they greatly impact performance. What is an index? If we start with the definition provided by MongoDB:

Indexes are special data structures that store a small portion of the collection’s data set in an easy to traverse form.

we get an idea from the “easy to traverse” phrase that indexes make something complicated easier. In this case, they make traversing a collection easier, and therefore faster.

Let’s consider a data set that includes all of the postal codes in the United States (zips.json can be downloaded here). Without an appropriate index, if our application wants to find the zip code for a particular city, let’s say Keizer, Oregon (97303), MongoDB would have to scan our entire collection to return the appropriate zip code. In fact, based on our data set, it would have to look through all 29,467 records to find and return that one record.

That’s a lot of unnecessary searching through the database to find the correct match for our search term. Imagine if our data set were much larger and included a million or more records; that would be a lot of overhead. If we look at what is going on in a basic query for our city of “KEIZER” by having MongoDB explain the execution stats, db.zips.find({"city": "KEIZER"}).explain("executionStats"), we can see a few performance bottlenecks.
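Here is that command again, with an abbreviated sketch of the relevant portion of its output (the full explain document is much longer, and exact fields vary by version):

db.zips.find({ "city": "KEIZER" }).explain("executionStats")

// Abbreviated output:
// "winningPlan" : { "stage" : "COLLSCAN", ... },
// "executionStats" : {
//     "executionTimeMillis" : 34,
//     "totalDocsExamined" : 29467,
//     ...
// }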

No index used: full collection scan

First, we see that even in our relatively small database the query execution time was 34ms. Then, as expected, all 29,467 documents were examined, and a collection scan was performed to satisfy the query. Again, imagine scanning a much larger data set and how slow that could be.

Now, what happens if we add an index? Since we are searching by city name in this case, it makes sense to create an index on that field. That can be accomplished in the Mongo Shell with the command:

db.zips.createIndex({"city": 1})

This creates an ascending index on the city field in our collection. Now if we run the same query as before, we should expect a couple of things: our query execution time should be significantly lower, and so should the number of documents examined.

Index used: the same query with the defined index in place

Wow, with an index in place on the city field, searching on a city yields some amazing improvements. Our actual query execution time went from 34ms to zero, we are doing an index scan (IXSCAN) instead of a collection scan (COLLSCAN), and the number of documents examined dropped from 29,467 to a single document. That's pretty powerful, and it highlights the need to have indexes on your collections.
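For comparison, the relevant portion of the explain output now looks something like this (again abbreviated; city_1 is the default name MongoDB generates for our index):

db.zips.find({ "city": "KEIZER" }).explain("executionStats")

// Abbreviated output:
// "winningPlan" : {
//     "stage" : "FETCH",
//     "inputStage" : { "stage" : "IXSCAN", "indexName" : "city_1", ... }
// },
// "executionStats" : {
//     "executionTimeMillis" : 0,
//     "totalDocsExamined" : 1,
//     ...
// }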

After explaining this to MongoDB users, I often get a “Why don't I just index every field then?” response. Well, there's no such thing as a free lunch, right? Indexes come with overhead: they consume memory, and they hurt write performance, since every index on a collection must be updated as new data is stored.
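That overhead is easy to inspect from the Mongo Shell; a quick sketch:

db.zips.getIndexes()          // list every index defined on the collection
db.zips.totalIndexSize()      // total size of all its indexes, in bytes
db.zips.dropIndex("city_1")   // drop an index that isn't earning its keep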

We can also create indexes on multiple fields. We might, for example, be querying our database not only on a city but on a city and state combination. In that case, we might want to generate a compound index that references multiple fields in the same index. In this example, something like db.zips.createIndex({"city": 1, "state": 1}) might be useful.
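A handy property of a compound index, sketched below with hypothetical query values, is that it also supports queries on any prefix of its fields:

db.zips.createIndex({ "city": 1, "state": 1 })

db.zips.find({ "city": "PORTLAND", "state": "OR" })  // uses the full compound index
db.zips.find({ "city": "PORTLAND" })                 // city is a prefix, index still used
db.zips.find({ "state": "OR" })                      // not a prefix, falls back to a scan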

Wrap Up

When deciding on an index to create, there are a few common things to think about. First, create indexes that support your queries: if you are never going to query the zip code collection by population (“pop”), there is no need to generate an index for that field. Second, if your queries perform sort operations, make sure your indexes support those sorts efficiently. Third, make sure your queries allow MongoDB to be selective in the results it returns, which lets the index do the majority of the work.
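As a small sketch of the second point, using the state and population fields from zips.json, an index whose key order matches the sort lets MongoDB return documents in index order instead of sorting them in memory:

// Supports filtering on state and sorting by population, descending.
db.zips.createIndex({ "state": 1, "pop": -1 })

db.zips.find({ "state": "OR" }).sort({ "pop": -1 })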

Indexes are an important part of proper application design with MongoDB. Having a properly designed index can have a large positive impact on the performance of your application. I would highly recommend reading more about them prior to your application deployment to ensure a great end user experience.
