Performance Testing a Postgres Database vs Elasticsearch 5: Column Statistics

24 January 2017

This is the first post in a series benchmarking a Postgres database against a (single-node) Elasticsearch instance. The subject of this test is numeric column statistics, based on 10 million products inserted into both the database and the Elasticsearch index.

An up-to-date list of articles diving into my ecommerce performance investigations:



# Requires the benchmark-ips gem; run from a Rails console.
require "benchmark/ips"

# Quiet per-query debug logging so it doesn't skew the timings.
Rails.logger.level = :info

Benchmark.ips do |x|
  column = :brand_id
  x.report("Product Brand ID Elasticsearch Stats") { Product.elasticsearch_stats(column) }
  x.report("Product Brand ID PG Stats") { Product.pg_stats(column) }
  x.compare!
end
Warming up --------------------------------------
Product Brand ID Elasticsearch Stats
                        42.000  i/100ms
Product Brand ID PG Stats
                         1.000  i/100ms
Calculating -------------------------------------
Product Brand ID Elasticsearch Stats
                        451.179  (± 8.4%) i/s -      2.268k in   5.066563s
Product Brand ID PG Stats
                          3.249  (± 0.0%) i/s -     17.000  in   5.236520s

Comparison:
Product Brand ID Elasticsearch Stats:      451.2 i/s
Product Brand ID PG Stats:        3.2 i/s - 138.86x  slower

Point, blouses: Elasticsearch.
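
For context, here is a minimal sketch of what the two methods being compared might look like. Their real implementations aren't shown in this excerpt, so treat the bodies below as assumptions: it presumes elasticsearch-model is mixed into Product, and the aggregation name col_stats is my own.

class Product < ApplicationRecord
  # Sketch: min/max/avg/sum via an Elasticsearch stats aggregation (no hits returned).
  def self.elasticsearch_stats(column)
    __elasticsearch__.client.search(
      index: index_name,
      body: { size: 0, aggs: { col_stats: { stats: { field: column } } } }
    )["aggregations"]["col_stats"]
  end

  # Sketch: the same statistics computed with Postgres aggregate functions.
  def self.pg_stats(column)
    col = connection.quote_column_name(column)
    connection.select_one(
      "SELECT MIN(#{col}) AS min, MAX(#{col}) AS max, AVG(#{col}) AS avg, SUM(#{col}) AS sum FROM #{table_name}"
    )
  end
end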

Read More...

Intro to the Ecommerce SaaS Benchmark Application

24 January 2017

In my search for speed and scalability, I’ve had the pleasure of spending a lot of time recently with Elasticsearch. It’s fast, powerful, and continually updated to make it better at everything it does. Besides Elasticsearch, I have my eye on other technologies such as RELC (Redis Labs Enterprise Cluster), Citus DB, and many others geared towards scalability and ultimate performance. As a consultant, much of what I do revolves around enabling businesses to make money more quickly and efficiently. The core of many businesses these days is ecommerce. As such, I’ve created a stubbed-out Ecommerce SaaS project which will be used specifically to benchmark various technologies and how they scale across different orders of magnitude.

As time progresses, I’ll collect more data and expand the application’s features to more closely mimic an actual ecommerce app, so that we can investigate what effects different technologies, platforms, and data sets have on the app’s performance.

An up-to-date list of articles diving into my ecommerce performance investigations:



Read More...

The ABC of My Life: Always Be Constructing

10 January 2017

Always Be Closing for Productivity and Profit

For sales, there’s the classic line from Glengarry Glen Ross: “ABC: Always Be Closing”. It’s the mantra salespeople use to drive their actions towards the end goal of more sales. Over the last 18 months (since I decided to become a consultant), I’d been living by my own ABC, though I hadn’t sat down and thought much about it until now. The ABC for my life is this: Always Be Constructing.

Read More...

Think Big: Continue on the Path to Scalability as a Lead Developer

08 January 2017

As the lead developer on a project, you’ve already either created or been given the high-level design by your project’s software architect and will now have to implement it. What sort of goals should you keep in mind and shoot for as you lead development in order to maintain the initial momentum towards a scalable product? Thinking big is still part of the game; you must identify specific challenges and potential or actual bottlenecks that could threaten the long-term viability of your web application. Whether that means volume testing specific, vital endpoints of your application or performance testing some common user flows, you have to be cognizant at all times of areas that could become pain points as your product grows.

Here are some actions to take during development:

  • Leave a SQL logger running and see if any specific requests generate more queries than you’d expect (see the sketch after this list)
  • Go wild: Add a million items to a shopping cart, spam likes and comments
  • Be evil: Try to break things. Create loops in parent/child categories, for instance.
  • Add a ton of web processes on a production clone to see how your database handles it (connection pooling/raw resources)
  • Perform simple requests with stupid amounts of test data. Accidentally loading all records from your DB anywhere?
  • Ensure any services such as Redis or Elasticsearch can handle traffic spikes.
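
For the first bullet, here is a minimal sketch of one way to do it in Rails: subscribe to ActiveRecord’s instrumentation and log a query count per request. The initializer location and log format are my own choices, not from the original post.

# config/initializers/query_counter.rb
# Count SQL queries per request via ActiveSupport::Notifications.
ActiveSupport::Notifications.subscribe("sql.active_record") do |_name, _start, _finish, _id, payload|
  next if payload[:name] == "SCHEMA" # skip schema introspection queries

  Thread.current[:query_count] = Thread.current[:query_count].to_i + 1
end

# Log the tally once the controller action finishes, then reset it.
ActiveSupport::Notifications.subscribe("process_action.action_controller") do |_name, _start, _finish, _id, payload|
  Rails.logger.info("#{payload[:controller]}##{payload[:action]}: #{Thread.current[:query_count].to_i} queries")
  Thread.current[:query_count] = 0
end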

There are many more places to take action and monitor; the above should be a starting point to inspire other actions. What do combinations of the above yield, and how do they apply to your application? Thinking through and answering that will generate new ideas, which you can combine with the originals until you’ve synthesized a substantial list of concerns to address. Whether you formalize testing of these or not, remember that they all revolve around two points. Any endpoint, user action, or automated action could be a weak spot that exacerbates an unidentified hot spot, so keep the following two in mind:

  1. Malicious actions (intentional or no)
  2. Large Amounts of Information (whether data or users)

Be mindful of those two and let them guide you as you review features and perform final testing. A little preemptive action here will go a long way towards saving you on the day you get slashdotted or decide to turn your product into a SaaS offering. Covering even the few most likely candidates for slowdowns will save you massive amounts of time later.

Read More...

Scaling: The Final Question. Performance and Load Testing: The Answer

04 January 2017

Spree E-Commerce For Ruby on Rails

Back in my days as a student, one of my networking professors brought up the same point every time we met for class: “When you’re implementing something, always ask yourself, ‘How does it scale?’” Over the last few months I’ve spent a lot of time dealing with systems and making sure they scale well using various technologies. I ran into an unexpected one today, though. It was Spree. Yes, the extremely configurable and awesome e-commerce platform does not play well with certain scenarios.

The issue was that whenever part of an order is updated in a way that alters shipping costs, a callback chain is invoked that either creates or updates shipping information for each item being ordered. What this means for the application is that (in order to remain highly configurable and track everything perfectly) each individual item in an order gets a ShippingItem in the database. Buy 10 widgets and you create 10 ShippingItems. Buy 1,000 widgets (or even 200), and Heroku will throw H* timeouts, which introduces other errors with your Spree Addresses that have to be solved so your users can view in-progress orders and continue the checkout process.
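
To make the shape of the problem concrete, here is an illustrative sketch of the per-item pattern described above. This is not Spree’s actual code; the model and method names (beyond ShippingItem, which the post mentions) are hypothetical.

# Hypothetical sketch of the pattern, not Spree's real implementation.
class Order < ApplicationRecord
  has_many :line_items
  has_many :shipping_items

  # One row per unit ordered: a quantity of 2,000 means 2,000 INSERTs fired
  # from inside the address-update callback chain, which is the work that times out.
  def rebuild_shipping_items!
    shipping_items.delete_all
    line_items.each do |line_item|
      line_item.quantity.times { shipping_items.create!(line_item: line_item) }
    end
  end
end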

The unfortunate effect of this is that the more items someone orders from your site, the slower it runs. To throw some numbers in: an order with a total quantity of 2,000 items took approximately 2 minutes to update the user’s address during the order process on a 1x Heroku dyno. The saving grace here is that the Spree team keeps a Slack channel open, and @brendan was able to point me to a previous issue and fix over on the Spree GitHub. Even better, dropping it in worked like a charm. It did introduce a few more complications, such as needing to update a handful of template files and adjust a little logic, but in the course of an hour we had large orders running through the customer’s system again.

The point of this post isn’t to point out any weaknesses on Spree’s part, but to highlight the importance of making sure your systems scale. “Your systems” means not just the code you write, but also the code you pull in. More importantly, make sure to test on a system identical to your production environment. I say that because I’d run many test orders through the system, but had only ever run large orders through locally, assuming it wouldn’t make a difference. I’d even noticed the substantial number of SQL queries the checkout process generated in the area that ended up hanging. What was different was that on my development machine those queries completed at least an order of magnitude faster (I recently timed it: ~4s vs the aforementioned 2 minutes). When things like that jump out at you, make a note and be sure to test on production. To quickly touch on it, there is a difference between performance testing and load testing; in this case, a very specific type of load testing was needed, namely checking out with large quantities in the cart.

To go back to what that professor shot our way every class period: make sure your system scales. This doesn’t mean over-optimizing early on or spending an inordinate amount of time testing every aspect of your system, but do get real numbers in as benchmarks and baseline speeds so that you can check in from time to time and be sure your application stays healthy and performant. At a bare minimum, fire up Siege and point it at some endpoints, then run through the most common user flows with New Relic or Librato hooked up and a window tailing the logs, so that you have a mental model, and a few specific numbers, of where the majority of requests to your application go.
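
As a minimal sketch of capturing those baseline numbers, something like the following (the host and paths are placeholders) can be run periodically and compared against earlier output:

require "benchmark"
require "net/http"

# Time a handful of key endpoints to establish baseline response times.
%w[/ /products /cart].each do |path|
  url = URI("https://staging.example.com#{path}") # hypothetical staging host
  elapsed = Benchmark.realtime { Net::HTTP.get_response(url) }
  puts format("GET %-10s %.3fs", path, elapsed)
end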

Read More...
