Add Output to Your Long Running Rake Tasks

21 July 2017

Expectations vs Reality

Have you ever worked on an item, tested it thoroughly on a staging environment, done extra dry runs for good measure, and been completely satisfied with the results, only to have it hit production with no way to tell whether it's working properly? I had such an experience recently with a one-off rake task. The following details that experience, what I learned, and how to prevent it from happening to your projects. Chalk it up as another lesson about what it means for a feature to be complete.

The Task

Recently I was involved in a team project developing ingestion and display of user data. The basic process was this:

  • Get a list of all the objects from an S3 bucket
  • Sort them by when they were last modified
  • Enqueue background jobs (in order) for each of the sorted files
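The steps above can be sketched in plain Ruby. Everything here is illustrative: `S3Object` is a stand-in Struct for a real S3 listing entry, and `enqueue_ingestion_job` is a placeholder for the actual background job.

```ruby
require "time"

# Stand-in for an S3 listing entry; in the real task these come from the AWS SDK.
S3Object = Struct.new(:key, :last_modified)

# Sort oldest-first so jobs are enqueued in modification order.
def sorted_objects(objects)
  objects.sort_by(&:last_modified)
end

# Placeholder for the real background job enqueue
# (e.g. IngestionJob.perform_later(object.key) in a Rails app).
def enqueue_ingestion_job(object)
  object.key
end

objects = [
  S3Object.new("b.csv", Time.parse("2017-07-02")),
  S3Object.new("a.csv", Time.parse("2017-07-01")),
]

enqueued = sorted_objects(objects).map { |obj| enqueue_ingestion_job(obj) }
```

With the dates above, `enqueued` comes back in modification order: `a.csv` before `b.csv`.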

Simple enough; 15 lines of code or so to handle these requirements. For good measure, the following steps were taken to ensure quality:

  • Test locally
  • Test against a staging system
  • Code reviews from 4 other team members
  • Run as a rake task rather than copy-pasted into a console to ensure consistency

From my perspective, the above looked fantastic. However, there was one key question I left out of development…

Does it scale?

From a performance standpoint, it scaled nicely. Sorting the few hundred thousand AWS S3 objects in Ruby wasn't an issue for either RAM or CPU (we'll see how it goes if this has to be done again with 3 or 4 orders of magnitude more items), which is where my head was at when I wrote the code.

No, where it didn't scale nicely was from the perspective of the people running this task post-deploy, of whom I was one. In the end, it did exactly what it was supposed to, but there was a 10-minute period where we didn't have any metrics, so there was no way to tell if it was hung up. When you're doing a late-night deploy with 4 people, nobody relishes spending extra time blindly waiting on code that might be hung up rather than immediately retrying.

The gap here was that there was no output from the time the task started pulling down info about the objects in the bucket until it started enqueuing background jobs. Even in batches of 1000 objects from S3, that’s still hundreds of network calls with large payloads that had to be consumed before we saw any progress.

How I Fixed It

Since this task may need to be run again in the future, I did a few things:

  • Add incremental feedback for long-running subtasks
  • Provide final output when the task completes

For the S3 portion of the task, my code looked like the following:

# s3 is an Aws::S3::Client and bucket_name the target bucket;
# list_objects pages through results up to 1000 objects at a time
objects = []
s3.list_objects(bucket: bucket_name).each do |response|
  objects.concat(response.contents)
  puts "Objects Received from #{bucket_name}: #{objects.count}"
end

puts "Total Objects Received from #{bucket_name}: #{objects.count}"

That way in the future we’ll see updates as each network call completes, along with a final update of how many objects were found. This will also help in providing confirmation that all the data was successfully reprocessed.
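The same idea applies to the enqueuing half of the task. Here's a sketch of a progress-printing enqueue loop; the interval and the commented-out job call are illustrative, not the actual task's code:

```ruby
# Enqueue a job per object, printing progress every `every` objects
# so a long loop is never silent. Returns the total enqueued.
def enqueue_with_progress(objects, every: 1000)
  objects.each_with_index do |object, index|
    # IngestionJob.perform_later(object) would go here in the real task
    count = index + 1
    puts "Enqueued #{count}/#{objects.size} jobs" if (count % every).zero? || count == objects.size
  end
  objects.size
end

total = enqueue_with_progress((1..25).to_a, every: 10)
```

With 25 items and `every: 10`, you'd see updates at 10, 20, and 25, so the operator always knows the loop is alive.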

Takeaways

I’ve placed a lot of focus recently on adding good metrics and instrumentation to critical parts of codebases. Instrumentation is an excellent example of how a healthy application is a process, not a goal. This illustrated to me that even for code which will only be used once, displaying proper feedback about progression is key. Just like it’s helpful to see file download progress in your browser, seeing the progress of a task as it runs eases a lot of potential pain points.


If you’ve got critical pieces of business logic into which you have little or no insight and would like that fixed, I’m your man, whether that’s in implementation or consulting on best practices with teams and architects. Being able to see what, how long, and how many times specific actions happen, in addition to your standard APM service, can make all the difference in having confidence that your applications are running as expected.

Read More...

Developers and Systems vs Goals

21 July 2017

Systems and Goals

For most developers, our days are composed of accomplishing individual tasks, which makes it very easy to get lost in those details and focus solely on what’s at hand rather than the direction accomplishing it takes us. Rather than treating each step or feature as an individual item to accomplish, I say it’s best to reframe each of them as a small step in the right direction.

Don’t Lose Sight of the Forest for the Tree in Front of You

“Don’t lose sight of the forest for the trees” is a common saying. It means that you shouldn’t let what’s right in front of you make you forget the big picture. Letting individual tasks own your thoughts is great; it allows you to hyper-focus, buckle down, and quickly do what needs to be done. However, if that’s all you do, simply moving your hyperfocus from task to task as they’re accomplished, it becomes very easy to lose track of why those tasks are being done. It can also shut down critical thinking about whether a task should be altered in any way, since you may just be checking boxes and blitzing through items.

Tasks (Goals) Should Be Malleable for Systems Thinkers

Why should an individual task be open to change? There are a few reasons, but the one I’ll dive into here regards momentum, and the mindset of “All or Nothing” which is very common among developers.

Momentum

Anyone who really knows how I think knows that I highly value momentum, even to the point of changing my driving route on the fly if I hit a red light and could instead take a right at the intersection and still head where I need to go. Why is that? Sitting still can breed frustration and drains patience. Like everything else in life, patience is a resource; you’ve only got so much of it in any given time period. No need to waste it when you could instead keep moving.

Ask yourself, how does that relate to development?

The most common way I’ve seen that momentum and patience tie in to daily development is the concept of All or Nothing.

All or Nothing

This mindset is great when a few conditions are met:

  • The “All” win condition is known
  • “All” is technically achievable with your team’s given skillset
  • “All” can be accomplished in a reasonable timeframe

However, when any of those are missing, going for All is a detriment and an outstanding instance of the Pareto Principle, aka the 80/20 rule. Much of the time, at least one of those constraints applies to some degree, so the 80/20 rule will come into play. If you’ve got excellent team management, your feature pipeline will be worked out far in advance, time will be allocated, and most or all of the implementation details, acceptance criteria, and potential pitfalls will have been discussed and planned for. What happens when something comes out of the blue, though? Even for mission-critical items, I say it’s best to quickly settle for “good enough” when it’s a last-minute feature requirement. When it’s a mission-critical, high-severity issue, this is known as “stopping the bleeding.”

Much like agile development vs waterfall (another great example of systems vs goals), your focus should be on continually rolling out functional, complete items which have the best bang for your buck on time invested. This helps you build momentum which will always move you in the desired direction on your project.

Your Project’s Direction and Momentum Should be Your Focus

Whenever possible, focus on maintaining the momentum and direction of your progress.

For developers, the System is to continually move in the right direction.

Systems Yield Success

Goals also yield success, but typically at a much lower rate than systems. They also take longer and leave you lost when the goal is finished. Since moving to thinking in terms of systems rather than goals, my productivity, income, success and satisfaction with life have all skyrocketed. It could just be coincidence, but observing how other successful people and organizations operate makes it clear to me that systems are the way to go.

Read More...

What Is A Complete Feature: A Developer's Perspective

16 July 2017

What is a complete feature, and how does that affect your team and product?

As a developer, what does a complete feature mean? This is a question that should be answered both as an individual and for the team or organization as a whole. Part of being a cohesive team revolves around everyone being on the same page regarding what constitutes completeness. A higher standard of completeness makes your customers, developers, and managers happier, since it enables you to do things correctly the first time. Delivery of polished items is how key stakeholders are impressed and kept coming back for more or, in the case of budget makers, approving bigger and better things for your team.

The different levels

Read on for the different levels of completeness. Each level encompasses the previous levels and describes what’s needed to break through for increased operational efficiency and product reliability.

Level 1: Feature Functionality is 100%

This is the most basic level of completeness. The developer in question has built out the code necessary to complete a feature; this typically involves manual testing in a development environment to ensure that it works as expected.

Level 2: Full Unit Test Coverage

Hopefully all the code in your codebase meets at least this level of completeness. Here, all committed code has 100% coverage via unit tests. This is the bare minimum needed to ensure that:

1) Each function does what’s expected
2) Should there be edge cases, they can be easily added to coverage so you can ensure they aren’t repeated
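As a rough sketch of what that coverage looks like in practice (Minitest shown; the method under test is made up purely for illustration):

```ruby
require "minitest/autorun"

# A trivial method under test -- purely illustrative.
def discounted_price(price, percent)
  raise ArgumentError, "percent must be 0..100" unless (0..100).cover?(percent)
  (price * (100 - percent) / 100.0).round(2)
end

class DiscountedPriceTest < Minitest::Test
  # Point 1: each function does what's expected
  def test_happy_path
    assert_equal 75.0, discounted_price(100, 25)
  end

  # Point 2: an edge case found later gets pinned down in coverage
  def test_rejects_out_of_range_percent
    assert_raises(ArgumentError) { discounted_price(100, 150) }
  end
end
```

Once a bug is found, adding a test like the second one ensures the same edge case can never silently regress.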

Level 3: Manual Feature Testing with Permutations

The next level involves a more formal QA process around the feature, with acceptance from someone who isn’t the developer. This ensures there’s a second set of eyes on the functionality and also helps spot unintended side effects the dev may not have noticed. It’s typical for a developer to do operations the same way every time, so if there’s another path to accomplish the same thing, a dev may not think to test it since, in their mind, “Why would someone do it that way?”. I also include code reviews here. At this level it’s all about having extra sets of eyes and different approaches to the same feature, so that you can be as sure as possible that all your bases are covered.

Level 4: Full Integration Coverage

Adding full integration coverage for a feature is the closest you can come to being sure that your feature behaves as expected end-to-end and that it stays that way. Doing full manual regressions across every feature in your application on every deploy just isn’t feasible; that’s why you need integration tests. The QA team will focus their attention on the most critical items in your application but will rarely hit every permutation of a feature. Each user is unique, and the users of your application will always find a way to do things which you didn’t anticipate. Should they be able to break something, it’s important that you’re able to replicate it. Most well-run codebases fall within this tier of completeness.

Level 5: Feature Has Complete Documentation

Adding documentation to your code is an oft-overlooked aspect of feature completeness. Maintaining good documentation is key for areas of your app which aren’t easily apparent to a new dev picking up the project. This is particularly true for the following:

  • Complicated user workflows
  • Specific restrictions or limitations and why they are in place (i.e., do we limit users to X items in a SaaS app because of performance, business rules, or paid tiering?)
  • API endpoints (specifically protocol, URL, port, payload and return values)
  • Programming patterns, why they developed that way and how they’re helpful
  • Custom code which replicates commonly used libraries (was there a reason you wrote your own Delayed::Job implementation and extensions rather than using Sidekiq?)
  • Internal processes
  • How to access/trigger key functionality within the codebase
  • Bootstrapping the app for a new dev
  • Deployment process

All points of code are fair game for documentation. The above, however, are what I currently consider the bare minimum for maintaining a good bus factor.

Level 6: Instrumentation Around Key Metrics Are Present

Finally, there’s instrumentation. This refers to sending metrics about key parts of your app off to a service so that you can set up alerts, graphs and monitors around the counts, execution times and usage patterns of your application. A well-put-together dashboard and alert system will save you a bunch of trouble, as it enables you to address many problem points before they negatively impact all your users. Being able to quickly respond to, mitigate and clean up hot spots is what separates a good operation from an excellent, tightly run ship.
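As a sketch of the idea, here’s a tiny in-memory recorder of counts and execution times; a real setup would forward these numbers to something like StatsD or Datadog rather than storing them locally, and the metric name below is made up:

```ruby
# Records how many times, and how long, a named block of work runs.
class Metrics
  attr_reader :counts, :timings

  def initialize
    @counts  = Hash.new(0)
    @timings = Hash.new { |hash, key| hash[key] = [] }
  end

  # Time a block and record its duration under the given metric name,
  # returning the block's own result unchanged.
  def time(name)
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    result = yield
    @counts[name] += 1
    @timings[name] << Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
    result
  end
end

metrics = Metrics.new
metrics.time("orders.create") { :ok }
```

Wrapping key operations this way gives you the counts and execution times to graph and alert on, without touching the wrapped code’s return values.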

Level N + 1

This list is by no means meant to be the be-all, end-all of what completeness entails. Always be on the lookout for new ways to increase your code quality.

The Takeaway

There are always ways to put out more complete features. Be aware of them and put out the best code possible within your business and timeline constraints. Keep them in mind when committing to feature development, along with how they’ll alter your development pace. Remember, more time spent up front on these will reduce code churn in the future, make for fewer problems in production and enable you to more quickly address any issues which do arise.

Read More...

How to Specify the Foreign Class for Rails Associations in a Different Namespace

15 June 2017

Referencing Classes From Another Namespace

Say you’ve got the following classes:

class Foo < ApplicationRecord
end

class Foos::FooBarBazs::Baz < ApplicationRecord
  belongs_to :foo
end

Normally in Rails, we can pass a simple has_many :baz, but for models in a different namespace you have to specify the class name, so it would look like this:

class Foo < ApplicationRecord
  has_many :baz, class_name: "Foos::FooBarBazs::Baz"
end

The above works great, and is what’s recommended in the official docs. However, it presents a problem when code is refactored and you don’t have sufficient code coverage. To remedy that, do the following:

  • enable eager loading for your test environment
  • use a constant, not a string
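The first point is a one-line change in a standard Rails app; `config.eager_load` is a real Rails setting, and with it on, every class is loaded at boot, so a bad constant in an association blows up immediately instead of on first use:

```ruby
# config/environments/test.rb
Rails.application.configure do
  # Load all application code at boot so constant typos fail fast.
  config.eager_load = true
end
```

Note that eager loading slows down test boot slightly, which is the trade-off for catching these errors before any test even runs.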

Using a Constant

The following don’t work:

  • has_many :baz, class_name: Foos::FooBarBazs::Baz
  • has_many :baz, class: Foos::FooBarBazs::Baz

However, what DOES work is: has_many :baz, class_name: Foos::FooBarBazs::Baz.to_s

Now eager loading will catch any typos made here, since a typo would introduce a NameError: uninitialized constant. Also, we’re working with classes and objects in Ruby, not strings, so in my opinion it looks and reads much nicer as well.
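Here’s a quick plain-Ruby illustration of why the constant form is safer (the module names mirror the example above):

```ruby
module Foos
  module FooBarBazs
    class Baz; end
  end
end

# Referencing the constant means a typo is checked when this line is loaded;
# Class#to_s yields the fully qualified name Rails expects for class_name.
name = Foos::FooBarBazs::Baz.to_s

# A bare string, by contrast, is only resolved when the association is
# first used, so a typo like "Foos::FooBarBaz::Baz" hides until runtime.
typo = "Foos::FooBarBaz::Baz"
```

Referencing `Foos::FooBarBaz::Baz` (without quotes) would raise NameError immediately, which is exactly the failure mode you want under eager loading.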

Read More...

Customize Spree Views Without Deface

05 February 2017

Spree and Deface

Spree is a very powerful and highly configurable Rails-based ecommerce solution. It’s loaded with awesome features which will let you handle most any situation or special edge case you may have for your business needs. From a user’s perspective, it’s awesome. In the current version of the Spree guide for customizing views, the recommendation is to use Deface for these customizations. Deface is very powerful and aims to make Spree upgrades seamless. Unfortunately, it adds extra complexity and makes your codebase diverge from standard Rails conventions. These conventions are part of what makes efficient development with Rails possible, so why get away from that? Looking for a customized ecommerce solution? Spree is fantastic; shoot me a message if you’d like to discuss your needs.

In this post I’ll run through a quick example of what using Deface looks like, and then propose we return to the built in Rails functionality of overriding views as a means to have a more useable and efficient codebase.

Read More...
