• Category Archives: performance

Benchmarking Karafka – how does it handle multiple TCP connections

Recently I’ve released a Ruby Apache Kafka microframework; however, I don’t expect anyone to use it without at least a bit of information on what it can do. Here are some measurements that I took. How Karafka handles multiple TCP connections: since listening to multiple topics requires multiple TCP connections, it is pretty obvious that in […]

Read more at the source

Empty? vs blank? vs any? – why you should not use any? to test if there’s anything in the array

I often work with junior programmers, and somehow they tend to use syntax like this: And there’s almost nothing wrong with it, as long as you don’t work with huge arrays. Here’s the difference in speed between empty?, blank? and any? with different array sizes (worst-case scenario): Why is any? so slow with bigger arrays? […]
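The gist of the difference can be sketched in plain Ruby (blank? comes from ActiveSupport, so it is omitted here; the array contents are made up for illustration):

```ruby
require 'benchmark'

# empty? only inspects the array's size, so it's O(1).
[].empty?            # => true
[nil, false].empty?  # => false

# any? (without a block) walks the array looking for a truthy element,
# so it's O(n) in the worst case -- and it isn't even logically
# equivalent to !empty?:
[nil, false].any?    # => false  (elements present, but all falsy!)
[0, 1].any?          # => true   (0 is truthy in Ruby)

# Worst case for any?: a huge array with no truthy element at all.
arr = Array.new(1_000_000)
puts Benchmark.realtime { arr.empty? } # effectively instant
puts Benchmark.realtime { arr.any? }   # has to scan all 1M elements
```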

Read more at the source

Ruby hash initializing – why do you think you have a hash, but you have an array

We all use hashes, and it seems that there’s nothing special about them. They are like dictionaries in Python, or any other similar structure available in multiple different languages. And that’s more or less true. Although Ruby hashes are really special, because from time to time they really are… arrays. You won’t see that clearly […]

Read more at the source

Ruby global method cache invalidation impact on single and multithreaded applications

Most programmers treat threads (even green threads in MRI) as a separate space that they can use to run pieces of code simultaneously. They also tend to assume that there’s no “indirect” relation between threads unless it is explicitly declared and/or wanted. Unfortunately, it’s not as simple as it seems… But before […]
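As a rough illustration (the exact caching behavior depends on the MRI version — Ruby 2.1 replaced the single global cache with per-class caches — so treat this as a sketch, not a precise measurement), here is a micro-benchmark where defining singleton methods busts method caches between calls:

```ruby
require 'benchmark'

class Greeter
  def hello
    :hi
  end
end

g = Greeter.new
n = 100_000

# Warm path: the method cache stays valid across all the calls.
warm = Benchmark.realtime { n.times { g.hello } }

# Invalidation path: defining a singleton method creates a new
# singleton class on every iteration; in older MRIs this flushed the
# single *global* method cache, slowing down every thread -- not just
# the one defining methods.
cold = Benchmark.realtime do
  n.times do
    o = Object.new
    o.define_singleton_method(:m) { 1 }
    g.hello
  end
end

puts format('warm: %.4fs, with cache busting: %.4fs', warm, cold)
```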

Read more at the source

Exceptions should not be expected – stop using them for control flow (or any other logic handling) in Ruby

If your exceptions aren’t exceptions but expectations, you’re doing it wrong. Here’s an example of what programmers tend to do: I’ve also seen a few cases where exception parameters were used to pass objects that the programmer later worked with! As you can see, the whole flow of this piece of code is handled with […]
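A minimal illustration of the idea (the domain and method names here are made up): the first version abuses KeyError as a branch of the normal flow, while the second treats the missing value as an expected, ordinary outcome:

```ruby
# Anti-pattern: an exception used as an expected branch of the flow.
def greeting_with_exceptions(users, id)
  "Hello, #{users.fetch(id)}!" # fetch raises KeyError when id is absent
rescue KeyError
  'Hello, guest!'
end

# Better: a missing user is a normal, expected outcome, so handle it
# with plain control flow instead of raising and rescuing.
def greeting(users, id)
  name = users[id]
  name ? "Hello, #{name}!" : 'Hello, guest!'
end

users = { 1 => 'Anna' }
puts greeting_with_exceptions(users, 2) # => Hello, guest!
puts greeting(users, 1)                 # => Hello, Anna!
```

Besides being clearer, the second version avoids the cost of building an exception object (and its backtrace) on what is really just a common case.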

Read more at the source

Mongoid (MongoDB) has_many/belongs_to relation and wrong index being picked

Sometimes when you have a belongs_to relation, you may notice that getting the owner object can be really slow. Even if you add an index like this: Mongoid might not catch it. Unfortunately, you can’t just explain it, since this relation returns an object, not a Mongoid::Criteria. Luckily you can just hook up to your […]

Read more at the source

ActiveRecord count vs length vs size and what will happen if you use it the way you shouldn’t

One of the most common and most deadly errors you can make: using length instead of count. You can repeat this multiple times, but you will always find someone who’ll use it the way it shouldn’t be used. So, first, just to make it clear: #count – collection.count counts the number of elements using an SQL query […]
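As a toy model of the rule (FakeRelation is hypothetical, not ActiveRecord itself), here is roughly the decision #size makes between the SQL count and the in-memory length:

```ruby
# Hypothetical stand-in for an ActiveRecord::Relation, illustrating why
# count, length and size behave differently.
class FakeRelation
  def initialize(rows_in_db)
    @rows_in_db = rows_in_db
    @records = nil
  end

  # count: always issues SELECT COUNT(*) -- cheap, never loads records.
  def count
    @rows_in_db
  end

  # length: forces the whole collection to be loaded into memory first.
  def length
    load_records.length
  end

  # size: uses the in-memory length when already loaded, count otherwise.
  def size
    loaded? ? @records.length : count
  end

  def loaded?
    !@records.nil?
  end

  private

  def load_records
    @records ||= Array.new(@rows_in_db) { |i| { id: i + 1 } }
  end
end

rel = FakeRelation.new(3)
rel.size      # => 3, via the cheap count; nothing loaded yet
rel.loaded?   # => false
rel.length    # => 3, but now every record sits in memory
rel.loaded?   # => true
```

So calling length on an unloaded relation with millions of rows pulls all of them into memory just to count them, while count (and size, when the relation isn’t loaded) leaves the work to the database.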

Read more at the source

Reducing MySQL’s memory usage on OS X Mavericks

Recently, I found myself re-installing everything from Homebrew and began to notice that MySQL was consuming nearly half a gig of memory. Given that I don’t do too much with MySQL on a regular basis, I opted to override a handful of default configuration options to reduce the memory footprint.

As you can see, a fresh MySQL install via Homebrew was consuming over 400 MB of memory.
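If you want to check this on your own machine, one way (assuming a stock ps/awk; it prints “0 MB” when mysqld isn’t running) is to sum the resident memory of all mysqld processes:

```shell
# Sum the resident set size (RSS, in KB) of any running mysqld
# processes and report it in megabytes.
ps axo rss=,comm= | awk '$2 ~ /mysqld/ { kb += $1 } END { printf "%.0f MB\n", kb / 1024 }'
```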

Here is how I reduced my memory footprint:

$ mkdir -p /usr/local/etc

Unless you already have a custom MySQL config file, you will want to add one into this directory.

$ vim /usr/local/etc/my.cnf

We’ll then paste the following options into our file and save it.

  # Robby's MySQL overrides
  [mysqld]
  max_connections       = 10

  key_buffer_size       = 16K
  max_allowed_packet    = 1M
  table_open_cache      = 4
  sort_buffer_size      = 64K
  read_buffer_size      = 256K
  read_rnd_buffer_size  = 256K
  net_buffer_length     = 2K
  thread_stack          = 128K

Finally, we’ll stop MySQL so that it comes back up with the new configuration.

$ mysql.server stop

If you have MySQL set up in launchctl, it should restart automatically. After I did this, my MySQL instance was closer to 80 MB.

So far, this has worked out quite well for my local Ruby on Rails development. Mileage may vary…

Having said that, how much memory are you saving now?

Read more at the source

Setting Akamai Edge-Control headers with Ruby on Rails

Just a short and sweet little tip.

Several months ago we moved one of our clients over to Akamai’s Content Delivery Network (CDN). We were previously using a combination of Amazon S3 and CloudFront with some benefits, but we were finding that several key areas of the world were not yet well covered by Amazon for asset delivery. Along with that, we really wanted to take advantage of the CDN for more of our HTML content, which has a lot of complex rules related to geo-targeting and regionalization of content.

I’ll try to cover those topics in another post, but wanted to share a few tidbits of code that we are using to manage Akamai’s Edge-control caches from within our Rails application.

With Akamai, we’re able to tell their Edge servers whether it should hold on to the response so it can try to avoid an extra request to the origin (aka our Rails application). From Rails, we just added a few helper methods to our controllers so that we can litter our application with various expiration times.

  # Sets the headers for Akamai
  # acceptable formats include:
  #   1m, 10m, 90m, 2h, 5d
  def set_cache_control_for(maxage="20m")
    headers['Edge-control'] = "!no-store, max-age=#{maxage}"
  end

This allows us to do things like:

  class ProductsController < ApplicationController
    def show
      set_cache_control_for('4h')
      @product = Product.find(params[:id])
    end
  end

Then when Akamai gets a request for http://domain.com/products/20-foo-bar, it’ll try to keep a cached copy around for four hours before it hits our server again.

Read more at the source