“Trust but Verify” Your Chef Infrastructure

A cornerstone of infrastructure as code is treating our infrastructure as we would any other software project by thoroughly testing all changes. As Chef users we have a plethora of options when it comes to testing infrastructure code. We can ensure teams use agreed-upon coding standards by linting with RuboCop and Foodcritic. We can quickly test the outcome of complex cookbook logic with unit tests in ChefSpec. We can perform lengthy but thorough integration tests with Bats, Serverspec, or InSpec running from within Test Kitchen. Combined, these tests ensure high quality code powering our infrastructure.

Where testing tends to get complex is in bringing those various frameworks together so that any change to our infrastructure is tested before it enters production. Testing is particularly tricky for those of us that use a monolithic repository structure for our Chef environments, roles, and cookbooks. With a monolithic repository all code is stored in a single Git repository that the whole team works out of, which makes it difficult to determine which Chef assets to test as a pull request comes in. While we may find it acceptable to run full integration tests after changing a single cookbook, we can't afford the time it would take to run integration tests on every cookbook in our repo when a simple change is made. Instead we decompose the changes in a commit to determine which assets have changed so we can test just those individual assets.
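To give a rough idea of what that decomposition looks like (a hedged sketch, not Reagan's actual implementation), changed file paths can be bucketed into Chef asset types like this:

# Hypothetical sketch: bucket files changed in a commit into Chef asset types to test
changed_files = [
  'cookbooks/apache2/recipes/default.rb',
  'roles/webserver.json',
  'data_bags/users/tsmith.json'
]

assets = Hash.new { |hash, key| hash[key] = [] }

changed_files.each do |path|
  case path
  when %r{^cookbooks/([^/]+)/} then assets[:cookbooks] << Regexp.last_match(1)
  when %r{^roles/}             then assets[:roles] << path
  when %r{^environments/}      then assets[:environments] << path
  when %r{^data_bags/}         then assets[:data_bags] << path
  end
end

assets.each_value(&:uniq!)
puts assets.inspect # => {:cookbooks=>["apache2"], :roles=>[...], :data_bags=>[...]}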

To test just the assets that change in a monolithic repository I wrote Reagan. Reagan is built to "Trust but Verify" Chef repository pull requests in GitHub using Jenkins. A Jenkins job polls your Chef repository for new pull requests to test. When a new PR is opened, Reagan retrieves the list of files changed in the pull request. From that list it builds the set of assets that have been changed. For data bags, environments, and roles it runs simple JSON validation. For cookbooks, Reagan runs "knife cookbook test" and then checks the Chef server to ensure the cookbook version has been incremented. After performing those basic sanity tests it also runs tests defined in a per-cookbook YAML file, which allows you to vary your testing methods depending on the code. The reagan_test.yml config contains a simple list of commands to run like this:

tests:
  - rubocop .
  - foodcritic . -f any
  - rspec
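Conceptually, running that list boils down to a loop like this hedged sketch (not Reagan's actual code): load the YAML, shell out to each command, and fail the check on the first non-zero exit.

require 'yaml'

# Hypothetical runner: execute each configured test command and stop on failure
tests = YAML.load_file('reagan_test.yml')['tests']
tests.each do |cmd|
  puts "Running: #{cmd}"
  system(cmd) || abort("Test failed: #{cmd}")
end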

Jenkins then updates the PR testing status and depending on configuration may send e-mails or post to chat systems like Slack.  Here’s an example of a successful lint test run by Reagan:

[Image: Reagan PR status showing a successful test]

Here’s an example of a similar test failing due to a Rubocop offense:

[Image: Reagan PR status showing a failed test due to a RuboCop offense]

Reagan helped me ensure quality code entered our production environment without adding burdensome process. The source is available at https://github.com/tas50/reagan with setup instructions in the README. The gem is available on RubyGems as well.

Graphing Chef Application Deploys with Librato

This is a followup to my previous post on graphing Chef application deploys to Graphite. In my new role we use Librato for time series metrics instead of Graphite. One of the best features in Librato is the concept of metric annotations. Annotations are very similar to the simple vertical lines I previously used in Graphite, but not only do they display in a more intuitive fashion, they also carry additional information. In addition to the vertical line, annotations allow for a name, description, and even a URL to be displayed with the annotation. This is really useful days or weeks later when you don't necessarily know what version a line corresponds to.

We're using annotations within Chef to mark each of our application deploys. To accomplish this I created an LWRP in Chef for Librato metric annotations, and we notify that LWRP when a deploy action is completed. You can find the code for the provider below, as it's a bit too large to paste inline with this post. With our core cookbook included, this provider is now available for any of our application cookbooks to graph deploys. It can either be notified when a deploy resource updates, or it can be used via the "after_deploy proc" functionality in the artifact cookbook, which we use for our application deploys.

artifact_deploy 'my_app' do
  version node[:my_app][:version]
  artifact "http://therealtimsmith.com/artifacts/my_app/#{node[:my_app][:version]}.tar.gz"
  deploy_to '/mnt/my_app'
  owner 'my_app_user'
  group 'my_app_group'

  after_deploy proc {
    if deploy? || manifest_differences?

      # graph the deploy to librato
      mycorecookbook_graph_deploy 'my_app' do
        version node[:my_app][:version]
      end
    end
  }
end

Deduping

One of the side benefits of all this rich deploy information within the annotations is the ability to de-dupe them. With multiple servers deploying our application we don't want to fill our graphs with overlapping, duplicate annotations. Using the Librato gem we can first query Librato to see if the annotation already exists, and skip adding another annotation if we find anything. This adds to the complexity of this LWRP over my previous Graphite LWRP, but it makes it much more useful on graphs.

End Result

[Image: Librato graph with a deploy annotation]

providers/graph_deploy.rb:

# Support whyrun
def whyrun_supported?
  true
end

action :graph do
  if node.chef_environment == 'production' || node.chef_environment == 'staging'
    converge_by("Graph deploy of #{@new_resource.name}") do
      graph_deploy
    end
  else
    Chef::Log.info "Not graphing #{@new_resource.name} as we're not in Staging/Prod."
  end
end

def unique_annotation?
  annotations = Librato::Metrics::Annotator.new
  annotation_name = (node.chef_environment + '_' + @new_resource.name + '_deploys').to_sym

  # fetch annotations for this app. rescue because 404 == unique (first ever) annotation
  begin
    host_annotations = annotations.fetch annotation_name, :start_time => (Time.now.to_i - 1800)
  rescue
    return true
  end

  # unassigned is our source. if its key is not there we got back an empty set and we're unique
  return true unless host_annotations['events'].key?('unassigned')

  # if any of the event titles match the title we're about to send then we're not unique
  message = "Deployed #{@new_resource.name} version #{@new_resource.version}"
  host_annotations['events']['unassigned'].each do |a|
    return false if a['title'] == message
  end

  true
end

# send the annotation to librato if unique
def graph_deploy
  require 'librato/metrics'
  annotation = (node.chef_environment + '_' + @new_resource.name + '_deploys').to_sym
  creds = Chef::EncryptedDataBagItem.load('librato', 'keys')
  begin
    Librato::Metrics.authenticate creds['username'], creds['token']

    # bail out if this deploy has already been graphed
    return unless unique_annotation?

    Librato::Metrics.annotate annotation, "Deployed #{@new_resource.name} version #{@new_resource.version}", :source => node.name
    Chef::Log.info "Graphing deploy of #{@new_resource.name} version #{@new_resource.version}"
  rescue
    Chef::Log.error 'An error occurred when trying to graph the deploy with Librato'
  end
end

resources/graph_deploy.rb:

actions :graph

default_action :graph

attribute :name, :name_attribute => true, :kind_of => String, :required => true
attribute :version, :kind_of => String, :required => true

attr_accessor :exists

 

Important Note:

This provider requires the librato-metrics gem to be loaded into your omnibus Chef install. To do this you either need to do a chef_gem install at compile time or you need to set the following attribute with the chef-client cookbook:

default['chef_client']['load_gems'] = %w(librato-metrics)
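If you go the compile-time route instead, a minimal hedged sketch looks like this (the respond_to? guard covers clients old enough not to have the compile_time property):

# Install the librato-metrics gem into the omnibus Chef Ruby at compile time
chef_gem 'librato-metrics' do
  compile_time true if respond_to?(:compile_time)
  action :install
end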

Low Cost Operations Metrics Display Setup

One of the most important roles of any operations team is metrics gathering and display. Without metrics you are flying blind in Ops, and without proper display those metrics are rarely consumed by your team. After Cozy's move into our new office we decided to set up a wall of displays for Ops and Dev metrics. As a startup we're not made of money, so we needed to do this on a bit of a budget. Here's the simple setup we used and the script we wrote to set up our hosts.

We had initially set up two borrowed Raspberry Pis connected to small monitors we had left over in the office. While this worked, the displays were too small to be seen from across the room and the Pi is just too slow for modern dashboard applications. We decided to purchase two small TVs to augment our existing displays, and four new low-end PCs. For the low-end PCs we settled on the Radxa Rock Pro and for the TVs we used Seiki 32″ TVs.

Seiki TVs

[Image: Seiki 32″ TV]

We purchased 32″ Seiki TVs, which run about $170 each on Amazon. These are without a doubt the cheapest TVs you could possibly buy, and it shows: poor contrast ratio, bad color reproduction, and they're a bit on the thick side. Why did we get them then? Well, we're not watching Blu-rays on these; we just need them to show metrics, so everything that makes them a poor choice as a TV makes them perfect as metrics monitors.

Radxa Rock Pro

[Image: Radxa Rock Pro]

Radxa is a small (2 person?) China-based company that sells what is effectively an Android phone in a small plastic case. The systems have a quad core ARM processor, 2GB RAM, 8GB NAND storage (Pro model), HDMI, and ethernet/wifi. At $99 including the case they're a great alternative to the Pi since they offer significantly more CPU power and have built-in storage. They can run either Android, which ships on the system, or Linaro Linux, an Ubuntu derivative that you need to flash onto the system. We chose to flash Linaro onto the hosts since we wanted a more full featured PC.

Flashing the Radxa

Originally we thought it made sense to buy the Radxa Rock Pro model, which includes the onboard NAND storage, 2GB RAM (vs 1GB), and a plastic case. After realizing that flashing the NAND requires a special image file and an application that only runs on Windows, we ended up just buying micro SD cards instead. Most images in the Radxa community are in the SD image format, and flashing can be done from a PC or Mac without any special utilities. I'd recommend buying a name brand Class 10 16GB micro SD card for plenty of storage; they're about $10 shipped on eBay. If you're on a budget it might actually make more sense to buy the Radxa Rock Lite and a case instead of the Pro.

Setting up Linux on the Radxa

The Radxa ships with Linaro Linux, which is a light version of Ubuntu for ARM processors. It uses the LXDE desktop and a different kernel version, but otherwise it's basically Ubuntu. Since we had four of these hosts we created a small script to automate the setup. The script performs the following steps:

  1. Sets the hostname
  2. Removes a bogus entry left in the resolveconf file
  3. Sets the password for the rock user (make sure to change this in the script)
  4. Disables the bluetooth, cups, saned, and ppp-dns services
  5. Turns off the front LEDs which are horribly bright
  6. Removes NetworkManager and sets up eth0 / wlan0 using /etc/network/interfaces (make sure to change the wlan0 auth in the script)
  7. Sets a new unique MAC address for eth0 using the last digit of the hostname.  The Rock Pros do not have unique MAC addresses, which causes lots of issues when you have more than one unless you set a new address in software
  8. Resizes the root drive.  Oddly enough the Rock ships with an image where the root filesystem is next to nothing in size, so installing a few packages will fill up the filesystem even though you have gigs of space on the SD card
  9. Sets a boot script to sync the time since the Rock has no battery to keep the date on reboot
  10. Upgrades all packages to the latest
  11. Installs VIM
  12. Installs Firefox and sets it to open at boot
  13. Installs X11VNC and sets a password (make sure to change in the script)
  14. Installs OpenVPN and configures resolvconf to work better with VPN tunnels

Download the setup script here:

Github Repo

Mounting the InfoRads

To mount the TVs and PCs to our wall we chose a simple fixed VESA mount from Monoprice ($7.50). We then bought a 1/4 inch masonry bit ($4) and a 25 pack of 1.5 inch hammer set wall anchors ($14). We carefully determined where we wanted the monitors to rest and then taped the backing of each wall mount to our concrete walls. With the backings taped we drilled the 24 mounting holes and hammered in the anchors. Keep in mind that once the anchors are in they're pretty much never coming out, so get it right the first time.

With the TVs mounted to the walls we bought 12 foot extension cords ($20) and simple cable channels ($20). Each TV has its own extension cord, which powers the PC and the TV, allowing us to easily power cycle the devices by unplugging the proper extension cord. The Radxas were mounted to the back of each TV using a large patch of Velcro ($4), which hides the PC and also allows access to the reset button and the SD card if needed.

End Result

I won't lie: the Radxas were a lot more than we had bargained for. There are a lot of quirks with these systems that we had to work around in the setup script. Once we worked out all the issues, though, the inforads have been wildly useful. We rotate Librato dashboards on two screens and show build / test status screens out of Jenkins on the other two. Using these screens has led us to notice odd behavior on our site and increased our awareness of test status.

[Image: Inforads mounted on the wall]

Using Chef to Graph Deploys in Graphite

It’s pretty obvious at this point that I think Chef is a pretty amazing product.  I’m also quite smitten with Graphite for graphing the world, or at least the little part of the world that I’m responsible for.   Chef combined with Graphite can do some pretty amazing things, one of those things being the graphing of product deploys.

I rely on a little trick that Etsy first showed off (codeascraft – track every release ) where you can graph any value in Graphite as a vertical line when the value is 1 or more.  If you create a metric for deploys you can just send values of “1” every time you do a deploy.  Then you can overlay that data on top of your system or network metrics and search for patterns.  If you were to do this via the command line it would look something like this:

echo "servers.my_current_server.deploy 1 $(date +%s)" | nc graphite.mydomain.com 2003

If you want to run this same sort of thing via Chef you can just create an execute resource.  You can notify that resource anywhere in your recipe that you might consider a “deploy” action and you have ohai data that will allow you to send data to the right location.  Here’s an example:

execute "graph_deploy" do
  command %Q[echo "servers.#{node.chef_environment}.#{node['fqdn'].gsub('.', '_')}.deploys.my_app_name 1 $(date +%s)" | nc graphite.mydomain.com 2003]
  timeout 5
  action :nothing
end

Now for the breakdown: I set up my Graphite system so that all servers are in a folder called "servers", and under that things are broken out by Chef environment, so I use the node.chef_environment variable. From there I need to make sure the value goes under the current server. I use FQDNs for my nodes in Graphite, but periods are the folder delimiter in Graphite, so I replace the periods with underscores using gsub. From there I create a folder for all my deploys, since I run multiple applications on a system, and within that folder I create the actual metric with the same name as the service. The resource times out after 5 seconds so that if my Graphite server goes down my Chef runs still complete, and with an action of :nothing the resource never executes on its own. When I want it to execute, I notify it with a :run action from another resource.
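As a hedged illustration (the template resource and file path here are made up), any resource that represents a deploy step can send that notification:

# Hypothetical resource that notifies the graph_deploy execute resource when it changes
template '/etc/my_app/app.properties' do
  source 'app.properties.erb'
  notifies :run, 'execute[graph_deploy]', :immediately
end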

And here’s the end result giving me a deploy to overlay on a few metrics:

[Image: Graphite graph with the deploy line overlaid on other metrics]

Using Chef to Automate the Installation of Kismet

I can admit it; I'm a very dorky guy. One of my hobbies is wardriving. I map day to day and upload my results to the social Wi-Fi mapping site Wigle.net. I usually stick to "wiglewifi" on a spare Android phone, but I prefer Kismet running on Linux for more accurate results. Wiglewifi works great when you're cruising around town, but your average cell phone lacks the wifi sensitivity to really shine. With Kismet you can use a powerful external wireless adapter and run multiple adapters to listen on more than one channel at a time.

My setup consists of an Ubuntu 12.04 VM running under VirtualBox on my Mac.  I use a small waterproof magnetic GPS by USGLOBALSAT and two Alfa AWUS036H 1000mW USB wireless adapters.

I manually set up the Linux VM with Kismet and GPSd, but I figured it would be fun to automate the process using Opscode's Chef. Why not? I wrote a Chef recipe for Kismet / GPSd, available at https://github.com/tas50/chef_kismet.git

Chef is traditionally used in a client / server setup, but Chef also includes a standalone mode called Chef Solo. The instructions below will allow you to set up a Kismet system using Chef Solo.

Step 1: Install Chef
apt-get install build-essential ruby rubygems ruby-dev
gem install chef ohai

Step 2: Configure Chef Solo
Edit /etc/chef/solo.rb and add the following values:
file_cache_path "/tmp/chef-solo"
cookbook_path "/root/cookbooks"

Step 3: Clone the necessary cookbooks
mkdir /root/cookbooks
cd /root/cookbooks
git clone https://github.com/tas50/chef_kismet.git kismet
git clone https://github.com/opscode-cookbooks/apt.git

Step 4: Create a JSON configuration file for the Installation
This configuration file determines how GPSd and Kismet are installed on your system. The README for the Chef cookbook contains detailed information on each Chef attribute used to control the installation. The sample below is what I use to sniff on wlan0 and log just a nettxt file to the /root directory of my system.

Create kismet.json containing:
{
  "kismet": {
    "servername": "mobile_kismet",
    "logprefix": "/root",
    "ncsource": "wlan0",
    "logtypes": "nettxt"
  },
  "run_list": ["recipe[kismet::default]"]
}

Step 5: Run Chef Solo
chef-solo -c /etc/chef/solo.rb -j /root/kismet.json

You now have a Kismet system.

This whole process might seem a bit overly complicated for a Kismet install, but then, why wardrive in the first place? It's all just fun stuff to keep you occupied.

NTP on Windows aka W32TM is Garbage

In a large scale computing environment accurate time becomes a constant battle. Systems across multiple data centers must interact with each other, and often those systems rely on each server's interpretation of time for transactions. Accurate time is a must, and unfortunately in a Windows environment accurate time is not an easy task.

With Microsoft's flawed NTP implementation, administrators rely on time sync within their Active Directory domains. Each Active Directory forest contains a PDC Emulator, which amongst other things serves as the source of accurate time for the forest. PDC emulators in other domains within the forest sync to the root domain's PDC emulator. Domain members then sync their own clocks every 8 hours. Often this set-it-and-forget-it mentality works, but often enough it doesn't. The reason is that Microsoft, in their infinite wisdom, did not implement the full network time protocol (NTP), but instead implemented a subset of the standard which they call Windows Time (aka W32TM).

W32TM works well in small-scale single data center environments, but it fails to scale out to large multi-site environments while retaining the accuracy applications often demand. W32TM was designed to keep system clocks accurate enough for Kerberos to function within an Active Directory forest, but was never built for truly accurate time. For Kerberos to function in Active Directory a system's time only has to be within 5 minutes of the Domain Controller, which is far from accurate, and is why Microsoft can get away with a sync every 8 hours. The problem is so bad that Microsoft actually admits in an MSDN article that W32TM shouldn't be used if true time accuracy is needed:

“The W32Time service is not a full-featured NTP solution that meets time-sensitive application needs and is not supported by Microsoft as such”

W32TM is prone to inaccuracy when used across high latency network links and fails to function accurately when individual systems become heavily taxed. The use of a single AD forest PDC system as the time source for an organization is perhaps the largest flaw in the design. With systems spanning multiple data centers, all time requests must travel over WAN links to the PDC emulator. Whether they are ISDN or 10Gb Ethernet, these links introduce significant latency that Windows Time was not designed to compensate for.

For those that require accurate time on their Windows systems, it is possible to run a fully compliant NTP client on Windows. Meinberg, a German company that develops hardware for highly accurate time keeping, has developed a simple yet effective NTP client for Windows. It runs as a service and doesn't have any odd prerequisites like you might find in applications that make it from the *nix world to the Windows world.

Meinberg NTP can either be installed via the GUI installer or, in a more automated fashion, with an installer config file and the command line. I've used the command line configuration to add Windows NTP support to the Opscode Chef NTP recipe. I'm putting the finishing touches on this Chef recipe and hopefully it will make it into the upstream NTP cookbook release soon. For those of you that want to automate the installation either via PowerShell or via your WDS deployment processes, here's how to set up a simple command line install:

 

1) Download the client Meinburg NTP client from http://www.meinberg.de/english/sw/ntp.htm

2) Create an ntp.conf file for your clients. This is the same format as NTP conf files on *nix, so you can simply copy this from one of your *nix systems. Here's my sample config:

 

driftfile "C:\NTP\etc\ntp.drift"

server ntp.datacenter1.dmz iburst
server ntp.datacenter2.dmz iburst
server ntp.datacenter3.dmz iburst
server ntp.datacenter4.dmz iburst

restrict 127.0.0.1
restrict default nomodify notrap noquery

3) Create a file called ntp.ini.  This is what you will pass the installer to define your installation.  Here’s my sample config:

 

[Installer]
InstallDir=C:\NTP
UpgradeMode=Reinstall
Logfile=C:\NTP\install.log
Silent=yes

[Components]
InstallDocs=yes
InstallTools=yes
InstallOpenSSL=yes
CreateStartMenuEntries=yes

[Service]
ModifyFirewall=yes
ServiceAccount=@SYSTEM
DisableOthers=yes
AllowBigInitialTimestep=yes
EnableMMTimer=yes
AutoStart=yes
StartAfterInstallation=yes

[Configuration]
UseConfigFile=\\SERVER\path\to\config\file\ntp.conf

Keep in mind that I have enabled AllowBigInitialTimestep, so if the system clock is off by a day it is going to jump all at once. I'm OK with this, but it might not be acceptable in your environment. Use caution.

 

4) Pass the ntp.ini file to the installer to silently install:  ntp-4.2.4p8@lennon-o-win32-setup.exe /USEFILE=\\SERVER\path\to\config\ntp.ini

 

5) Confirm the installation and NTP status by running C:\NTP\bin\ntpstatus.bat
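Since the whole install is driven from the command line, it also wraps neatly in a Chef recipe. Here's a hedged sketch rather than the final cookbook code; the local installer path is an assumption:

# Hypothetical sketch: silently install Meinberg NTP from Chef on Windows
execute 'install_meinberg_ntp' do
  command 'C:\installers\ntp-4.2.4p8@lennon-o-win32-setup.exe /USEFILE=C:\installers\ntp.ini'
  not_if { ::File.exist?('C:\NTP\bin\ntpstatus.bat') }
end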

Only Deploying When You Want To With Chef aka Don’t Break Prod

Continuous Chef Runs

By design Chef's client application, chef-client, runs as a service on systems in a Chef-managed environment, with a chef-client run occurring every 20 minutes. The continuous chef-client runs ensure that systems are always configured as expected and changes can easily be pushed out with quick convergence. The downside is that any action in a Chef cookbook will run every 20 minutes, including the deployment of product code if you use Chef to deploy your actual application code. This could lead to your web application being reinstalled and restarted every 20 minutes unless you're careful.

 

Preventing Accidental Continuous Deployment

To avoid continuously deploying product you can wrap any process that would cause impact to customers in a "deploy flag". This allows you to use Chef's 20 minute continuous runs to ensure your system is in the appropriate state, while not deploying code. When you wish to run through the complete process of building a working system from scratch, including pulling down new application code, you simply set an environment variable of deploy_build=true. This can be done remotely during code rollouts via Capistrano or Rundeck to allow for an orchestrated deploy of code.

 

Using the Deploy Flag

At the beginning of your recipe include code as follows to write out the state of the flag and then execute any uninstall recipes or resources.

log "Deploy build is #{ENV["deploy_build"]}"

if ENV["deploy_build"] == "true" then
  include_recipe "COOKBOOKNAME::uninstall"
end

Further on in your code, wrap anything that would actually deploy components (delete directories, copy in files, or anything else that would be customer impacting):

if ENV["deploy_build"] == "true" then
  # customer-impacting deploy resources and recipes go here
end
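As a concrete, hypothetical example, a service restart that should only happen during an orchestrated deploy gets wrapped the same way (the service name is made up):

# Only restart the application service when an orchestrated deploy is requested
if ENV["deploy_build"] == "true"
  service "my_app" do
    action :restart
  end
end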

 

End Result

Now running "chef-client" will ensure your system is in the appropriate state, while running "deploy_build=true chef-client" will run a full deploy of your application.

Minimizing JS and CSS to Speed Page Load Times

Website performance is a constant battle for any operations team.  As the proliferation of high bandwidth home and office connections continues so does the demand for near instant response times from websites by consumers.    The enormous complexity of modern websites has made this near instant response time a lofty goal, but one that can be met using a multitude of small tweaks in both web servers and web applications.

One of the easiest ways to speed up the web experience is to reduce the amount of data sent to the client while still maintaining the same client experience. In past blog posts I discussed HTTP compression and reducing excessive files, which are both excellent methods to reduce the data payload of a site, but for this post I'll be discussing JavaScript and CSS minimizing. Minimizing is the removal of non-interpreted content from the JavaScript and CSS in your site, and can easily shave at least 25% off the size of those files. With the enormous amount of CSS styling and JavaScript content in a modern website, a 25% reduction can add up to a huge boost in end user performance.

JavaScript and CSS are rather simple plain-text formats. Snippets of code are placed on lines and neatly organized so that web designers can easily review their code and make modifications as necessary. Unfortunately all that formatting isn't actually necessary for the code to run and takes up a significant amount of space. Comment blocks, spaces, and carriage returns can all be removed from your code to reduce its size. Dozens of pages of CSS or JavaScript code can be reduced to a single long line that is interpreted by your browser at the exact same speed as the nicely formatted file.

So what exactly does a snippet of minimized code look like?  Very plain is the answer:

 

Original code:

/*
Name: Your Product Name
Description: Handheld mobile device styles.
Version: 1
Author: Some guy at your company
Tags: handheld, mobile, iphone
*/

/*
----------------------------------------------------------------
H A N D H E L D
---------------------------------------------------------------- */

@media screen and (max-device-width: 480px) { html { -webkit-text-size-adjust: none; } }

Minimized Code:

@media screen and (max-device-width:480px){html{-webkit-text-size-adjust:none}}

 

Notice that all comment blocks have been removed, as have the half dozen or so spaces and line breaks that made the CSS content just a little bit easier for web designers to read. The single line CSS content is interpreted exactly the same by all major web browsers and the user will never know the difference, except that the one line content loads several times faster.

 

How Big is the Impact?

The impact can be enormous. Here are the stats for a sample corporate webpage I found that didn't minimize its code:

 

4 CSS Files:

– Default.css: 4940 bytes unminimized, 3844 bytes minimized (22% reduction)
– Style.css: 69523 bytes unminimized, 55848 bytes minimized (20% reduction)
– All-ie.css: 212 bytes unminimized, 177 bytes minimized (17% reduction)
– Profile-capture.css: 1485 bytes unminimized, 1302 bytes minimized (12% reduction)

9 JavaScript Files:

– jquery.js: 252880 bytes unminimized, 141924 bytes minimized (44% reduction)
– customeffects.js: 3230 bytes unminimized, 1800 bytes minimized (44% reduction)
– eloqua.js: 646 bytes unminimized, 555 bytes minimized (14% reduction)
– jquery.form.js: 26750 bytes unminimized, 14525 bytes minimized (46% reduction)
– jquery-lightbox_me.js: 10577 bytes unminimized, 4117 bytes minimized (61% reduction)
– jquery.eloqua.js: 3123 bytes unminimized, 2571 bytes minimized (18% reduction)
– jquery.profile_caputre.js: 2272 bytes unminimized, 1527 bytes minimized (33% reduction)
– actions.js: 1000 bytes unminimized, 687 bytes minimized (31% reduction)
– priv-tools.js: 6117 bytes unminimized, 4866 bytes minimized (20% reduction)

The Impact of Excessive Files on Web Page Load Times

The use of dozens of CSS and JS files in web products greatly increases the time that these sites take to load. The further a client is from the hosting server the longer the initial connection time is for each of these small files. When a server is located in the US and a client is located in Europe or Asia that initial connection time can become larger than the actual time the content takes to download. By reducing the number of JS and CSS files in a web site you can provide the same functionality, but reduce the time these sites take to load, without additional operational costs.

For this load time breakdown I will use a sample site, "Product XYZ". The site is a modern web property, relying heavily on multiple CSS and JS files to provide a highly stylized interface. Product XYZ utilizes a content delivery network (Akamai Web Application Accelerator, aka Akamai WAA) to locate these static CSS and JS files closer to customers.

CSS / JS files in Product XYZ: 42 files total

  • 9 CSS files
  • 33 JS files

Load Impact of a Single File in Product XYZ:
A single static JS or CSS file cached by Akamai WAA and retrieved close to the customer site has an average first byte connection time of 0.003 seconds. Without Akamai WAA that average time jumps to 0.267 seconds per file.

Total First Byte Time for Product XYZ

CSS:

  • 0.021 seconds with Akamai
  • 2.326 seconds without Akamai

JS:

  • 0.105 seconds with Akamai
  • 8.356 seconds without Akamai

Potential Improvement by Reducing File Count

If Product XYZ were optimized down to a single JS and a single CSS file, it could reduce the initial connection time within Product XYZ by 0.12 seconds (1.2%) when using Akamai WAA and 10.148 seconds (46%) without Akamai WAA. The real benefit of this reduction is the potential to eliminate the need for Akamai WAA caching entirely and thus greatly reduce operational costs. A non-cached Product XYZ with a single JS and CSS file would load in an estimated 11.852 seconds compared to approximately 10 seconds when using Akamai WAA caching. As Akamai bills based on data transfer, this could reduce costs significantly by removing WAA caching.

HTTP Compression – Why it’s awesome and how to use it in IIS

HTTP compression is one of the best bang for the buck optimizations available in Microsoft's IIS web server. HTTP compression works by compressing data before it's sent to the client. Images and executables are already heavily compressed, but much of a website's content is completely uncompressed. IIS can be set up to compress HTML, ASP, ASPX, JS, JSON, CSS, and any other uncompressed content found on your web site.

Why Use HTTP Compression?
HTTP compression first became popular in the days of incredibly slow dialup connections, where shaving a few kilobytes off a website was essential. With modern high speed Internet connections it might seem like the compression / decompression time incurred would no longer make it beneficial, but there's still a huge benefit. Where compression really shines on modern web servers is long distance connections, particularly those over high latency underwater fiber. Any site serving international customers will see a huge performance increase from compressing content prior to transmission. The latency of long haul connections greatly restricts transmission speeds due to the time it takes acknowledgement flags from the client to reach the server. These TCP ACK flags act as a restrictor for international web transmissions and will leave your server trickling out data in small bursts. The more content that can be compressed the better, as compressed data requires fewer packets and thus fewer ACK flags.

Improvements in IIS 7 HTTP Compression
HTTP compression has long been an afterthought in IIS. IIS 5 required third party add-ons to enable compression, and IIS 6 buried the necessary switches deep in the Metabase.xml file. In IIS 7 Microsoft finally provided a simple GUI method to enable both compression of static content (HTML, CSS, JS, etc.) and of dynamic content (JSON, ASP, ASPX, etc.). Additionally, Microsoft changed much of the underlying functionality of their compression module, including the method for determining what content to compress. In IIS 6 the Metabase.xml file listed all file extensions that would be considered static and dynamic content. Only content on that list would be compressed, which required the list to be updated any time new file extensions were introduced. In IIS 7, compression is based on MIME types, with text/*, message/*, and application/javascript enabled by default. These 3 entries are a catch-all for the majority of static and dynamic content, so out of the box most content will already be compressed. Another particularly welcome change is built-in CPU based throttling that automatically ceases compression when CPU usage reaches critical levels. This feature alone makes enabling HTTP compression a no-brainer for any admin, as there is no longer the worry that high CPU usage during the compression process will slow the delivery of dynamically generated content. Despite the improvements to the setup of HTTP compression in IIS 7, an effective configuration still requires either manually editing your applicationHost.config file or using the new appcmd.exe configuration utility.

Compression Setup in IIS 6
First you need to enable dynamic, static, and on demand compression. Dynamic and static are fairly self-explanatory, but on demand compression requires a bit of explanation. When on demand compression is enabled, IIS will serve content it has never compressed to the client in an uncompressed format, and then compress the content in a background thread for future use. Enabling this feature improves response time for new content by not waiting to compress the content before transmitting it.

cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/Parameters/HcDoStaticCompression TRUE
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/Parameters/HcDoOnDemandCompression TRUE
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/Parameters/HcDoDynamicCompression TRUE

Once you've enabled HTTP compression you need to enable the two compression formats supported by IIS, gzip and deflate. For each format you must specify which file extensions should be compressed as dynamic (ScriptFileExtensions) and which should be compressed as static (FileExtensions). You also need to specify compression levels, which range from 0 to 10, with 10 being the highest level of compression. Since we've enabled the on demand compression feature we can be a bit more aggressive with the compression levels, setting static to nine and dynamic to a safer level of six.

cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/deflate/HcDoStaticCompression TRUE
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/deflate/HcDoOnDemandCompression TRUE
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/deflate/HcDoDynamicCompression TRUE
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/deflate/HcFileExtensions "txt" "js" "css" "htm" "html"
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/deflate/HcScriptFileExtensions "exe" "dll" "asp" "aspx" "svc" "xml"
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/deflate/HcOnDemandCompLevel 9
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/deflate/HcDynamicCompressionLevel 6

cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/gzip/HcDoStaticCompression TRUE
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/gzip/HcDoOnDemandCompression TRUE
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/gzip/HcDoDynamicCompression TRUE
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/gzip/HcFileExtensions "txt" "js" "css" "htm" "html"
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/gzip/HcScriptFileExtensions "exe" "dll" "asp" "aspx" "svc" "xml"
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/gzip/HcOnDemandCompLevel 9
cscript.exe C:\inetpub\AdminScripts\ADSUtil.vbs Set W3SVC/Filters/Compression/gzip/HcDynamicCompressionLevel 6

Once you've enabled compression and set up formats, extensions, and levels it might seem like everything would be complete. Many articles on compression in IIS 6 actually end at this step, but unfortunately, due to a bit of poor planning on Microsoft's part, there is one additional step. In order for compression to function you must create a web service extension for compression and point it to the correct DLL. Without this nothing will work.

cscript.exe C:\Windows\system32\iisext.vbs /AddFile C:\Windows\System32\inetsrv\gzip.dll 1 "HTTP Compression" 1 "HTTP Compression"

At this point you can run an iisreset and observe your newly compressed content.

Compression Setup in IIS 7
Enabling compression in IIS 7 became both easier and, at the same time, more complex. A basic setup requires far fewer commands, and a very basic setup can be completed in the GUI alone. At the same time, though, Microsoft introduced many additional commands for changing how IIS handles the compression of content. The first step is to install the static and dynamic compression modules by running the command below. Be warned: it takes a while to run, so be patient.

start /w pkgmgr.exe /iu:IIS-HttpCompressionStatic;IIS-HttpCompressionDynamic

Similar to IIS 6, you must enable the compression modules in IIS 7. This can be done in the GUI at the server or site level, or you can run the two commands below to configure compression at the server level:

%inetsrv_location%\appcmd.exe set config /section:urlCompression /doStaticCompression:True
%inetsrv_location%\appcmd.exe set config /section:urlCompression /doDynamicCompression:True

IIS 7 by default only enables gzip compression, which is fine as almost all clients support both gzip and deflate and specify their preference for gzip over deflate in the HTTP headers of their requests. If you'd like to include support for deflate, you can do so by running the following command:

%inetsrv_location%\appcmd.exe set config /section:httpCompression /+"[name='deflate',doStaticCompression='True',doDynamicCompression='True',dll='%Windir%\system32\inetsrv\gzip.dll']" /commit:apphost

As in IIS 6, IIS 7 includes compression levels from 0 to 10. IIS 7 adds functionality that automatically disables compression when the CPU hits high utilization. This feature allows you to set compression at more aggressive levels without compromising the response time of dynamically generated content. Despite this new functionality, a compression level of nine for static content and six for dynamic content provides a balance of high compression and low CPU usage.

%inetsrv_location%\appcmd.exe set config /section:httpCompression -[name='gzip'].dynamicCompressionLevel:6
%inetsrv_location%\appcmd.exe set config /section:httpCompression -[name='gzip'].staticCompressionLevel:9
%inetsrv_location%\appcmd.exe set config /section:httpCompression -[name='deflate'].dynamicCompressionLevel:6
%inetsrv_location%\appcmd.exe set config /section:httpCompression -[name='deflate'].staticCompressionLevel:9

With compression enabled and compression levels set, you now need to specify any additional MIME types used by your application. By default IIS includes support for JavaScript and any MIME type starting with text/. You can add additional MIME types by first removing the deny-all rule at the end of the MIME type list, then adding your required MIME types and a new deny-all rule.

appcmd.exe set config -section:system.webServer/httpCompression /-"dynamicTypes.[mimeType='*/*',enabled='False']" /commit:apphost
appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json',enabled='True']" /commit:apphost
appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='*/*',enabled='False']" /commit:apphost

At this point you can run an iisreset and observe your newly compressed content.