All Posts in The Programming Mechanism

October 17, 2014

A Brief Description of the SPF record

A client recently came to us with problems receiving emailed form submissions from their website. We did a little testing and realized that indeed the emails were being sent by the server but something was stopping them during transit. In our research, we had to dig deeper into SPF records.

An SPF (Sender Policy Framework) record validates an IP address as permitted to send email on behalf of a domain. Once propagated, the DNS record serves as a list of verified senders, which the recipient of an email can check against.

The SPF record was introduced because the ubiquitous SMTP (Simple Mail Transfer Protocol) allows a computer to send email claiming to be from any domain. Spam and phishing emailers use this to send email purporting to be from a trusted source.

With an SPF record, a receiver's spam filter is more likely to accept the message, since it comes from a validated source. The record also discourages potential spammers from using your domain as a sending address: if the sender's IP does not match one associated with the record, recipient spam filters are more likely to reject the message.

Once an SPF record is created, it is important to ensure that every computer that will be sending email is included, as the presence of the record marks any undeclared sender as dubious. This was the problem we ran into: an SPF record had been added to the DNS zone, but it did not include the server hosting the website in its list of IP addresses.

Here is the shape of an example SPF TXT record (with placeholders standing in for the actual addresses): TXT “v=spf1 ip4:<web-server-IP> ip4:<mail-server-IP> a ~all”

Published by: georgebrassey in The Programming Mechanism

August 19, 2014

Could the iWatch Revolutionize Medical Research?

“Talkback Tuesdays” is an original weekly installment where a team member of The Mechanism is asked one question pertaining to digital design, inspiration, and experience. The Q&A will be featured here on The Mechanism Blog as well as on The Mechanism’s Facebook, Twitter, and Instagram, every Tuesday. Feel free to offer up your 2¢ in the comments.

George Brassey, The Mechanism’s lead developer, discusses the great potential smart watches can have in revolutionizing medical research and healthcare management. It seems like only a matter of time!

What new piece of tech are you most excited about hitting shelves?

I'm excited to see what sensors Apple will introduce with the iWatch. I'm hoping they announce a watch with an array of sensors that might revolutionize health care research. Last year there was a huge amount of media buzz around the wearable space, but nothing appeared. This year the rumor mill is turning again, and it sounds like Apple will finally announce an iWatch next month, to be released later this year or early next year. Why am I interested? Last year I didn't like the idea of the wearable. The potential uses didn't interest me. I already have a phone, tablet and laptop. I don't need yet another screen, especially considering how limited the functionality would be on such a small device. This year, however, I've been hearing about the sensors that will be included.

I'm a migraine sufferer. From time to time, without warning, I get massive blind spots in my field of vision, followed by debilitating headaches. Research on migraines has been inconclusive. The Mayo Clinic lists hormones, foods, food additives, drinks, stress, sensory stimuli, changes in wake-sleep pattern, physical factors, changes in the environment, and medications as potential causes. That's a long list with very little practical information as to how to prevent a migraine. I will be interested to see what could be learned by analyzing various health markers preceding migraines.

Depending on how Apple's new HealthKit SDK deals with privacy, the platform could standardize the sharing of medical records. Currently, there is very little access to medical data for researchers. Fears of records getting into the wrong hands mean that acquiring data for research often requires a new study, even if a similar study has been done before. This involves raising money, finding volunteers, and conducting the study, which may take months, even years. Most health information is under lock and key. The proliferation of devices that passively record a wealth of data could provide easy access for life-saving research.

August 15, 2014

How to Build an Easy Embeddable Widget

Building with iframes is a fantastic way to create seamless, easy to implement, embeddable widgets. Once set up, creating multiple instances linking to your service is easy to do. Existing within a website, an iframe is like a window onto another website. Rather than forcing open another window, a user can interact with another service within the context of the website they have navigated to. The experience is smooth for the visitor, who will find the relevant service presented inline.

Over here at The Mechanism, we have been using iframes to integrate our custom bug tracking solution with client websites during user assisted testing. We needed an inconspicuous tool which would allow clients to seamlessly review work and submit bugs as they find them.

The requirements for this front-end bug catcher were:

  1. Simple to embed and easy to implement across many projects
  2. Sandboxed, so that it doesn't cause any conflicts with the project's DOM, JS, or CSS
  3. Simple architecture that works on all browsers
  4. Responsive; it must work on all device sizes and form factors
  5. Context-aware; diagnostic information will require knowledge of the parent document (the website under bug tracking)

Our first iteration of the bug tracking widget violated the second requirement. For our first prototype, we loaded a script and pulled in our view and styling files through JSONP (a method for circumventing the Same-Origin Policy). This worked in our limited prototype but caused a few issues. First of all, we had to give our DOM elements verbose IDs and class names to ensure there would be no conflict with the parent document. Secondly, we ran the risk of causing script conflicts with our dependencies. We used a script loader to minimize this risk; however, we could never be sure. Finally, we were at the mercy of the stylesheets loaded by the parent, which required us to write additional resets to ensure a consistent appearance across projects. Repairs like this are equivalent to bailing a sinking ship rather than repairing the leak.

So for our second iteration we converted the widget into an iframe. To do so we still had to find a way around the Same-Origin Policy, which restricts communication between the two documents: the parent window and the iframe. Cross Document Messaging is a recent addition to the HTML5 specification which allows simple string communication between documents. It is supported by most modern browsers; however, many hacks are necessary for older browsers.
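Because the messages that cross this boundary are plain strings (at least in older browsers), structured data has to be serialized on one side and parsed on the other. A minimal sketch of that round trip, with an illustrative message shape of our own invention:

```javascript
// Serialize a structured message into the string that postMessage
// carries, and parse it back on the receiving side.
function encodeMessage(action, payload) {
  return JSON.stringify({ action: action, payload: payload });
}

function decodeMessage(raw) {
  return JSON.parse(raw);
}

// e.g. the iframe asking the parent to resize it:
var wire = encodeMessage('resize', { width: 320, height: 480 });
var msg = decodeMessage(wire);
// msg.action === 'resize'; msg.payload.width === 320
```

Libraries like easyXDM handle exactly this kind of plumbing (plus the fallbacks for older browsers) so we don't have to.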


easyXDM is a great library built to cross this great divide introduced by iframes. It uses the HTML5 postMessage() method when available and falls back on many hacks when necessary, ensuring the free flow of information between documents. For the developer, it exposes two protocols for data transfer. The first is a socket, which sends strings between the documents; this requires us to parse and decipher each string before acting on it. The second option is an RPC (Remote Procedure Call) interface, specifically JSON-RPC. JSON-RPC is a specification for calling functions in remote software and returning data. This allows a much more dynamic interaction between our two documents, where each keeps its own scope and the two communicate as ordinary functions would be expected to. For our needs, the simpler option, and the one we will employ, is the RPC protocol.

To implement easyXDM we must load our dependency and create an RPC instance, with the necessary proxy objects and method stubs. We will initiate this within a script loaded on the parent document. This script will embed our iframe and act as our gateway to the bug tracking widget.

On our parent document we will place an asynchronous script call to our remote script, along these lines (the src shown is illustrative):

// footer.html
<script type="text/javascript" src="" async></script>

In our script, we will start by loading our easyXDM dependency.


// main.js

var serverURL = '', // (the server URL was omitted in the original)
    iframeFile = 'iframe.html',
    depends = {
        'easyXDM': serverURL + 'js/easyXDM.min.js'
    };

Object.size = function (obj) {
    var size = 0, key;
    for (key in obj) {
        if (obj.hasOwnProperty(key)) size++;
    }
    return size;
};

var scriptCount = Object.size(depends); // count of scripts required
var scriptLoads = 0; // count of scripts loaded

for (var key in depends) {
    if (depends.hasOwnProperty(key)) {
        loadScript(key, depends[key], function () {
            scriptLoads++;
            if (scriptLoads === scriptCount) {
                // all dependencies are loaded; create the RPC instance (next section)
            }
        });
    }
}

function loadScript(dependency, src, callback) {
    // this function checks if the dependency is present.
    // it waits for load before executing the callback.
    if (window[dependency] === undefined) { // if dependency is not present
        var scriptTag = document.createElement('script');
        scriptTag.setAttribute('type', 'text/javascript');
        scriptTag.setAttribute('src', src);
        if (scriptTag.readyState) { // for old versions of IE
            scriptTag.onreadystatechange = function () {
                if (this.readyState == 'complete' || this.readyState == 'loaded') {
                    callback();
                }
            };
        } else { // other browsers
            scriptTag.onload = callback;
        }
        (document.getElementsByTagName("head")[0] || document.documentElement).appendChild(scriptTag);
    } else {
        callback(); // dependency already present
    }
}

  1. First we declare some variables that will be used later: the server URL, the iframe file name, and the list of dependencies.
  2. Next we extend the Object object with a size() method, which returns the number of entries in our depends object.
  3. We then loop through depends and call loadScript() for each entry, passing the name of the dependency, the URL it can be found at, and a callback which runs once every dependency has loaded.
  4. Finally, loadScript() tests for the presence of the dependency and loads the script if it is not found, using various methods to ensure the script is loaded before running the callback.

Next we will create our RPC instance, which will load the iframe.


// main.js




var iframeContainer = document.createElement('div');
// layout for the container: fixed to a corner of the viewport, above page
// content (the style property names here are reconstructed) = 'fixed'; = 999; = 0; = 0; = 'auto'; = 'auto';['max-height'] = '100%';['max-width'] = '100%';


var rpc = new easyXDM.Rpc({
    remote: serverURL + iframeFile,
    container: iframeContainer,
    props: {
        id: 'mech-bug-iframe',
        frameborder: '0',
        scrolling: 'no',
        marginwidth: '0',
        marginheight: '0',
        allowTransparency: 'true',
        style: {
            height: '100%',
            width: '100%',
            display: 'block'
        }
    }
}, {
    local: {
        resizeiFrame: function (widthReq, heightReq, allowScroll) {
            var windowWidth = window.innerWidth || document.documentElement.clientWidth || document.body.clientWidth,
                windowHeight = window.innerHeight || document.documentElement.clientHeight || document.body.clientHeight;
            var width = (widthReq < windowWidth) ? widthReq : windowWidth;
            var height = (heightReq < windowHeight) ? heightReq : windowHeight;
   = width + 'px';
   = height + 'px';
            var sc = (allowScroll) ? 'yes' : 'no';
            document.getElementById('mech-bug-iframe').scrolling = sc;
            return {
                x: width,
                y: height
            };
        },
        parentInfo: function () {
            return {
                width: window.innerWidth || document.documentElement.clientWidth || document.body.clientWidth,
                height: window.innerHeight || document.documentElement.clientHeight || document.body.clientHeight,
                url: window.location.href
            };
        }
    }
});


  1. First we create the iframe's container, with properties for its layout within the parent document's DOM
  2. Then our RPC instance, configured with the address of the iframe contents, the container to place the iframe in, and some properties to control its appearance
  3. Finally we declare the methods we expose to our iframe. resizeiFrame() and parentInfo() allow us to adjust the size of the iframe and return diagnostic information, respectively. They will be called from within our iframe
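The size negotiation inside resizeiFrame() boils down to a clamping rule: the requested dimension wins unless it exceeds the window. Pulled out as a pure function (the name is ours, for illustration), it looks like this:

```javascript
// Clamp a requested iframe dimension to the visible window size, so the
// widget never grows past the viewport.
function clampToWindow(requested, windowSize) {
  return (requested < windowSize) ? requested : windowSize;
}

// A widget asking for 900px of height on a 768px-tall viewport gets 768:
clampToWindow(900, 768); // 768
clampToWindow(300, 768); // 300
```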

In our iframe's markup we will load easyXDM and a shiv for older browsers without support for JSON, plus another .js file where we will instantiate our RPC connection, along these lines (file names illustrative):

<!-- iframe.html -->
<script type="text/javascript" src="js/easyXDM.min.js"></script>
<script type="text/javascript" src="js/json2.js"></script>
<script type="text/javascript" src="js/iframe-main.js"></script>
In our iframe-main.js file, we will create another instance of easyXDM.Rpc and create stubs for our remote methods


// iframe-main.js

var rpc = new easyXDM.Rpc({},
{
    remote: {
        resizeiFrame: {},
        parentInfo: {}
    }
});

rpc.parentInfo(function (parentInfo) {
    var diagObject = {
        'width': parentInfo.width,
        'height': parentInfo.height,
        'url': parentInfo.url
    };
});


  1. First we create our rpc object with the relevant stubs referring to the remote methods
  2. Then an example of how we call a remote method. Notice the anonymous function we pass to it, which receives the requested data: the call is asynchronous

Stay tuned for more on the Venus project to find out where it goes next. Dhruv Mehrotra will be back in a few weeks with a blog post going over some of the steps taken to set up the Ruby on Rails server behind Venus. And we will have a meetup at our offices the second week of September. Hope to see you there!

Published by: georgebrassey in The Programming Mechanism

June 20, 2014

Building The BugTrap JavaScript Widget

Over here at The Mechanism's headquarters, Team Mechanism has been busy working on a better way to track bugs, code named: project:Venus. We want to make it easier for our clients to report bugs while reviewing projects and improve our workflow by allowing internal communication on a bug by bug basis.

The idea came to us while working on another project. We realized we had the technology to build a swift prototype by leveraging tools that were already part of our arsenal. More on this in later posts. For now, we will focus on the front-end JavaScript widget, "The Bug Trapper" if you will.

As an agency, we often have many projects in process, so our bug tracker needs to be easy to implement across multiple projects and domains. We wanted to use a simple script tag that would be added to projects during test phases, with the project ID included in the script's 'src' GET parameters.

This posed a couple problems:

  1. Scripts are not aware of the GET parameters in the request that loaded them
  2. For security reasons, AJAX requests cannot load resources from other domains (only script tags can)


1. Javascript and Parameters Passed through GET Requests

We came across this tip for pulling the parameters from the script request. It actually has little to do with a GET request, as the parameters are parsed by the script on the client side. Given the name of the script, it searches the DOM for itself (JavaScript has no awareness of how it has been called or where it exists in the DOM), and then we do some string-splitting magic to construct an object of key/value pairs. We have abstracted the code slightly from the original. Below is our script:


// Extract "GET" parameters from a JS include querystring

function getScriptTag(script_name) {
    // Find all script tags
    var scripts = document.getElementsByTagName("script");
    // Look through them trying to find ourselves
    for (var i = 0; i < scripts.length; i++) {
        if (scripts[i].src.indexOf("/" + script_name) > -1) {
            return scripts[i];
        }
    }
    // No scripts match
    return {};
}

function getParams(script_tag) {
    // Get an array of key=value strings of params
    var pa = script_tag.src.split("?").pop().split("&");
    // Split each key=value into an array, then construct a JS object
    var p = {};
    for (var j = 0; j < pa.length; j++) {
        var kv = pa[j].split("=");
        p[kv[0]] = kv[1];
    }
    return p;
}




2. Cross Domain AJAX Requests

For security purposes, AJAX requests cannot fetch content from other domains; scripts are the only resources that can be loaded cross-domain. This is a problem for our widget, which we'd like to build modularly, separating the script logic, markup, and styling into separate files while maintaining the simplicity of including a single script when creating new instances.

There is a workaround, and it involves turning the response from our server into JSONP. JSONP is JSON with Padding: essentially, our server response is turned into a JSON object and then wrapped in a function call, which gets executed on the client side and hands us an object containing our data.
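The padding itself is simple to picture. The callback name and CSS below are made up, but the shape is exactly what the client expects back from the server:

```javascript
// Build a JSONP response body: a JSON object wrapped in a call to the
// client-supplied callback function.
function toJsonp(callbackName, obj) {
  return callbackName + "(" + JSON.stringify(obj) + ")";
}

var body = toJsonp("jQuery1234_5678", { css: "body{color:#333}" });
// body === 'jQuery1234_5678({"css":"body{color:#333}"})'
// The browser executes this response as a script, invoking
// jQuery1234_5678 with our data as its argument.
```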

Thank the heavens for jQuery. jQuery's AJAX/getJSON method has baked-in support for JSONP: it expects the callback function name to be a random string (an added layer of security) and will process the data, provided it gets the correct response from the server. On the client side, all we need to do is indicate that we expect the response to contain a callback function by adding "?callback=?" to our URL.


var stylesheetURL = ""; // (the URL was omitted in the original)
var requestURL = stylesheetURL + "?callback=?"; // "?callback=?" tells jQuery to expect JSONP

$.getJSON(requestURL, function (data) {
    // data.css now holds the stylesheet returned by the server
});

On our Rails server we route this request by wrapping the intended response in a function, the name of which jQuery passes as a parameter with the request. Below is the Ruby on Rails controller code that does this on our server:


def getmystylesheet
  css ="path/to/stylesheet.css").to_s
  json = { "css" => css }.to_json
  callback = params[:callback]
  jsonp = callback + "(" + json + ")"
  render :text => jsonp, :content_type => "text/javascript"
end

We're beginning a test cycle with the bug tracker on some internal projects and we will be rolling this out on upcoming client work. Hopefully, with feedback from our clients, we will continue this project with a view to scaling it. Stay tuned for future updates!

Published by: georgebrassey in The Programming Mechanism

May 30, 2014

Configuring Sunspot Solr Search Controller

Search is the compass of the internet. It guides us to the content that we are really looking for and helps us avoid the stuff we don't really care about. Or at least that's how it is supposed to work. It turns out that beyond the complexity of installing and configuring a search server, it can also be difficult to account for the various use cases of your search tool. Let's take a quick look at how The Mechanism's engineers tackled this challenge when building a restaurant search application for SafeFARE.

The good folks behind SafeFARE enlisted our services to build a restaurant search application that allows users to find allergy-aware restaurants based on any combination of 9 criteria. Using the Ruby on Rails framework and Sunspot Solr (a Ruby DSL for the Lucene-based Apache Solr search server), we built this search app and learned a few things along the way.

If a user searches for restaurants in a ZIP code, should we only return restaurants within that ZIP code, or should we include restaurants from nearby ZIP codes in our search results? And if we include other ZIP codes, how many? How should we order the results? These and other similar questions helped us come up with the structure of our search controller.

Figure 1.1

if params[:search].present?
  @search = Restaurant.solr_search do
    fulltext params[:restaurant_name] # runs a full text search of the indexed restaurant names
    with(:approved, :true) # facets approved restaurants
    if params[:cuisine_search].present? # user also entered a cuisine preference
      any_of do
        params[:cuisine_search].each do |tag|
          with(:cuisines_name, tag) # facet by matching cuisines
        end
      end
    end
    if params[:address].present? || params[:city_search].present? || params[:state_search].present? || params[:zip_search].present?
      # if any location fields are present, geocode that location and
      # facet based on the user-given location
      with(:location).in_radius(*Geocoder.coordinates(whereat), howfar)
    end
  end
  @restaurants = @search.results
end

It took us about a week, but we were finally able to come up with enough if statements to cover every one of the 362,880 possible combinations of search queries. Figure 1.1 is a small sampling of how we implement search when a user types in a restaurant name, cuisine preference, and restaurant location. First we search the Solr index for whatever the user enters in the restaurant_name field, then cut that list down to only the approved restaurants. Next we check whether the user also entered a cuisine preference; if so, we facet our list down to restaurants matching that cuisine, and if not, we skip that step. Finally, we check whether the user entered a location they would like to search, such as a city or state, and facet our list down to only restaurants in that area. Using this strategy we create a sort of Venn diagram that lets us drill down to only the information we want, and we point the result at the @restaurants variable. To increase the functionality of the site, The Mechanism's engineers implemented an IP lookup to automatically detect the IP address and location of the user and order search results by how close each restaurant is to the user.
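The drill-down can be pictured as successive filters over the result set. This sketch is plain JavaScript over made-up data, not the Sunspot implementation, but the narrowing logic is the same:

```javascript
// Start from the full-text matches, then narrow by each facet in turn.
var matches = [
  { name: "Trattoria Roma", approved: true,  cuisine: "Italian" },
  { name: "Cafe Lyon",      approved: false, cuisine: "French"  },
  { name: "Piccola Cucina", approved: true,  cuisine: "Italian" },
  { name: "Thai Garden",    approved: true,  cuisine: "Thai"    }
];

var results = matches
  .filter(function (r) { return r.approved; })               // approved only
  .filter(function (r) { return r.cuisine === "Italian"; }); // cuisine facet

// results now holds Trattoria Roma and Piccola Cucina
```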

A second major challenge that many developers face when using a search server is deployment. To use Solr in a production environment, you will need a Java app servlet like Tomcat or Jetty, and you will need an instance of Apache Solr. Developers may consider installing standalone versions of Tomcat and Solr depending on their hardware capabilities, but Sunspot comes bundled with a Jetty server, which can be used in production by running the command: RAILS_ENV=production rake sunspot:solr:start

And voila! We have implemented an advanced search tool that will help users find allergy-aware restaurants all across the nation and may even save somebody's life one day.

Published by: Sharon Terry in The Programming Mechanism

May 16, 2014

What We Talk About When We Talk About Testing

Until recently I considered writing tests for my applications much like reading Dickens in high school: boring, repetitive, hard to understand, and yet for some reason a total necessity. What's more, I wrote tests about as frequently as I read Dickens, and I've never read any Dickens. My meandering point here is that Test Driven Development (TDD) seems to be the standard of the Rails community, yet I don't know a single person who actually does it. With that in mind, I decided to develop our last client's application with tests written to the best of my ability. At first I thought I was burying my productivity in the minutiae of each test, but I ended up learning quite a few things, including the value of testing.


I guess the first thing to do here is to describe my tools. To test with Rails I used RSpec, Factory Girl, and Capybara.

RSpec is a testing tool for Ruby.  Baked into the gem is a rich command line program with detailed error reporting. The beauty of Ruby, and RSpec, is that it enables you to write human readable tests that tell a story. For instance:

it 'has a list of employees' do
  employee ='John', 'Smith')
  company =[employee])
  expect(company.employees).to include(employee)
end

Though testing with RSpec ensures that our models behave the way we expect, there is a serious problem with this approach: it takes forever. Testing any sort of interesting behavior involves not only extensive setup of your environment, but a tremendous amount of code to create various instances of your model. This is where Factory Girl comes in. Factory Girl allows a tester to create factories that produce multiple records for a model with generic attributes that can be overridden as needed. This means that creating a unique record is as simple as:

FactoryGirl.create(:employee, name: "Jon Snow")

My last tool in my little testing toolkit is Capybara. Where RSpec is a way to test your models, Capybara allows you to test your application's external behavior. In other words, Capybara provides a simple way to test user stories and general behavior. Here's an example:

scenario 'When I sign in' do
  visit user_sign_up_path
  fill_in 'Login', :with => ''
  fill_in 'Password', :with => 'whereWereTheSpiders?'
  click_link 'Sign in'
end


So What's The Point

When I started, I assumed that testing would help to prevent bugs. While this is likely the case, I found that using TDD did something far more valuable. By letting tests drive the code, developers are forced to conceptualize their application before a single semicolon is ever written. This forces developers to step through their own logic, which in my case is full of inconsistencies and little things I had not thought through fully...likely because I have never read Dickens.

Published by: dhruvmehrotra in The Programming Mechanism

May 8, 2014

An Event Apart • Boston • 2014

Here are some thoughts on talks at the recent An Event Apart, in Boston.

Understanding Web Design - Jeffrey Zeldman

  • Web design is held to the expectations of other media, often ignoring the intrinsic strengths of the web
  • Like typography, web design's primary focus is communicating content
  • Technology is often a hangup for people, when the user and their needs should be the primary focus of designers. "Design for people, not browsers!"
  • Design is about detail
  • A great website will subtly guide the user to their desired destination

Designing Using Data - Sarah Parmenter

  • Design is no longer a differentiator. Making things look nice is common. The differentiator today is designing with purpose — answer the question 'why?'
  • When the right metrics are studied, data offers objective and actionable feedback
  • Data should allow a team to unite behind an objective goal — such as: Increase clicks etc.
  • Customer facing advertising is most effective when honest and transparent
  • Iterative design allows you to be flexible and try new things

Responsive Design is Still Hard/Easy! Be Afraid/Don't Worry! - Dan Mall

  • Frameworks rather than processes, mean you define a set of constraints within which a project exists, and within this you find out what you can do that's unexpected
  • Be active within your framework and volunteer/get involved with stages of production outside of your discipline
  • Each member of a team will have divergent perspectives at the start of each project cycle, they should become convergent by the end. These are focal points
  • Rinse and repeat the cycle, getting smaller each time to increase team involvement
  • Extensive preparation should make the assembly part of the process the shortest

Screen Time - Luke Wroblewski

  • Mobile is the dominant means of browsing the web worldwide
  • Responsive design includes additional considerations than just screen size (multiple input types, variable ambient lighting etc)
  • Screen size is a poor proxy for many of these considerations (screen size does not reveal input type)
  • A user's posture or distance from the device will also affect its design, independent of screen size or number of pixels
  • Design for human proportions, not pixels.

Content/Communication - Kristina Halvorson

5 key points for working with a client:

  • Principles: these are internal motivators based on our better intentions. They can unify a team
  • Strategy: pinpoint your goals and provide helpful constraints with which to execute
  • Process: the process is not God, it should change and grow as needs change. Regular post mortems are encouraged
  • Roles: RACI key for each agent on the client end. Responsible. Accountable. Consulted. Informed
  • Perceptions: Translate to facilitate communication between different disciplines

UX Strategy Means Business - Jared Spool

  • Design is the rendering of intent. Both user and provider
  • Content delivery is as important as the content itself and vice versa. Great UX cannot exist without great content
  • Advertising is unhelpful for all parties involved
  • Strategic priorities in business can inform design considerations (increase revenue, reduce cost etc)
  • There are a variety of models for monetizing the web

The Long Web - Jeremy Keith

  • HTML allows for fantastic accessibility, deprecation and backward compatibility
  • New HTML specifications can be adopted early as they will be skipped over when unsupported
  • Progressive enhancement means you start with the lowest common denominator and then enhance as much as you like
  • Progressive enhancement protects the experience from unaccountable errors such as unrelated javascript errors
  • Text formats will last longer than binaries. Binaries are forever changing and becoming outdated

Responsive Design Performance Budget - Paul Irish

  • Mobile users expect their content to load faster than the desktop
  • Web browsing is latency-limited. The nature of requesting many small files means that a user's experience is improved by reducing the number of requests
  • UX can be greatly enhanced by prioritizing critical data and rendering early on
  • Separate the critical CSS from non-critical. Load non-critical at the end of the page. Aim for main content to load in 1 sec (< 14kb)
  • The number of higher latency users is increasing

The Chroma Zone: Engineering Color on the Web - Lea Verou

  • Colors in web browsers have many nuances and limitations
  • Hex and RGB are poor representations for human reading
  • HSL and HSLa are better although they are not perceptually uniform (we perceive 50% yellow as much lighter than 50% blue)
  • New color properties in CSS level 4 will make color coding more human readable (HWB = Hue Whiteness Blackness)
  • There is room for much more improvement in web colors

Mind the Gap: Designing in the Space Between Devices - Josh Clark

  • Designing for the space between screens. Not content but tasks. Verbs not nouns
  • The technology is available today, we just haven't imagined the possibilities yet
  • Interfacing with machine is likely not going to change much (touch and mouse are great interfaces)
  • Physical things are beginning to have digital representations (avatars)
  • How about affecting how we interface with the physical world, and communicating that to our devices?
  • Software makes hardware scale. The possibilities are endless

Web+: Can the Web Win the War Against Native Without Losing its Soul? - Bruce Lawson

  • Web technology has inherent strengths, despite the popularity of native apps
  • Web tech should not try to replicate — though it can learn from native. Build to the strengths of web
  • Progressive enhancement and interoperability make web accessible and global. Always accessible by everyone
  • Widgets failed as they were a poor imitation of native apps. They existed as a snapshot without the ability to update
  • W3C is built for accessibility and interoperability. This means that it is designed for low level functions. Can be complicated but powerful

How to Champion Ideas Back at Work - Scott Berkun

  • Great things are achieved in difficult circumstances
  • Success and acclaim only arrive once a project is complete
  • Charm and convincing people of your ideas is important!
  • A network increases your potential. Reach out and get advice to harness that potential
  • To enact change, start small with something you can excel at and expand from there

April 18, 2014

Lightweight Drag and Drop for iOS with CSS3 Translate

This post explores issues we experienced on a recent project involving jQuery UI's draggables, and how we solved them using CSS3 translate and JavaScript touch events.

In the midst of full production, we discovered an issue with iPad handling the combination of jQuery UI “draggables” and high quality images (for using jQuery UI with iPad touch events).

As production had already begun, we needed a shim that would work alongside what was already built and replace the jQuery UI functionality on iOS devices.

We performed tests, sectioning off the drag and drops from the rest of the project, and realized that even one large background image severely affected the performance of the dragging animation on iOS devices. Scale that up to a production-size eLearning platform and we suffered serious memory leaks, causing Mobile Safari to crash instantly.

We googled far and wide but could find no solution. (HTML5 drag and drop would not fit the bill as it would require rebuilding everything we had done so far.)

And so we resolved to build a jQuery plugin and were pleasantly surprised to discover this undertaking was much simpler than first anticipated. Not only that, but our solution meant that, aside from changing the script which controlled these activities, we did not have to change any of the markup already written for dozens of pages.


This blog post was a great jumping-off point; it had done much of the hard work for us, showing us how to tie a touch event to a moving element. Despite being a great resource, the script animates with the “top” and “left” properties. While these work on all platforms, they use a lot of CPU power, too much for the poor iPad. And so we updated the code to use CSS3 translate. The change was night and day: iOS WebKit hardware-accelerates CSS translates on the GPU, and the performance improvement was significant.
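To make the difference concrete, here is a minimal sketch of the translate-based approach. The function name `dragTransform` and the event wiring shown in the comments are my own illustration of the technique, not the plugin’s actual API:

```javascript
// Instead of animating `top`/`left`, track the offset between the touch
// start point and the current touch, and express it as a CSS3 translate,
// which iOS Safari hardware-accelerates on the GPU.

// Compute the transform string for a drag in progress.
function dragTransform(startX, startY, currentX, currentY) {
  var dx = currentX - startX;
  var dy = currentY - startY;
  // translate3d forces GPU compositing on iOS WebKit
  return 'translate3d(' + dx + 'px, ' + dy + 'px, 0)';
}

// Wiring it to touch events would look roughly like:
//   el.addEventListener('touchmove', function (e) {
//     var t = e.touches[0];
//     el.style.webkitTransform =
//       dragTransform(startX, startY, t.pageX, t.pageY);
//     e.preventDefault(); // stop the page from scrolling mid-drag
//   });
```

Because only a compositor-friendly transform changes per frame, the browser never has to re-run layout the way it does when `top`/`left` are animated.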

Next we needed to add functionality to drop a “draggable” inside a “droppable”. This was done in two steps. First we added an initialization for “droppable” elements which calculates the coordinates of the “droppable” and stores these values in the element’s data attribute. Next we added an event handler for dropping an element, which checks whether the last touch event occurred inside the bounds of a “droppable”. If so, we translate the “draggable” to sit on top of the “droppable”.
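The two steps above can be sketched roughly like this (the helper names `cacheBounds` and `hitTest` are hypothetical stand-ins, not the plugin’s real functions):

```javascript
// Step 1: at init, measure each droppable once and cache its bounds.
// In the plugin the numbers would come from el.getBoundingClientRect()
// and be stored in the element's data attribute.
function cacheBounds(left, top, width, height) {
  return { left: left, top: top, right: left + width, bottom: top + height };
}

// Step 2: on touchend, test the last touch point against the cached
// bounds; a hit means the draggable should snap onto this droppable.
function hitTest(point, bounds) {
  return point.x >= bounds.left && point.x <= bounds.right &&
         point.y >= bounds.top && point.y <= bounds.bottom;
}
```

Caching the bounds at initialization avoids calling layout-triggering measurement APIs on every touch event, which matters on a device as easily overwhelmed as the iPad was here.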

Along the way we added certain functionality specific to our project such as populating an object named “dragInput”, which contains the placement of any dragged items and can then be compared against another object which holds the correct matches for a quiz style drag and drop activity.


Since integrating this into our project, I have tried to extend the plugin by adding mouse event listeners. There are limitations, such as dropping an element when the mouse escapes the bounds, despite the ‘mousedown’ event still being active. I have seen this behavior elsewhere. jQuery UI must use event listeners on the window to make up for this deficiency. Although I bemoan jQuery UI’s use of pre-CSS3 techniques, having tried to replicate the functionality with mouse events, I appreciate the depth of their project. The touch events were comparatively robust and behaved as expected.
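The window-listener pattern described above can be illustrated with a small state machine (my own sketch, assuming nothing about jQuery UI’s internals):

```javascript
// A drag survives the cursor leaving the element's bounds only if the
// move/up handlers live on window, not on the element itself; handlers
// bound to the element stop firing once the cursor outruns it.
function makeDragState() {
  var state = { dragging: false, x: 0, y: 0 };
  return {
    down: function (x, y) { state.dragging = true; state.x = x; state.y = y; },
    move: function (x, y) { if (state.dragging) { state.x = x; state.y = y; } },
    up: function () { state.dragging = false; },
    position: function () { return { x: state.x, y: state.y }; }
  };
}

// Wiring (sketch): the element only starts the drag; window finishes it.
//   el.addEventListener('mousedown', function (e) { drag.down(e.pageX, e.pageY); });
//   window.addEventListener('mousemove', function (e) { drag.move(e.pageX, e.pageY); });
//   window.addEventListener('mouseup', function () { drag.up(); });
```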

CSS transforms are very powerful and, although confusing at first glance, they give web developers exciting possibilities for creating native-like experiences within browsers. By using transforms, our drag activities went from crashing the iPad to outperforming jQuery UI draggables on a desktop.

I hope this post proves helpful. I will continue to develop the plugin as time and persistence permit.

Check out the GitHub repo here.

Published by: georgebrassey in The Programming Mechanism

December 3, 2012 - Comments Off on Drupal 7 Forms API: The One About the Date Fields

Drupal 7 Forms API: The One About the Date Fields

Drupal, as a content management system, allows you to create content types with date fields, which means you can create events and calendars. There are many great modules which allow you to display content with date fields in calendars, forms, etc. But before you can fly, you must first learn to code... I think that's how the saying goes. Entering one, or even a couple of, events into your site is not too difficult or cumbersome. The default node add forms are utilitarian if nothing else. I have even been known to gussy the forms up with some fieldsets, CSS and some occasional jQuery whiz-bang-iness. The challenge comes when you have to enter months' worth of events in one sitting. After entering and saving a new event, the system will default to the current day (or whatever default relative date you set in the widget) the next time you add an event. This means each time you add an event you have to set the month/day/year, and when this spans the next calendar year, or even several months, your finger may fall off from all the clicking needed.

The solution? Set the default date in the date widget programmatically, based on the last saved event date. This means if you are entering a lot of events in sequential order, you will have less clicking to do to set the date. It also means if you save an event several months in the future and later add a new event you will have to do the clicky-dance, but for adding many events at once, you will have saved yourself some time.

The way to change the default date in the form is to use the Drupal Forms API (FAPI). You will need to create a module, or add this code to an existing custom module. The Forms API, in simple terms, allows you to talk to Drupal and manipulate forms through code. Instead of hacking away at a flat HTML file, editing <input> elements and trying to get form values to save in the right places, you can systematically address each form element and get/set values, change CSS attributes, add JS/jQuery, and manipulate field settings/defaults. If you've ever installed Captcha, LoginToboggan or any other Drupal form-altering module, that's exactly what they are doing. By 'hooking' into the Form API, they can perform all sorts of manipulation without rewriting or replacing the form's core; they simply alter it.

This function finds the currently displayed form and alters it if and only if it is the event form we want to alter. It then retrieves the date of the last event you saved and sets that as the default value. If no date was last saved, the form will default to the current date.


function mymodule_form_alter(&$form, &$form_state, $form_id) {
  switch ($form_id) {
    case 'event_node_form':
      // Set the default start date for Events to the last saved event date.
      if ($form['#action'] == '/node/add/event') {
        $date = variable_get('event_last_date', '');
        if (!empty($date)) {
          $form['field_event_date']['und'][0]['#default_value']['value'] = $date['start'];
          if (isset($date['end'])) {
            $form['field_event_date']['und'][0]['#default_value']['value2'] = $date['end'];
          }
        }
      }
      break;
  }
}

This next function reads the values of the event being saved and stores the date(s) as an array to the system table.


function mymodule_node_presave($node) {
  switch ($node->type) {
    case 'event':
      $date['start'] = $node->field_event_date['und'][0]['value'];
      if (isset($node->field_event_date['und'][0]['value2'])) {
        $date['end'] = $node->field_event_date['und'][0]['value2'];
      }
      variable_set('event_last_date', $date);
      break;
  }
}


You will notice that there are two date values being saved here, value and value2. If you have set the date field to allow an end date, it is called value2 and is also saved in this code. If you do not set it, or do not have it visible on the form, it will be ignored. The second function implements hook_node_presave(), which, like the Forms API, is an entry point into the Node API and allows you to interact with the node object via code. When any node is being processed for insertion/updating, Drupal will call this function, and if the node matches your 'event' type it will save the date field(s) to the system table for later retrieval by the first function. Place these two functions in a module and test it out. You will need to make sure your code matches the content type name and CCK date field names of your site.

Modifications such as these can greatly improve the user experience of a site. When you are creating sites to be turned over to others for the content management, this attention to detail and usability can make their lives much easier.


Published by: chazcheadle in The Programming Mechanism

November 28, 2012 - Comments Off on Drupal 7: Create Previous | Next links for nodes using CCK Date fields

Drupal 7: Create Previous | Next links for nodes using CCK Date fields

Recently we needed to implement a Next | Previous link feature for two content types on a site. For providing these links on a simple content type like a blog, the Flippy module may fit the bill. Flippy creates a themeable pager that gets its data from the node's 'created' date field. Blogs generally will benefit from this method of sorting and navigation, but what if your content type has a different CCK date field that you want to use for the links? The following code will take your CCK date field and compare it with the same CCK date field of the current node:
// Find the nid of the node with the timestamp just prior to $date
$prev_nid = db_query("SELECT n.nid FROM {node} n LEFT JOIN {field_data_field_event_date} f ON n.nid = f.entity_id WHERE n.type = 'event' AND UNIX_TIMESTAMP(f.field_event_date_value) < :posted ORDER BY field_event_date_value DESC LIMIT 1", array(':posted' => $date))->fetchField();
// Find the nid of the node with the timestamp just after $date
$next_nid = db_query("SELECT n.nid FROM {node} n LEFT JOIN {field_data_field_event_date} f ON n.nid = f.entity_id WHERE n.type = 'event' AND UNIX_TIMESTAMP(f.field_event_date_value) > :posted ORDER BY field_event_date_value ASC LIMIT 1", array(':posted' => $date))->fetchField();

if ($prev_nid > 0) {
  $prev_link = l('Previous', "node/$prev_nid", array('html' => TRUE, 'attributes' => array('title' => 'See Previous', 'class' => array('prev-link'))));
  print($prev_link);
}
if ($next_nid > 0) {
  $next_link = l('Next', "node/$next_nid", array('html' => TRUE, 'attributes' => array('title' => 'See Next', 'class' => array('next-link'))));
  print(" | " . $next_link);
}

The two db_query() calls in the code query the database for nodes based on their CCK date field, which is converted from a MySQL date (MM/DD/YY HH:MM:SS) to the Unix epoch timestamp format for comparison. If there is a resulting nid, it is stored for output by Drupal's l() function. The field we have is for an event and its name is 'field_event_date_value', as seen in the SQL. You will have to dig into your content type to determine the exact name of the field you will use. Additionally, you can see the syntax for the l() function for adding extra attributes to the generated link: the third argument takes a series of nested arrays that hold the 'title', 'class', 'id', etc.

This code could be extended to generate a fuller pager with First, Last, Skip 5, or even Random links.


Published by: chazcheadle in The Programming Mechanism