Our team has built one of the largest search engines in the world.
We created the development blog to share our experiences with the community.

Mocavo’s Growth Pains: Learning to Let Go, and Finding the Talent We Need

10 Nov 2015

For a long time now, I’ve been working hard behind the scenes here at Mocavo. I’ve been tasked with a wide variety of work over the years: sending newsletters, improving the user experience design, building out tools for ingesting both books and structured datasets, and adding features to our search engine that our competitors quickly copied, all while thinking about how we can enable the Mocavo community to best inherit genealogical information about their forebears. I, like my colleagues, have always kept the ambitious goal of our company in mind. After we were acquired by FindMyPast in the summer of 2014, we took a snapshot of the Boulder office that I’d love to share with everyone. (I’d love to share a photo of the Provo team as well, but unfortunately couldn’t find one from the same time frame!)


I consider everyone in this photo the closest thing to family. Thank you all for helping build Mocavo over the years. A lot of hard work, long nights, and love has gone into building this platform. We’ve made mistakes along the way, and there have been shortcomings in the product that we didn’t prioritize. These choices generally boiled down to one of two things: lack of usage by the Mocavo community, or, unfortunately, lack of resources, both monetary and personnel. For those of you who don’t check our about page often: over the past year and a half we have lost a lot of personnel. Unfortunately, of the people in this photo, only the product engineering team remains with the company: myself, Zack Thoutt (then intern, now data analyst), and Jeff Stephens.

Jeff, like myself, started out at Mocavo as a summer intern. Unlike Jeff, I had the luxury of being the only intern in the company, and the benefit of being able to absorb more of my co-workers’ time while learning lots of amazing and challenging tasks. Jeff was one of eight interns during the summer of 2013 at our office in Boulder, Colorado. (There were some days when there were more interns than employees in the office!) It was a blast, and everyone we brought on that summer did great work, but Jeff was the one who most excelled.

He demonstrated what we seek here above all else: a genuine interest in the mission of Mocavo. Although you will see his work across the site, it was the independent projects we created during Dev Night where he shined and earned his wings. Unfortunately, Jeff was still enrolled as a student at Washington University in St. Louis during his internship, and had to return to St. Louis the following fall, where he stayed until the spring of 2014. After he graduated, he moved back to Boulder and became full time, a week before our company’s acquisition by FindMyPast. Since then, our team has done some amazing things. Being acquired comes with trade-offs and many new challenges, though. Although we were granted access to a large swath of new databases that we’re still working hard to get online today, I’m sad to admit that we lost momentum for a brief period, and the team we have now is hard at work making sure that is never the case again.

Unfortunately, next week we lose my long-time friend and cohort, Jeff Stephens. His departure will be especially impactful because it leaves me as the only pre-acquisition core engineering hire remaining with the company. Although I don’t believe we’ll ever truly replace him or any of the old guard who have moved on with their careers, we are searching for people who can bring their own skills to the table and help us excel as best we can. If you or a software engineer you know is looking for work, please refer to the job postings on our about page. We have brought on board a large number of amazing co-workers, both in our Boulder and Provo, Utah offices, to fill the gaps, but we still can’t push out our exciting advancements in this industry in a timely manner because we’re fighting a lack of personnel.

I would not be the developer I am today without the strength and passion of our team, the same strength and passion I want to pass on to future employees.

Thank you,

– Andrew Purkett,
Senior Engineer at Mocavo.com

Faster is Better: Turbocharging our SEO

02 Oct 2015

At Mocavo we take a lot of pride in our technology. Whether it’s a core product like Mocavo’s custom-built genealogy search engine, or experimental tools like our handwriting recognition technology, we have some incredibly talented engineers creating technological breakthroughs to bring family history resources online.

One particular challenge of searching over 275,000 datasets for billions of names is ensuring that results are lightning-quick. A fast site is good for many reasons: most importantly, our users can quickly make new discoveries, and search engines like Google can expose our free content to new people. In fact, over the past few years Google has incorporated site speed as a critical factor in its ranking algorithms; through meticulous testing, it found that faster sites mean happier browsers. And it rewards swift sites accordingly.

In the summer of 2013 our engineering team set out to re-architect the way we serve up our site, to make it lightning fast for users as well as search engines. This process of optimizing our site for search engines is known as SEO (Search Engine Optimization). I’d like to share the progress.

Size Matters

First, let’s put some context around the size of our collection: my colleague Derrick provided an excellent overview of how we manage our datacenter, which includes over a petabyte of storage. A petabyte is 10^15 bytes (written out in full, a 16-digit number). That’s over 1,000 times the storage capacity of the average new PC sold today. For context, a petabyte of music would play continuously for over 2,000 years; a petabyte of movies would fill over 223,000 DVDs.*

That’s a huge amount of data!

So to create a lightning fast site, we had to figure out a novel way to distribute this massive (and growing) dataset across the servers in our datacenter and retrieve results very quickly with our genealogy search engine.

Changing Search Retrieval Time

Last summer, before we started this project, our average end-user load times were more than 4 seconds. Roughly a quarter of that load time was spent retrieving search results.

After several weeks of testing different methodologies, our engineering team created a system of custom caching servers that can store and index all of our content and serve it very quickly. That means our site stores a ‘copy’ of every record on our servers and can retrieve this content without much processing overhead. This new caching mechanism allows us to retrieve some records from our search engine in under 20 milliseconds, and most book pages in under 75 milliseconds.
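The caching system itself is custom-built, but the core idea is simple: keep a pre-rendered copy of each record keyed by its ID, so a repeat lookup skips the search pipeline entirely. Here is a minimal sketch of that idea in Python; the class and method names are illustrative, not our actual code:

```python
from collections import OrderedDict

class RecordCache:
    """A minimal LRU record cache: keeps a rendered copy of each record
    so repeat lookups skip the search pipeline entirely."""

    def __init__(self, capacity=1_000_000):
        self.capacity = capacity
        self._store = OrderedDict()  # record_id -> rendered record

    def get(self, record_id):
        if record_id in self._store:
            self._store.move_to_end(record_id)  # mark as recently used
            return self._store[record_id]
        return None  # cache miss: fall back to the search engine

    def put(self, record_id, rendered):
        self._store[record_id] = rendered
        self._store.move_to_end(record_id)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict the least recently used

# A toy run with room for only two records:
cache = RecordCache(capacity=2)
cache.put("ssdi/123", "<record html>")
cache.put("ssdi/456", "<record html>")
cache.get("ssdi/123")                    # hit; refreshes its position
cache.put("ssdi/789", "<record html>")   # evicts ssdi/456, not ssdi/123
```

An eviction policy like this keeps the hottest records in memory once the cache fills up, which is what lets lookups stay in the tens of milliseconds.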

Additionally, we spent some time simplifying other client- and server-side code, further reducing page load time. Once all of these changes were deployed in October, our aggregate page load times dropped by half:

[Screenshot: aggregate page load times before and after the deploy]

Other SEO Friendly Changes

Another important SEO consideration for us was updating and standardizing our URL structure. During the previous two years of rapid growth, our URLs took on a variety of styles, some legible to users, others not so much. And some of these versions were less than ideal in terms of search-engine friendliness.

For example, for our popular Social Security Death Index collection, we had all of the following URL styles at the same time:

  1. http://www.mocavo.com/records/ssdi/16889126271618839064
  2. http://www.mocavo.com/records/ssdi/JUAN-ESPARZA-1898-1969
  3. http://www.mocavo.com/Juan-Esparza-1898-1969-Social-Security-Death-Index/16889126271618839064

As a user browsing through a set of results on Google, which style most intuitively indicates what the page is about? The third version quickly tells you the who, what, and when of the page. Something like /ssdi/16889126271618839064 doesn’t communicate a whole lot of context.

So after careful consideration of cases like this, we overhauled the entire URL structure of the site and then submitted new sitemaps to Google.
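As a rough illustration of how the third, human-readable style can be generated (a plausible reconstruction in Python, not our actual slug code), the slug is simply built from the record’s fields:

```python
import re

def record_url(name, birth_year, death_year, collection, record_id):
    """Build a readable record URL in the third style above.
    Illustrative only -- not Mocavo's actual slug builder."""
    slug = f"{name} {birth_year}-{death_year} {collection}"
    # Collapse anything that isn't a letter, digit, or hyphen into "-".
    slug = re.sub(r"[^A-Za-z0-9-]+", "-", slug).strip("-")
    return f"http://www.mocavo.com/{slug}/{record_id}"

url = record_url("Juan Esparza", 1898, 1969,
                 "Social Security Death Index", "16889126271618839064")
# -> http://www.mocavo.com/Juan-Esparza-1898-1969-Social-Security-Death-Index/16889126271618839064
```

Keeping the opaque record ID at the end means the readable part can change without breaking the lookup.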

Google Crawl Rate

With the combination of a faster site and a consolidated URL structure, Googlebot is now eating up our content as fast as it can. In August 2013, Google crawled as few as 75,000 pages per day while the site took over 1.5 seconds to deliver a single page. But after we rolled out and tuned the custom caching solution, the time for Googlebot to download a page dropped to roughly 242 milliseconds.

[Screenshot: Googlebot’s average page-download time]

As page load time decreased, Googlebot increased the number of pages it crawled per day. Today it’s accessing about 2 million pages per day; that’s over 23 pages per second!

[Screenshot: pages crawled per day by Googlebot]

It took a few weeks for Google to digest the various changes, but we’re proud to report that the number of Mocavo pages indexed by Google has increased nearly 10-fold in a few months. Here is a great screenshot from Google Webmaster Tools showing the evolution of our site in the Google index:

[Screenshot: Google Webmaster Tools index status for Mocavo]


A Big Slice of the Web

But just how big is that? According to estimates from http://worldwidewebsize.com/, there are somewhere between 20 and 50 billion webpages online. That means Mocavo’s index represents somewhere between 0.11% and 0.29% of the entire web. And it’s growing every day!
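That share is simple arithmetic. Working backward from the quoted percentages implies an index on the order of 55-60 million pages; the figure below is inferred from those numbers, not an official count:

```python
# Estimated size of the web (from worldwidewebsize.com) and an index size
# backed out of the quoted 0.11%-0.29% range. The 57M figure is inferred
# for illustration, not an official count.
web_low, web_high = 20e9, 50e9
mocavo_index = 57e6

share_low = mocavo_index / web_high * 100   # against the high web estimate
share_high = mocavo_index / web_low * 100   # against the low web estimate
print(f"{share_low:.2f}%-{share_high:.2f}% of the web")
```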

We’re quite proud of this investment in SEO as the growth in Google means our content is available to an even greater audience, all of it free forever.


*Source: http://www.ft.com/cms/s/2/bc7350a6-8fe7-11e2-ae9e-00144feabdc0.html

A New Chapter for Mocavo

23 Jun 2014

Today is an exciting day for genealogists everywhere as we’re announcing that Mocavo has been acquired by Findmypast/DC Thomson Family History. This is a groundbreaking development for the industry and a major turning point in Mocavo’s quest to bring all the world’s historical information online for free. The wonderful folks at DC Thomson Family History share our vision of the future of family history, and we couldn’t be more excited to join them.

For the past few years, the Mocavo team and I have dedicated ourselves to bringing innovation and competition to an industry that is sorely lacking in both. From the very beginning of Mocavo’s history, we had this burning desire to figure out how to organize all of the historical information disparately spread across the Web. Not long ago, even with a hard-working and incredibly talented team, our service wasn’t resonating with users and our business wasn’t working. In October of last year, we decided to do something audacious and bold – something never before tried in the industry. We launched our Free Forever revolution and this became the day when Mocavo’s soul was born. Everything turned around once we put a stake in the ground and stood for free genealogy (and now Mocavo is growing rapidly, putting more than 1,000 free databases online every single day and more users discovering us than ever). We have our loyal and supportive users to thank more than anyone!

One of the immediate benefits of the acquisition is that we’re putting the complete US Census index online for free (forever!), making us the first commercial provider in history to ever do this. Search the United States Federal Census Now.

The next few months are going to be incredibly exciting as we bring together two companies with enormous resources, content, and technology to bring you more of what you love. Nothing on either site will be going away – just getting better (and quickly!).

Lastly, we could not have done this without the support of our loyal community members. We appreciate your dedication and patience, and we look forward to helping you discover even more of your family’s story.

Mocavo Acquired By Findmypast: A New Chapter Begins

23 Jun 2014

London, UK, 23 June 2014: Findmypast, the leading British family history company, announced today that it has acquired Mocavo, the fastest growing genealogy company in the US.

Findmypast, the leading brand in the DC Thomson Family History portfolio, has been at the forefront of the British family history market for over a decade. It has an established collection of 1.8 billion historical records and an extensive network of partners including the British Library, the Imperial War Museum, the Allen County Public Library and Family Search.

Founded by Cliff Shaw in 2011, Mocavo is a technological innovator in the genealogy industry. Its highly sophisticated search engine brings together, in one place, a diverse range of sources, such as family history record indexes, school and college yearbooks, church records and biographies, which help millions of family history enthusiasts to fill in blanks in their family trees and add colour to their family stories.

This acquisition, coupled with the recent tender win of the 1939 Register for England and Wales and the purchase of Origins.net, forms an important part of the growth strategy set out by Annelies van den Belt, CEO of Findmypast, and her new team.

Together Findmypast and Mocavo will create one of the fastest growing global genealogy businesses. The two companies will provide customers with easier access and more relevant information to help add colour and depth to family history.

Additionally, they both remain committed to delivering on Mocavo’s promise to provide free access to family history records on an individual database level forever. Toward that commitment, Findmypast is announcing today that the full indexes to the US Census from 1790 to 1940 are available for free at Mocavo.com.

Mocavo will become a fully-owned subsidiary of Findmypast. It joins the Findmypast family of brands including the British Newspaper Archive, Genes Reunited and Lives of the First World War.

Annelies van den Belt, CEO of Findmypast, said: “Findmypast’s strategy is about growth and the US market is key. Our purchase of Mocavo, combined with our existing US customer base, gives us an excellent platform for expansion in the world’s number one genealogy market. Together we can provide a dynamic family history experience that offers customers the opportunity to make a real connection with their family heritage.”

Cliff Shaw, founder and CEO of Mocavo, said: “We are thrilled to join forces with Findmypast and become a part of their family of leading brands. The combination of our companies will provide family history enthusiasts with unprecedented access to the stories of their ancestors. Expect Mocavo to grow stronger with Findmypast’s support and to continue to drive innovation in the family history category.”

Joshua Taylor, newly appointed Director of Family History, Findmypast, said: “Our heritage and rich record collections coupled with Mocavo’s sophisticated technology will make for a powerful combination enabling us to offer our customers even more ways to unlock the fascinating stories within their family history.”

The Mean, Lean, Green Mocavo Machine

28 Feb 2014

Here’s a riddle for you: what runs on clean-burning natural gas, is cooled by ice-cold mountain air, has 99.9% reliability, and delivers 40 teraflops of processing power? Why, it’s nothing less than Mocavo’s primary datacenter.


Reading the world’s genealogical records one at a time and making them searchable is no small feat. It requires a finely tuned infrastructure with plenty of processing power, storage, and redundancy.

With over 500 multi-core, datacenter-grade Dell servers under the hood, we have the ability to perform OCR on over 1 million documents per day. In fact, we’re in the final stages of re-engineering our OCR process to increase that number to over 5 million, all without affecting the performance of the website whatsoever!

The processed documents have to go somewhere, and we’re pleased to announce that we have increased our storage capacity to over 1 petabyte! That’s a lot of spinning platters; check out below how we keep them all spinning!

What good are all that power and all those processed documents if a fire, flood, or zombie apocalypse destroys them in one fell swoop? We have an off-site datacenter connected via a dedicated 10Gb fiber link that keeps all of our (and your) precious records safe and instantly available for recovery. We like being able to sleep at night; the backup cluster makes that possible.


The most expensive part of running a datacenter isn’t power or cooling; it’s the labor to keep it running all the time. When you’re working with 500 servers, seconds count: even spending just 30 seconds per server puts you over 4 hours in labor. Out in the wild you’ll find server-to-administrator ratios ranging from about 15:1 to 100:1. So for 500 physical machines, what do we consider lean? Try 500:1, which is plenty, if you have the right tools.
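That labor figure is plain arithmetic, and it’s worth seeing how fast the seconds add up:

```python
servers = 500
seconds_per_server = 30          # even a trivial 30-second task per machine
total_hours = servers * seconds_per_server / 3600
print(f"{total_hours:.1f} hours of labor")  # about 4.2 hours
```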

Enter Puppet, Icinga, and Fabric.

Puppet is enterprise-level configuration management, and it works seamlessly in our DevOps workflow. Every 30 minutes, every single physical server in our datacenter checks in with the puppet master asking for updates. Last week I added a new subnet and needed to add a route to about 100 machines, so I opened up our nodes.pp file and added this:

# Note: the network and gateway arguments were lost from the original
# post; placeholders are shown here.
exec { "route add -net <network> gw <gateway> dev eth0":
   unless => "route | grep 10.10.108",
}

So in 5 minutes I added a static route to 100 machines. That equates to about 3 seconds per machine. Not too bad, but I can do better.

Let’s say I wanted to add that route to all 500 machines, and I couldn’t wait for the half-hour Puppet update. Let’s get Fabric involved.

Fabric is a Python module that sends pre-defined (or on-the-fly) commands over SSH to hosts in host groups or roles. In my fabfile.py I already have a function to restart puppet:

from fabric.api import sudo

def kick_puppet():
    # Restart the puppet agent so it applies new manifests immediately
    sudo('service puppet restart')

So after I add the route in Puppet, I’ll restart the puppet client on all machines with fab -R All_Machines kick_puppet. I have now touched 500 machines in less than 6 minutes, which takes me to less than a second per machine. I’m sure you see where this is going… but you can’t automate everything, can you? What if you have to reinstall a server from scratch?

In the case of a corrupted OS drive, or a new server that has never been on the network, (re)building from scratch is quick and easy. Power on the server, press F12 to boot from the network, and PXE takes over. The OS gets installed, the machine reboots, and then Puppet takes it from a vanilla OS to production-ready, all without being touched again. One-touch installs: try them. You’ll be glad you did.

I suppose there are a few things I’ll never be able to automate, like changing out a hard drive or a bad stick of RAM. I don’t have time to run tests on each machine to see how it’s doing, but Icinga has 24 hours each day to do just that, and it never gets bored or tired of it.

Icinga is a fork of Nagios, and right now it makes over 3,000 individual checks for us every 10 minutes without breaking a sweat. We use Puppet to automate the creation of the checks, and Icinga will holler when a hard drive fails, Puppet stops running, a web server stalls, or a machine becomes unresponsive. It can even perform actions based on an alert through a handler (like restarting Puppet if it’s not running, or rebooting an unresponsive machine).

So on the occasion that we must physically touch a machine, Icinga narrows it down for us so we can get in and get out, because contrary to what you see in the movies, datacenters are LOUD and generally uncomfortable to work in for long periods of time.


Mocavo is concerned with being efficient and taking care of our natural resources, and often those two goals work very well together. Here are some initiatives we have at Mocavo to lower our footprint while providing an excellent product:

The power we use here at the datacenter comes from clean-burning natural gas, which we like because it’s less expensive and better for the environment.

We don’t run redundant power supplies on each server and instead rely on a redundant infrastructure. The load is distributed so if a server drops out, the application can continue to run smoothly until it can be repaired.

We run the datacenter at a balmy 82°F. With adequate airflow for heat removal, our equipment runs comfortably when warm, saving energy on cooling. To give us extra heat ballast for thermal load changes and to prevent static build-up, we run a humidifier to keep the ambient humidity above 30%.

We’ve engineered a free-cooling air exchanger to make use of the cold, arid mountain air to cool the datacenter. When running at capacity it saves 4 tons (14 kW) of cooling, which keeps roughly 75 tons of CO2 out of the atmosphere annually and brings our PUE down to around 1.27. According to the Uptime Institute’s 2012 Data Center Survey, our PUE is 32% lower than the 1.8 to 1.89 average reported for respondents’ largest data centers, and is quickly approaching Google’s internal datacenter PUE of 1.12.

Technology makes genealogy possible, in a lean, mean, green, BIG way!

Photo Detection in Historical Documents

27 Feb 2014

We have continued to improve our handwriting detection and recognition tools, and in doing so we stumbled upon another exciting new feature that we think will help change the way people learn about their family history. We are excited to share that we have developed the ability to extract pictures, photographs, and other images from our historical books. It’s not exactly like stumbling upon penicillin, but we were pleasantly surprised at how accurately we are able to identify these images!

Notice the red outline in the examples below.




The next step for us will be not only to extract the image, but also to read the associated caption, so that our community members can search for information about the image. In the vast majority of cases, the caption describing the image is relatively easy for our system to identify, for the following reasons:

  • its proximity to the image
  • additional whitespace around the block of text
  • different type characteristics from the surrounding page content (font size, weight, casing, etc.)
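A toy version of that caption-finding logic might look like the following sketch. The feature names and weights are invented for illustration; the production system is more involved:

```python
def caption_score(block, image):
    """Score a text block as a candidate caption for an image.
    `block` and `image` are dicts of simple layout features; the
    weights below are illustrative, not a real trained model."""
    score = 0.0

    # 1. Proximity: blocks just below (or above) the image score highest.
    distance = abs(block["top"] - image["bottom"])
    score += max(0.0, 1.0 - distance / 200.0)

    # 2. Whitespace: captions tend to be isolated from the body text.
    if block["margin_above"] > 20 and block["margin_below"] > 20:
        score += 0.5

    # 3. Type characteristics: e.g. smaller type than the body.
    if block["font_size"] < block["body_font_size"]:
        score += 0.5

    return score

candidates = [
    {"top": 410, "margin_above": 30, "margin_below": 25,
     "font_size": 8, "body_font_size": 11},   # sits right under the image
    {"top": 700, "margin_above": 5, "margin_below": 5,
     "font_size": 11, "body_font_size": 11},  # ordinary body text
]
image = {"bottom": 400}
best = max(candidates, key=lambda b: caption_score(b, image))
# `best` is the block just under the image
```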

What is particularly exciting about this discovery is that when we put the finishing touches on this technology, we’ll be able to add Image-specific search capabilities to Mocavo. This development will open up a whole new realm of exciting discoveries for our community. Stay tuned!

Coming Soon: Online Transcription from Mocavo

26 Feb 2014

Every day at Mocavo we’re looking for new opportunities to bring more of the world’s historical content online for free, forever. We are excited to share a new service that will be launching soon: our own web-based transcription tool.

We’re very proud to release 1,000 databases every day; but within those databases are signatures and handwritten notes that could be the answer to a riddle one of our community members (maybe you!) has been trying to solve for decades.

Our transcription tool will soon be “ready for prime time” and we will be inviting our community members to help index these valuable resources. The tool is being tested internally, and the initial experience is so exciting that we wanted to give you a sneak peek of what’s to come.



You’ll be able to contribute to transcription projects simply and easily within your browser. No confusing software to install. No frustrating spreadsheets to maintain. You’ll just select an active project and away you go.

The tool is fast and intuitive to use, and relies on the handwriting detection system that we announced several months ago. Popover windows will appear above the text and allow you to transcribe easily without ever leaving your keyboard.

Our arbitration process will allow us to quickly review every submission to ensure we maintain the quality standards the Mocavo community expects.

Current Projects

When the time comes to launch the transcription tool, we’ll send you an invitation along with a tutorial that explains how to get started. You will be able to join a Current Project with a single click, and our system will immediately take you to a page like the one in the example above. It’s that simple: Join a project and start contributing!


Recent Activity

When you’re part of a community, there’s nothing quite as exciting as drawing from the energy and momentum of the people around you! It’s important that we share a collective sense of progress and camaraderie, so we’re including an activity stream that will be constantly updating as other community members add transcriptions.



As part of the transcription tool, we will show you the top contributors on individual projects, as well as the top contributors overall.


Coming Soon

We still have a little bit of work to do so that your first experience is as rewarding and bug-free as possible, but we hope you’re as excited as we are about the potential to bring even more content online for the world to enjoy for free, forever.

A little something we’ve been working on…

20 Nov 2013

A little over a year ago, Mocavo acquired ReadyMicro and the incredible mind known as Matt Garner. One of Matt’s lifelong passions and curiosities is to enable computers to read historical handwritten documents to bring genealogy search to the next level. It’s well known in the genealogy industry that historical handwriting recognition is the Holy Grail – the single largest technological advancement that would enable more content to become accessible online (except for maybe the invention of the Web). For the past year, we’ve joined with Matt to tackle this very hard problem, and have finally made enough progress that we can begin to report on it.

Let me start by explaining the problem. Ask a computer to read the page below and it will stumble all over the place.


OCR (optical character recognition) technology could read some of the typewritten text – but would be confused by the handwriting (and invent typewritten letters that it thinks it sees inside handwritten text). To make matters worse, this page has multiple typewritten font types, including one that looks like cursive handwriting.

The first process we had to develop was a way to perfectly separate handwriting from typewritten text. If we could do this, the OCR could read the typewritten text, and Matt’s code could attempt to read the handwritten text. We call this process Handwriting Detection, and we figured that if the system couldn’t detect the presence of handwriting, how on Earth would we hope to decipher the marks into words? In the example below, you can see how our system marks typewritten text in green and handwritten text in red – with blue to denote what it believes are graphics or images. It’s not 100% perfect, but hopefully you agree that it’s headed in the right direction.
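Without giving away the real detector, the general shape of such a system (segment the page into regions, compute simple layout features, then label each region) can be sketched as follows. The features and thresholds here are invented for illustration:

```python
def classify_region(region):
    """Label a page region as 'typewritten', 'handwritten', or 'image'.
    `region` is a dict of pre-computed features; the thresholds are
    invented for illustration, not the real detector's values."""
    # Large, dense regions are likely photographs or graphics.
    if region["area"] > 50_000 and region["ink_density"] > 0.5:
        return "image"
    # Typewritten text sits on straight baselines with uniform height.
    if region["baseline_straightness"] > 0.9 and region["height_variance"] < 0.1:
        return "typewritten"
    # Everything else on a document page is treated as handwriting.
    return "handwritten"

regions = [
    {"area": 80_000, "ink_density": 0.7,
     "baseline_straightness": 0.2, "height_variance": 0.5},   # a photo
    {"area": 4_000, "ink_density": 0.1,
     "baseline_straightness": 0.95, "height_variance": 0.05}, # typed line
    {"area": 3_000, "ink_density": 0.1,
     "baseline_straightness": 0.4, "height_variance": 0.4},   # a signature
]
labels = [classify_region(r) for r in regions]
# -> ['image', 'typewritten', 'handwritten']
```

Once regions are labeled this way, the OCR engine only sees the typewritten regions, and the handwriting recognizer only sees the handwritten ones.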


Now that we’ve detected where the handwriting is, we can start having some fun. Let’s go back 130 years and change the ink from black to blue.


Now, this is just handwriting detection (where we don’t understand what’s written – we just know that handwriting is there).

Let’s talk recognition.

Historical handwriting recognition is one of the toughest technical challenges to solve. First, penmanship is entirely unique to the individual. Second, because it’s historical handwriting, it’s in cursive: all the letters run together, adding another layer of complexity. Third, the way we wrote cursive in the 1700s is different from the cursive we write now; there are even variations between decades. Our minds have an incredible ability to see through incomplete data (a missing character stroke, poor handwriting, an A that sort of looks like an O, etc.), and we don’t even notice it happening. When you try to describe all of this to a computer, you begin to lose your mind! I believe some of the greatest problems mankind can solve are those that no one would have started if they had known how hard the challenge ahead was. Matt fooled himself just enough to start on the problem, and now he’s making real progress from which we are all going to benefit.

Here’s the exciting part: Our recognition technology is starting to work. With limited vocabularies (potential answers), we’re achieving 90-95% accuracy. Sometimes, the technology is able to read things we’re convinced are unreadable (but after getting the answer back from the computer, you realize what was actually written). We grow closer to the Holy Grail every day and can’t wait until we can use the technology to bring more content online, free forever.

Matt and I will keep you updated on our progress over the coming weeks and months, which should hopefully make for some exciting news in genealogy.

Creating A Front-End Ready PSD

11 Nov 2013

A few tips to help clean up your PSDs and make them front-end ready for a developer or another designer.


There is nothing worse than filtering through unorganized layers in a massive Photoshop file: you end up dragging an image out of a clipping mask or overlooking an element or state. As I work, I build the page elements as I go, without much organization in Photoshop. Occasionally I’ll highlight a background layer with a color so that I can quickly refer to its position in the Layers panel. When I’m happy with the design and have the majority of the possible states built, I go back through and do a quick clean-up.

[Screenshot: Photoshop layers panel]

Be sure that your background layer is locked, and use Auto-Select to quickly group page elements. Select all of the layers, then select New Folder. Yes, this deselects what you just selected, but it will place a new folder at the top of the layers you selected, so you can easily reselect the layers and drag them into the new group without having to search for it in your Layers panel. I mostly separate background images, the header, navigation, the sidebar, main body elements (as a whole or separately, depending on the page), the footer, and any additional states.

Name the folders as you go until you have only folders in the Layers panel. If there is a modal or a particular state, place it at the top and highlight it with a color to make it clear.


[Screenshot: organized layer groups in the layers panel]


Smart Objects

Smart objects are great to work with. This blog post from Mindy Wagner does an excellent job describing what smart objects can do and how to use them. Like Mindy says in the post, smart objects have many uses, but I most often use them for repeatable content. This process is great for creating content-filled mockups like a search results page: create one result, then populate the rest of the page with smart objects of that same result.

[Screenshot: search results mockup built from smart objects]

Use smart objects when working with icons from Illustrator. When creating a new mockup I usually have Illustrator open on another screen, so that when I need an icon I can very quickly copy it and paste it into Photoshop as a scalable smart object. Any edits you make in Illustrator will be reflected in Photoshop.


Layer Comps

I used to use Layer Comps quite a bit. Watch this video from Method and Craft about States and Layer Comps. Layer Comps are an excellent way to demonstrate multiple states and actions within a single page. They aid in visualizing specific actions like navigation states and modals, and are helpful when handing the file over to a developer.

However, I have recently stopped using Layer Comps as often, simply because they are a pain to set up. You have to be completely done designing to make a successful Layer Comp; if not, you will most likely forget to save a specific state and have to start over again. I have found it easier to display each possible state on the page simultaneously; sometimes this means pasting each state below the other in a single .psd, or creating multiple .psds. It’s not a perfect mockup, but it is currently the most efficient workflow I have found.


Notes

Notes are helpful tools in Photoshop and can be used to explain expected behavior, thoughts you had while creating the page, or even notes to yourself. I have found them useful when handing over a large file with many possible actions, or when taking notes about a mockup while reviewing it with a co-worker.

[Screenshot: a note in Photoshop]


Saving

Saving work is always a difficult process. Saving every single iteration is a little overkill, but if you only save one file and your computer crashes, you’re out of luck. I never really had a strict saving standard until about a year ago; so far I have had decent luck with it.

[Screenshot: project folder structure]

Within my project folder (in most cases) I create two folders: Product and Process. As I work, my old iterations go in Process, including inspiration, research, and other low-priority items. Once I settle on a final design (or designs), it goes in Product. This also works well when naming files nameoffile-1.psd, nameoffile-2.psd, and so on; this way I always know the highest iteration number is the most recent. When a project is completed and live, I usually delete the Process folder to save space and keep the Product folder. Backing up this same structure on Dropbox as well as my Time Machine saves a lot of headaches, because I can always update my backups straight from my computer and know they are the most recent files.

Syntax Highlighting for Underscore Template Comments in ST2

28 Oct 2013

If you use Underscore templates and have been bothered by the lack of syntax highlighting for comments (note lines 7, 11, 19, and 27):


There is a very simple solution courtesy of Matt York.

You just need to modify one line in HTML.tmLanguage (or HTML5.tmLanguage if you’re using that package).

If you have the HTML5 package installed, go to ‘Browse Packages’ in ST2 and then open Packages/HTML5/Syntaxes/HTML 5.tmLanguage.

Change line 282 to:


If you do not have that package installed, go to ‘Browse Packages’ and then open up Packages/HTML/HTML.tmLanguage

Change line 286 to:


And voila!