selected summaries of my consulting work
University of Virginia
The University of Virginia is a leading public institution, consistently ranked among the top three public universities in the United States. Like all organizations, it has technical infrastructure that sometimes struggles to keep up with its growth.
In 1984, the University installed a new University-wide analog phone system, which was partially upgraded in 1996. Eventually, things started to show their age: the network was gradually becoming overloaded, and it was tough to expand telephony services for the University’s many power users.
Digital telephony over VOIP was becoming more popular at the time as analog networks were phased out, and private branch exchanges (PBXs) were becoming a de facto requirement for managing telephony services in large organizations. A PBX is analogous to a network router for phone systems – it services a local phone network and connects it to the broader public switched telephone network (PSTN). Without a PBX, each phone would need a direct connection to the PSTN, which would be massively expensive and wasteful: most organizational calls are internal and don’t need a direct connection, and most telephony capacity goes unused at any given moment.
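The economics of trunk sharing can be made concrete with the Erlang B formula, the classic tool for sizing shared telephone circuits. The traffic numbers below are hypothetical, chosen only to illustrate the scale – they are not figures from the UVA engagement.

```python
# Erlang B estimates the probability that a call is blocked, given a number
# of shared trunks and an offered load in erlangs (average concurrent calls).
# It shows why a PBX needs far fewer PSTN trunks than it has phones.

def erlang_b(trunks: int, offered_load: float) -> float:
    """Blocking probability for `trunks` circuits at `offered_load` erlangs."""
    # Numerically stable recurrence: B(0) = 1; B(m) = E*B(m-1) / (m + E*B(m-1))
    b = 1.0
    for m in range(1, trunks + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# A campus with 6,000 phones might average ~300 concurrent external calls
# (a hypothetical figure). How many trunks keep blocking under 1%?
trunks = 300
while erlang_b(trunks, 300.0) > 0.01:
    trunks += 1
print(trunks)  # on the order of a few hundred -- far fewer than 6,000 lines
```

The recurrence form avoids the factorials in the textbook formula, which overflow quickly for realistic trunk counts.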
I was asked to advise the University’s Information Technology Services (ITS) department during exploratory testing for the next-generation phone system. I built a simple Asterisk VOIP server from scratch, capable of routing and connecting up to 6,000 simultaneous SIP calls. Seven years and several testing rounds later, ITS began phasing out analog systems University-wide, and the replacement effort is now underway.
Girl Scouts of America / Little Brownie Bakers
In 2006, Girl Scouts of America (GSA) underwent a series of major structural reorganizations designed to reduce its overhead as a large national organization. But one very important change also happened that had nothing to do with corporate restructuring: the removal of trans fats from all Girl Scout Cookie recipes.
As any baker will tell you, fats are crucial for baking. Solid fats are required for the texture of many baked goods, like pie crusts or cookies – without them, you wind up with crunchy, crispy baked goods instead of moist, chewy ones. Fats are also critical for the Maillard reaction, the chemical process that browns foods and gives them a richer flavor profile. But when medical research revealed that a popular class of solid fats, trans fats, was highly correlated with coronary artery disease, GSA sprang into action and asked its bakeries to revise their recipes to eliminate all trans fats. This was a tall order.
The bakeries experimented with a number of recipes and ran taste tests with many focus groups. Little Brownie Bakers, a division of the Keebler Company, took a particularly methodical approach to the focus-group research and asked me to help them find the best cookie recipes in the bunch. Taste is only one part of the equation in the bakery business, so “best” is not a straightforward determination: other considerations include shelf stability (how well the ingredients hold up over time), resilience (how well the product resists the wear and tear of shipping), and many other factors.
I was asked to help analyze the consumer data from the focus groups and identify which recipes were best according to a number of systematic criteria. I built a Python web application that let the food testers track and upload focus-group data, project estimated costs and production capacity for each recipe, and ultimately produce the best Girl Scout cookie money can buy. It worked: 2007 was a record year for Girl Scout cookies, generating almost a billion dollars in sales, and the trans fats were gone.
Northrop Grumman
Marine transport and defense are vital to the national security and economies of many developed countries. Keeping the world’s vast network of shipping lanes and naval defenses safe and reliable requires precise, accurate navigational charts. Because paper charts are at risk of damage or loss, the International Maritime Organization (IMO) requires certain ship classes to be equipped with electronic chart display and information systems (ECDIS). Northrop Grumman was working on a next-generation digital mapping solution for ships that fell under the new IMO requirements – a piece of hardware that could be installed on the bridge of any compliant ship and provide real-time navigational data.
I was retained to optimize and improve their existing codebase and audit it for technical debt. I was also asked to improve the accuracy of the wayfinding algorithms already in place, which I did by implementing Vincenty’s formulae, a computationally involved but highly precise method for calculating distances. (Thaddeus Vincenty was a Polish-American geodesist who devised a general method for finding the distance between any two points on an ellipsoid, which is very useful if you happen to live on something approximately ellipsoidal, as the billions of inhabitants of Earth do.) The result was that the navigational error on long-range waypoints decreased by several orders of magnitude – the difference between being a few kilometers off and a few millimeters off.
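For the curious, here is a minimal Python sketch of Vincenty’s inverse method on the WGS-84 ellipsoid. It follows the structure of Vincenty’s published formulae; it is an illustration, not the VisionMaster code.

```python
import math

def vincenty_distance(lat1, lon1, lat2, lon2, tol=1e-12, max_iter=200):
    """Geodesic distance in meters between two (lat, lon) points in degrees."""
    a = 6378137.0                 # WGS-84 semi-major axis (m)
    f = 1 / 298.257223563         # WGS-84 flattening
    b = a * (1 - f)               # semi-minor axis

    U1 = math.atan((1 - f) * math.tan(math.radians(lat1)))  # reduced latitudes
    U2 = math.atan((1 - f) * math.tan(math.radians(lat2)))
    L = math.radians(lon2 - lon1)
    sinU1, cosU1 = math.sin(U1), math.cos(U1)
    sinU2, cosU2 = math.sin(U2), math.cos(U2)

    lam = L  # iterate on the longitude difference on the auxiliary sphere
    for _ in range(max_iter):
        sin_lam, cos_lam = math.sin(lam), math.cos(lam)
        sin_sigma = math.hypot(cosU2 * sin_lam,
                               cosU1 * sinU2 - sinU1 * cosU2 * cos_lam)
        if sin_sigma == 0:
            return 0.0            # coincident points
        cos_sigma = sinU1 * sinU2 + cosU1 * cosU2 * cos_lam
        sigma = math.atan2(sin_sigma, cos_sigma)
        sin_alpha = cosU1 * cosU2 * sin_lam / sin_sigma
        cos2_alpha = 1 - sin_alpha ** 2
        if cos2_alpha == 0:       # both points on the equator
            cos_2sigma_m = 0.0
        else:
            cos_2sigma_m = cos_sigma - 2 * sinU1 * sinU2 / cos2_alpha
        C = f / 16 * cos2_alpha * (4 + f * (4 - 3 * cos2_alpha))
        lam_prev = lam
        lam = L + (1 - C) * f * sin_alpha * (
            sigma + C * sin_sigma * (
                cos_2sigma_m + C * cos_sigma * (-1 + 2 * cos_2sigma_m ** 2)))
        if abs(lam - lam_prev) < tol:
            break

    u2 = cos2_alpha * (a ** 2 - b ** 2) / b ** 2
    A = 1 + u2 / 16384 * (4096 + u2 * (-768 + u2 * (320 - 175 * u2)))
    B = u2 / 1024 * (256 + u2 * (-128 + u2 * (74 - 47 * u2)))
    delta_sigma = B * sin_sigma * (
        cos_2sigma_m + B / 4 * (
            cos_sigma * (-1 + 2 * cos_2sigma_m ** 2)
            - B / 6 * cos_2sigma_m * (-3 + 4 * sin_sigma ** 2)
                                   * (-3 + 4 * cos_2sigma_m ** 2)))
    return b * A * (sigma - delta_sigma)
```

The iteration can converge slowly (or not at all) for nearly antipodal points, which is why production implementations add special handling there.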
The successful end result was Northrop Grumman’s VisionMaster FT ECDIS-E. It’s still in use today and routinely wins contracts, especially on major cruise and transport vessels. The next time you’re on a big ship, see if you can get a tour of the bridge and look for the ECDIS system – it might be a VisionMaster!
United States Environmental Protection Agency
The Emissions Inventory System (EIS) is the world’s largest emissions inventory database, managed through the tireless efforts of the U.S. Environmental Protection Agency. An emissions inventory is a collection of data and metadata about pollution of different kinds. Sometimes this pollution is natural, like ash from forest fires or ozone from lightning strikes; other times it’s the result of human activities, like driving cars or generating power. But all of it has to be tracked to get a complete picture of how pollution affects us.
Every three years, an enormous partnership of cities, counties, states, and industries across the US submits accumulated data about the emissions under their jurisdiction. The resulting summary dataset constructed by EIS, called the National Emissions Inventory, is used by climatologists, academics, and policymakers to understand how pollution affects all of us, and to develop strategies for minimizing or eliminating its harm. For example, the inventory can map every location in the US that emits tetrachloroethylene, a toxic carcinogen and organic solvent with widespread industrial applications.
My team built the modern EIS still in use today, transitioning the agency away from a system where sensitive data was reported by shipping CDs around. It’s an enormous Java web application with hundreds of data integrity checks and terabytes of data – a system that encodes a century’s worth of expert knowledge in the air emissions domain.
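To give a flavor of what a data integrity check looks like, here is a hypothetical example in miniature. The field names, codes, and thresholds below are invented for illustration – they are not the EPA’s actual rules.

```python
# A single, self-contained integrity check of the kind a large inventory
# system runs by the hundreds: validate one reported emissions record and
# return every problem found, rather than failing on the first one.

def check_emissions_record(record: dict) -> list:
    """Return a list of human-readable problems with one reported record."""
    problems = []
    required = ("facility_id", "pollutant_code", "annual_tons", "year")
    for field in required:
        if field not in record:
            problems.append("missing required field: " + field)
    if "annual_tons" in record and record["annual_tons"] < 0:
        problems.append("annual_tons cannot be negative")
    if "year" in record and not (1990 <= record["year"] <= 2100):
        problems.append("implausible inventory year: %s" % record["year"])
    return problems

good = {"facility_id": "VA-001", "pollutant_code": "PM25-PRI",
        "annual_tons": 12.4, "year": 2011}
bad = {"facility_id": "VA-002", "pollutant_code": "PB", "annual_tons": -3.0}
print(check_emissions_record(good))  # []
print(check_emissions_record(bad))   # two problems: missing year, negative tons
```

Accumulating all problems per record, instead of rejecting at the first error, is what makes bulk submissions practical to correct.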
Bayer HealthCare / MEDRAD
The MEDRAD Stellant D is a computed tomography (CT) injection system – a device used in imaging human tissue to diagnose medical conditions. Imaging happens by passing radiation through a patient; the selective absorption of the radiation by different tissues creates a picture that a physician can use to determine whether tumors or other medical problems are present.
Because radiation is generally harmful to living things, minimizing patients’ radiation exposure is important, so a special contrast dye is injected to increase the clarity and contrast of the resolved images. The software in the injector computes the correct timing and optimal dose of the dye to minimize the radiation exposure patients are subjected to.
I was retained to revamp the team’s internal software testing procedures for the new Stellant D product line, construct a suitable test platform for the system, and perform a code review and audit of the existing procedures. Today, the Stellant system is one of the most popular injection systems in use, and has provided care for tens of thousands of patients. Bayer HealthCare’s revenue has more than doubled in the last ten years, to US$20 billion in 2014.
Getaroom.com / Hilton Hotels and Resorts
Getaroom.com is a hotel-booking service that saves travelers money by offering privately negotiated rates below the publicly advertised rates for hotel rooms. Customers visit the site or call a toll-free number to inquire about availability and book their rooms. The business model proved very popular, and the timing was fortuitous: the 2007–2010 recession was under way, and hotel prices are one of the first places to take a hit in a recession.
Getaroom’s architecture was fascinating because it had to serve many different stakeholders: customers booking rooms, employees navigating the internals of each hotel chain’s booking systems as well as Getaroom’s own, and engineers optimizing the platform for maximum speed and performance. Building a technology infrastructure that serves many stakeholders is very hard to do correctly, and I think Getaroom’s is still one of the best examples I’ve come across.
I was asked to build and consult on a B2B API for Getaroom in partnership with Hilton Hotels and Resorts, a US$10 billion revenue provider of hotel lodging. Getaroom wanted to interface with Hilton’s large portfolio of about 700,000 rooms, but Hilton’s API at the time was a little clunky for travel-business consumers. We successfully built a much nicer Ruby wrapper around the legacy SOAP/XML stream, and the first Hilton API call was made on August 10, 2010. It gradually grew to become a major part of Getaroom’s revenue stream. As of 2014, Getaroom was projected to earn about US$150M in revenue.
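The real wrapper was written in Ruby, but the pattern translates to any language. Here is the same idea sketched in Python, with invented element names standing in for Hilton’s actual (much larger) schema: flatten a legacy XML payload into plain value objects so consumers never touch the raw SOAP response.

```python
# Wrapper-pattern sketch: hide a legacy XML availability response behind a
# clean, idiomatic interface. Element and attribute names are hypothetical.

import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class RoomOffer:
    hotel_id: str
    rate: float
    currency: str

def parse_availability(xml_payload: str):
    """Flatten a legacy availability response into plain value objects."""
    root = ET.fromstring(xml_payload)
    offers = []
    for node in root.iter("RoomRate"):
        offers.append(RoomOffer(
            hotel_id=node.get("hotelId"),
            rate=float(node.findtext("Amount")),
            currency=node.findtext("Currency"),
        ))
    return offers

legacy = """<AvailabilityRS>
  <RoomRate hotelId="NYCHH"><Amount>189.00</Amount><Currency>USD</Currency></RoomRate>
  <RoomRate hotelId="LAXHH"><Amount>145.50</Amount><Currency>USD</Currency></RoomRate>
</AvailabilityRS>"""

offers = parse_availability(legacy)
```

The payoff is that downstream code deals in `RoomOffer` objects, so when the upstream schema changes, only the parsing layer has to move.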
Cardagin Networks
Cardagin Networks is a startup formed with the mission of making it easier for small businesses to own and operate their own loyalty programs. Small businesses interested in loyalty programs traditionally don’t have much spare time or effort to devote to running them.
I was hired as the company’s first CTO after a rocky initial launch, and given carte blanche to initiate a business and technology turnaround. We went from zero to eight technical team members in twelve weeks and built a website, two mobile applications, and an administrative platform for the business customers. At the same time, we built a sales team and the technology infrastructure necessary to support their rollout and equip them for success.
Six months later, thanks to much better revenue numbers and the revamped platform, Cardagin raised US$5.25 million and was on the front page of TechCrunch. After pulling that off, with the major technical challenges behind us and the company on a stable technology trajectory, I stepped down and turned things over to the executive team; Cardagin was eventually sold in 2013.
HackCville
Charlottesville is gradually emerging as a regional technology hub in Virginia and, more generally, in the mid-Atlantic. HackCville, a clubhouse for student entrepreneurs, is great evidence that a solid startup ecosystem is beginning to form.
HackCville bridges the gap between the University’s earnest and commendable but somewhat fragmented efforts to cultivate entrepreneurship, and blindly striking out toward a business with no experience whatsoever. (Sometimes blindly striking out is a fantastic idea, but if you can articulate neither where you want to go nor how you want to get there, that’s almost always a problem.) It has been wildly successful, readily emerging as a beacon for students who want experience but aren’t sure where to get it, or how to get it in a structured environment.
During my volunteer office hours at HackCville, I mentored students from all walks of life – of completely different ages, backgrounds, and goals. My job was to help them explore entrepreneurship, get better at building businesses, and avoid the mistakes of my own past and those of others. By far, the most successful mentees all had one attribute in common – English doesn’t have a great word for it, but it’s a mix of resolve, empathy, and ambition. I’m sure I’ll be seeing them all in the newspaper someday!
UpHex
UpHex is a business I’m really proud of. My cofounder, Bradley, was formerly the CFO of another company I consulted for, which is how we met. We started UpHex because we were convinced that businesses deserve better than to have the mountains of analytics data they collect relegated to the corporate IT dustbin.
The central premise of UpHex is that web-based businesses collect a lot of implicit metrics data via services like Google Analytics, Facebook, Shopify, and so on, but use very little of it. They are therefore leaving money and information on the table. Most of the time, this failure mode happens because businesses simply don’t have the time or technical expertise to dissect reams of analytics data. That’s where UpHex comes in: in exchange for a monthly subscription fee, your company authorizes UpHex to connect to the data streams currently gathering dust, and we’ll analyze and process them for you in real time.
When we detect that something’s happening that we think you should pay attention to, we notify you so you can decide whether to take immediate action – an opportunity that would otherwise have been lost forever. If you’re not sure what to do (e.g., your web traffic is unexpectedly spiking), UpHex can provide automated suggestions for actions you can take (e.g., showing you the top recent referrers to identify the new traffic source), or you can engage an analytics counselor to get a pair of human eyes on your issue with a single click. The number of such opportunities lost each day is truly staggering.
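The “notice when something’s happening” idea can be sketched with a toy detector – this is an illustration of the concept, not UpHex’s production algorithm: flag a metric reading that deviates from its recent history by more than a few standard deviations.

```python
# Minimal z-score anomaly detector over a metric's recent history.

from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """True if `latest` is more than `threshold` sigmas from the recent mean."""
    if len(history) < 2:
        return False          # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu   # flat history: any change at all is notable
    return abs(latest - mu) / sigma > threshold

daily_visits = [1040, 980, 1010, 995, 1025, 1005, 990]
print(is_anomalous(daily_visits, 1015))  # ordinary day -> False
print(is_anomalous(daily_visits, 4800))  # traffic spike -> True
```

Real metric streams need seasonality handling (weekends, holidays) and robust statistics, which is exactly where the interesting engineering lives.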
I don’t have a formal title other than cofounder, since UpHex isn’t really big enough to stand on ceremony. But I act as the de facto CTO – hiring and managing our engineering talent, setting the technical direction for the company, and making the big architectural choices that I think set us up for success. Most importantly, I get my hands dirty on a day-to-day basis, and I absolutely love it. As of 2015, UpHex has processed millions of data points, from thousands of different metrics, from hundreds of businesses.
University of Virginia / National Science Foundation
Researchers interested in improving the energy efficiency of homes face a daunting challenge: getting data about real homes requires putting expensive sensor platforms into uncontrolled, hostile environments. Pets, power outages, and children often sound the death knell for reliable data from a home’s sensor network until a technician is sent out to fix things – a time-consuming and costly process.
Wouldn’t things be easier if those sensor networks could diagnose themselves, be self-healing, and remotely update, all without human intervention? And wouldn’t research be simpler if those networks reported their own data, rather than waiting for a research technician to come out and collect the data? Now they can! Welcome to the future.
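The self-healing idea can be sketched in miniature. The watchdog loop below is invented for illustration – Piloteur’s actual logic lives in the paper and repository mentioned below: give each sensor driver a health check, and let a periodic pass restart anything that fails.

```python
# Toy self-healing watchdog: each driver exposes a health check, and one
# watchdog pass restarts any driver that fails it -- no human intervention.

from dataclasses import dataclass

@dataclass
class SensorDriver:
    name: str
    healthy: bool = True
    restarts: int = 0

    def check(self) -> bool:
        return self.healthy

    def restart(self):
        self.restarts += 1
        self.healthy = True   # assume a restart clears transient faults

def heal(drivers):
    """One watchdog pass: restart any driver that fails its health check."""
    for d in drivers:
        if not d.check():
            d.restart()

fleet = [SensorDriver("temperature"), SensorDriver("power", healthy=False)]
heal(fleet)   # the failed "power" driver is restarted; "temperature" is untouched
```

In a real deployment, the same pass would also escalate (e.g., report persistent failures upstream) so that only genuinely hardware-level faults ever need a site visit.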
With the invaluable and inimitable Filippo Valsorda, and under the auspices of Dr. Kamin Whitehouse, I built Piloteur, a deployment platform for reliable smart-home sensor networks. Piloteur can be deployed rapidly using nothing but your AWS key and a deployment target – that’s all you need to start a network. I also co-authored a paper about the work, which appears in the Proceedings of the 1st ACM Conference on Embedded Systems for Energy-Efficient Buildings (BuildSys 2014); the work was so successful that it won Best Presentation at BuildSys 2014. The code is also available in a public GitHub repository.