Why was Predictive Cost Data developed and what challenge does it solve?
Tim: We were getting a tremendous number of calls, and people would ask, “What do you think the data is going to look like next year?” Basically, people were planning tomorrow’s project with yesterday’s data. No one had ever had a solution to this issue.
Dave: I think the long-standing problem in the industry has revolved around capital projects, not necessarily the smaller projects, but more the major jobs that take place over the course of multiple years. For example, a college is thinking of building a laboratory; maybe three years pass before they actually begin construction on the project. At that point, how can they feel confident about the numbers if previously they’d been burned by inaccurate estimates? Tim’s point about planning tomorrow’s project with yesterday’s data, that’s exactly the problem that has always weighed down the planning stage of the building cycle.
What is the technology behind Predictive Cost Data?
Noam: First we started with 15 years (60 quarters) of historical RSMeans data. If you break down that data, it comprises more than 50,000 materials and 970+ locations, ultimately leaving us with 10 billion data points for the materials. We then added in labor and equipment with corresponding time and location data points. This gave us the confidence that our work was capable of providing statistically defensible data predictions.
We proceeded to marry that dataset with world-class data science and analytics, specifically external data from public government and private indexes, using Moody’s as a data aggregator that takes into account well over a thousand indexes from across the market. Through a rigorous statistical program we produced a unique algorithm for each material and labor segment, such as steel, wood and concrete. We then back-tested against the last 10 years and found that our predictions were accurate to a maximum error rate of less than three percent; for a predictive algorithm, those are really impressive results.
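The back-testing Noam describes can be illustrated with a minimal walk-forward sketch: predict each quarter using only the data available before it, then compare against the actual value. The price series and the naive trend model below are hypothetical stand-ins, not RSMeans data or the actual Predictive Cost Data algorithm.

```python
# Hypothetical quarterly prices for one material segment (illustrative only).
quarterly_prices = [100.0, 102.0, 103.5, 105.1, 107.0, 108.2, 110.0, 111.5]

def forecast_next(history):
    """Naive one-step forecast: extend the average quarter-over-quarter change."""
    deltas = [b - a for a, b in zip(history, history[1:])]
    return history[-1] + sum(deltas) / len(deltas)

# Walk forward through the series: forecast each quarter from prior data,
# then record the relative error against the actual observed price.
errors = []
for i in range(4, len(quarterly_prices)):
    predicted = forecast_next(quarterly_prices[:i])
    actual = quarterly_prices[i]
    errors.append(abs(predicted - actual) / actual)

max_error_pct = max(errors) * 100
print(f"maximum back-test error: {max_error_pct:.2f}%")
```

A real system would fit a separate model per material and location and validate it the same way: the "maximum error rate" cited in the interview is exactly this kind of worst-case figure over the back-test window.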
How do you see the industry leveraging this technology in the immediate future?
Dave: Construction is a boom-or-bust industry. Recently, we’ve had a good run. Let’s hope it doesn’t happen, but there’s a good chance the industry will take a dive at some point in the next few years. If Predictive Cost Data can help predict that downturn, it will be an extraordinarily valuable tool.
Tim: More specifically, Predictive Cost Data can play a significant role in Value Engineering. Say an architectural group wants to know the impact of using steel versus wood on an aspect of a project. Knowing how much these materials will cost when the actual build is occurring simplifies that decision. Larger owners can also bring more rationality to their decision making. For example, if you know prices are going to be higher in Atlanta during a certain period, it might be more pragmatic to build in New York City during that same time.
Could you expand on the challenge of planning to build a project years in the future with outdated data?
Andrea: All of the other resources we researched usually give only high-level, one-year-out tracking broadly tied to commercial construction. Predictive Cost Data has a specificity these previous tools lack. If you want to know how much a hospital wing will cost, you can really drill down to the material level to get a much more reliable estimate. This versatility is a game changer.
I foresee owners really benefiting from this—again, especially those with larger projects. Over time material prices are going to change significantly. Simply studying trends or factoring in general inflation to try to account for these differences doesn’t make the cut. With Predictive Cost Data, the algorithm allows for a level of accuracy that has never before been possible.