Pulsenomics Survey - Contrast to Case Shiller

The results of the Pulsenomics home price survey of 100+ economists’ forecasts (including mine) for 2016 Q1 were released this week.  As I’ve noted before, the Pulsenomics survey (click to link) is one of the best, oldest, and broadest surveys of home price forecasts.  Survey results include the underlying data, the range of outlooks, and means and standard deviations.  It’s a great tool for seeing what people think.

That said, I think that there may be some issues with surveys, particularly as they line up against live forward markets (in this case the CME Case Shiller futures) that readers might consider.

First, it may pay to be a survey participant with an outlier view.   It seems that the media would much rather pair the extreme bulls for cumulative performance by 2020 (~30%) with the extreme bears (-16%) for a “lively”, “balanced” debate than focus on those who are near the middle of the pack, but whose forecasts have been consistently more accurate.  By contrast, in trading of CME futures, being “wrong” by 1 point (so < 0.5%) on contract prices for 2020 costs $250/point/contract.  Academics have shown that small markets where even small dollars are at risk (the Iowa Electronic Markets is a good example) produce results that compete well with surveys, as the participants have an economic stake in their views.
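To make that economic stake concrete, here is a minimal sketch of the mark-to-settlement arithmetic, using the $250/point multiplier cited above; the quote level of 210 is a made-up placeholder, not an actual CME price:

```python
# Illustration of the dollars at risk in a CME home price futures position.
# The $250/point multiplier comes from the text; the 210 quote level is an
# assumption for illustration only.
DOLLARS_PER_POINT = 250

def pnl(entry_price: float, settle_price: float, contracts: int = 1) -> float:
    """Mark-to-settlement P&L in dollars for a long position."""
    return (settle_price - entry_price) * DOLLARS_PER_POINT * contracts

assumed_entry = 210.0                      # hypothetical quote level
one_point_miss = pnl(assumed_entry, assumed_entry - 1.0)
print(one_point_miss)                      # a 1-point miss costs $250 per contract
print(1.0 / assumed_entry)                 # and 1 point is well under 0.5% of price
```

In other words, even a sub-half-percent forecasting error has an immediate, measurable dollar cost for a futures trader, which is the asymmetry versus survey participation described above.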

While I can relate to the notion of smaller consulting firms (like mine) having a higher probability of air-time with extreme views, maybe outlier views should come with some cost.  I’ll ask Terry Loebs (Pulsenomics) if he’d consider running the survey like the English Premier League (soccer for those who don’t follow it) where every year they “relegate” the bottom teams to the junior division.  Maybe we could do so with economists who have the worst survey results?!?

Second, my sense is that many in the business of forecasting home prices place more weight (at least more than I do) on past price moves.  While home price index moves are highly auto-correlated, I’m betting (with my CME quotes) that prices are going to trend less than in previous years.

That brings me to this quarter’s survey results.  For one of the first times, I find my views deviating significantly from the average, which matters only because my “views” are just extrapolations of CME prices on the Case Shiller futures (with a number of required tweaks).

[Graphs: Zillow v CS, Feb 2016]


Note in the graph (above, to the left) that the ZHVI (Zillow) index (referenced in the Pulsenomics surveys) and the Case Shiller CUS 10-city index (the one referenced in the CME HCI contracts) have generally moved in the same range. (The graph depicts YOY changes for each index.)   Case Shiller has been more volatile, but both ended up in the 4-5% range for 2015.

The diagram to the right attempts to translate the Pulsenomics survey results (showing the mean and +/- 1 standard deviation) vs. CME futures (using mid-to-mid levels, and then calendar spread bids and offers).  (I’ll be happy to go into more detail for any readers looking for a longer explanation.)  Pulsenomics results are shown as blue diamonds that line up on year-ends.  The CME prices are shown as red dots, with all but one lining up on September, as I’m using prices from the November contracts (which settle on September values).
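For readers who want to replicate the mid-to-mid translation, here is a minimal sketch of the calculation; the spot index level and the futures mids below are assumptions for illustration, not actual CME quotes or Case Shiller values:

```python
# Sketch of the "mid-to-mid" translation: turn futures mid quotes into
# implied cumulative % changes versus the current index level.
# All numbers below are hypothetical placeholders.
spot_index = 200.0                       # assumed current CUS 10-city level

futures_mids = {                         # assumed Nov-contract mids (settle on Sept values)
    "Sep 2016": 204.0,
    "Sep 2017": 207.0,
    "Sep 2018": 209.5,
}

for expiry, mid in futures_mids.items():
    implied_gain = (mid / spot_index - 1) * 100
    print(f"{expiry}: implied cumulative gain {implied_gain:+.1f}%")
```

These implied cumulative gains are what get plotted against the survey mean and its +/- 1 standard deviation band.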

The CME implied percent gains are clearly lower than the Pulsenomics survey results, and in some cases the CME mid-to-mid % differences approach the mean minus one standard deviation of the Pulsenomics survey.   In viewing this, I can see why my Pulsenomics survey forecasts fell into the lowest third.

Are there inferences to be drawn for others looking at these results?  As I noted, YOY gains for Case Shiller have been higher than Zillow’s for the last 3 years (see the left graph).  Has something changed such that CME YOY price % gains now sit below what 100+ economists forecast?

Given the lack of liquidity in the CME futures, and given that so many quotes are mine, it’s very possible that I’ve just turned too bearish, particularly as I ponder the impact of a falling stock market on public pension funds.  (Real estate taxes are headed higher).

An alternative to consider though, is that those in the business of forecasting home prices do not see as strong a link between future home price gains and the recent stock market sell-off.  The mean results of ~3.0-3.5% over the next few years look like merely a dampened continuation of the last year’s trend.

I’ve put my money where my mouth is on CME quotes and I’d love to have someone disagree (lift offers) or offer along.  Let’s see where we are a year from now.

As always, feel free to contact me (johnhdolan@homepricefutures.com) if you have questions on any blog post, would like to discuss any aspect of hedging home prices, or have a trade you’d like touted.






One Comment

  1. John,

    Thanks for the input and your fine work. Some thoughts:

    – As the person responsible for managing the panel, I’m afraid that the mere prospect of being demoted to a “junior division” would, understandably, impact some panelists’ enthusiasm for and willingness to participate in the survey. The net effect would likely be lower response rates and/or increased panel turnover.
    – Since inception, Pulsenomics has published every edition of the panelists’ expectations data in an Excel worksheet. We’ve done this to facilitate filtering of survey results however people see fit (i.e., if one believes survey results are contaminated by “extreme” data, a user can simply apply a filter to remove those records and compile a customized survey data set).
    – Every year, Pulsenomics (and our sponsor, Zillow) publicly recognizes those panel members whose prior years’ expectations prove to be the most accurate with a Crystal Ball award. (John, I trust that you use the one you’ve earned every day in your market-making!) We have published performance rankings for every expectation vintage/forecast horizon combination since the survey’s 2010 inception.
    – I understand why the media are sometimes drawn to reporting unconventional views. With that said, the vast majority of media reports concerning the expectations data are focused on panel-wide results (compiled from 100+ respondents).
    – Pulsenomics does track and report summary data pertaining to the most bullish and most bearish quartiles of survey respondents. Each quartile data set reflects the mean of 25+ panelists’ expectations; by design, these data represent neither consensus nor extremes. While we (and others) believe that the spread between these quartiles over time reveals useful information about housing market conditions and sentiment, our “bull/bear” quartiles data have been largely ignored in the media.
    – With few exceptions (i.e., those panelists who are employed by institutions that prohibit disclosure of their survey inputs), panel members elect to share their home price expectations data for all to see (and scrutinize). They are all professionals or respected academics who graciously volunteer their time. Panel members who have “non-consensus” expectations are no less sincere, and no less well-informed, than those with an outlook closer to the prevailing consensus. Although they have no money at stake, more valuable things are on the line: their reputation and integrity. I just don’t think any of our panel members have anything to gain from deliberately altering his/her worldview, or risking his/her hard-earned reputation, to get a headline or stir the twitterverse.
    – Over the past several years, the dispersion of views across all panelists has narrowed considerably. I believe this is the natural result of a relatively calm, recovering market and diminished uncertainty (e.g., less prominent government market intervention, abating foreclosures).
    – Consider the home price expectations held by Bob Shiller more than a decade ago (before the housing bubble burst) when the prevailing conventional wisdom was that “home prices (nationally) never go down”. Shiller and a relatively small handful of other prominent prognosticators and asset managers held a very unpopular view, one that some characterized as “extreme” at the time, but ultimately proved prescient.

    Thanks as always for the feedback, keep up the great work!!


Comments are closed.