IntuitEcon is developing a suite of models to help retail investors make smarter investment decisions.

Existing tools on the web, such as those provided by Vanguard, Morningstar, and Etrade, focus on historical price trends. This is understandable: historical prices are objective and easy to display. Unfortunately, historical prices don’t provide much insight into whether the current price is a good one. These sites also provide metrics such as PE ratios and dividend yield, which are helpful and objective but still lacking. In our view, what is missing from these existing tools is a comprehensive way to compare all potential investments in terms of expected profit and loss (P/L) and how they fit together in a portfolio. Which is more useful: how the S&P 500 has performed over the past year, or how the S&P 500 has historically performed during periods that look similar to today (e.g., low unemployment and high PE ratios)? Clearly, historical performance during periods that look similar to today gives investors more insight than performance over the past year or month.

Moreover, we want to know how investments fit into a portfolio. Several investments may each look reasonable, but if they are all closely related (e.g., all highly correlated with the overall market, i.e., high “Beta”), then the portfolio will be very risky. Here again, existing tools from reputable sites are sorely lacking. We provide estimates of how a portfolio’s expected P/L is affected by changing the weights of individual investments. This allows investors to think in terms of overall portfolio performance, which is what ultimately matters. Seems like a no-brainer to us.
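To make the portfolio-weight point concrete, here is a minimal sketch of how expected portfolio P/L responds to shifting weights between assets. The three assets, their expected returns, and the capital figure are purely illustrative assumptions, not outputs of our models.

```python
import numpy as np

# Hypothetical expected one-year returns and portfolio weights (illustrative only).
expected_returns = np.array([0.06, 0.03, 0.04])   # e.g., a stock, bond, and commodity fund
weights = np.array([0.60, 0.30, 0.10])

def expected_portfolio_pl(weights, expected_returns, capital=10_000):
    """Expected portfolio P/L is the weighted sum of each asset's expected return."""
    return capital * float(weights @ expected_returns)

base = expected_portfolio_pl(weights, expected_returns)

# Shift 10% of the portfolio from the stock fund to the bond fund and compare.
shifted = weights + np.array([-0.10, 0.10, 0.00])
print(base, expected_portfolio_pl(shifted, expected_returns))
```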


Our approach to investing is consistent with our manifesto. If you have not read it, please check it out before reading further.

Investing is easy when stock prices are low; when prices get high, investing becomes much more complicated. Our investment approach seeks to improve upon the conventional wisdom that people with long time horizons should put their money in the stock market and forget about it. We generally agree with this, but at some point one has to question any investment strategy that ignores fundamentals. When prices rise so far above earnings that the only two periods in history with comparable valuations are the 1929 stock market crash and the Dot-Com bubble, one should question the conventional wisdom. Thus, we attempt to improve upon it: when prices become too high to reflect reasonable value, we replace exposure to stocks with a basket of assets that provide true diversification.

Value investing is the process of identifying assets that provide reasonable expected return and limited downside risk. This process requires an understanding of each asset’s intrinsic value and margin of safety, two concepts that essentially require estimating the asset’s profit and loss (P/L) distribution. Assets that do not have a stable and measurable intrinsic value should be characterized as Speculations rather than Investments. Speculations fall outside the scope of this approach; we tend to avoid them unless they have positive skew and we have conviction that we actually know a lot about the particular investment (which is rare).

Value investing also requires evaluating each asset in the context of the whole portfolio. A simple Capital Asset Pricing Model (CAPM) or correlation analysis is insufficient for our purpose: CAPM assumes that there is only one “market” risk factor, and correlations are unstable and can even change signs. Instead, each asset’s P/L should be defined in terms of stable relationships to undiversifiable influences, also known as Risk Factors (RFs). This can be done using Arbitrage Pricing Theory (APT), which models changes in asset returns as a function of all relevant risk factors. These Sensitivity Models (SMs) provide the backbone of the strategy, giving insight into the drivers of each asset’s returns, the degree to which returns cannot be explained (idiosyncratic risk), and the portfolio’s diversification across risk factors.

True diversification is achieved by having a balance of exposures to all relevant RFs. An Agnostic Approach would aim for a perfect balance across risk factors, while a Mean Reversion Approach would put greater weight on RFs more likely to move favorably. For example, when stock prices are low it seems appropriate to give the “market” RF greater weight. The more uncertain we are, the more the portfolio should reflect a balance of RF exposures.
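For illustration, here is a minimal sketch of how portfolio-level RF exposures could be computed from asset Betas and weights, along with a crude balance measure. The asset names, Betas, and weights are hypothetical assumptions.

```python
import numpy as np

# Hypothetical Betas from the Sensitivity Models (rows = assets, columns = risk factors).
betas = np.array([
    [1.0, 0.1, 0.0],   # equity index ETF:   equity, credit, commodity
    [0.1, 0.8, 0.0],   # corporate bond ETF
    [0.2, 0.0, 0.9],   # commodity ETF
])
weights = np.array([0.4, 0.4, 0.2])

# Portfolio exposure to each risk factor is the weight-averaged Beta.
portfolio_exposures = weights @ betas

# A crude balance measure: how far the exposures are from being equal.
imbalance = np.std(portfolio_exposures)
print(portfolio_exposures, imbalance)
```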

Historical Simulations (HSs) provide a simple way of estimating the portfolio’s P/L using the SMs. HSs should include Segmentation so that forecasts reflect relevant macroeconomic and geopolitical environments. HSs allow us to optimize the portfolio for the highest risk-adjusted expected return. For our purposes, risk is defined as expected shortfall over a one-year time horizon. This measure gives greater weight to tail risk and avoids problematic measures of risk like standard deviation. A one-year time horizon provides a tax advantage and allows us to take advantage of the short-termism bias in the marketplace.
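As a rough illustration, here is a minimal sketch of how expected shortfall over a one-year horizon could be computed from simulated P/L outcomes. The 5% tail level, the normal draws, and the sample size are illustrative assumptions, not part of the methodology.

```python
import numpy as np

def expected_shortfall(pl_outcomes, alpha=0.05):
    """Average P/L in the worst alpha fraction of simulated one-year outcomes."""
    pl = np.sort(np.asarray(pl_outcomes))
    n_tail = max(1, int(np.ceil(alpha * len(pl))))
    return pl[:n_tail].mean()

# Hypothetical simulated one-year portfolio returns (e.g., drawn from the HSs).
rng = np.random.default_rng(0)
simulated_pl = rng.normal(loc=0.05, scale=0.12, size=1000)

print(expected_shortfall(simulated_pl, alpha=0.05))
```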

We re-balance quarterly, though the execution strategy may vary from asset to asset. For example, indexes of stocks, bonds, and commodities generally exhibit momentum while individual stocks do not. Thus, if we want to exit an index asset that is in an upward trend we generally wait, but if it is in a downward trend we generally exit immediately.
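As a sketch of that momentum rule, the check below compares an index’s latest price to a trailing moving average to decide whether to exit immediately or wait. The 63-day (roughly one quarter) window and the simulated prices are assumptions for illustration.

```python
import numpy as np

def exit_now(prices, window=63):
    """Exit immediately if the index sits below its trailing moving average (downward trend);
    otherwise wait for the momentum to play out before exiting."""
    prices = np.asarray(prices, dtype=float)
    return prices[-1] < prices[-window:].mean()

# Hypothetical daily closing prices for an index ETF we want to exit.
rng = np.random.default_rng(1)
prices = np.linspace(100, 110, 252) + rng.normal(0, 1, 252)
print("exit immediately" if exit_now(prices) else "wait")
```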

Risk Factors

Each asset’s P/L should be defined in terms of stable relationships to undiversifiable influences, also known as Risk Factors (RFs). We reviewed the literature to identify the most appropriate RFs. Not all will be relevant to every asset, but each is relevant to at least one. The more diversified the exposure to RFs, the more diversified the portfolio. Notice that an investment in the S&P 500 is not truly diversified (by our definition) because the entire portfolio is sensitive to the “Equity Risk” RF. Each RF we seek to measure is listed first, followed by the proxy we use to measure it (in parentheses); a simple configuration sketch of these factors follows the lists.

Primary RFs

  1. Credit spread risk (Corporate bond yield over Treasuries of the same maturity)
  2. Equity risk (SPY)
  3. Commodity risk (Commodity Index)
  4. Foreign exchange risk (Dollar Major/Broad)

Secondary Risk Factors (see Chen, Roll and Ross (1986))

  1. Surprises in inflation (change in breakeven)
  2. Surprises in GNP (Industrial production index)
  3. Surprises in investor confidence (change in corporate bond spread)
  4. Surprise steepness in yield curve (10-1y Treasury Yield)
  5. Surprise shift in yield curve (5yr Treasury Yield)

Specialty Risk Factors

  1. Oil prices
  2. Housing (Starts, P/Inc, P/Rent)
  3. Specific Currency Risk (Specific Country Exchange rate to Dollar)
  4. VIX (Realized/1yr Realized)
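One simple way to organize these factors in code is a configuration dictionary keyed by tier, as sketched below. The key names and layout are our own illustrative choices, while the proxy descriptions follow the lists above.

```python
# A possible configuration of the risk-factor proxies listed above.
RISK_FACTORS = {
    "primary": {
        "credit_spread": "Corporate bond yield over Treasuries of the same maturity",
        "equity":        "SPY",
        "commodity":     "Commodity index",
        "fx":            "Dollar major/broad index",
    },
    "secondary": {
        "inflation_surprise":  "Change in breakeven inflation",
        "gnp_surprise":        "Industrial production index",
        "confidence_surprise": "Change in corporate bond spread",
        "curve_steepness":     "10y-1y Treasury yield",
        "curve_shift":         "5y Treasury yield",
    },
    "specialty": {
        "oil":      "Oil prices",
        "housing":  "Starts, price/income, price/rent",
        "currency": "Country-specific exchange rate to the dollar",
        "vix":      "Realized / 1y realized volatility",
    },
}
```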

Sensitivity Models (SMs)

The backbone of our approach is the Sensitivity Models (SMs). Each asset has its own SM, which identifies and measures the sensitivity of the investment to each RF. In other words, the SM gives insight into the drivers of each asset’s returns and the degree to which returns cannot be explained (idiosyncratic risk). This approach is based on Arbitrage Pricing Theory (APT).

Each SM uses a simple statistical approach called ordinary least squares (OLS) to estimate the relationship between the percent change (%Chg) in the asset’s price and the percent change in relevant RFs. The model estimates “Betas”, which are calculated by minimizing the squared differences between the actual %Chg in price and the %Chg predicted from the RFs. That is why it is called “ordinary least squares”. The formula below is standard as far as statistical models go.

%Chg Price = Intercept + Beta_1 x %Chg RF_1 + … + Beta_z x %Chg RF_z + Error

We test all RFs for each asset; those that are significant and stable over time are included. Betas can be negative or positive, and the larger the Beta (in absolute value), the more sensitive the asset is to the RF. For example, a Beta of one means that a 1% increase (decrease) in the RF generally corresponds with a 1% increase (decrease) in the asset’s price. A specific example might be a gold mining company ETF. Suppose that one of the ETF’s factors is the S&P 500 and that the Beta is -0.2. In this case, we should expect a 1% rise in the S&P 500 to correspond with a 0.2% fall in the gold mining ETF.
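Here is a minimal sketch of one such regression using ordinary least squares on simulated data. The two hypothetical RFs, the -0.2 and 0.5 Betas used to generate the data, and the noise levels are assumptions for illustration only.

```python
import numpy as np

# Simulate %Chg series: 120 monthly observations of two hypothetical RFs and one asset.
rng = np.random.default_rng(42)
n = 120
rf_changes = rng.normal(0, 0.03, size=(n, 2))                    # %Chg in the two RFs
asset_changes = 0.01 + rf_changes @ np.array([-0.2, 0.5]) + rng.normal(0, 0.02, n)

# Ordinary least squares: %Chg Price = Intercept + Beta_1 x %Chg RF_1 + Beta_2 x %Chg RF_2 + Error
X = np.column_stack([np.ones(n), rf_changes])
coefs, _, _, _ = np.linalg.lstsq(X, asset_changes, rcond=None)
intercept, betas = coefs[0], coefs[1:]

# The unexplained remainder is the idiosyncratic risk.
residuals = asset_changes - X @ coefs
r_squared = 1 - residuals.var() / asset_changes.var()
print(intercept, betas, r_squared)
```

The fitted intercept and Betas map directly onto the formula above, and the share of variance the RFs fail to explain is the idiosyncratic risk.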

We include multiple SMs for each asset, differentiated by look-back period (LBP) and time interval (TI). Relationships between assets and RFs are notoriously unstable; the economy is infinitely complex, so we should expect relationships to change over time. The purpose of having multiple LBPs is to provide a range of possible Betas. Building on our previous example, suppose that using historical prices going back one year the Beta equals -0.2, but using ten years the Beta changes to -0.4. Which one is correct? We don’t know, and won’t know until the future has come to pass. However, this uncertainty is important: by measuring it we can build it into our expected P/L, which helps us avoid building too much certainty into our estimations.

We also differentiate by time interval (TI), including daily and monthly. Those savvy with derivatives will recognize that a Beta estimated using a daily SM is similar to the concept of “Delta”, the sensitivity to small changes in the RF. Price movements in a single day are generally not very large, so daily Betas are only reliable for small changes in the RF. But what happens if an RF makes a major move over a week, or a month? A daily TI will only provide a reasonable estimate of the Beta across any size of movement in the RF if the relationship is always linear. In reality, relationships between asset prices and RFs are never perfectly linear and are often quite non-linear. For example, small movements in the S&P 500 may correspond with a -0.2 Beta for our gold mining ETF, but a 10% drop in the S&P 500 over a week might more accurately correspond with a 5% rise (-0.5 Beta) in the gold mining ETF than a 2% rise (-0.2 Beta). We are generally longer-term investors, so a TI more closely aligned with our re-balancing time frame is more appropriate. Ideally we would use a quarterly TI, but that would mean losing two-thirds of our data compared to a monthly model, so one month provides a reasonable compromise. For those familiar with derivatives, the monthly TI lets us capture a good chunk of the Delta and “curvature” risk in one simple approach.
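To illustrate the look-back idea, the sketch below estimates the same Beta over several trailing windows to produce a range rather than a single number. The simulated series and the specific windows are assumptions.

```python
import numpy as np

def beta_over_window(asset_chg, rf_chg, months):
    """OLS Beta of the asset's %Chg on a single RF's %Chg over the trailing window."""
    y, x = asset_chg[-months:], rf_chg[-months:]
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Hypothetical monthly %Chg series for an asset and one RF (e.g., the S&P 500 proxy).
rng = np.random.default_rng(7)
rf_chg = rng.normal(0, 0.04, 120)
asset_chg = -0.3 * rf_chg + rng.normal(0, 0.03, 120)

# Betas over 1-, 3-, 5-, and 10-year look-back periods give a range, not a point estimate.
betas = {years: round(beta_over_window(asset_chg, rf_chg, years * 12), 3)
         for years in (1, 3, 5, 10)}
print(betas)
```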

Historical Simulations (HSs)

SMs are useful in providing insight into the RFs driving each asset’s price. However, by themselves they provide little insight into P/L expectations, and in our view a prudent investor needs both: knowing what risks are associated with an investment is important, but so is some idea of how the investment will perform. We don’t believe that last month’s or last year’s performance provides any real value. What we care about is how an asset has historically performed during periods similar to today, and how this expectation fits into our overall portfolio. That is where Historical Simulations (HSs) come in.

HSs estimate each asset’s and the entire portfolio’s P/L by simply aggregating historical performance over similar time periods. The hard part is determining which variables to use to define “similar”. These variables are used to create a Segmentation scheme that breaks history into similar groups, where each segment reflects a relevant macroeconomic environment. The segments we use to define the macroeconomic state of the world are:

  1. Measures of equity price level like CAPE and total return over past five years (valuation cycle)
  2. Unemployment (UE) change, UE relative level, and household debt service (business cycle)
  3. Interest rate level (long term debt cycle) and
  4. Interest rate change over the past year (tightening cycle)

These four cycles provide what we consider to be sufficient granularity to produce similar groups while still giving us enough data points in each segment to produce meaningful P/L distributions. We generate simulations at monthly, quarterly, and yearly frequencies.
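A minimal sketch of the segmentation idea: label each historical month by the four cycles, then treat the returns from months whose label matches today’s as the simulated P/L distribution. The boolean encoding, the random history, and today’s label are all illustrative assumptions.

```python
import numpy as np

def segment_label(cape_high, ue_falling, rates_low, tightening):
    """Encode the four cycles as a simple tuple label for each historical period."""
    return (bool(cape_high), bool(ue_falling), bool(rates_low), bool(tightening))

# Hypothetical monthly history: a segment label plus the portfolio's return that month.
rng = np.random.default_rng(3)
history = [
    (segment_label(rng.random() > 0.5, rng.random() > 0.5,
                   rng.random() > 0.5, rng.random() > 0.5),
     rng.normal(0.004, 0.03))
    for _ in range(600)
]

# Today's segment (illustrative): high CAPE, falling unemployment, low rates, tightening.
today = segment_label(True, True, True, True)

# The simulated P/L distribution is simply the historical returns from matching months.
matching_returns = np.array([r for label, r in history if label == today])
print(len(matching_returns), matching_returns.mean(), np.percentile(matching_returns, 5))
```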

There are many weaknesses to using historical simulations. The basic problem is that the world is constantly changing, so even after segmentation, historical performance can only provide rough insight into the future. Here is an incomplete list of weaknesses; there are probably many more:

  1. Ignores long run downward trend in productivity
  2. Ignores long term debt cycle and associated tail risks
  3. Ignores changes in the underlying country such as demographic changes
  4. Ignores improvements in monetary policy and the impact of fiat currency on dynamic changes in interest rates
  5. Ignores systemic risk measures

This is a work in progress. As we produce the models specified we will post them for your benefit. Use at your own risk.