Change or be Changed

Advisor Perspectives welcomes guest contributions.  The views presented here do not necessarily represent those of Advisor Perspectives.


Without a doubt, 2008 exposed major weaknesses in the financial services industry. Spectacular abuses like the Bernard Madoff fraud captured the public’s attention, but the loss of roughly 25% of retirement and other assets was just as noteworthy. Given the current legislative environment, extensive regulatory change is all but certain. No one knows what the new standards will be, but now is an excellent time for advisors and analysts to review their current processes and make sure they are generating the best results for their investors.

Consider, for example, mutual fund evaluation and monitoring. Over the past two decades, screening has been at the core of most mutual fund evaluation processes. The advisor picks the criteria, sets a minimum or maximum level for each, and comes up with a list of funds that survive all screens (a simplified sketch of such a screen follows the list below). There are several inherent flaws in this process:

  • If a fund fails any one criterion by even a small amount, it is no longer considered acceptable.  It may pass all other screens by the widest of margins, but as long as it fails just one, it is eliminated from consideration.
  • For the funds that do pass all of the criteria, there is no degree of passing.  The fund that passes all screens by a wide margin looks no different from the one that barely squeaked by. This simple pass/fail grading system provides too little guidance.
  • All criteria carry equal importance. Not only is that counterintuitive, but research clearly shows it to be inappropriate. For example, research suggests that lower expenses are a strong predictor of future performance, so shouldn’t low expenses carry more weight in the evaluation? Why should an advisor assume all factors are equally predictive? More on this later.
  • Once the acceptable funds are identified, the advisor has a long list with no particular order of best to worst.  There is no way of ranking them because at no point in the process were they compared to one another.
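
To make these flaws concrete, here is a minimal sketch in Python of a basic screen. The fund names, statistics, and cutoffs are hypothetical, invented purely for illustration; they do not come from any real screening methodology:

```python
# A minimal, hypothetical sketch of a basic screen. Fund names, statistics,
# and cutoffs are invented for illustration only.

funds = {
    "Fund A": {"expense_ratio": 0.45, "manager_tenure": 12.0, "five_yr_return": 8.9},
    "Fund B": {"expense_ratio": 0.99, "manager_tenure": 9.0,  "five_yr_return": 7.1},
    "Fund C": {"expense_ratio": 0.44, "manager_tenure": 2.9,  "five_yr_return": 9.4},
}

# Each screen is a hard minimum or maximum cutoff.
screens = {
    "expense_ratio":  lambda v: v <= 1.00,  # maximum 1.00% expense ratio
    "manager_tenure": lambda v: v >= 3.0,   # minimum 3 years of tenure
    "five_yr_return": lambda v: v >= 7.0,   # minimum 7% five-year return
}

# A fund survives only if it passes every screen -- pass/fail, no degrees.
survivors = [name for name, stats in funds.items()
             if all(test(stats[field]) for field, test in screens.items())]

# Fund C misses the tenure cutoff by a tenth of a year and is eliminated,
# despite beating the other screens by wide margins. Fund B barely clears
# the expense and return cutoffs, yet it looks identical to Fund A, and
# the surviving list carries no best-to-worst ordering.
print(survivors)  # ['Fund A', 'Fund B']
```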

To get around this last issue, some advisors have created “scoring” processes. These are actually derivatives of basic screening that simply assign a value based on the number of criteria each fund passes. Funds that pass eighteen of twenty screens score higher than those that pass only seventeen or fewer. While the final list of funds can be sorted by the number of criteria passed, there is still no differentiation between attributes of different importance and no way of knowing which funds barely passed a given test. Even if the factors are weighted for importance, failing to measure the magnitude of passing renders the results much less useful than a multi-factor scoring model that includes both.
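
For contrast, here is a sketch of such a multi-factor scoring model, using the same hypothetical funds. The weights and 0-to-1 normalizations are illustrative assumptions only (expenses are weighted most heavily, consistent with the research noted above), not a published methodology:

```python
# A hypothetical multi-factor scoring model using the same invented funds.
# Weights and normalizations are illustrative assumptions, not a real model.

funds = {
    "Fund A": {"expense_ratio": 0.45, "manager_tenure": 12.0, "five_yr_return": 8.9},
    "Fund B": {"expense_ratio": 0.99, "manager_tenure": 9.0,  "five_yr_return": 7.1},
    "Fund C": {"expense_ratio": 0.44, "manager_tenure": 2.9,  "five_yr_return": 9.4},
}

def clamp(x):
    """Bound a raw factor score to the 0-1 range."""
    return max(0.0, min(1.0, x))

# (factor, weight, scoring function): each function measures the *degree*
# of passing on a 0-1 scale instead of returning a pass/fail bit.
factors = [
    # Lower expenses score higher and carry the largest weight, consistent
    # with the research on expenses noted above.
    ("expense_ratio",  0.50, lambda v: clamp((1.50 - v) / 1.50)),
    ("manager_tenure", 0.20, lambda v: clamp(v / 10.0)),
    ("five_yr_return", 0.30, lambda v: clamp(v / 12.0)),
]

def score(stats):
    """Weighted sum of normalized factor scores for one fund."""
    return sum(weight * fn(stats[field]) for field, weight, fn in factors)

# Every fund receives a comparable score, so the list ranks best to worst.
ranked = sorted(funds, key=lambda name: score(funds[name]), reverse=True)
for name in ranked:
    print(f"{name}: {score(funds[name]):.3f}")
# Output order: Fund A first, then Fund C, then Fund B.
```

Note that Fund C, which the hard screen eliminated outright for a hairline tenure shortfall, now ranks ahead of Fund B: its one narrow weakness is weighed against its considerable strengths everywhere else, and every fund receives a comparable score that supports a best-to-worst ordering.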