# A Problem With Trading Position Risk Estimation


There is a serious flaw in most statistical risk estimation methodologies as they are applied to assessing the risk of speculative trading positions. In general, trading risk estimation uses some sort of sample of trailing market behavior to make inferences about the future behavior of the market, mainly its likely near-term variability. In that approach, a trailing sample is often treated as though it were a random sample, and stochastic statistical techniques are used to massage the sample data to generate inferences about the population of trading days. The principal assumptions underlying this procedure are the normality (or some known shape) and stationarity of the distribution (where “stationarity” means parameter invariance across time translations), and (usually) zero mean return. The sample standard deviation is typically used as an estimator of the population parameter and a normal/lognormal probability density function is assumed in order to estimate the probability associated with a specified value in the distribution. Sometimes the distribution is tweaked, as in extreme value theory.
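The conventional procedure described above can be sketched in a few lines. This is a minimal illustration, not any firm's actual method; the trailing return series and confidence level are hypothetical:

```python
from statistics import NormalDist, stdev

def parametric_var(returns, confidence=0.95):
    """Conventional trailing-sample risk estimate: treat the trailing
    returns as a random sample from a stationary, zero-mean normal
    distribution, use the sample standard deviation as the estimator
    of the population parameter, and read the loss quantile off the
    assumed normal density."""
    sigma = stdev(returns)                  # sample std as population estimator
    z = NormalDist().inv_cdf(confidence)    # normal quantile, ~1.645 at 95%
    return z * sigma                        # zero-mean assumption

# Hypothetical trailing sample of daily returns, for illustration only.
trailing = [0.004, -0.012, 0.007, -0.003, 0.010,
            -0.008, 0.002, -0.015, 0.006, 0.001]
print(f"95% one-day VaR estimate: {parametric_var(trailing):.4f}")
```

Every assumption wired into this sketch, normality, stationarity, and zero mean, is exactly what the rest of this note calls into question.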
A more subtle assumption of this general procedure, a corollary of the stationarity assumption, is that the day on which the estimate is made is a day like any other day: it is assumed that it is similar in all relevant respects to any other day in the universe of all days. However, if the program has not traded on the days forming the estimation sample, or if it held a different position on those days, then those days are different in an obvious respect from the days on which trades are made. Thus it is likely that the properties of the distribution of the population from which the estimation sample was drawn are different from the properties of the population of days on which trades are made. Hence the validity of estimates based on a trailing sample of days on which trades were not made is compromised.
Consider an analogy. Suppose that one has a universe of entities, U, illustrated by the large circle. Suppose further that the entities in U have some property p such that p varies from -1.0 to +1.0 across instances. Using the values of p one can partition the entities in U into three subsets, A, B, and C.

[Figure: the universe U, with -1.0 < p < +1.0, partitioned into subsets A (p < -0.33), B (-0.33 < p < +0.33), and C (p > +0.33).]

Now suppose that the entities in U have a second property q in addition to p. Suppose further that one has grounds to believe that p and q are correlated, but one doesn’t know the value of the correlation. In order to estimate some property of the distribution of values of q in A, from which of the four sets – U, A, B, or C – should one draw the sample that enters one’s estimation procedure? The answer is obvious.
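A small simulation makes the point concrete. The correlation between p and q below is an assumption chosen for illustration, not a value from the source; the simulation shows that a sample drawn from U misestimates the distribution of q within A:

```python
import random
import statistics

random.seed(7)

# Hypothetical universe: p uniform on (-1, 1); q correlated with p plus noise.
universe = []
for _ in range(100_000):
    p = random.uniform(-1.0, 1.0)
    q = 0.8 * p + random.gauss(0.0, 0.3)   # assumed correlation, illustrative
    universe.append((p, q))

# Subset A: entities with p < -0.33, as in the partition above.
A = [q for p, q in universe if p < -0.33]

mean_q_in_U = statistics.fmean(q for _, q in universe)
mean_q_in_A = statistics.fmean(A)

print(f"mean of q over U: {mean_q_in_U:+.3f}")   # near zero
print(f"mean of q over A: {mean_q_in_A:+.3f}")   # strongly negative
```

Estimating the central tendency of q in A from a sample of U would miss by roughly half a unit here; the same logic applies to any other property of the distribution.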

The parallel case is this: IntelliTrade’s programs partition the set of all days into just such subsets as A, B, and C using a variable p, a forecast, that is correlated with the future behavior of the market (the second variable, q). The days on which our programs trade are purposefully non-randomly selected from the set of all days. I have not seen a successful or even vaguely plausible defense of the proposition that a random sample or a trailing sample can support valid inferences about days that have been systematically non-randomly chosen from the population of all days. The only valid sample on which one can base inferences about trade days is one drawn from other past days on which similar trades were made: those days share the principal relevant property of the day for which the risk estimate is required, namely that the program traded on them.
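The remedy the argument implies is to condition the estimation sample on the position actually held. A minimal sketch of that idea follows; the signals, returns, and function names are hypothetical illustrations, not IntelliTrade's actual method:

```python
def conditional_risk_sample(history, todays_signal):
    """Select only past days on which the program held the same kind of
    position it holds today: the only days sharing the relevant property.
    `history` is a list of (signal, trade_return) pairs; all values here
    are hypothetical, for illustration only."""
    return [r for s, r in history if s == todays_signal]

def empirical_var(sample, confidence=0.95):
    """Loss quantile taken directly from the conditional sample, rather
    than from a normal density fitted to an unconditional trailing window."""
    losses = sorted(-r for r in sample)
    k = max(0, min(len(losses) - 1, int(confidence * len(losses)) - 1))
    return losses[k]

history = [("neutral", 0.001), ("short", -0.020), ("neutral", -0.002),
           ("short", 0.015), ("long", 0.008), ("short", -0.030),
           ("neutral", 0.000), ("short", 0.004), ("long", -0.006),
           ("short", -0.011)]

short_days = conditional_risk_sample(history, "short")
print(len(short_days), "short-trade days; VaR:", empirical_var(short_days))
```

In practice the conditional sample is far smaller than the unconditional one, which is the price paid for drawing it from the right population.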
The 3-dimensional frequency distributions below were constructed by taking successive 300-trading-day segments in a moving window for the Long Gilt near-month futures contract. Each successive segment was overlaid on the graph, all of its prices shifted up or down so that the interval between the 150th and 151st prices crossed the center axis (a yellow line) of the graph, and a cell in the points/time plane was incremented whenever a value fell on it. On the graphs, the vertical axis is points greater or less than the center price; the horizontal axis is time, 300 days in total, with the past to the left of center and the future to the right; and height above the price-time plane, color-coded, is the frequency of occurrence of cell entries.
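The construction described above can be sketched as follows. A simulated random walk stands in for the Long Gilt price series, and the cell size is an assumption; both are illustrative only:

```python
import random

random.seed(1)

WINDOW = 300
HALF = WINDOW // 2      # the 150th/151st prices straddle the center axis
CELL = 0.5              # points per vertical cell; an assumed resolution

# Hypothetical price series standing in for the Long Gilt near-month contract.
prices = [100.0]
for _ in range(2000):
    prices.append(prices[-1] + random.gauss(0.0, 0.25))

# freq[(t, cell)] counts how often a shifted price lands in a points/time cell.
freq = {}
for start in range(len(prices) - WINDOW):
    window = prices[start:start + WINDOW]
    # Shift the segment so the midpoint of the interval between the 150th
    # and 151st prices sits on the graph's center axis (the yellow line).
    center = 0.5 * (window[HALF - 1] + window[HALF])
    for t, price in enumerate(window):
        cell = round((price - center) / CELL)
        key = (t - HALF, cell)      # time relative to center: past < 0 < future
        freq[key] = freq.get(key, 0) + 1

print(f"{len(freq)} occupied cells; peak count {max(freq.values())}")
```

By construction every overlaid segment passes near the origin, so the surface is pinched at the center and fans out toward both the past and the future, which is the shape the figures display.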
As is obvious in the figures, the three subsets of the universe of days partitioned on p, the forecasts, differ considerably in their distributions of future prices, q: in central tendency, in skew, and probably also in kurtosis, though that is not so apparent to the unaided eye (but see the last graph). If one has a sequence of days on which no trade is indicated and then hits a day on which a Short trade is indicated, the trailing sample will be from the NEUTRAL subset, but the day for which the risk estimate is to be made is from the SHORT TRADE subset. Is that a valid sample from which to estimate trading risk? Not that I can see.

## NEUTRAL SUBSET

### Frequency Distribution of 10-Day Returns for Three Forecast Subsets

This graph shows “slices” – vertical sections – from the Neutral, Long, and Short Trade distributions above, the sections being made at +10 days from the center of those distributions. The curves are smoothed as noted. As the graph plainly shows, the distributions differ in central tendency, skew, and kurtosis. Samples drawn from the Neutral distribution provide seriously misleading descriptions of the risk associated with the Long and Short subsets.
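The moment comparison the paragraph describes can be reproduced on stand-in data. The samples below are synthetic, since the real slices come from the author's Long Gilt distributions, which are not reproduced here; the point is only that samples from one subset misstate the moments of another:

```python
import random
import statistics

def moments(xs):
    """Sample mean, standard deviation, skewness, and excess kurtosis,
    computed from central moments (population form, fine for comparison)."""
    n = len(xs)
    m = statistics.fmean(xs)
    devs = [x - m for x in xs]
    m2 = sum(d * d for d in devs) / n
    m3 = sum(d ** 3 for d in devs) / n
    m4 = sum(d ** 4 for d in devs) / n
    sd = m2 ** 0.5
    return m, sd, m3 / sd ** 3, m4 / sd ** 4 - 3.0

random.seed(42)
# Synthetic stand-ins: a symmetric "neutral" slice and a shifted,
# right-skewed "long" slice (normal plus an exponential component).
neutral  = [random.gauss(0.0, 1.0) for _ in range(20_000)]
long_sub = [random.gauss(0.6, 1.0) + random.expovariate(1.0)
            for _ in range(20_000)]

for name, xs in [("neutral", neutral), ("long", long_sub)]:
    m, sd, sk, ku = moments(xs)
    print(f"{name:8s} mean={m:+.2f} sd={sd:.2f} skew={sk:+.2f} exkurt={ku:+.2f}")
```

A risk number read off the neutral sample's moments says nothing reliable about the shifted, skewed long sample, which is the graph's point in numeric form.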
