Support Board
Date/Time: Sun, 24 Nov 2024 02:57:06 +0000
Post From: Offering To The Community: Klinger Volume Oscillator With Filters
[2015-07-13 04:05:48]
bjohnson777 (Brett Johnson) - Posts: 284
I'm creating this extra post for the archives so it can be found by web searches. I would have finished programming the KVO in about an hour if everyone else had done their research correctly. Reading my "Programming Rant" from above, it's not hard to feel my frustration. If you're trying to fix or program your own KVO, you're in luck. This is the only version that's properly implemented from start to finish. It's also the only version with extensive smoothing to clean up the signals... and it's open sourced with easy-to-understand documentation.

The various versions have a lot of little screw-ups in the peripheral setup code, but the worst has to do with the Volume Force Scaling Factor. I've documented the problems in the source code. Here's the main piece of that section:

//Do the KVO calculations.
//Typical Price is handled by In_InputData and Price[] already. Using "High+Low+Close" without the "divide by 3" can create false movements.
//Variable names cleaned up for clarity and understanding:
//DM[] or Daily Measurement = HighLow[]
//CM or Cumulative Measurement = HighLowSum
//ScalingFactor = the part of the VolumeForce equation where a lot of people screw up. Watch the parentheses, everyone. Order of operations is very important here. The absolute value of ScalingFactor is designed to be above 1.0 most of the time. The longer the run, the closer ScalingFactor gets to 2.0.
//The WRONG Scaling Factor: abs(2.0 * (HighLow[sc.Index]/HighLowSum) - 1.0); The range on this one is mostly 0.0 to 1.0. A value of 0.0 in a running average will whiplash the MA line. 0 values are generally useless to everyone.
//The RIGHT Scaling Factor: abs(2.0 * ((HighLow[sc.Index]/HighLowSum) - 1.0)); The range on this one is mostly 1.0 to 2.0. This means the values are passed through ranging from untouched to doubled. The whiplash potential is much less and the signal will be cleaner.

//Load the HighLow array.
HighLow[sc.Index] = sc.High[sc.Index] - sc.Low[sc.Index];

//Not enough data yet to finish the calculation.
if(sc.Index < 1) {return;}

//Determine the current trend direction.
if(Price[sc.Index] == Price[sc.Index-1]) {Trend = TrendPrevious;} //no trend, leave it unchanged
else if(Price[sc.Index] > Price[sc.Index-1]) {Trend = 1;} //up trend
else {Trend = -1;} //down trend

//Handle price summation.
if(Trend == TrendPrevious) {HighLowSum += HighLow[sc.Index];} //continue trending
else { //trend reversal
  HighLowSum = HighLow[sc.Index-1] + HighLow[sc.Index]; //initialize with the previous and current values
  TrendPrevious = Trend; //register trend change
}

/* Scaling Factor divide-by-0 problems. These should be rare but can happen on low volume securities. This is poorly addressed in all the documentation, so it needs to be analyzed in depth.

Under normal circumstances, HighLowSum is initialized with 2 bars of HighLow[] and should keep increasing with each additional bar until a reversal. The HighLow[i]/HighLowSum number is a fraction that should keep getting smaller until the next reversal because of the constantly increasing denominator. Real world data shows the Scaling Factor range to mostly be between 1.0 and 2.0, with the number approaching 2.0 the longer it goes without a reversal. If HighLowSum is 0.0, then there was probably a recent reversal reset on junk data.

To find out what the Scaling Factor would be in general after a reversal, set up a test with boring data, in this case 1.0 for the 2 previous HighLow[] bars:
ScalingFactor = abs(2.0 * ((1.0/(1.0+1.0)) - 1.0)) = 1.0
We already know the range is mostly 1.0-2.0 and generally starts out low. The value of 1.0 is a good choice for "pass through and don't mess with it".

To find out what the Scaling Factor would be with a whiplash reset, set up another test with 1.0 and 0.1:
ScalingFactor = abs(2.0 * ((0.1/(1.0+0.1)) - 1.0)) = 1.82
This starts out very high in the range.
A near-full boost starting out on garbage, amplifying a bad signal, is a bad idea.

To find out what the Scaling Factor would be with the previous whiplash flipped around, set up another test with 0.1 and 1.0:
ScalingFactor = abs(2.0 * ((1.0/(0.1+1.0)) - 1.0)) = 0.18
This damps the output and would bring the final equation value down close to 0.0. Unnecessary zeros in a moving average cause whiplash. This isn't a very good choice either.

If we take the middle of the last 2 extremes, we get (1.82 + 0.18) / 2 = 1.0, which matches the boring example. In general, it looks like setting the Scaling Factor to 1.0 on bad data and just letting it pass through is the most logical choice. Keep in mind there's always the possibility volume will be 0, zeroing out the equation and handling the problem by itself. */

//Calculate a proper Scaling Factor.
if(HighLowSum == 0.0) {ScalingFactor = 1.0;} //handle any divide by 0 problems
else {
  ScalingFactor = 2.0 * ((HighLow[sc.Index]/HighLowSum) - 1.0);
  if(ScalingFactor < 0.0) {ScalingFactor *= -1.0;} //ScalingFactor must be positive
}

//Do the VolumeForce calculation.
VolumeForce[sc.Index] = VolumeFiltered[sc.Index] * ScalingFactor * Trend * In_OutputMultiplier.GetFloat();