
Support Board


Date/Time: Thu, 21 Nov 2024 13:05:01 +0000



[Programming Help] - Optimized ACSIL Studies: Strategies for Combining Studies and Reducing Manual GUI Effort?

View Count: 110

[2024-11-07 02:51:58]
User133994 - Posts: 80
Experts,

I have numerous custom ACSIL studies, labeled as a, b, c, d, etc., totaling hundreds. I can currently apply studies a, b, and c to a chart independently. However, as the number of these studies grows, it would be more efficient to combine a, b, and c into a single study. The objectives are to: 1) reduce the number of custom studies needed on the chart to achieve the same output; 2) ensure each custom study operates independently, not requiring any existing study on the chart; 3) maintain the ability to use and update a, b, c separately as needed, and then integrate any updates into the combined version.

Assume study c relies on studies a and b. I understand that I can integrate the code from studies a and b into c, but this integration requires adjusting the output/input indices to prevent overlap (e.g., Output #1 is shared among a, b, and c, so it would need to be reassigned to Subgraph 1, then 2, then 3).

An alternative to avoid this re-indexing is to use .Arrays[] for a subgraph. This approach allows me to directly remap the outputs into the additional arrays tied to an output subgraph, like this:


SCFloatArrayRef o_s_Shaved_UP = E_Subgraph_ExtraArrays.Arrays[2];
SCFloatArrayRef o_s_Shaved_DOWN = E_Subgraph_ExtraArrays.Arrays[3];

instead of
//{{{ o_s_Shaved_UP
SCSubgraphRef o_s_Shaved_UP = sc.Subgraph[o_s++];
if (sc.SetDefaults)
{
o_s_Shaved_UP.Name = "o_s_Shaved_UP";
o_s_Shaved_UP.DrawStyle = DRAWSTYLE_COLOR_BAR_CANDLE_FILL;
o_s_Shaved_UP.PrimaryColor = COLOR_AQUA;
o_s_Shaved_UP.LineWidth = 5;
o_s_Shaved_UP.DrawZeros = false;
}
//}}}

//{{{ o_s_Shaved_DOWN
SCSubgraphRef o_s_Shaved_DOWN = sc.Subgraph[o_s++];
if (sc.SetDefaults)
{
o_s_Shaved_DOWN.Name = "o_s_Shaved_DOWN";
o_s_Shaved_DOWN.DrawStyle = DRAWSTYLE_COLOR_BAR_CANDLE_FILL;
o_s_Shaved_DOWN.PrimaryColor = COLOR_PINK;
o_s_Shaved_DOWN.LineWidth = 5;
o_s_Shaved_DOWN.DrawZeros = false;
}
//}}}



This method allows me to maintain the existing logic that assigns values to o_s_Shaved_UP[sc.Index] (or DOWN) because I can keep using the same variable names but redirect them to an extra array.

Assuming these outputs originate from custom study 'a', I want to reuse this logic in a new custom study as is. I can copy and paste the logic blocks, remap the outputs to the extra arrays, and voila, I have the outputs of 'a' accessible in 'b' without needing to add 'a' as an 'input study'.

However, I have noticed differences in the behavior of study b (or c, or d) when using 'a' as a separate compiled input study versus integrating its logic into a new study and using extra arrays.

It seems there might be a difference in how Sierra Chart processes logic structures designated as outputs versus those designated as extra arrays. For example, outputs can be set to hidden, ignore, or other statuses, while an extra array is never visible in the GUI, potentially affecting its execution.

Does anyone have insights or examples of using extra arrays instead of output subgraphs to maintain identical logic? Perhaps Sierra Chart has a mechanism I'm unaware of that facilitates this approach without requiring multiple studies to be compiled and connected in a specific order through the GUI.

Any thoughts or working examples that align with these objectives would be greatly appreciated.

I'd like to avoid utilizing `sc.Input[]` to access the logic of other custom studies, as this method necessitates compiling and connecting numerous studies in a precise sequence through the GUI. This results in significant manual effort, which grows rapidly with the number of studies, particularly when dealing with hundreds of custom studies, each potentially having multiple inputs and outputs.

All of the questions above seek help in managing the substantial manual effort required to integrate hundreds of studies. Is there an entirely different approach within ACSIL that could alleviate this rapidly growing manual effort (currently manageable only through the GUI) without introducing inconsistent behavior or requiring extensive refactoring?

Thanks in advance for your input and any working examples.
[2024-11-15 04:06:04]
User133994 - Posts: 80
For those interested in the solution:

The likely issue with using extra arrays (ACSIL Interface Members - sc.Subgraph Array: sc.Subgraph[].Arrays[][]) is that these extra arrays are not treated as 'equal' to subgraph arrays, especially when calling built-in functions that assume some extra arrays are available for intermediate calculations. I didn't track which extra arrays I used versus which internal Sierra Chart functions I was calling, so it is possible that some function calls using the extra arrays or Subgraphs were overwriting data I had already written into the extra arrays. In other words, I was using the extra arrays as if they were always, reliably available, but that isn't the case. Something I never did solve is the difference in processing between SCSubgraphRef outputs and extra arrays; the two are not handled identically. In some cases I can get the same result using an extra array, but other times I must use an SCSubgraphRef for the calculations to complete as expected. Bottom line: don't use extra arrays for extra storage. They require special attention and appear to be inferior to the 60 output subgraphs available per custom study.

Solution? Build my own data structure and have an effectively unlimited amount of extra storage. This allows me to store calculations (persistently, yes) well beyond the 60-subgraph count, without the unsolvable inconsistencies I described above when using the SC built-in extra arrays.

One simple solution example for you to consider is as follows:

First create the storage mechanism:

#include <algorithm> // std::clamp
#include <deque>

struct Deque_mgr {
//{{{ sliding window storage keyed by scIndex
/* Think of this as a sliding window on the price data: we typically only look at the last x values
 * and we don't want to store all 100k bars, so just the last 3k bars is reasonable. */
std::deque<float> window;
int max_size;
int last_index = -1; // Initialize with an impossible index
float dummy = 0.0f; // default value when no entries exist

Deque_mgr(int size = 10, int li = -1) : max_size(size), last_index(li), dummy(0.0f) {}

bool add(float value, int scIndex) {
//{{{
if (last_index == scIndex) {
return false; // No update needed, index already processed
}

// Update the last processed index
last_index = scIndex;

// Manage the deque size
if ((int)window.size() >= max_size) { // cast avoids a signed/unsigned comparison warning
window.pop_front();
}

window.push_back(value);
return true;
//}}}
}

float& operator[](int index) {
//{{{ Access elements with a negative index, e.g., -1 for the last element
if (window.empty() ) {
return dummy; /* Return a reference to a dummy value if out of range or empty */
}

if (index < 0) {
index = (int)window.size() + index; // Convert negative index to positive
}

// Clamp index to valid range to avoid out of range errors
index = std::clamp(index, 0, static_cast<int>(window.size()) - 1);
return window[index];
//}}}
}

const float& operator[](int index) const {
//{{{ For const objects, provide a const version of the operator[]
static float dummy = 0.0f; // Static to ensure the reference remains valid
if (window.empty() ) {
return dummy; // Return a reference to a dummy value if out of range or empty
}

if (index < 0) {
index = (int)window.size() + index; // Convert negative index to positive
}
// Clamp index to valid range to avoid out of range errors
index = std::clamp(index, 0, static_cast<int>(window.size()) - 1);
return window[index];
//}}}
}

//}}}
};



Next, create the manager to allow for 'custom' naming and create as many additional deques as needed:

#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>

struct p_deque_mgr {
//{{{
std::unordered_map<std::string, Deque_mgr> mw;

p_deque_mgr() {
mw.reserve(100); // Reserve space for 100 memory windows (elements) to minimize rehashing
}

// Add a new manager with a custom name and size
void add_memWindow(const std::string& name, int size) {
mw[name] = Deque_mgr(size);
}

// Access a manager by name
Deque_mgr& operator[](const std::string& name) {
return mw[name];
}


void reset() {
//{{{ Reset all Deque_mgr instances, clearing their content
for (auto& pair : mw) {
pair.second.window.clear(); // Clear each deque
}
//}}}
}


void example_use() {
//{{{ Example of use
add_memWindow("customName1", 10);
add_memWindow("customName2", 15);

mw["customName1"].add(10.0f, 1);
std::cout << "customName1[-1]: " << mw["customName1"][-1] << std::endl;

mw["customName2"].add(20.0f, 1);
mw["customName2"].add(30.0f, 2); // note: repeating the previous scIndex would be ignored by add()
std::cout << "customName2[-2]: " << mw["customName2"][-2] << std::endl;
//}}}
}


//}}}
};

const std::int32_t p_deque_mgr_pkey = 222282;


then simply create a pointer to this manager (note: the pointer must be assigned when first created, otherwise dmm stays null on the first calculation and any dmm-> access would crash):

p_deque_mgr* dmm = reinterpret_cast<p_deque_mgr*>(sc.GetPersistentPointer(p_deque_mgr_pkey));

if (dmm == nullptr)
{
dmm = new p_deque_mgr();
sc.SetPersistentPointer(p_deque_mgr_pkey, dmm); // Store the new instance as a persistent pointer
}

// DON'T forget to delete on the last calculation
if (sc.LastCallToFunction)
{
if (dmm != nullptr)
{
delete dmm;
sc.SetPersistentPointer(p_deque_mgr_pkey, nullptr);
}
}


Now it is time to use the deque storage in the ACSIL custom study:

if(sc.Index == 0){
//{{{ set up named windows equivalent to output subgraphs --- note these can't be passed to a 'calculate' function that expects a subgraph
dmm->reset();
dmm->add_memWindow("o_Subgraph_RDA_ATR", MAX_WINDOW_SIZE);
dmm->add_memWindow("o_s_waveTop", MAX_WINDOW_SIZE);
dmm->add_memWindow("o_s_waveBottom", MAX_WINDOW_SIZE);
dmm->add_memWindow("o_s_lastWaveTopPrice", MAX_WINDOW_SIZE);
dmm->add_memWindow("o_s_lastWaveBottomPrice", MAX_WINDOW_SIZE);
dmm->add_memWindow("o_s_waveTopHH", MAX_WINDOW_SIZE);
dmm->add_memWindow("o_s_waveBottomLL", MAX_WINDOW_SIZE);
//}}}
}

Now, instead of assigning values to output subgraphs, assign them to these deque memory structures like this:

dmm->mw["o_s_waveTop"].add(waveTop_ID310,sc.Index);

Then, when you need to access the 'sliding window' (up to MAX_WINDOW_SIZE; currently I use the last 300 bars), do this:

float curValue = dmm->mw["o_s_waveTop"][-1];
float priorBarValue = dmm->mw["o_s_waveTop"][-2]; // can go up to -300 and even beyond; these are safe accessors that clamp to the extremes if the index is out of range


Hope that helps anyone trying to do the same thing. Works wonderfully!

If anyone has a more streamlined, elegant solution I'd still be interested in seeing how it works.
