Support Board
Date/Time: Sun, 24 Nov 2024 23:17:57 +0000
Spreadsheets and System Performance...last annoying question I promise:)
View Count: 1417
[2013-12-12 02:13:55]
enemyspy - Posts: 306
I have been scratching my head trying to figure out a way to give a spreadsheet system that is too demanding more rows, so to speak, without crashing my computer, and I am wondering if the following scheme would work in theory (not whether it can be done, but whether it would work in theory):
- Limit all of the spreadsheet studies to, say, 100 rows.
- Make a standalone spreadsheet with a hypothetical auto-populate feature, where every 60 rows of time it automatically copies the data from the 100-row spreadsheet study, stores it, and arranges it all in order.
- The original 100-row spreadsheet study contains a provision in the formula of each cell that indexes matching timestamps in the standalone spreadsheet and references those values where they exist; where they do not exist, it draws from the original study formulas.
So in theory, if these two spreadsheets made a brain, the 100-row study would be the short-term memory that can call up information from the standalone study serving as the long-term memory, and alleviate the burden on the RAM/CPU. The important thing here is freeing up memory so that it has to buffer less; that is what I am wondering whether this hypothetical scheme would successfully achieve. Last question for a while, I promise.
Date Time Of Last Edit: 2013-12-12 02:21:45
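To try to make the idea concrete, here is roughly what I mean in C++ terms. This is just my own made-up sketch of the logic, with invented names; it is obviously not how Sierra Chart's spreadsheets actually work internally:

#include <cmath>
#include <ctime>
#include <map>

// "Long-term memory": values copied out of the live sheet once per hour,
// keyed by the bar's timestamp.
std::map<std::time_t, double> archive;

// Stand-in for whatever the spreadsheet formula really computes.
double ComputeFormula(std::time_t barTime)
{
    return std::sin(static_cast<double>(barTime));  // placeholder math
}

// "Short-term memory": look the timestamp up in the archive first and
// only fall back to the full formula when no stored value exists.
double GetValue(std::time_t barTime)
{
    auto it = archive.find(barTime);
    if (it != archive.end())
        return it->second;           // reuse the pre-calculated value

    double value = ComputeFormula(barTime);
    archive[barTime] = value;        // store it for later rows
    return value;
}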
[2013-12-12 19:04:46]
vegasfoster - Posts: 444
Maybe I don't fully understand your concept, but if your study/system requires x number of rows, then putting some of those rows in one place and some in another, and then having to copy data back and forth, seems to me like it would only require more resources, not less. Wouldn't simply limiting the rows to the number of rows required by the system be the most efficient?
[2013-12-12 21:39:35]
enemyspy - Posts: 306
So yes, that suggestion makes sense. The reasoning behind why I thought this might work is that I read somewhere, either on the forums or in the manual, that every time a new row is created as the charts update, Sierra Chart has to recalculate all of the formulas in every row of the spreadsheet. That is why I wondered: instead of having, say, 100,000 rows that all have formulas in them calculating everything all the time, what if most of the previously calculated values, from say 3 weeks ago up until 20-60 minutes ago, were already pre-calculated on another sheet that only updates once an hour? Then, in order for the trading sheets to make new calculations based on old calculations, they would only have to take into account the first few rows of the longer-term spreadsheet. That is why I was wondering if it would lighten the processing load for the rest of the time, when the historical values are not updating.
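Just to illustrate the difference I am picturing (again, only my own rough C++ sketch of the idea, not how Sierra Chart actually handles spreadsheets internally):

#include <cstddef>
#include <vector>

// Stand-in formula: each row depends on the previous row's value.
double Formula(std::size_t row, const std::vector<double>& values)
{
    return (row > 0 ? values[row - 1] : 0.0) + static_cast<double>(row);
}

// What I understand happens now: every update walks all N rows, so the
// work grows with the size of the sheet.
void RecalculateAll(std::vector<double>& values)
{
    for (std::size_t row = 0; row < values.size(); ++row)
        values[row] = Formula(row, values);
}

// What I am hoping for: only the last few rows are recomputed and
// everything older is treated as frozen history, so the work per
// update stays roughly constant.
void RecalculateWindow(std::vector<double>& values, std::size_t window)
{
    std::size_t start = values.size() > window ? values.size() - window : 0;
    for (std::size_t row = start; row < values.size(); ++row)
        values[row] = Formula(row, values);
}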
[2013-12-12 22:08:31]
enemyspy - Posts: 306
So I think the question I am trying to understand, which I have been having trouble articulating properly, is the following: is it the constant calculations that have to be made as rows update that hog the most resources? Or would the simple storage and buffering of unchanging values hog the resources just as much?
Date Time Of Last Edit: 2013-12-12 22:09:52
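One crude way I suppose I could test this for myself would be something like the following, just a made-up C++ timing comparison that has nothing to do with Sierra Chart's internals: time how long it takes to recompute a formula over a large number of rows versus simply reading the same number of already-stored values back.

#include <chrono>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    const std::size_t rows = 1000000;
    std::vector<double> stored(rows);

    // Cost of recalculating a formula for every row.
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < rows; ++i)
        stored[i] = std::sin(i * 0.001) * std::log(i + 2.0);  // placeholder formula
    auto t1 = std::chrono::steady_clock::now();

    // Cost of simply reading the same values back out of memory.
    double sum = 0.0;
    for (std::size_t i = 0; i < rows; ++i)
        sum += stored[i];
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::cout << "recalculate: " << ms(t1 - t0).count() << " ms\n";
    std::cout << "read stored: " << ms(t2 - t1).count() << " ms (sum=" << sum << ")\n";
    return 0;
}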
[2013-12-13 22:37:06]
vegasfoster - Posts: 444
I see what you mean now. I don't know the answer really; I am assuming that both require resources. Regardless, I think it is a good idea, but it would require a major reworking of spreadsheets as we've known them since the dawn of time. :-)
[2013-12-13 23:31:43]
enemyspy - Posts: 306
I see. Well, thank you for your help... I think maybe it is time to start learning about C++.