Support Board



Post From: Data Feed Stopping/Lagging Issue (non-Denali specific)

[2020-09-12 22:32:23]
Sierra Chart Engineering - Posts: 104368
The multiple overlapped receive buffers for network connections are something we have been working on over a period of months. They have been extensively tested for performance, and they use a very smart design with buffer size management to ensure the buffers grow when needed while not being made too large at the beginning. We are also implementing a separate thread for setting the buffer sizes in our real-time server process, for maximum performance.
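As a rough illustration of that kind of buffer size management (this is not Sierra Chart's actual code, and the sizes are just example values), a receive buffer can start small and only be grown, up to a cap, when a receive actually fills it:

// Illustrative sketch only (not the Sierra Chart implementation): a receive
// buffer that starts small and doubles, up to a cap, only when it fills.
#include <cstddef>
#include <vector>

class GrowableReceiveBuffer {
public:
    explicit GrowableReceiveBuffer(std::size_t initialSize = 64 * 1024,
                                   std::size_t maximumSize = 2 * 1024 * 1024)
        : m_buffer(initialSize), m_maximumSize(maximumSize) {}

    // Called after the OS reports how many bytes were written into the buffer.
    // If the buffer was completely filled, assume more data was available and
    // grow it for the next receive, without ever exceeding the cap.
    void OnReceiveCompleted(std::size_t bytesReceived) {
        if (bytesReceived == m_buffer.size() && m_buffer.size() < m_maximumSize) {
            std::size_t newSize = m_buffer.size() * 2;
            if (newSize > m_maximumSize)
                newSize = m_maximumSize;
            m_buffer.resize(newSize);
        }
    }

    char* Data() { return m_buffer.data(); }
    std::size_t Size() const { return m_buffer.size(); }

private:
    std::vector<char> m_buffer;
    std::size_t m_maximumSize;
};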

We ran numerous tests of the new code. Downloading historical data is one of the best tests because it involves so much data transfer over a short time, and the performance was identical to the old method.

Very precise tests in our server environment show performance gains. These gains are in the microsecond/millisecond range, but this is very important for high server performance. We are still using this functionality on the server side.

We also want to explain that Sierra Chart uses network connectivity code developed 100% by us, which uses overlapped I/O and I/O completion ports with dedicated threads for this processing. This provides extremely fast network data handling. This is not something you are going to find in the typical program. This is a very high performance architecture, and all aspects of it are highly optimized.
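For readers unfamiliar with this style of I/O, here is a minimal sketch of the general pattern of overlapped receives with an I/O completion port and a dedicated worker thread. This is an illustration only, not Sierra Chart's code; the host, port, and buffer size are placeholders:

// Rough sketch of an I/O completion port receive loop (illustration only).
// A socket is associated with a completion port, an overlapped WSARecv keeps
// a buffer posted to the OS, and a dedicated worker thread processes
// completions and immediately re-posts the buffer.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <windows.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

struct IoContext {
    WSAOVERLAPPED Overlapped;  // First member, so the LPOVERLAPPED from the
                               // completion maps directly back to this struct.
    WSABUF        WsaBuf;
    char          Buffer[64 * 1024];
};

static void PostReceive(SOCKET Socket, IoContext& Context) {
    ZeroMemory(&Context.Overlapped, sizeof(Context.Overlapped));
    Context.WsaBuf.buf = Context.Buffer;
    Context.WsaBuf.len = sizeof(Context.Buffer);
    DWORD Flags = 0;
    // Hands the buffer to the OS; the completion is delivered to the IOCP.
    // Error handling (WSA_IO_PENDING vs. real failures) omitted in this sketch.
    WSARecv(Socket, &Context.WsaBuf, 1, NULL, &Flags, &Context.Overlapped, NULL);
}

static DWORD WINAPI WorkerThread(LPVOID Parameter) {
    HANDLE CompletionPort = (HANDLE)Parameter;
    for (;;) {
        DWORD BytesTransferred = 0;
        ULONG_PTR CompletionKey = 0;   // Carries the SOCKET in this sketch.
        LPOVERLAPPED Overlapped = NULL;
        if (!GetQueuedCompletionStatus(CompletionPort, &BytesTransferred,
                                       &CompletionKey, &Overlapped, INFINITE))
            break;  // Port or I/O error; stop for this sketch.
        if (BytesTransferred == 0)
            break;  // Connection closed.
        IoContext* Context = (IoContext*)Overlapped;
        // ... process Context->Buffer[0 .. BytesTransferred) here ...
        PostReceive((SOCKET)CompletionKey, *Context);  // Re-post immediately.
    }
    return 0;
}

int main() {
    WSADATA WsaData;
    WSAStartup(MAKEWORD(2, 2), &WsaData);

    SOCKET Socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in Address = {};
    Address.sin_family = AF_INET;
    Address.sin_port = htons(12345);                     // Placeholder port.
    inet_pton(AF_INET, "127.0.0.1", &Address.sin_addr);  // Placeholder host.
    if (connect(Socket, (sockaddr*)&Address, sizeof(Address)) != 0) {
        printf("connect failed\n");
        return 1;
    }

    HANDLE CompletionPort = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
    CreateIoCompletionPort((HANDLE)Socket, CompletionPort, (ULONG_PTR)Socket, 0);

    IoContext Context = {};
    PostReceive(Socket, Context);

    HANDLE Thread = CreateThread(NULL, 0, WorkerThread, CompletionPort, 0, NULL);
    WaitForSingleObject(Thread, INFINITE);

    closesocket(Socket);
    WSACleanup();
    return 0;
}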

----
Now, why there is a problem we are not totally sure. We can only speculate. Maybe there are short moments where no buffers are given to the OS; that should not be the case, and we did fix one bug related to this, although we think it is unlikely to have been encountered. Or, initially at the open when there is a large burst of data, the buffer sizes are simply too small, the window size may have been too small, and the acknowledgments back to the server were not getting there in a timely way, which we have seen from your own logs.

So when we fault network connectivity, we are correct in what we say. That is abundantly clear, especially based on the numerous tests we have done over the last few days. But we also consistently say, at the same time, that there are things we can do to help.

Sierra Chart is not always at fault when there are problems, and problems are not always as they seem. There is often the perception that something is wrong on the server, that it is overloaded, or that the server is messed up, when the data feed slows down or stops.

When in fact what is actually happening is a standard process of the Transmission Control Protocol: the operating system on the server begins to slow, or stops, the flow of data to a particular connection, due to the advertised window size and/or the lack of acknowledgments. TCP/IP is a two-way flow of information.

From the Wikipedia article on Transmission Control Protocol:
Flow control: limits the rate a sender transfers data to guarantee reliable delivery. The receiver continually hints the sender on how much data can be received (controlled by the sliding window). When the receiving host's buffer fills, the next acknowledgment contains a 0 in the window size, to stop transfer and allow the data in the buffer to be processed.
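To make that mechanism concrete, here is a small illustrative program (not Sierra Chart code; the loopback port is a placeholder) where a receiver deliberately stops reading. Once its kernel receive buffer fills, its acknowledgments advertise a zero window and the sender's blocking send() stalls, which is exactly the "slowing or stopping" described above:

// Illustration of TCP flow control on a loopback connection. The receiver
// thread accepts a connection and then deliberately stops reading. Once its
// kernel receive buffer (and the sender's send buffer) fill, the receiver
// advertises a zero window and the blocking send() stalls.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <windows.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

static DWORD WINAPI ReceiverThread(LPVOID)
{
    SOCKET Listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in Address = {};
    Address.sin_family = AF_INET;
    Address.sin_port = htons(23456);                     // Placeholder port.
    inet_pton(AF_INET, "127.0.0.1", &Address.sin_addr);
    bind(Listener, (sockaddr*)&Address, sizeof(Address));
    listen(Listener, 1);
    SOCKET Connection = accept(Listener, NULL, NULL);
    // Deliberately never call recv(). Data queues in the kernel receive
    // buffer; once it is full, the next acknowledgment advertises a zero
    // window and the peer's OS stops transmitting (visible in netstat or
    // Wireshark as "TCP ZeroWindow").
    Sleep(10000);
    closesocket(Connection);
    closesocket(Listener);
    return 0;
}

int main()
{
    WSADATA WsaData;
    WSAStartup(MAKEWORD(2, 2), &WsaData);
    HANDLE Thread = CreateThread(NULL, 0, ReceiverThread, NULL, 0, NULL);
    Sleep(500);  // Crude wait for the listener to start (illustration only).

    SOCKET Sender = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in Address = {};
    Address.sin_family = AF_INET;
    Address.sin_port = htons(23456);
    inet_pton(AF_INET, "127.0.0.1", &Address.sin_addr);
    connect(Sender, (sockaddr*)&Address, sizeof(Address));

    char Block[64 * 1024] = {};
    long long TotalSent = 0;
    for (int i = 0; i < 1000; ++i) {
        // Blocking send: once the receiver's window is zero and the local
        // send buffer is full, this call stalls until the receiver reads again.
        int Sent = send(Sender, Block, sizeof(Block), 0);
        if (Sent <= 0)
            break;
        TotalSent += Sent;
        printf("Sent %lld bytes so far\n", TotalSent);
    }
    closesocket(Sender);
    WaitForSingleObject(Thread, INFINITE);
    WSACleanup();
    return 0;
}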


Take this most recent issue involving a stack corruption: it turns out not to be a Sierra Chart issue; all evidence points to a Windows bug, which we have worked around. If it really were a problem on our side, users would continue to see it and we would be able to reproduce it ourselves.

Anyway, version 2165 and higher has a 2 MB buffer at the OS level for receiving data on network connections. So the buffer is always present, and if our reasoning as to the cause of the problem is true, there should no longer be a problem. There is also a dedicated thread reading from that buffer into another 2 MB buffer, and this secondary 2 MB buffer is quickly transferred to another buffer and given back to the OS. So the likelihood of running out of buffer space is next to zero, unless Sierra Chart is in a frozen state, but that is not going to be a problem for DTC connections because those are on a separate thread.
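A minimal sketch of that arrangement, for illustration only and not the actual Sierra Chart code, assuming an already-connected SOCKET and that WSAStartup has been done elsewhere: the OS-level receive buffer is enlarged with SO_RCVBUF, and a dedicated thread drains it into a secondary buffer which is handed off so the thread can return to the socket immediately:

// Illustration only: a 2 MB OS-level receive buffer plus a dedicated reader
// thread that drains it into an application buffer, then hands that buffer
// off and immediately returns to reading, so the OS buffer is rarely full.
#include <winsock2.h>
#include <windows.h>
#include <mutex>
#include <vector>
#pragma comment(lib, "ws2_32.lib")

static const int RECEIVE_BUFFER_SIZE = 2 * 1024 * 1024;  // 2 MB

std::mutex        g_QueueMutex;
std::vector<char> g_PendingData;  // Data handed off for later processing.

// Assumes Socket is already connected. Runs on its own dedicated thread.
void ReceiveLoop(SOCKET Socket)
{
    // Enlarge the OS-level receive buffer so bursts (for example at the
    // market open) are less likely to close the TCP window. In practice this
    // would typically be set before the connection is established so window
    // scaling can take it into account.
    int BufferSize = RECEIVE_BUFFER_SIZE;
    setsockopt(Socket, SOL_SOCKET, SO_RCVBUF,
               (const char*)&BufferSize, sizeof(BufferSize));

    std::vector<char> ReadBuffer(RECEIVE_BUFFER_SIZE);  // Secondary 2 MB buffer.
    for (;;) {
        int BytesRead = recv(Socket, ReadBuffer.data(), (int)ReadBuffer.size(), 0);
        if (BytesRead <= 0)
            break;  // Connection closed or error.

        // Quickly transfer the data to another buffer so this thread can get
        // right back to recv() and keep the OS buffer drained.
        {
            std::lock_guard<std::mutex> Lock(g_QueueMutex);
            g_PendingData.insert(g_PendingData.end(),
                                 ReadBuffer.data(), ReadBuffer.data() + BytesRead);
        }
        // A separate processing thread (not shown) consumes g_PendingData.
    }
}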

Now, on our server side we are still using a more optimized design than what we described above, because what we described above is still not the most optimal. But it is perfectly acceptable for user connections. We want to use the more advanced method, but we have feedback that it does not work right on Linux.
Sierra Chart Support - Engineering Level

Your definitive source for support. Other responses are from users. Try to keep your questions brief and to the point. Be aware of the support policy:
https://www.sierrachart.com/index.php?l=PostingInformation.php#GeneralInformation

For the most reliable, advanced, and zero cost futures order routing, *change* to the Teton service:
Sierra Chart Teton Futures Order Routing
Date Time Of Last Edit: 2020-09-12 22:47:09