Support Board
Date/Time: Sat, 23 Nov 2024 08:27:14 +0000
[Locked] - Data Feed Stopping/Lagging Issue (non-Denali specific)
View Count: 2520
[2020-09-12 02:22:34] |
Sierra Chart Engineering - Posts: 104368 |
We have had several reports over the last few days that the data feed stops or lags when the market is very busy. Many of these come from Denali data feed users. We looked into the issue and did not identify any problem at the server/application level. After multiple tests and observations on different days at the 9:30 AM US Eastern Time open, watching the US stock index futures, the issue very much looks like a network connectivity issue related to TCP flow control. That still appears to be the cause, but we now have a probable reason why it is happening. Our thinking was correct, but the trigger for the issue is on the Sierra Chart client side and is not specific to the Denali data feed. It affects newer versions of Sierra Chart (Versions 2162 through 2164) and can affect all data feeds, including CQG, since they all use the same underlying network socket connectivity.

The issue relates to the multiple overlapped receive buffer architecture for processing received network data. We are restoring the old behavior of relying upon the underlying network socket buffer from the operating system and using a single overlapped receive buffer. Internally on our servers we will continue to use multiple overlapped receive buffers, but for users we will use the older method because the newer method has compatibility issues on Linux. We will be releasing Version 2165 later today.

Sierra Chart Support - Engineering Level
Your definitive source for support. Other responses are from users. Try to keep your questions brief and to the point. Be aware of support policy: https://www.sierrachart.com/index.php?l=PostingInformation.php#GeneralInformation
For the most reliable, advanced, and zero cost futures order routing, *change* to the Teton service: Sierra Chart Teton Futures Order Routing
Date Time Of Last Edit: 2020-09-12 22:27:46
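Conceptually, the restored single-buffer approach lets the operating system's socket buffer absorb bursts while the application drains it with one reusable receive buffer. A minimal Python sketch of this pattern (the function name and buffer size are illustrative, not Sierra Chart's actual implementation):

```python
import socket

def drain_with_single_buffer(sock: socket.socket, buffer_size: int = 65536) -> bytes:
    """Read everything queued in the OS socket buffer using one
    reusable application-level receive buffer."""
    recv_buffer = bytearray(buffer_size)  # single receive buffer, reused for every read
    collected = bytearray()
    while True:
        n = sock.recv_into(recv_buffer)  # copy from the OS buffer into our buffer
        if n == 0:
            break  # peer closed the connection
        collected += recv_buffer[:n]
    return bytes(collected)

# Demonstration with a local socket pair standing in for the data feed link.
a, b = socket.socketpair()
a.sendall(b"tick-data " * 100)  # burst of data queued in the OS buffer
a.close()
data = drain_with_single_buffer(b)
b.close()
```

The OS buffer does the burst absorption; the application's only job is to empty it promptly so the buffer never fills.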
|
[2020-09-12 04:48:58] |
User94606 - Posts: 327 |
FYI - I was having this issue with version 2159. I upgraded to version 2164 on Sep 11, 2020.
|
[2020-09-12 11:44:24] |
User495158 - Posts: 106 |
I too am having daily lag issues with CQG/CME since the end of August/beginning of September, and not only at the open but throughout the day. The lags are short pauses of a second or two. I never had these issues before; some occasional disconnects would happen (once or twice a week), but not lag. This did not happen even during the ultra high volatility days of February/March 2020. So I doubt the problem is a connectivity issue on the client side, since so many clients all over the world are reporting it, and I have one of the fastest retail connections available, a 200 Mbit download/upload fibre connection.

One other thing I wanted to mention: Sierra Chart is a great platform, and that's why we use it. For scalpers especially, any kind of lag is a matter of life and death. A lag of a second or two can mean the difference between a win and a loss, and a major loss if you can't close your position. So it is a big issue. I hope you will find a solution for this.
Date Time Of Last Edit: 2020-09-12 12:16:54
|
[2020-09-12 22:24:37] |
Sierra Chart Engineering - Posts: 104368 |
"Fyi - I was having this issue with version 2159. I have upgraded to version 2164 on Sep 11, 2020"

You actually need prerelease 2165. We are not sure whether 2159 had the newer network socket receive method implemented; it might have. But we want you to test 2165. Update with Help >> Download Prerelease. This is important. This also applies to the post at #3.
Date Time Of Last Edit: 2020-09-12 22:25:47
|
[2020-09-12 22:32:23] |
Sierra Chart Engineering - Posts: 104368 |
The multiple overlapped receive buffers for network connections are something we have been working on over a period of months. The design has been extensively tested for correctness and performance, and it uses smart buffer size management to ensure the buffers grow when needed without starting out too large. We are also implementing a separate thread for setting the buffer sizes in our real-time server process, for maximum performance. We ran numerous tests of the new code. Downloading historical data is one of the best tests because it involves so much data transfer over a short time, and performance there was identical to the old method. Very precise tests in our server environment show performance gains in the microsecond/millisecond range, which is important for high server performance. We are still using this functionality on the server side.

We also want to explain that Sierra Chart uses network connectivity code developed 100% by us, which uses overlapped I/O and I/O completion ports with dedicated threads for this processing. This provides extremely fast network data handling and is not something you will find in the typical program. It is a very high performance architecture, and all aspects of it are highly optimized.

Now, why there is a problem we are not totally sure. We can only speculate: perhaps there are short moments where no buffers are given to the OS (which should not happen; we did fix one bug related to this, but we think it is unlikely to have been encountered), and initially at the open, when there is a large burst of data, the buffer sizes, and therefore the TCP window size, may have been too small, so acknowledgments back to the server were not arriving in a timely manner. We have seen this in user logs. So when we point to network connectivity, we are correct in what we say.
That is abundantly clear, especially based on the numerous tests we have done over the last few days. But we also consistently say there are things we can do to help. Sierra Chart is not always at fault when there are problems, and problems are not always what they seem. There is often the perception that the server is overloaded or broken when the data feed slows down or stops. What is actually happening is a standard process of the Transmission Control Protocol: the operating system on the server slows or stops the flow of data to a particular connection, due to the advertised window size and/or a lack of acknowledgments. TCP/IP is a two-way flow of information.

From the Wikipedia article on Transmission Control Protocol: "Flow control: limits the rate a sender transfers data to guarantee reliable delivery. The receiver continually hints the sender on how much data can be received (controlled by the sliding window). When the receiving host's buffer fills, the next acknowledgment contains a 0 in the window size, to stop transfer and allow the data in the buffer to be processed."

Take the most recent issue involving a stack corruption: it turned out not to be a Sierra Chart issue; all evidence points to a Windows bug, which we have worked around. If it really were a problem on our side, users would continue to see it and we would be able to reproduce it ourselves.

Anyway, version 2165 and higher has a 2 MB buffer at the OS level for receiving data on network connections, so a buffer is always present; if our reasoning as to the cause of the problem is correct, there should no longer be a problem. There is also a dedicated thread reading from that buffer into another 2 MB buffer, and this secondary buffer is quickly transferred to another buffer and given back to the OS.
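The flow-control behavior described above can be observed directly: when a receiver stops draining its socket buffer, the advertised TCP window closes and the sender's writes eventually stall (raising "would block" in non-blocking mode). A small Python illustration over a loopback TCP connection; the buffer sizes and loop count are illustrative:

```python
import socket

# Loopback TCP connection with deliberately small kernel buffers.
listener = socket.create_server(("127.0.0.1", 0))
sender = socket.create_connection(listener.getsockname())
receiver, _ = listener.accept()
receiver.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
sender.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
sender.setblocking(False)

# The receiver never calls recv(), so its buffer fills, the TCP window
# closes, and the sender's non-blocking send eventually fails.
sender_stalled = False
for _ in range(50000):
    try:
        sender.send(b"x" * 1024)
    except BlockingIOError:
        sender_stalled = True  # flow control kicked in
        break

sender.close(); receiver.close(); listener.close()
```

This is exactly the mechanism by which a slow-draining client makes the server "stop" sending: nothing is wrong at the server, the protocol is simply waiting for the receiver to free buffer space.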
So the likelihood of running out of buffer space is next to zero, unless Sierra Chart is in a frozen state, and that is not going to be a problem for DTC connections because those run on a separate thread. On the server side, we are still using a more optimized design than what we described above, because the above design is still not optimal, but it is perfectly acceptable for user connections. We want to use the more advanced method, but we have feedback that it does not work correctly on Linux.
Date Time Of Last Edit: 2020-09-12 22:47:09
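The dedicated-reader design described above (one thread whose only job is to empty the OS receive buffer into an application-level buffer that another thread then processes) can be sketched as follows. The queue-based hand-off and names are illustrative, not Sierra Chart's actual code; only the 2 MB secondary buffer size comes from the post:

```python
import queue
import socket
import threading

def reader_thread(sock: socket.socket, chunks: queue.Queue) -> None:
    """Dedicated thread: drain the OS receive buffer as fast as possible
    so the TCP window stays open, handing data off for later processing."""
    secondary = bytearray(2 * 1024 * 1024)  # secondary application buffer (2 MB)
    while True:
        n = sock.recv_into(secondary)
        if n == 0:
            chunks.put(None)  # peer closed: signal the consumer
            return
        # Copy out and immediately reuse the secondary buffer for the next read.
        chunks.put(bytes(secondary[:n]))

# Demonstration over a local socket pair standing in for the feed connection.
feed_side, client_side = socket.socketpair()
chunks: queue.Queue = queue.Queue()
t = threading.Thread(target=reader_thread, args=(client_side, chunks), daemon=True)
t.start()

feed_side.sendall(b"market-data-message " * 50)
feed_side.close()  # causes recv_into() to return 0 in the reader

# Consumer side: reassemble everything the reader handed off.
received = bytearray()
while (chunk := chunks.get()) is not None:
    received += chunk
t.join()
client_side.close()
```

Because the reader does nothing but copy and hand off, a busy or frozen consumer cannot back-pressure the socket, which is the point of the design.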
|
[2020-09-13 02:16:53] |
Sierra Chart Engineering - Posts: 104368 |
Now, one final thing we want to say: we are disappointed when problems like this occur and when users have issues. An issue like this is impossible to predict, because there are so many variables in network connectivity that we do not control and cannot fully understand ahead of time. However, the end result is that there are improvements in various places, because we have done a full, comprehensive review of the problem. There are improvements on the server side as well; they probably do not make much difference, but every little bit helps. We do want to make it clear that this was not a server-side issue and not a Denali data feed issue. It relates to the TCP protocol, and to client-side buffer sizes and buffer availability.
|
[2020-09-13 03:33:25] |
Sierra Chart Engineering - Posts: 104368 |
The next step is for us to implement compression. Most likely we will do this at the network communication level, on each block of data to be sent out. We are looking at this method: http://fastcompression.blogspot.com/p/lz4.html This will be optional for users, assuming we implement it. Users will also be able to choose the timing for buffering data before it is sent; for example, you could choose to send data out in compressed 500 ms increments.
|
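The buffer-then-compress idea above (accumulate outgoing messages for a fixed interval, then compress the batch as one block) can be sketched as below. The post names LZ4; since LZ4 is not in the Python standard library, this sketch substitutes zlib purely to show the batching pattern. The class name, the 500 ms default, and the newline framing are all illustrative assumptions:

```python
import time
import zlib

class BatchingCompressor:
    """Accumulate outgoing messages and emit them as one compressed
    block once the configured interval has elapsed."""

    def __init__(self, interval_seconds=0.5):
        self.interval = interval_seconds
        self.pending = []
        self.batch_start = time.monotonic()

    def add(self, message):
        """Queue a message; return a compressed block when the interval is up."""
        self.pending.append(message)
        if time.monotonic() - self.batch_start < self.interval:
            return None  # keep buffering
        return self.flush()

    def flush(self):
        """Compress and clear whatever is pending, starting a new batch."""
        block = b"\n".join(self.pending)  # illustrative framing: newline-delimited
        self.pending.clear()
        self.batch_start = time.monotonic()
        return zlib.compress(block)

# Usage: queue a few ticks, then force the batch out and verify the round trip.
batcher = BatchingCompressor(interval_seconds=0.5)
for tick in (b"ES bid=3400.25", b"ES ask=3400.50", b"ES trade=3400.25 x2"):
    batcher.add(tick)
compressed = batcher.flush()
restored = zlib.decompress(compressed)
```

Batching helps compression directly: a larger block gives the compressor more repeated structure to exploit than compressing each small tick on its own.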
[2020-09-19 14:25:38] |
Sierra_Chart Engineering - Posts: 17145 |
Regarding compressing real-time data from Sierra Chart data feeds: we think we could have this working and available this coming week, assuming there are no technical complications. We will at least begin testing this week, probably as soon as this weekend. We do not foresee any difficulties with compressing real-time streaming data using LZ4, because it is a high-performance algorithm. Compression will be done on dedicated threads, and we do not expect it to introduce any perceivable latency. Our testing shows that we can compress about 1000 bytes in a microsecond (effectively 1 GB per second throughput), so there is no practical latency introduced by compression.

Regarding the issue some users see with stopping or lagging data: based on feedback, it is resolved for most users. We have been monitoring our data processing and real-time server programs and processes to ensure there are no problems with them, and there are no issues with delays or overload; they work efficiently and are stable. We also observe no problem at the operating system level on the servers, and no problem with the local network switching equipment, that could cause lagging data. Our servers are very high-performance and operate nearly all of the time with 90+% excess capacity.

We have always maintained that the issue is network connectivity related for affected users, and that is specifically what it is. We are not faulting specific users' Internet connections, although in some cases that is exactly the reason for the problem. Rather, if you have this problem, it is network connectivity related. The problem can be anywhere along the path of communication between your computer and the server, and this path is two-way: you have to consider the return path for packet acknowledgments.
At the same time, we also say there are things we can do to help: increasing network capacity on the server side, increasing receive-side buffer sizes and growing them dynamically as necessary, reducing the number of IP packets (using larger packets), and using compression.

We have contacted our infrastructure provider for the Denali data feed about the issue some users see, and they have assured us, based on their network monitoring, that we are not exceeding any thresholds they can see which could lead to this problem. Nevertheless, we have increased our assured bandwidth in the Aurora data center by five times (very costly, by the way), and we have increased the number of users on that server to utilize that bandwidth; we will also use that server for any relaying. Our infrastructure provider, through their own monitoring of their network infrastructure, says there are no issues with packet loss or buffering of data. But they make no assurances about what happens over the public Internet, which they refer to as the "public toilet". We are also going to set up a relay on the West Coast of the US, and once compression is in place, a relay in Germany; we already have one in London. With data compression, these relays should work quite well.

Finally, we want to reiterate that data compression is going to reduce latency and be a major improvement for Sierra Chart provided data feeds. The data to be transmitted will be reduced by about 70%. This reduction is going to be a definite solution to lagging or stopping data for anyone affected. We are going to try different modes of compression and may be able to achieve higher compression, maybe 90%. If we can compress the data by 90%, and do it in less than a few microseconds, that is a dramatic improvement!
This is going to put the Sierra Chart data feeds, including the Denali Exchange Data Feed and the Sierra Chart Exchange Data Feed, in a class unmatched by anyone, and will be a definitive resolution to any latency issues. We will then be able to remove the "Low Bandwidth" option in Sierra Chart, since there will no longer be a need for it. We should then also, theoretically, be able to transmit market-by-order data without any practical issue, but we do not expect that until later.
Date Time Of Last Edit: 2020-09-19 15:40:42
|
[2020-09-19 15:17:15] |
Sierra_Chart Engineering - Posts: 17145 |
We should have the compression actually working this weekend. We were estimating 2 to 3 weeks before, but we think we can have this out, at least for testing by users, on Monday.
|
[2020-09-20 02:18:16] |
Sierra Chart Engineering - Posts: 104368 |
With further testing, we find we will be able to reduce the data size by 50%, not more. We will look at improving this later, but not right now. We are doubtful about 90% compression for real-time data. With small packets, the benefit is less: for example, compressing 20 bytes yields about 20 bytes or a little more, while compressing 560 bytes yields about 350 bytes. The larger the amount of data to compress, the better the ratio.
|
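The size-dependence described above is easy to demonstrate. Again using zlib as a standard-library stand-in for LZ4 (the exact byte counts differ by algorithm, but the shape of the effect is the same): a tiny payload gains nothing, because per-block overhead and the lack of repeated substrings dominate, while a larger payload with recurring structure compresses well. The payload contents are invented for illustration:

```python
import zlib

# A ~20-byte payload: too small for back-references to pay off.
tiny = b"ES 3400.25 x 2 bid "
tiny_compressed = zlib.compress(tiny)

# A larger payload with the repetitive structure typical of market data.
large = b"".join(b"ES trade price=3400.25 size=%d\n" % (i % 10) for i in range(100))
large_compressed = zlib.compress(large)

tiny_ratio = len(tiny_compressed) / len(tiny)     # >= 1.0: no gain on tiny input
large_ratio = len(large_compressed) / len(large)  # well under 1.0
```

This is why batching small ticks into larger blocks before compressing (as discussed earlier in the thread) matters as much as the choice of algorithm.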
[2020-09-20 21:17:10] |
Sierra Chart Engineering - Posts: 104368 |
And also remember all of the connectivity issues that Sierra Chart users on CQG had, and have. When we had more users on CQG, we would see postings on this board every day, or nearly every day, about lost connections, and there was a period when people were bringing up lagging data issues. We came to the conclusion that the ISPs CQG may be using, or the path of connectivity, simply had a problem related to packet loss. For example, ISPs can do filtering related to denial of service attacks; that could filter out some of the heartbeat messages coming from users, which causes the server to close the connection. And according to a recent message from CQG, our users have far fewer problems than other programs using CQG:

"I'll let you know if we come up with anything more useful on our end as we continue to look at increasing performance. FWIW, your affected population is *tiny* by comparison to some of the other vendors."

This is due to the high quality network communications core that Sierra Chart has. There is no other program in the class of Sierra Chart with a communication core of this performance. So when you have a connectivity problem and we say it is due to connectivity, this is a basic fact. This is why our infrastructure provider for our order routing service and the Denali data feed refers to the Internet as the "public toilet"; they have told us that repeatedly. We know these issues are related to the path of connectivity, but we always qualify this by saying there are things we can do to help. There may at times also be a connectivity issue on the server side, but based on our testing that is much less likely; our testing has shown no problems with server-side connectivity or delays. Nevertheless, we are continuing to make improvements there to make everything extremely efficient and optimal.
Compression is one of these things. We should have that out this Monday. This will be our initial release; we will make further improvements to it in the coming weeks. We do not expect any transmission delays with compression, because it is extremely fast and will run on multiple background threads.
Date Time Of Last Edit: 2020-09-20 21:20:12
|
[2020-09-21 03:59:46] |
Sierra Chart Engineering - Posts: 104368 |
Further testing shows that we are able to compress the data to about 36% of its original size, and we should be able to do even better by using a predefined dictionary. We are going to do an initial release this evening.
|
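A predefined (preset) dictionary helps exactly where real-time feeds hurt most: short messages whose vocabulary is known in advance, so the compressor can back-reference into the dictionary instead of needing prior data in the same block. This standard-library sketch uses zlib's `zdict` support to show the effect (LZ4 offers an analogous dictionary mode); the dictionary contents and message format here are invented for illustration:

```python
import zlib

# Hypothetical dictionary of byte strings that recur in feed messages.
FEED_DICT = b"symbol= bid= ask= trade= price= size= timestamp= ESZ20 NQZ20"

def compress_with_dict(message: bytes) -> bytes:
    """Compress one short message, back-referencing into the preset dictionary."""
    comp = zlib.compressobj(zdict=FEED_DICT)
    return comp.compress(message) + comp.flush()

def decompress_with_dict(blob: bytes) -> bytes:
    """Decompress; both sides must share the exact same dictionary."""
    decomp = zlib.decompressobj(zdict=FEED_DICT)
    return decomp.decompress(blob)

message = b"symbol=ESZ20 trade= price=3400.25 size=3 timestamp=1600650000"
with_dict = compress_with_dict(message)
without_dict = zlib.compress(message)  # same message, no dictionary
restored = decompress_with_dict(with_dict)
```

The trade-off is that the dictionary becomes part of the protocol: sender and receiver must agree on its exact bytes, and changing it is a versioned change.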
[2020-09-22 06:00:42] |
Sierra Chart Engineering - Posts: 104368 |
We are now releasing version 2172, which supports data feed compression. It has only been released for the Denali data feed and has been well tested. It will reduce bandwidth by 50% or more; if performance is good, we can work on higher compression ratios. The performance is good, and you can see what it is through the detailed heartbeat logging. More information: Prices / Data Falling Behind: 4.16 - Monitoring Sierra Chart Data Feed Performance from Perspective of Server

Here is an example:
SC Data - All Services | Heartbeat from server | ServerReceivedClientHeartbeatSecondsAgo=9, NumberOfOutstandingSendBuffers=1, TransmissionDelayInMilliseconds=20, ServerSendBufferSizeInBytes=192, ActualMessageDelay=0.2 seconds, DataCompRatio=2.52, UncompressedBytes=322032, CompressionTime=0.004706, NumCompressions=538 | 2020-09-22 01:58:39.491

In testing, we see compression of over 6 MB, as 2000 separate compressions, take a total of 9 ms. The compression times above are higher; we will look into how this can be improved, but it is still acceptable and will not introduce any lag, because the time per compression is approximately 8 µs.
Date Time Of Last Edit: 2020-09-22 09:36:04
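The heartbeat fields above can be cross-checked with a little arithmetic: CompressionTime divided by NumCompressions gives the mean time per compression (about 8.7 µs here, matching the "approximately 8 µs" figure), and UncompressedBytes divided by DataCompRatio approximates the bytes actually sent. A small parser for that log line; the field names come from the example above, while the parsing approach itself is just an illustration:

```python
def parse_heartbeat(line: str) -> dict:
    """Extract the numeric key=value fields from a heartbeat log line."""
    fields = {}
    for token in line.replace(",", " ").split():
        if "=" in token:
            key, _, value = token.partition("=")
            try:
                fields[key] = float(value)
            except ValueError:
                pass  # skip non-numeric values
    return fields

line = ("ServerReceivedClientHeartbeatSecondsAgo=9, NumberOfOutstandingSendBuffers=1, "
        "TransmissionDelayInMilliseconds=20, ServerSendBufferSizeInBytes=192, "
        "ActualMessageDelay=0.2 seconds, DataCompRatio=2.52, UncompressedBytes=322032, "
        "CompressionTime=0.004706, NumCompressions=538")

hb = parse_heartbeat(line)
seconds_per_compression = hb["CompressionTime"] / hb["NumCompressions"]  # ~8.7 microseconds
compressed_bytes = hb["UncompressedBytes"] / hb["DataCompRatio"]         # ~127,790 bytes sent
```

So in this sample, roughly 322 KB of uncompressed data went out as about 128 KB on the wire, at well under 10 µs of compression work per block.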
|
[2020-09-23 20:44:22] |
Sierra Chart Engineering - Posts: 104368 |
Information about the new data feed compression feature: Denali Data Feed Compression Available. It is currently available with the Denali data feed; we will release further performance improvements with it this evening and roll it out across all of our data feeds by next week. It is an optional feature.

We are still aware that at the 9:30 AM (US Eastern) market open, some users notice the data feed stop for a short time. We are able to reproduce this under certain conditions and are looking into the cause. We should have more information about it tomorrow.
Date Time Of Last Edit: 2020-09-23 20:53:44
|