Support Board
Date/Time: Sun, 24 Nov 2024 02:29:21 +0000
Post From: US Equities/CFE Data Early Morning Weekday Carrier Maintenance Notification
[2024-11-16 19:01:08]
Sierra_Chart Engineering - Posts: 17154
We want to provide some background information, and the philosophy, behind all of our hardware, software, and connectivity for our US Equities, CFE, and index data feeds. Sierra Chart has a very strong philosophy of independence: doing everything low-level, doing things ourselves, doing things right, and using superior engineering. Even if we do not get things right in the very beginning, we work towards that.

First, Sierra Chart is a direct market data vendor with NYSE, NASDAQ, and CBOE. We report to these exchanges every month, and we are subject to audits. This puts us in a class unlike competitors, who are not direct vendors. We obtain, through a service provider, the raw data feeds from these exchanges. In the case of US equities, these are called the CTA and UTP data feeds. Raw data feeds are always an advantage, because we can process them properly; for example, among other things, we can deliver accurate Bid Volume and Ask Volume. So working with the raw data feeds is a technical advantage, and it puts us above and beyond competitors.

When working with raw data feeds, we have a choice of many service providers delivering data from the exchanges, both in the Chicago CH1/CH2 and the Secaucus, New Jersey NY4 data centers. We do not have to rely on any one provider. We can decide to switch providers at any point in time (subject to contractual obligations). Each of these providers is very good and provides full redundancy; there is no need to work with two of them, one is enough, but we have the flexibility to move between them. If we instead used some API that normalizes these data feeds, we would be locked into that particular provider, making a transition difficult. As it stands, if we want to work with a different provider, we simply get a cross connect to that provider in the data center over our existing connectivity circuits.
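To illustrate why access to the raw feeds matters for per-side volume, here is a minimal, hypothetical sketch (illustrative only, not Sierra Chart's actual implementation) of classifying each trade against the prevailing best bid/ask to accumulate Bid Volume and Ask Volume. Consolidated feeds that pre-aggregate data make this kind of classification unreliable.

```python
# Hypothetical sketch: classify each trade against the quote in effect
# at the time of the trade to accumulate Bid Volume and Ask Volume.

def classify_trades(trades, quotes):
    """trades: list of (timestamp, price, size);
    quotes: list of (timestamp, bid, ask); both sorted by timestamp.
    Returns (bid_volume, ask_volume)."""
    bid_volume = 0
    ask_volume = 0
    qi = -1  # index of the quote currently in effect
    for ts, price, size in trades:
        # Advance to the most recent quote at or before this trade.
        while qi + 1 < len(quotes) and quotes[qi + 1][0] <= ts:
            qi += 1
        if qi < 0:
            continue  # no quote seen yet; a real feed handler covers this
        _, bid, ask = quotes[qi]
        if price >= ask:
            ask_volume += size   # trade lifted the offer: buyer-initiated
        elif price <= bid:
            bid_volume += size   # trade hit the bid: seller-initiated
        else:
            # Inside the spread: assign by proximity (one common convention).
            if ask - price < price - bid:
                ask_volume += size
            else:
                bid_volume += size
    return bid_volume, ask_volume
```

The proximity rule for inside-the-spread trades is only one convention; the point is that the raw quote stream must be available alongside the trade stream for any such classification to work.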
It is not something that can be done in one day, but it is a transition that could probably be completed within 30 days.

We have all of our own networking equipment. We use two of these switches at our Michigan data center location: https://www.ipinfusion.com/wp-content/uploads/2020/09/EdgeCore-AS5912-54X.pdf (the particular switches we use do not appear to be exactly what is shown in this PDF, because ours use Intel Atom processors; otherwise, the specifications are the same). This switch has deep packet buffers, which we wanted in order to ensure there is no packet loss. The operating system we use on the switches is OcNOS from IP Infusion. These are expensive, good-quality network switches. We have looked at other network switches, and these are very good; for example, the operating temperature of the optical transceivers inserted into these switches is maintained at lower levels compared to others. Having our own network equipment gives us independence, and allows us to work with any fiber carrier we want and any service provider we decide to connect into.

The other component we need is the physical connectivity between our data center location and the Chicago or Secaucus, New Jersey data centers. If we were located in Chicago or Secaucus, we would just use cross connects within the data center, costing about $300 a month. The problem is, we do not want to operate our equipment out of those data centers. Instead, we chose Southfield, Michigan, which is strategically located between Illinois and New York. It is a good location. At our data center location we have 4 Internet connections (one of them is still pending from Lumen). We have a lot of capability there: plenty of power, very good power redundancy, and very good data center cooling, which would not result in the problem we saw at Equinix in Chicago at the beginning of this year.
Or was it December 2023? We are not sure. But that data center had a cooling failure, overheated, and got over 120°. It was a complete and very serious disaster.

So we have complete flexibility and freedom to make changes: we can choose the service provider providing the raw data feeds, we can choose the fiber carriers, and we can choose between 2 data centers (CH1/CH2, NY4) for connectivity. This is the philosophy behind what we are doing. We are not locking ourselves into a particular API, a particular data center, or a particular service provider. We have the flexibility to make changes and adapt.

We had this outage on Friday, and we are taking steps to ensure it will not happen again. This was already in progress in the weeks before! We were, and are, actively working on that, because when we saw all of these maintenance notices from Crown Castle we were worried about a problem. And it did happen! We complained very forcefully to Crown Castle ahead of time not to do this maintenance on a weekday. They ignored us, because there were partner carriers involved. We had also asked them ahead of time whether they control the entire length of fiber between Chicago and Southfield, Michigan. They told us they do. That was incorrect. If they controlled the entire length of fiber, they could choose to do the maintenance when they want, like on a weekend. So this is a learning experience for us: fiber carriers will not disclose relevant information, and we have to be able to figure out the actual truth independently.

And it is not just us being affected by this. There is a redundancy situation at the CME Aurora data center we became aware of about a year ago, where a fiber cut knocked out both Zayo and Cogent. It was not known that these two carriers were working on the same fiber cable. None of us knew that, including our service provider at the CME data center.
And Cogent is not going to offer the information that, in the background, they are in part relying on Zayo in that data center. Our service provider is taking steps to add redundancy, and we reminded them about that yesterday. The reason it is taking a long time is that the particular fiber carrier they are planning to use is taking a long time with the installation of the circuit. We have the same frustration with Lumen: they have taken a very long time with the installation of a new Internet connection at our Southfield, Michigan data center location, because they had to add additional capacity.

One of the main issues here is our inexperience working with fiber carriers providing Ethernet point-to-point circuits and wavelength circuits. We have since learned, now that we have seen all of these maintenance announcements from Crown Castle, and the FCC interstate fees for these types of circuits, that fiber carriers cannot be trusted. This is something we did not understand previously. We trusted what we were being told, but none of it is true. We now know better. For example, our service provider providing the data feeds, as we understand it, has multiple circuits (possibly as many as 4) crossing the Atlantic between the United States and Europe, because in the case of submarine cables, if there is a fault, the cable is going to be out until it can be repaired, which obviously involves sending out a ship with specialized repair equipment and people. This could take months.

So what we are going to do is add another circuit over to NY4 for these data feeds. This is going to be a really good setup. We have a data center location in Michigan with connectivity over to Chicago and connectivity over to Secaucus, New Jersey, on completely diverse fiber paths. Both of these connections will be always on and active at the same time, with the feeds distributed over both. If there is a problem with one, the remaining feeds can be switched automatically over to the other.
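The distribute-then-reassign idea can be sketched as follows. This is a minimal, hypothetical model (the class and names are illustrative, not Sierra Chart's actual design): feeds are spread round-robin over the circuits that are up, and when a circuit fails, its feeds are moved onto the surviving circuits.

```python
# Hypothetical sketch: feeds distributed over two always-active circuits,
# with automatic reassignment when one circuit fails.

class CircuitAssigner:
    def __init__(self, circuits):
        self.up = {c: True for c in circuits}
        self.assignment = {}  # feed name -> circuit name

    def assign(self, feeds):
        # Round-robin the feeds over every circuit that is currently up.
        live = [c for c, ok in self.up.items() if ok]
        for i, feed in enumerate(feeds):
            self.assignment[feed] = live[i % len(live)]
        return dict(self.assignment)

    def mark_down(self, circuit):
        # Circuit failed: move its feeds onto the remaining live circuits.
        self.up[circuit] = False
        live = [c for c, ok in self.up.items() if ok]
        moved = [f for f, c in self.assignment.items() if c == circuit]
        for i, feed in enumerate(moved):
            self.assignment[feed] = live[i % len(live)]
        return dict(self.assignment)
```

In a real deployment the failure detection would come from link state or feed heartbeats rather than an explicit call, but the reassignment logic is the same shape.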
We have multiple servers at our Michigan data center location, with 100 Gb connections into our Edge-Core network switches. Currently we have 4 servers processing these data feeds, each with two 100 Gb connections, one into each of the Edge-Core switches. One switch is primary, currently going to Chicago. The second switch is on standby, and we intend to connect it to NY4. So each of the servers will have the ability to access the data from both CH1/CH2 and NY4 simultaneously. All of this is the current setup; the one thing missing is this additional connection to NY4. We have an abundance of servers and networking gear there, with easy access for maintenance. All of this gives us the freedom and flexibility that we want.

Here is some reference material about wavelength circuits:
https://www.cogentco.com/en/products-and-services/transport/optical-wavelengths
https://www.cogentco.com/files/docs/network/performance/global_sla.pdf (Cogent claims 100% uptime)

When we queried Cogent in depth in the weeks before this incident, because we were planning on using a circuit from Cogent, they said they have had zero faults or maintenance activities on the route between Chicago and Southfield, Michigan. So that is good. The question is whether it is true; we are not sure how we could independently verify this. Although we do have experience using Cogent connections between Illinois and Michigan, and the uptime really does seem to be 100%.

The reason we chose a wavelength type of circuit is that we wanted to ensure there is no packet loss. As we understand it, these types of circuits are better suited to the multicast type of data we get from the exchanges. And the reason we went to 100 Gb rather than a lower speed is that with a wavelength the only available speeds are 10 Gb and 100 Gb, and 10 Gb was not enough for all of the data feeds we are pulling simultaneously.
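When the same feed data arrives over two paths, the standard way to make the redundancy seamless is sequence-number arbitration: take whichever copy of each packet arrives first and drop the duplicate, so the loss of one path causes no gap. Here is a minimal, illustrative sketch of that technique (not Sierra Chart's actual receiver code), including buffering of out-of-order packets:

```python
# Hypothetical sketch of A/B feed arbitration: accept the first-arriving
# copy of each sequence number from either line, drop duplicates, and
# release payloads strictly in sequence order.

class Arbitrator:
    def __init__(self):
        self.next_seq = 1
        self.buffer = {}  # out-of-order packets held until the gap fills

    def on_packet(self, seq, payload):
        """Feed in packets from either line. Returns the list of payloads
        released, in order, as a result of this packet."""
        if seq < self.next_seq or seq in self.buffer:
            return []  # already delivered or held: duplicate from the other line
        self.buffer[seq] = payload
        released = []
        while self.next_seq in self.buffer:
            released.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return released
```

A production receiver would add gap timeouts and retransmission requests, but this captures why two always-on circuits carrying the same multicast data can mask a single-path failure with no data loss.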
And we are being told that, at this time, with longer-haul connections the speed really has to be 100 Gb.

Sierra Chart Support - Engineering Level
Your definitive source for support. Other responses are from users. Try to keep your questions brief and to the point. Be aware of support policy:
https://www.sierrachart.com/index.php?l=PostingInformation.php#GeneralInformation
For the most reliable, advanced, and zero cost futures order routing, use the Teton service: Sierra Chart Teton Futures Order Routing

Date Time Of Last Edit: 2024-11-17 18:04:56