The race to zero, the push to shrink the time in which financial market transactions are executed, has a new challenger in Argon Design Ltd, a technology company with considerable experience in systems development despite its relatively new focus on trading.
Latency, the time it takes to execute electronic transactions across a network, keeps shrinking, and small differences in it can equate to millions of dollars over the course of an hour in markets such as FX, where trillions of dollars change hands each day ($5.3 trillion on average, per the latest BIS report).
On this basis, Argon Design, a company notable for high-performance software and hardware solutions, announced last week that it had developed its own high-performance trading system using a disparate mix of technologies, achieving a latency of 170 nanoseconds from the input of the triggering packet to the completion of the outgoing order packet.
Accordingly, latency-reducing solutions are continually produced for and utilized by high-technology industries in order to save on costs, offer more competitive pricing and discover trading and market-making opportunities. This is increasingly important for liquidity and internalization facilitation engines, and OTC markets that are becoming centralized, with standardized methods of execution and price discovery, are also in need of these advancements in latency reduction.
Often the success of game-changing innovations can come from something as simple as a revised approach from the ground up, such as rewriting an entire codebase and scrapping the old one, or stripping down the Linux kernel and programming from scratch in a higher-level, non-application-specific object language.
Furthermore, accessing hardware in faster and more efficient ways in order to execute complex operations must be planned strategically, with numerous end goals in mind simultaneously. This creates a technological as well as an intellectual challenge, one on which Argon Design Ltd claims to have made groundbreaking progress.
Argon Design engineers develop product designs that are embedded in technology products across a wide range of industries, including medical, biotechnology, military, aerospace, retail, computing and mobile, automotive, image processing and networking systems. Clearly, this is a very heterogeneous mix, demonstrating the broad applicability of Argon’s design methods.
Cutting Edge Hardware For The Task
The hardware for the initial approach included an application switch from Arista Networks, a provider of software and cloud networking solutions for large data centers and high-performance computing environments. The switch includes an Altera FPGA (Field Programmable Gate Array) with hardware-level access to a third of the twenty-four available 10Gb Ethernet ports, and an x86 domain based on Intel’s Xeon processors.
Steve Barlow, CTO of Argon Design, added to the corporate statement: “We recognise the Arista 7124FX as cutting edge. By leveraging the work of the Finteligent community we have been able to demonstrate the application of this technology to low latency trading. Critical here has been the use of a mix of technologies and the selection of appropriate tasks best suited to each processor architecture – combining both software and hardware programming.”
According to the press release from Argon Design, the overall effect is a dramatic reduction in latency to close to the minimum that is theoretically possible.
Paul Goodridge, Regional Director for Financial Services at Arista, comments: “This is exactly the kind of practical application we are looking to see on the market with our 7124FX product, and we are delighted and impressed with Argon Design’s commitment and approach. This joint venture exemplifies Arista’s innovation and further highlights the real value of Arista’s EOS (Extensible Operating System) and its ability to take programmability to the Ethernet switching market.”
Can The Race To Zero Ever Finish?
While the race to zero won’t likely approach the Planck time scale at any point in the near future, the standardization of measurement methods, and the uncertainty behind such measurements, will become increasingly important, as they will drive the technology choices financial services firms make when comparing available vendors or in-house developments. The famous MIT professor Walter Lewin has often said, “any measurement you make without the knowledge of its uncertainty is completely meaningless.” These ever-shrinking time frames are reminiscent of Zeno’s Dichotomy paradox, noted by the ancient Greeks, in which a quantity can be halved again and again, almost without end.
Argon Design’s HFT platform prototype, which appears astonishingly fast, will be interesting to follow, as high-frequency trading accounts for significant portions of global market volumes in both equities and foreign exchange, as well as many other electronically traded liquid asset classes. Any recent slowdown due to regulatory concerns surrounding HFT can be viewed as a required pivot for the market to evolve further, as participants continue to adapt to technological change, customer demands and industry feedback.
Time is Money, Now More Than Ever
The oft-used cliché “time is money”, heard across disparate fields such as finance, where time affects the value of goods and information, could not be more true in the context of latency and financial markets. However, before going over an example, let’s take the nanosecond, a billion of which pass with every second, and consider that these are the units in which latency is measured (in the case of Argon Design).
Whilst the millisecond (1 thousandth of a second) and the microsecond (1 millionth of a second) have often been used to describe the smaller quantities of measured latency within trading systems in recent years, the concept of a nanosecond (1 billionth of a second) is even more mind-boggling, as the end of the race (to zero) recedes ever further. To put these time units in perspective, 1 nanosecond is to 1 second as 1 second is to nearly 32 years (one billion seconds). Therefore, while a single nanosecond appears almost meaningless, since a billion of them pass in each second, the effects of even the smallest amounts of money are multiplied many times over.
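That analogy is easy to verify with simple arithmetic; here is a minimal Python sketch, where the constants are just the standard unit definitions:

```python
# Back-of-the-envelope check: 1 ns is to 1 s as 1 s is to ~32 years.
NS_PER_SECOND = 1_000_000_000            # a billion nanoseconds in a second
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # roughly 31.6 million seconds

years = NS_PER_SECOND / SECONDS_PER_YEAR
print(f"One billion seconds is about {years:.1f} years")  # -> about 31.7 years
```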
In the US, where executing venues (broker-dealers) can be principal to a trade or execute on an agency basis (depending on their regulatory status and capabilities), the time saved through reduced latency can equate to anywhere from a few cents per share traded to as little as 1/20 of a cent or less for larger-size trades, which over the course of millions of shares traded can add up to serious revenue, as the sketch below illustrates.
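This is purely illustrative; the per-share edge and the daily share count below are assumptions chosen from within the range quoted above, not reported figures:

```python
# Illustrative only: revenue impact of a tiny per-share price improvement.
edge_per_share = 0.0005        # 1/20 of a cent, in dollars (assumed)
shares_per_day = 10_000_000    # assumed daily volume for a large desk

daily = edge_per_share * shares_per_day
yearly = daily * 250           # assuming ~250 trading days per year
print(f"${daily:,.0f} per day, about ${yearly:,.0f} per year")
# -> $5,000 per day, about $1,250,000 per year
```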
A Forex Nanosecond
In foreign exchange, spreading the total daily average across a 24-hour period equates to roughly $220 billion an hour, $3.6 billion a minute, $61 million per second, $61,342 per millisecond, $61.34 per microsecond and $0.06 per nanosecond.
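The arithmetic behind those figures is straightforward division of the $5.3 trillion daily average, reproduced here as a minimal Python sketch:

```python
# Spreading the $5.3 trillion FX daily average across a 24-hour day.
DAILY_VOLUME = 5.3e12  # USD, per the BIS figure quoted above

periods = {
    "hour": 24,
    "minute": 24 * 60,
    "second": 24 * 3600,
    "millisecond": 24 * 3600 * 1_000,
    "microsecond": 24 * 3600 * 1_000_000,
    "nanosecond": 24 * 3600 * 1_000_000_000,
}
for name, count in periods.items():
    print(f"per {name}: ${DAILY_VOLUME / count:,.2f}")
# -> per hour: $220,833,333,333.33 ... per nanosecond: $0.06
```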
While trading with latency below a certain threshold may not provide any added price advantage if the best price is already being achieved, it can still be of great value: collectively, there are prices leading and lagging the FX market’s true mid-point, and using an HFT platform to deduce where a price originated can be a valuable instrument in determining predictive price drivers. However, there can be a point of latency at every point of connection between systems; therefore, the entire map must be taken into consideration.
Moore’s law states that roughly every two years the number of transistors on a chip doubles, making processing faster and enabling the cycle to repeat, which drives the changes we see in everyday technologies as they become ever faster and gain ever greater memory capacities.
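As a rough illustration of that doubling, here is an idealized sketch that assumes a clean two-year doubling period and an arbitrary starting count, ignoring real-world constraints:

```python
# Idealized Moore's law: counts double every two-year period.
base_count = 1_000_000_000  # assumed starting transistor count
for years in (2, 4, 6, 8, 10):
    projected = base_count * 2 ** (years // 2)
    print(f"after {years} years: ~{projected:,} transistors")
# after 10 years the count has grown 32x
```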
Since, according to Argon Design, the reduction achieved is close to the theoretical limit, one can only wonder what the next breakthrough in latency could be: what new methods, if any, whether via quantum computers or some other innovative approach, will be used to reach the next plateau?