> Our analysis of the TWSE’s transition clearly demonstrates that continuous trading results in better liquidity provision, lower bid-ask spreads, more stable prices and enhanced price discovery, as well as higher trading volumes.
tzs 21 minutes ago [-]
Is that better liquidity, etc., actually needed?
If we consider the function of a market to be to arrive at prices that lead to the optimal allocation of the goods sold on that market, then intuitively there should be a limit on how fast trades need to propagate to achieve that, and the limit would be tied to how fast new information relevant to the producers and consumers of those goods comes out.
I don't think I'm expressing this well but the idea is that prices of goods should be tied to things that actually affect those goods. That's generally going to be real-world news.
If you turn up trading speed much past the speed necessary to deal with that, I'd expect that you could end up with the market reacting to itself. Kind of like when you turn an amplifier up too much and start getting distortion and even feedback.
biomcgary 1 hour ago [-]
Thank you, it is nice to see an empirical observation of before and after the transition to continuous trading.
JumpCrisscross 56 minutes ago [-]
Note that American exchanges open and close with a batched cross. This hybrid approach is why most objections to intraday continuous trading are misplaced.
usefulcat 2 hours ago [-]
If you're talking about something like having an auction (per security) every N seconds, I don't see how that addresses the underlying issue, which is how to determine order priority.
If you have a bunch of orders at the same price on the same side, and an order comes in from the other side that crosses those orders (or there is an auction and there are orders on the other side which cross), how do you decide which of the resting orders at the same price should be filled first?
The most common way is that the first order to arrive at the exchange at that price gets filled first, and for that reason being fast is inherently advantageous.
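A minimal sketch of that price-time ("FIFO") rule, assuming a toy heap-based book rather than any real exchange's engine:

```python
# Price-time priority: best price first, then earliest arrival at that price.
import heapq
import itertools

_arrival = itertools.count()  # global arrival-order tiebreaker

def add_resting_bid(book, price, qty):
    # Max-heap on price via negation; earlier arrivals sort first within a price.
    heapq.heappush(book, (-price, next(_arrival), qty))

def match_sell(book, limit_price, qty):
    """Fill an incoming sell against resting bids at or above limit_price."""
    fills = []
    while book and qty > 0 and -book[0][0] >= limit_price:
        neg_price, arrival, resting = heapq.heappop(book)
        take = min(qty, resting)
        fills.append((arrival, -neg_price, take))
        qty -= take
        if resting > take:  # a partial fill keeps its original time priority
            heapq.heappush(book, (neg_price, arrival, resting - take))
    return fills

book = []
for q in (100, 200, 50):              # three bids at the same price, in arrival order
    add_resting_bid(book, 10.00, q)
print(match_sell(book, 10.00, 150))   # [(0, 10.0, 100), (1, 10.0, 50)]
```

A lower `_arrival` number is exactly what being fast buys you under this rule.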
toast0 7 minutes ago [-]
If you're doing batches to reduce the advantage of being fast, you'd have to treat all orders that come in during a batch tick as simultaneous.
Resting orders from previous batches could have priority, if you want. You'd probably end up doing something with assignment of equal priority orders that looks like option assignment, basically random selection of shares among the pool of orders.
Personally, I'd fill unconditional market orders first, then market all-or-nothing (if fillable), then sort limit orders by price; within limit orders of the same price, unconditional first, then all-or-nothing, then all-or-nothing + fill-or-kill.
I don't know if I would assign shares proportional to orders or to shares in orders. Probably shares in orders. Might be gamed, but putting in a really big order because you want to capture a couple shares is risky.
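A rough sketch of that allocation, simplified to a single price level with two hypothetical priority classes (0 = unconditional, 1 = all-or-nothing), and the equal-priority pool resolved by a share-weighted random draw, option-assignment style:

```python
import random

# (order_id, shares, priority_class); all orders in one batch tick are simultaneous.
orders = [("A", 100, 0), ("B", 300, 0), ("C", 200, 1)]

def allocate(orders, available):
    fills = {oid: 0 for oid, _, _ in orders}
    for cls in sorted({c for _, _, c in orders}):
        pool = [(oid, qty) for oid, qty, c in orders if c == cls]
        wanted = sum(q for _, q in pool)
        if available >= wanted:       # whole class fills; move on to the next
            for oid, qty in pool:
                fills[oid] = qty
            available -= wanted
            continue
        # Class only partially fills: pick shares at random, weighted by
        # shares in orders (one "lot" per share, like random option assignment).
        lots = [oid for oid, qty in pool for _ in range(qty)]
        for oid in random.sample(lots, available):
            fills[oid] += 1
        available = 0
    return fills

random.seed(1)
print(allocate(orders, 250))  # A and B split ~1:3 in expectation; C unfilled
```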
eterm 44 minutes ago [-]
You could partially fulfil both resting orders, weighted by their (remaining) order size.
You might get "games" around people oversizing orders to try to get more "weight" to their orders, but that would be inefficient behaviour that could in turn be exploited, so people would still be incentivised to keep their orders honest.
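A worked example of that size-weighted partial fill, assuming whole shares and largest-remainder rounding:

```python
def pro_rata(resting, incoming):
    # Split `incoming` shares across resting orders in proportion to size.
    total = sum(resting.values())
    exact = {oid: incoming * qty / total for oid, qty in resting.items()}
    fills = {oid: int(x) for oid, x in exact.items()}
    leftover = incoming - sum(fills.values())
    # Hand leftover whole shares to the largest fractional remainders.
    for oid in sorted(exact, key=lambda o: exact[o] - fills[o], reverse=True)[:leftover]:
        fills[oid] += 1
    return fills

print(pro_rata({"A": 100, "B": 300}, 150))  # {'A': 38, 'B': 112}
```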
jjallen 16 minutes ago [-]
You fill the orders in proportion to order quantity for everyone.
ssivark 2 hours ago [-]
How about adding a randomized delay (0 to T) to each order? For T = 30s it would largely nullify millisecond latency advantages.
JumpCrisscross 1 hour ago [-]
> How about adding a randomized delay (0 to T) to each order?
This is the sort of good idea that just entrenches the algos. (Former algorithmic derivatives trader.)
For small orders, these delays make no difference. For a big order, however, it could be disastrously embarrassing. So now, instead of that fund's trader feeling comfortable directly submitting their trade using off-the-shelf execution algos, they'll route it to an HFT who can chunk it into itty-bitty orders diarrhea'd through to average out the randomness of those delays.
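A toy simulation of why chunking wins here: one big order eats the full variance of a Uniform(0, T) delay, while the average delay over many chunks concentrates (the T and chunk counts below are made up):

```python
import random
import statistics

T = 30.0  # seconds, as in the proposal upthread

def mean_delay(chunks):
    # Effective delay of an order split into `chunks` independently delayed pieces.
    return statistics.mean(random.uniform(0, T) for _ in range(chunks))

random.seed(0)
for n in (1, 100):
    trials = [mean_delay(n) for _ in range(10_000)]
    print(f"{n:>3} chunk(s): stdev of effective delay {statistics.stdev(trials):.2f}s")
# 1 chunk: ~8.66s (= T/sqrt(12)); 100 chunks: ~0.87s
```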
api 53 minutes ago [-]
Randomize orders using a cryptographic hash of the order, client info, and all other fields plus a random salt added when the order is submitted.
Sort by hash. Impossible to game unless you can break the hash function.
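A sketch of that sequencing, assuming the exchange draws the salt when the order arrives (hashlib and os.urandom are real; the order fields are invented):

```python
import hashlib
import os

def sequence_key(order: dict) -> bytes:
    salt = os.urandom(16)  # fresh salt per order, unknowable to the submitter
    payload = repr(sorted(order.items())).encode() + salt
    return hashlib.sha256(payload).digest()

orders = [{"id": i, "side": "buy", "px": 10.0, "qty": 100} for i in range(5)]
batch = sorted(orders, key=sequence_key)
print([o["id"] for o in batch])  # effectively a fresh random shuffle per batch
```

With a fresh salt per order this reduces to a uniform random shuffle, which is what makes the spam strategy in the reply below pay off probabilistically.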
mhmmmmmm 49 minutes ago [-]
So now I probabilistically spam a ton of different orders to on average get my desired fill...
This just turns it into a "whoever is best at DoS'ing the exchange" game.
As the orderbook fills with competitor orders, it makes sense for you to also spam orders so that each of your orders maintains the same probability of being filled.
api 49 minutes ago [-]
Impose a small order fee.
usefulcat 4 minutes ago [-]
That will tend to discriminate against smaller traders, like 'retail' traders.
mikewarot 2 hours ago [-]
I've argued in the past that we should have batch settlements every 30 seconds, instead of in real time. We don't really need microsecond-based skimming/front-running.
Loughla 2 hours ago [-]
I've read the arguments that the microsecond trading serves a purpose that benefits all of us, but I fail to see how, even with the explanations.
I'm with you. Every 30 seconds. Cap the power of connection speed in trading. Trading should be based on the value of the item being traded, not on how short the fiber run is.
JumpCrisscross 1 hour ago [-]
> read the arguments that the microsecond trading serves a purpose that benefits all of us, but I fail to see how, even with the explanations
What about an empirical argument? Microsecond trading reduces spreads and decreases volatility. It looks useless, so people try to regulate it away, and every time they do spreads widen and trading firms' and banks' profits fatten.
> Every 30 seconds. Cap the power of connection speed in trading
I'd go back to Wall Street if this happened: it would make market making profitable again.
mhmmmmmm 1 hour ago [-]
CLOBs (central limit order books) force market participants to compete on pricing (which is only indirectly related to latency, since you can quote tighter if you know your orders won't get picked off by other, faster traders).
Taiwan used to have a batch-auction style market and it ultimately led to worse prices: https://focus.world-exchanges.org/articles/citadel-trading-a...
> Our analysis of the TWSE’s transition clearly demonstrates that continuous trading results in better liquidity provision, lower bid-ask spreads, more stable prices and enhanced price discovery, as well as higher trading volumes.
usefulcat 1 hour ago [-]
If there are multiple orders at the same price on the same side, how should we determine which ones are filled first?
Or put another way, how should we determine which orders are least likely to get filled?
hcks 1 hour ago [-]
Well, either volume-weighted or randomised then.
harry8 2 hours ago [-]
So now the race is to get the order in (or out) at 29.999999985 seconds, i.e. 15 ns before the batch deadline. Interesting twist on the game. Unlikely to change who wins it; could it be worse for retail punters?
We need to kill "front running" as a criticism of low-latency algo trading with fire. It's garbage.
Front running is highly illegal: it's where a broker knows a client is going to do a big trade, i.e. has inside information, and trades ahead of it (typically for their own account) to exploit that inside information. It's a straight-up cheat.
Inferring from market data alone which way a price will move is legal, honest, has been attempted since forever, and is absolutely fine. Also very, very difficult. Anyone who can do it makes the market more efficient, reduces the money available from doing it (which goes into investors' pockets through tighter spreads) and really earns their money. You don't have to like them if you don't want to, but it's worlds apart from front running using inside information.
Where did algo trading profit come from? It was won, by being more competitive, out of brokers' profits, with a good chunk of those profits going to investors. Spreads are tighter.
Where are the clients' yachts? Well, tech did something about some of the broker rip-offs that earned those yachts, which puts money in your pocket.
anonymoushn 1 hour ago [-]
Batching can greatly lower the returns to speed, which would be sufficient to get participants to invest less in speed. It doesn't need to reduce the returns to speed to 0, and indeed reducing the returns to speed to 0 is sort of an incoherent idea to begin with.
aeries 2 hours ago [-]
You could randomize the batching deadline.
harry8 2 hours ago [-]
and it won't help retail investors either.
biomcgary 2 hours ago [-]
30 seconds seems reasonable. Don't the markets themselves make a fair amount of money off of providing fast access to the HFTs? Is that the primary perverse incentive?
infecto 1 hour ago [-]
Why not 1 minute then?
You have ignored the whole issue of how you are then ordering those contracts within the 30-second batches.
anonymoushn 1 hour ago [-]
The non-terrible version of this proposal is called Frequent Batch Auctions (the Budish, Cramton and Shim paper). I've read the paper and it seems like a decent idea to me.
I have heard that some real-life venues have implemented the terrible version of this proposal instead though.
infecto 1 hour ago [-]
There are cases to be made that you get tighter spreads.
The larger the time interval, the larger the pricing risk. If I am selling and there is a long time until the trade, I am probably going to want a higher price. The same goes on the bid.
theturtletalks 1 hour ago [-]
Skywave has a point: they went through regulatory oversight to get their microwave link working, whereas these other firms went behind the FCC's back and profited by not doing so. The fine is likely a lot lower than the profits they made, so what incentive would future companies have to go through the proper channels?
Why would a radio wave that is reflected off the atmosphere (and therefore taking the longer route) be faster than a direct fibre cable?
scrlk 2 hours ago [-]
Radio waves travel at nearly the speed of light, whereas light in a fiber-optic cable travels at ~67% of the speed of light due to the refractive index of glass.
stephen_g 30 minutes ago [-]
I worked for three years designing custom low-latency point-to-point microwave radios for HFT for this very reason. They didn't need very high bandwidths (their long-haul network was less than 200 Mbit, whereas in New York/New Jersey we had about 5 Gbps because the hops were much shorter and they had licenses for more RF bandwidth at a higher frequency).
At those time scales, the difference is so large, it was incredible what they were willing to pay to build these networks!
scrlk 15 minutes ago [-]
I somewhat regret not specialising in RF/comms in my EE degree - this side of HFT sounds like a fascinating line of work (Trading at the Speed of Light was a great read).
cypherpunks01 2 hours ago [-]
Ericsson blog wrote:
> In a vacuum, electro-magnetic waves travel at a speed of 3.336 microseconds (μs) per kilometer (km). Through the air, that speed is a tiny fraction slower, clocking in at 3.337 μs per km, while through a fiber-optic cable it takes 4.937 μs to travel one kilometer – this means that microwave transport is actually 48% faster than fiber-optic, all other things being equal.
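Those figures line up with the textbook constants (the ~1.48 group index for solid-core fiber is an assumption):

```python
c = 299_792.458                             # km/s, speed of light in vacuum
us_per_km_vacuum = 1e6 / c                  # microseconds per km in vacuum
us_per_km_fiber = us_per_km_vacuum * 1.48   # assumed fiber group index

print(f"vacuum: {us_per_km_vacuum:.3f} us/km")   # ~3.336
print(f"fiber:  {us_per_km_fiber:.3f} us/km")    # ~4.937
print(f"fiber is {us_per_km_fiber / 3.337 - 1:.0%} slower than air")  # ~48%
```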
JumpCrisscross 1 hour ago [-]
More bluntly: light in a fibre is still bouncing around a lot.
indiosmo 31 minutes ago [-]
In addition to the radio signal being faster, as noted by the other commenters, for long distances the radio path is actually the shorter route.
If you take one of the routes in the article, Chicago to Sao Paulo: the distance is about 8,400 km in a straight line.
According to https://en.wikipedia.org/wiki/Skywave a single shortwave hop can reach 3,500 km, so 3 hops are required, or about 30 ms.
The latency of the shortest commercially available submarine cable between the US and Sao Paulo alone is significantly higher than that (almost double), and the cable comes out of the east coast, so you'd still have to factor in the latency between Chicago and New York.
Even specialized low latency networks that mix wireless and fiber will still have much higher latency than the radio.
The tradeoff is that shortwave radio has very little bandwidth so you're restricted to simple signals.
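A back-of-the-envelope check on those numbers (the ~10% path inflation for the ionosphere bounces is a guess; real skywave paths depend on conditions):

```python
import math

c_km_per_ms = 299_792.458 / 1000   # ~300 km of free-space propagation per ms
distance_km = 8_400                # Chicago -> Sao Paulo, great circle
max_hop_km = 3_500                 # single skywave hop, per the Wikipedia figure

hops = math.ceil(distance_km / max_hop_km)   # 3 hops
path_km = distance_km * 1.10                 # assumed bounce overhead
print(f"{hops} hops, one-way latency ~{path_km / c_km_per_ms:.0f} ms")  # ~31 ms
```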
FredFS456 2 hours ago [-]
Speed of light in an optical fibre is about 2/3 that of the speed in air
_qua 2 hours ago [-]
Light doesn’t go at light speed through optical fiber.
nimish 2 hours ago [-]
Sure it does. It's just that the speed of light in non-hollow optical fiber is slower than light in a vacuum.
Microsoft bought a hollow optical fiber company for a reason.
> The immediate allure of hollow-core fibers is that light travels through the air inside them at 300,000 km-per-second, 50 percent faster than the 200,000 km-per-second in solid glass, cutting latency in communications. Last year, euNetworks installed the world's first commercial hollow-core cable from Lumenisity, a commercial spinoff of Southampton, to carry traffic to the London Stock Exchange. This year, Comcast installed a 40-km hybrid cable, including both hollow-core and solid-core fiber, in Philadelphia, the first in North America. Hollow-core fiber also looks good for delivering high laser power over longer distances for precision machining and other applications.
cycomanic 1 hour ago [-]
Yes, funnily enough Microsoft's reason was not HFT but AI. Essentially inter-datacentre training is limited by latency between the datacentres.
Generally they want to build the datacentres close to metro areas; by using hollow-core fibre, the radius of where to place the datacentres has essentially increased by a factor of 3/2. This significantly reduces land acquisition costs, and supposedly MS has already made back the acquisition cost of Lumenisity through those savings.
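That 3/2 is just the ratio of propagation speeds; for a fixed latency budget the reachable radius scales with speed:

```python
v_solid = 200_000    # km/s in solid-core fiber (figures from the quote above)
v_hollow = 300_000   # km/s in hollow-core fiber
budget_ms = 1.0      # hypothetical one-way latency budget

for name, v in (("solid", v_solid), ("hollow", v_hollow)):
    print(f"{name:>6}-core: {v * budget_ms / 1000:.0f} km reachable")
print(f"radius factor: {v_hollow / v_solid:.2f}x")  # 1.50 = 3/2
```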
3eb7988a1663 47 minutes ago [-]
That feels somewhat implausible. I assume a Microsoft-sized data center starts at over $100 million. Moving the footprint X miles away might be cheaper, but it is probably a drop in the bucket given everything else required for a build-out. I would further assume that they were already some distance away from the top-tier expensive real estate to accommodate the size of the facility.
AStonesThrow 2 hours ago [-]
By definition, it does, because the maximum speed is qualified by "the speed of light in a vacuum", so the speed of light [in other media] is simply a function of how much the medium slows it down, yet it is still the speed of light. Funny how that works!
spirobelv2 2 hours ago [-]
MEV, but for TradFi.
throwaway2037 2 days ago [-]
FT Alphaville: High frequency trading
Skywave Networks accuses Wall Street titans of ‘continuous racketeering and conspiracy’
FT Alphaville is a blog attached to the Financial Times newspaper. It's free to sign up for an account.
HFT in My Backyard
https://news.ycombinator.com/item?id=8354278
https://news.ycombinator.com/item?id=8371852
It’s the only site I know of that has posts like it. Sadly, he hasn’t posted in a while.
Shortwave Trading | Part I | The West Chicago Tower Mystery
https://sniperinmahwah.wordpress.com/2018/05/07/shortwave-tr...
Shortwave Trading | Part II | FAQ and Other Chicago Area Sites
https://sniperinmahwah.wordpress.com/2018/06/07/shortwave-tr...