The Training Example Lie Bracket (pbement.com)
25 points by pb1729 7 hours ago | 12 comments
eden-u4 10 minutes ago [-]
I don't understand the RMS table: shouldn't it be non-commutative? I.e., shouldn't "example 0 vs 1"'s RMS differ from "example 1 vs 0"'s RMS? That doesn't seem to be the case for the checkpoints I checked.
Majromax 2 hours ago [-]
Wait a second, they define the induced vector field (and consequently the Lie bracket) in terms of batch-size-1 SGD:

> In particular, if x is a training example and L(x) is the per-example loss for the training example x, then this vector field is: v^(x)(θ) = -∇_θ L(x). In other words, for a specific training example, the arrows of the resulting vector field point in the direction that the parameters should be updated.

but for the MXResNet example:

> The optimizer is Adam, with the following parameters: lr = 5e-3, betas = (0.8, 0.999)

This changes the direction of the updates, so I'm not completely sure the intuitive equivalence holds.

If it were just SGD with momentum, then the measured update directions would be a combination of the momentum vector and v1/v2, so {M + v1, M + v2} = {v1, M} + {M, v2} + {v1, v2} (the {M, M} term vanishes by antisymmetry). The Lie bracket is then no longer "just" a function of the model parameters and the training examples; it's inherently path dependent.

For Adam, the parameter-wise normalization by the second-moment estimate will also slightly change the directions of the updates in a nonlinear way (thanks to the β2 term).

The interpretation is also strained with fancier optimizers like Muon, which uses both momentum and (approximate) SVD normalization, so I'm really not sure what to expect.
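
To make the order dependence concrete, here is a minimal numpy sketch (a toy quadratic setup of my own, not the post's MXResNet experiment): for plain batch-size-1 SGD, swapping the order of two per-example steps changes the result by exactly lr² times the Lie bracket of their gradient fields.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hessians of two hypothetical per-example quadratic losses L_i = 0.5 * theta^T H_i theta
A = rng.standard_normal((4, 4)); A = A @ A.T
B = rng.standard_normal((4, 4)); B = B @ B.T
theta0 = rng.standard_normal(4)
lr = 1e-3

def sgd_step(theta, H):
    # one batch-size-1 SGD step: theta - lr * grad L(theta) = theta - lr * H @ theta
    return theta - lr * (H @ theta)

t_ab = sgd_step(sgd_step(theta0, A), B)  # example A first, then B
t_ba = sgd_step(sgd_step(theta0, B), A)  # example B first, then A

# For gradient fields v_i(theta) = -H_i @ theta, the Lie bracket is
# [v_A, v_B](theta) = (B @ A - A @ B) @ theta; for quadratics the
# order difference equals lr^2 times the bracket exactly.
bracket = (B @ A - A @ B) @ theta0
print(np.linalg.norm(t_ab - t_ba))       # order sensitivity of plain SGD
print(np.linalg.norm(lr**2 * bracket))   # the same, up to floating-point error
```

With Adam or momentum, each step also mutates optimizer state, so the two orderings differ by more than the bracket of the raw gradient fields, which is the path dependence described above.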

willrshansen 4 hours ago [-]
Was hoping for a tournament bracket of best lies found in training data :(
E-Reverance 5 hours ago [-]
Could this be used for batch filtering?
measurablefunc 5 hours ago [-]
Lie brackets are bilinear, so whatever you do per example automatically carries over to sums: the bracket for a batch is just the sum of the pairwise brackets of its elements, i.e. {a + b + c, d} = {a, d} + {b, d} + {c, d}. Similarly for the second component.
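
A quick numeric check of that claim, reusing the quadratic toy setting from the sketch above (where the bracket of two gradient fields v_i(theta) = -H_i @ theta reduces to a Hessian commutator):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, D = (rng.standard_normal((4, 4)) for _ in range(3))
theta = rng.standard_normal(4)

def bracket(H1, H2):
    # [v_H1, v_H2](theta) = (H2 @ H1 - H1 @ H2) @ theta in the quadratic setting
    return (H2 @ H1 - H1 @ H2) @ theta

# The summed loss L_A + L_B has Hessian A + B, and its bracket against D
# splits into the pairwise brackets:
print(np.allclose(bracket(A + B, D), bracket(A, D) + bracket(B, D)))  # True
```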
thaumasiotes 3 hours ago [-]
> Similarly for the second component.

Hmm.

{a + b, c + d} = {a, c + d} + {b, c + d} = {a, c} + {a, d} + {b, c} + {b, d}.

{a + b + c, x + y + z} = {a, x + y + z} + {b, x + y + z} + {c, x + y + z} = (a sum of nine direct brackets).

This doesn't look like it will scale well: bracketing two batches of size n costs n² direct brackets.

measurablefunc 3 hours ago [-]
Then don't use the Lie bracket. All bilinear forms scale the same way.
measurablefunc 5 hours ago [-]
Eventually ML folks will discover fiber bundles.
Y_Y 4 hours ago [-]
But what bastard "new" name will they give them?
esafak 3 hours ago [-]
Sooner if you explain why.
thaumasiotes 4 hours ago [-]
> An ideal machine learning model would not care what order training examples appeared in its training process. From a Bayesian perspective, the training dataset is unordered data and all updates based on seeing one additional example should commute with each other.

One of Andrew Gelman's favorite points to make about science 'as practiced' is that researchers fail to behave this way. There's a gigantic bias in favor of whatever information is published first.

Ifkaluva 33 minutes ago [-]
I think most ML models don't have this property. Usually it's assumed that the training samples are "independently and identically distributed".

This is the key insight behind the DQN algorithm's replay buffer: rather than feeding in training examples as they arrive, it samples them at random from the buffer, since consecutive examples have strong temporal correlation and destabilize learning.
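
A minimal sketch of that mechanism (generic, not DeepMind's actual DQN code): store transitions as they arrive and train on uniform random draws, so consecutive gradient updates are decorrelated.

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buf = deque(maxlen=capacity)   # oldest transitions fall off the front

    def push(self, transition):
        self.buf.append(transition)         # e.g. (state, action, reward, next_state)

    def sample(self, batch_size):
        return random.sample(self.buf, batch_size)

buffer = ReplayBuffer()
for t in range(1000):
    buffer.push((t, t % 4, 0.0, t + 1))     # dummy, strongly correlated stream
batch = buffer.sample(32)                   # uniform sample: temporally decorrelated
```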

An easy way to wreck most ML models is to feed in the examples in a correlated order. For example, in a vision system meant to distinguish cats from dogs, first feed in all the cats. Even worse, order the cats so there are minimal changes from one to the next: all the white cats first, each chosen to be the most similar to the previous one. That model will fail.
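
A one-parameter toy (a hypothetical setup of my own, about the simplest case that shows the effect): constant-step SGD fed class-sorted data ends wherever the last class pulls it, while a shuffled feed lands near the right answer.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.array([0.0] * 500 + [1.0] * 500)   # "all the cats first, then all the dogs"

def train(order, lr=0.05):
    theta = 0.0
    for i in order:
        theta -= lr * (theta - y[i])      # SGD on the per-example loss 0.5 * (theta - y_i)^2
    return theta

print(train(np.arange(1000)))             # sorted feed: ends near 1.0, "forgets" the 0s
print(train(rng.permutation(1000)))       # shuffled feed: ends near the true mean 0.5
```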