This week's net.wars, "Passing the Uncanny Valley", sees Sophie Nightingale deliver the news that training people to detect synthetic facial images has little effect; and we bid goodbye to Typepad (may its bloggers find new homes): https://netwars.pelicancrossing.net/2025/08/29/passing-the-uncanny-valley/ #AI #MachineLearning #Blogging
I've been reading up on the Lottery Ticket Hypothesis, which is super interesting.
Basically, the observation is that these days we build vast neural networks with billions of parameters, but most of the parameters aren't needed. That is, after training, you can just throw away 95% of the network (pruning), and it will still work fine.
The LTH paper is asking: could we start with a network just 5% of the size, and get comparable results? If so, that would be a huge performance win for Deep Learning.
What's interesting is that you can do this, but only by training the full network (perhaps several times) to see which neurons are needed. The authors argue that training a neural network isn't so much creating a model as finding a lucky sub-network (a lottery ticket) within the randomly initialized network, a bit like a sculptor "finding" the bust hidden in a block of marble.
Initial LTH paper: http://arxiv.org/abs/1803.03635
Follow-up with major clarifications: http://arxiv.org/abs/1905.01067
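The prune-then-rewind idea can be sketched in a few lines of NumPy. This is a toy illustration, not the papers' actual procedure (the real method uses iterative magnitude pruning over multiple training rounds); the function and variable names here are hypothetical:

```python
import numpy as np

def lottery_ticket_mask(w_trained, sparsity=0.95):
    """Toy sketch: keep only the largest-magnitude trained weights."""
    k = int(w_trained.size * (1 - sparsity))  # number of weights to keep
    # k-th largest absolute value becomes the cutoff
    threshold = np.sort(np.abs(w_trained).ravel())[::-1][k - 1]
    return np.abs(w_trained) >= threshold

rng = np.random.default_rng(0)
w_init = rng.normal(size=(10, 10))              # random initialization
w_trained = w_init * rng.normal(size=(10, 10))  # stand-in for weights after training
mask = lottery_ticket_mask(w_trained, sparsity=0.95)
# The "winning ticket": the surviving 5% of weights, rewound to their
# ORIGINAL initial values, ready to be retrained in isolation.
winning_ticket = w_init * mask
```

The key twist from the papers is that last line: the surviving weights are reset to their initial values rather than kept at their trained values, and the claim is that this tiny sub-network can then train to comparable accuracy on its own.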
On reflection, I think the big mistake is the conflation of #AI with #LLM and #MachineLearning.
There are genuinely exciting advances in ML with applications all over the place: in science (not least in my own research group, looking at high-resolution regional climate downscaling), health diagnostics, defence, etc. But these are not the AIs that journalists are talking about, nor are they really related to LLMs.
They're still good uses of GPUs and will probably produce economic benefits, but probably not the multi-trillion ones the pundits seem to be expecting.
https://fediscience.org/@Ruth_Mottram/114896256761569397
Ruth_Mottram - My main problem with @edzitron.com's piece on the #AIbubble is that I agree with so much of it.
I'm now wondering if I've missed something about #LLMs? The numbers and implications for stock markets are terrifyingly huge!