machinelearning
Vox's Dylan Matthews explores the ongoing debate between AI optimists and pessimists over whether super-intelligent machine learning systems are a threat to humanity. Drawing on a study by the Forecasting Research Institute, he looks at how AI experts and "superforecasters" assess the dangers of AI when exposed to each other's opinions and research.
Show HN: I built a game to help you learn neural network architectures
----
- 12 minutes ago | 4 points | 1 comment
- URL: https://graphgame.sabrina.dev/
- Discussions: https://news.ycombinator.com/item?id=40429200
- Summary: The Graph Game is a neural network assembly challenge: you choose an architecture, such as an RNN with a single hidden layer, an LSTM cell, a GRU cell, a ResNet block, or a deep RNN, and attempt to build it from components. Created by Sabrina Ramonov. #NeuralNetworks #MachineLearning
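For reference, here's what one of those build targets looks like outside the game: a minimal PyTorch sketch (my own illustration, not code from the site) of an RNN with a single hidden layer feeding a linear output head.

```python
import torch
import torch.nn as nn

class TinyRNN(nn.Module):
    """An RNN with a single hidden layer, one of the game's simpler targets."""
    def __init__(self, input_size=8, hidden_size=16, output_size=4):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, num_layers=1, batch_first=True)
        self.head = nn.Linear(hidden_size, output_size)

    def forward(self, x):                 # x: (batch, seq_len, input_size)
        _, h_n = self.rnn(x)              # h_n: (1, batch, hidden_size)
        return self.head(h_n.squeeze(0))  # logits: (batch, output_size)
```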
Read “The Future of AI and Machine Learning: Innovating for a Better Tomorrow” by Miraj Ansari on Medium: https://digitalmiru.medium.com/the-future-of-ai-and-machine-learning-innovating-for-a-better-tomorrow-d2535f7f9fb4
#ai #FutureOfTheInternet #machinelearning #tech
I've been reading up on the Lottery Ticket Hypothesis, which is super interesting.
Basically, the observation is that we now build vast neural networks with billions of parameters, yet most of them aren't needed: after training, you can throw away 95% of the weights (pruning) and the network will still work fine.
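To make "pruning" concrete, here's a minimal PyTorch sketch (my own illustration, not code from the papers) of one-shot global magnitude pruning: rank every weight by absolute value and zero out the bottom 95%.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.95) -> None:
    """Zero out the smallest-magnitude weights across the whole model."""
    # Gather all weight magnitudes (skip biases and other 1-D params).
    all_weights = torch.cat([p.detach().abs().flatten()
                             for p in model.parameters() if p.dim() > 1])
    k = int(sparsity * all_weights.numel())
    threshold = all_weights.kthvalue(k).values  # k-th smallest magnitude
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:
                p[p.abs() < threshold] = 0.0   # prune ~95% of weights in place

# e.g. magnitude_prune(trained_model), then re-check accuracy on held-out data
```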
The LTH paper asks: could we start with a network just 5% of the size and get comparable results? If so, that would be a huge performance win for deep learning.
What's interesting is that you can, but only by first training the full network (perhaps several times over) to discover which weights are needed. The authors argue that training a neural network isn't so much creating a model as finding a lucky sub-network (a "winning lottery ticket") inside the randomly initialized network, a bit like a sculptor "finding" the bust hidden in a block of marble. (A rough sketch of the loop follows the paper links below.)
Initial LTH paper: http://arxiv.org/abs/1803.03635
Follow-up with major clarifications: http://arxiv.org/abs/1905.01067
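Here's a rough PyTorch sketch of the iterative magnitude-pruning-with-rewind loop the paper describes; purely illustrative, and `train_to_convergence` is a hypothetical stand-in for a full training loop that re-applies the masks after every optimizer step.

```python
import copy
import torch
import torch.nn as nn

def find_winning_ticket(model: nn.Module, train_to_convergence,
                        prune_frac: float = 0.2, rounds: int = 5):
    """Iterative magnitude pruning with weight rewinding (Frankle & Carbin, 2019).

    Each round: train, prune the smallest prune_frac of surviving weights,
    then rewind the survivors to their original initialization.
    """
    init_state = copy.deepcopy(model.state_dict())  # theta_0, kept for rewinding
    masks = {name: torch.ones_like(p)
             for name, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        train_to_convergence(model, masks)  # hypothetical: trains with masks applied
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name not in masks:
                    continue
                alive = p[masks[name].bool()].abs()  # surviving weight magnitudes
                k = int(prune_frac * alive.numel())
                if k > 0:
                    threshold = alive.kthvalue(k).values
                    masks[name][p.abs() < threshold] = 0.0
            # Rewind: restore the original init, then re-apply the masks.
            model.load_state_dict(init_state)
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])
    return masks  # the "winning ticket" is (init_state, masks)
```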
On reflection, I think the big mistake is the conflation of #AI with #LLM and #MachineLearning.
There are genuinely exciting advances in ML, with applications all over the place: in science (not least in my own research group, looking at high-resolution regional climate downscaling), in health diagnostics, in defence, etc. But these are not the AIs that journalists are talking about, nor are they really related to LLMs.
They're still good uses of GPUs and will probably produce economic benefits, but probably not the multi-trillion ones the pundits seem to be expecting.
https://fediscience.org/@Ruth_Mottram/114896256761569397
Ruth_Mottram - My main problem with @edzitron.com's piece on the #AIbubble is that I agree with so much of it.
I'm now wondering if I've missed something about #LLMs? The numbers and implications for stock markets are terrifyingly huge!