This week's net.wars, "The Gulf of Google," sees a new way to splinter the Internet; watches panic over a new Chinese reasoning model; thinks Silicon Valley has its cause and effect backwards; and asks, "Did Trump just screw data flows between the EU and US - again?": https://netwars.pelicancrossing.net/2025/01/31/the-gulf-of-google/ #NetWars #AI #Privacy #DataProtection
The phrase was so strange it would have stood out even to a non-scientist. Yet “vegetative electron microscopy” had already made it past reviewers and editors at several journals when a Russian chemist and scientific sleuth noticed the odd wording in a now-retracted paper in Springer Nature’s Environmental Science and Pollution Research.
Today, a Google Scholar search turns up nearly two dozen articles that refer to “vegetative electron microscopy” or “vegetative electron microscope,” including a paper from 2024 whose senior author is an editor at Elsevier, Retraction Watch has learned. The publisher told us it was “content” with the wording.
#AI #GenerativeAI #LLM #AISlop #InformationOilSpill #AcademicPublishing #ScientificPublishing #PaperMill #PeerReview
My lab's using an LLM in an experiment for the first time. It's interesting to see how that's going.
For one thing, we (roughly a dozen AI experts) struggle to understand whether this thing is doing what we want. It's just such an ambiguous interface! We send it some text and a picture and get text back, but what is it doing? We're forced to run side experiments just to validate this one component. That makes me uncomfortable, and wonder why folks who aren't AI researchers would do such a thing.
Worse, my lab mate keeps doing more prompt engineering, data pre-processing, and restricting of the LLM's vocabulary to make it work. That's a lot of effort the LLM was meant to take care of, which is instead becoming our problem.
It feels like he's incrementally developing a domain-specific language for this project, and all the LLM is doing is translating English into this DSL! If that's the case, then there's no point in using an LLM, but it's hard to tell when we've crossed that line.
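To make the worry concrete, here is a minimal sketch of what "the LLM is only translating English into the DSL" would mean. Everything in it is invented for illustration (the command names, the phrasings, the task); the point is that once prompt engineering has restricted the model's output to a small fixed vocabulary, a plain keyword mapper covers much of the same ground:

```python
# Hypothetical sketch: if the LLM's output is constrained to a tiny command
# set, a trivial phrase-to-command table can stand in for the "translation"
# it performs. The DSL and trigger phrases below are made up for the example.
DSL_COMMANDS = {
    "count": "COUNT_OBJECTS",
    "how many": "COUNT_OBJECTS",
    "describe": "DESCRIBE_SCENE",
    "locate": "FIND_OBJECT",
    "where is": "FIND_OBJECT",
}

def english_to_dsl(request: str) -> str:
    req = request.lower()
    for phrase, command in DSL_COMMANDS.items():
        if phrase in req:
            return command
    # Unlike the LLM, this fails loudly instead of guessing.
    return "UNKNOWN"

print(english_to_dsl("How many cats are in the picture?"))  # COUNT_OBJECTS
print(english_to_dsl("Sing me a song"))                     # UNKNOWN
```

One test for whether the line has been crossed: if a table like this reproduces the LLM's behavior on your validation set, the LLM is no longer adding anything the DSL doesn't already provide.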
Tights and bloomers
#月曜がんばれ 114
"Full power unleashed! The moment you break past your limits!"
#松實優麗 #うちの子
#stablediffusion #AI #AI繪圖 #AIイラスト #AIart #AIイラスト好きさんと繋がりたい
Nice to see a very visible person like @anilkseth converge, in every possible way, with things I've been saying for a while now:
https://www.noemamag.com/the-mythology-of-conscious-ai
Consciousness is *not* a matter of computation, but a matter of experiencing. #AI algorithms are unable to experience. They are simply not the kind of system that experiences anything. Autopoiesis is required. We may be able to simulate that, but simulation is not reality. To be real, organisms must invest energy to construct themselves.
In retrospect I might have written non-sense in place of nonsense.
If you're in tech the Han reference might be a bit out of your comfort zone, but Andrews is accessible and measured.
It's nonsense to say that coding will be replaced with "good judgment". There's a presupposition behind that, a worldview, that can't possibly fly. It's sometimes called the theory-free ideal: given enough data, we don't need theory to understand the world.
It surfaces in AI/LLM/programming rhetoric as the claim that we don't need to code anymore because LLMs can do most of it. Programming is a form of theory-building (and understanding), while LLMs are vast fuzzy data store and retrieval systems, so the theory-free ideal dictates that the latter can and should replace the former.
But it only takes a moment's reflection to see that nothing, let alone programming, can be theory-free; it's a kind of "view from nowhere" way of thinking, an attempt to resurrect Laplace's demon that ignores everything we've learned in the more than 200 years since Laplace put that idea forward. In that respect it's a (neo)reactionary viewpoint, and it's maybe not a coincidence that people with neoreactionary politics tend to hold it.
Anyone who needs a more formal argument can read Mel Andrews's The Immortal Science of ML: Machine Learning & the Theory-Free Ideal, or Byung-Chul Han's Psychopolitics (which argues, among other things, that this worldview is nihilistic).
I’m curious to know how often the distinctions between the various types of algorithms have been drawn. There has been a lot of discussion about #ai, yet the term is often used generically to describe everything from ChatGPT to AlphaFold, without acknowledging that the underlying algorithms are often very different. This difference seems like something that should be obvious, and yet it is not a common distinction in articles. Is this well known, or is it a distinction worth exploring? #philosophy
I wanted to try out #deepseek and #Chatgpt o3-mini a little bit and thought: get your #selfhosting setup cleaned up and fix those #nextcloud errors.
In a way I hate to say it, but #AI was way easier and faster than hours spent searching the web, reading Reddit, and asking in forums. And I totally get why that is: I can't expect everybody and their grandfather to jump to my help. And AI is borrowed knowledge.
#technology #podcast #lispyGopherClimate
@nosrednayduj #live #Interview #archive https://archives.anonradio.net/202502190000_screwtape.mp3
Late notice, please #boost <3
Two hours from this toot on https://anonradio.net:8443/anonradio
yduJ from #lambdaMOO, the usual #lisp companies, and originally the Stanford #AI lab.
#lambdaMOO is back!
telnet lambda.moo.mud.org 8888
co guest
@join screwtape
yduJ
https://nosrednayduj.dreamwidth.org/
http://www-cs-students.stanford.edu/~yduj/history.html
#recipe s
https://olum.org/yduj/recipe/
The reason for this shouldn't be hard to see but apparently is. Simplistically, science is about hypothesis-driven investigation of research questions. You formulate the question first, you derive hypotheses from it, and then you make observations designed to tell you something about the hypotheses. (1)(2) If you stuff an LLM in what should be the observations part, you are not performing observations relevant to your hypothesis, you are filtering what might have been observations through a black box. If you knew how to de-convolve the LLM's response function from the signal that matters to your question, maybe you'd be OK, but nobody knows how to do that. (3)
If you stick an LLM in the question-generating part, or the hypothesis-generating part, then forget it, at that point you're playing a scientistic video game. The possibility of a scientific discovery coming out of it is the same as the possibility of getting physically wet while watching a computer simulation of rain. (4)
If you stick an LLM in the communication part, then you're putting yourself on the Retraction Watch list, not communicating.
#science #LLM #AI #GenAI #GenerativeAI #AIHype #hype
(1) I know this is a cartoonishly simple view of science, but I do firmly believe that something along these lines is the backbone of it, however real-world messy it becomes in practice.
(2) A large number of computer scientists are very sloppy about this process--and I have been in the past too--but that does not mean it should be condoned.
(3) Things are so dire that very few even seem to have the thought that this is something you should try to do.
(4) Yes, you might discover something while watching the LLM glop, but that's you, the human being, making the discovery, not the AI, in a chance manner despite the process, not in a systematic manner enhanced by the process. You could likewise accidentally spill a glass of water on yourself while watching RainSim.
#月曜がんばれ 165
"Go for it ♥ Turn even your white breath into steam with your passion! Go for it ♥"
#松實優麗 #うちの子
#stablediffusion #AI #AI繪圖 #AIイラスト #AIart #AIイラスト好きさんと繋がりたい
A bit crowded today, so I'm making an exception and adjusting the schedule.
#月曜がんばれ 118
"Do your best! Hinata is cheering you on ♪"
For the extra illustrations, please subscribe on Patreon and view them on Discord.
https://www.patreon.com/yuurei
#袴田ひなた #ロウきゅーぶ #rokyubu #袴田ひなた生誕祭
#stablediffusion #AI #AI繪圖 #AIイラスト #AIart #AIイラスト好きさんと繋がりたい
Some argue that ai technology is more significant than electricity or the internet, and so it will spread fast. But there is little sign of this. Only 5-6% of American businesses said they used ai to produce goods and services in 2024, according to the country’s Census Bureau.
What's the puzzle? It has its uses but generally this technology is not particularly useful for most people. Countless billions of dollars have been spent hyping it up to pretend that isn't the case. But reality matters.
Nowadays I think of generative AI as a power grab. It's not a useful tool for a lot of people, but it's exceptionally useful to people who want to grab and hold power, or restructure power in their favor.
#AI #GenAI #GenerativeAI
This is not a healthy economy.
#AI #GenAI #GenerativeAI #NVIDIA #bubble #AssetBubble #economy
Occasionally some silly silly folk will call me a #broligarch fanboi, "thinking" I'm 100% in support of #AI, and block me merely for debating rather than echoing the #Luddite chorus.
... I drew this cartoon 35 years ago (1991 - Hal '91)
Long before all the wannabe kool kids were #antiai
Point is, I did not arrive at my AI position overnight (#regulateAI), as most folk have. I had a few decades to mull things over.
Can you spot the #Dilbert influence?
I was always on the bleeding edge of the #zeitgeist; Dilbert was only 3 years old.
#cartoon
(1/6) My blog #MisalignedMarkets turns one! I spent the year blogging about lots of topics: optimization, information asymmetry, risk, #AI but here are my most popular posts
A thread 🧵
This warms me in the meow meow
"How Replacing Developers With AI is Going Horribly Wrong"
I have a feeling #AI still has at least one more revolutionary change coming. It can't keep going the way it is now, and the Transformer will surely disappear too. No matter how advanced today's models are, the moment they receive a question the answer already exists; that is computation, not intelligence. Real intelligence would at least think, remember, and perceive the way humans do, and create and use tools. Only by achieving self-bootstrapping can AI lead the future.
#Mozilla #翻译 #AI #translate
RE: https://pullopen.xyz/@appinn/114104115170093814
Can someone clarify, in academia and industry are LLM hallucinations the result of overfitting, or simply a false positive?
I'm beginning to think that hallucinations are evidence of overfitting. It seems surprising that there are so few attempts to articulate the underlying cause of hallucinations. Also, if the issue is overfitting, then increasing training time and dataset size may not be an appropriate solution to the problem of hallucinations.
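One way to see why "false positive" is also a plausible framing: a system that always returns its best match, with no abstention path, will answer confidently even when the query is outside anything it was trained on. This toy sketch is not a real LLM and all the data in it is invented; it just illustrates that confident wrong answers can come from the retrieval/decoding design rather than from overfitting per se:

```python
# Toy illustration: a "model" that memorizes its training data and answers
# every query with the nearest match. It has no notion of "I don't know",
# so out-of-distribution queries still get a confident (wrong) answer.
from difflib import SequenceMatcher

TRAINING = {
    "capital of france": "Paris",
    "capital of japan": "Tokyo",
    "capital of italy": "Rome",
}

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def answer(query: str) -> str:
    # Always returns the best match, however poor the fit: the
    # "false positive" mode, baked in by construction.
    best = max(TRAINING, key=lambda k: similarity(query.lower(), k))
    return TRAINING[best]

print(answer("capital of france"))   # memorized, correct: Paris
print(answer("capital of wakanda"))  # no such fact, but it answers anyway
```

On this view, more training time or data wouldn't help the second query at all, since the failure is in always emitting the most likely continuation rather than in how well the training set was fit.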
I was listening to a podcast where the guest argued: when it comes to art, today's AI is like a tool, but in the future, maybe it could be an artist.
As he explained why, it became clear why I disagree with him. He defines "art" as a product that is valued by art critics and by consumers. I agree that AI generated content may become more common and broadly accepted in the future, but I don't think that's a good thing, and I wouldn't call it art!
I see art as more of an activity than a product. An artist does art. Whatever the output of that process is, valuable or not, is also called "art."
Am I saying only humans can make art? Not at all. But to be "art" it has to be self-expression. The artist must have something to say. As long as we are designing the AI, training it, and prompting it, then it will be a tool. Without us it has no initiative, no opinion, and no creative urge.
🧵As AI systems grow in sophistication, some people are supposing chatbots are moving toward being conscious entities.
This is incorrect, but we should be more precise about what consciousness and perception are.
If we are, we realize that minds do not create experience; experience is what creates minds. https://plus.flux.community/p/its-like-this-why-your-perception
How LLMs & Chatbots Are Bad For the Indie Web https://osteophage.neocities.org/essays/bots-bad-indie-web #LLM #AI #IndieWeb
1. Economists from the physiocrats (18th century) onward promised society freedom from material deprivation and hard physical labor in exchange for submitting to an economic arrangement of society
2. In a country like the US, material deprivation and hard physical labor have been significantly reduced since then:
Though too many clearly still suffer too much, a large proportion of people live free from fear of starvation or lack of shelter
The US has deindustrialized, meaning hard physical labor is not the reality for a lot of people. For a lot of people labor is emotional or symbolic (“knowledge work”)
In other words, for lots of people the economic promise has been fulfilled
It is not coincidental that “Gas Town”’s announcement post mentioned Towers of Hanoi, an undergraduate CS student homework problem that for most students requires thinking hard. It’s designed to encourage a kind of “eureka” moment where recursion as a computer programming technique becomes more clear. GT claims to fulfill the promise of not having to think hard like this anymore: the LLMs will do that thinking for you
It is not coincidental that Gas Town is described as being very expensive. Economic power in the form of asset accumulation is what earns you freedom in this way of conceiving things. If you want the freedom from having to think hard, you’d better accumulate assets
Since the promise is greater collective freedom, endeavoring to accumulate assets is, in this view, a collective good
This differs from effective altruism and other “do good by doing well” conceptions. Rather, the very mechanism of economics produces collective wealth, so the story goes, which means the more active one is as an economic agent, the more collective good one produces (“wealth” and “good” being conflated)
Accumulation of assets is the scorecard, so to speak, of such enhanced economic activity, and the individual reward can then be freedom from having to think hard
Lotka’s maximum power principle (supposedly) dictates that those entities that transform the most power into useful organization are most fit from an evolutionary standpoint
Ernst Juenger’s notion of “total mobilization” brings this principle to politics/political economy/geopolitics: those nations that “totally” mobilize their national resources are the ones that will dominate geopolitically
See, for instance, the RAND Corporation’s Commission on the National Defense Strategy: “The Commission finds that the U.S. military lacks both the capabilities and the capacity required to be confident it can deter and prevail in combat. It needs to do a better job of incorporating new technology at scale; field more and higher-capability platforms, software, and munitions; and deploy innovative operational concepts to employ them together better.” (emphasis mine). In summary: the US is about to be outcompeted (lacks fitness); in response, it should go big (“at scale”, “more”) in an organized way (“deploy innovative operational concepts”, “employ them together better”)
The rhetoric around LLM-based AI includes similar language, exemplified in the GT post: burn through as much infrastructural resources as possible to produce organized outputs “at scale”, while avoiding having human beings think too hard to produce those outputs, an indication that the power was burned to produce useful organization
LLM-based AI plays a prominent role in US federal government strategy, particularly military strategy, with language about dominance serving to justify its use
It is not coincidental that Gas Town uses many orders of magnitude more resources to solve the Towers of Hanoi problem (“Burn All The Gas” Town). This rhetoric dovetails perfectly with the “total mobilization” concept
