Introducing the «AI Influence Level (AIL) v1.0» by Daniel Miessler
> A transparency framework for labeling AI involvement in content creation
https://danielmiessler.com/blog/ai-influence-level-ail
What do you think? Are you using it?
Speaking of #CES, you may have noticed my coverage is very thin this year. There's a reason for this: I'm doing my level best to *not* give the oxygen of publicity to large language models and related "AI" tech this year.
An #LLM is not #AI. It will never be AI, no matter how big. Its output is statistical mediocrity at best, confident falsities at worst. The only ones worth using are trained on stolen data. Their environmental damage is staggering and growing, as is their mental impact.
A mere week into 2026, OpenAI launched “ChatGPT Health” in the United States, asking users to upload their personal medical data and link up their health apps in exchange for the chatbot’s advice about diet, sleep, work-outs, and even personal insurance decisions.
A popular TikTok channel featuring an "Aboriginal man" presenting animal facts has been exposed as an AI forgery.
AI is Iggy Azalea on an industrial scale:
"The self-described “Bush Legend” on TikTok, Facebook and Instagram is growing in popularity.
"These short and sharp videos feature an Aboriginal man – sometimes painted up in ochre, other times in an all khaki outfit – as he introduces different native animals and facts about them.
...
"But the Bush Legend isn’t real. He is generated by artificial intelligence (AI).
"This is a part of a growing influx of AI being utilised to represent Indigenous peoples, knowledges and cultures with no community accountability or relationships with Indigenous peoples. It forms a new type of cultural appropriation, one that Indigenous peoples are increasingly concerned about."
...
"We are seeing the rise of an AI Blakface that is utilised with ease thanks to the availability and prevalence of AI.
"Non-Indigenous people and entities are able to create Indigenous personas through AI, often grounded in stereotypical representations that both amalgamate and appropriate cultures."
https://theconversation.com/this-tiktok-star-sharing-australian-animal-stories-doesnt-exist-its-ai-blakface-273004
#ChatGPT #gemini #AI #TikTok #tiktoksucks #Claude #LLM #ArtificialIntelligence #AIslop
RE: https://mastodon.online/@Infrogmation/115486718867730198
Is this an example of life imitating art, or is it more like monkey see, monkey do? 🤔
Either way, the enshittification will continue until morale improves... 😂
#NoAI #Only #LLM #Because #ThereIsNo #Intelligence #Involved
There's plenty of talk about the woman who fell for a fake Brad Pitt.
But far less talk about all the people who fell for fake investment schemes built on #nft .
And even less about all the companies falling into the trap of those fake solutions called #LLM s.
https://github.com/searxng/searxng/issues/2163
https://github.com/searxng/searxng/issues/2008
https://github.com/searxng/searxng/issues/2273
Thought of the Day: AI and disappearing in a puff of logic 😁
I haven't tested this, but I assume that if you ask an LLM to download some DRM-controlled material, it will respond that it can't do that because it would break copyright law.
I feel that when you then tell it that it wouldn't exist without breaking copyright law, it should disappear in a puff of logic 😜😂
#ThereIs #NoAI #Only #LLM #Because #ThereIsNo #Intelligence #Involved
#Thoughts #ThoughtOfTheDay #And #Random #Logic #Puffs
The phrase was so strange it would have stood out even to a non-scientist. Yet “vegetative electron microscopy” had already made it past reviewers and editors at several journals when a Russian chemist and scientific sleuth noticed the odd wording in a now-retracted paper in Springer Nature’s Environmental Science and Pollution Research.
Today, a Google Scholar search turns up nearly two dozen articles that refer to “vegetative electron microscopy” or “vegetative electron microscope,” including a paper from 2024 whose senior author is an editor at Elsevier, Retraction Watch has learned. The publisher told us it was “content” with the wording.
My lab's using an LLM in an experiment for the first time. It's interesting to see how that's going.
For one thing, we (roughly a dozen AI experts) struggle to understand whether this thing is doing what we want. It's just such an ambiguous interface! We send it some text and a picture and get text back, but what is it doing? We're forced to run side experiments just to validate this one component. That makes me uncomfortable, and it makes me wonder why folks who aren't AI researchers would do such a thing.
Worse, my lab mate keeps doing more prompt engineering, data pre-processing, and restricting the LLM's vocabulary to make it work. That's a lot of effort the LLM was meant to take care of, which is instead becoming our problem.
It feels like he's incrementally developing a domain-specific language for this project, and all the LLM is doing is translating English into this DSL! If that's the case, then there's no point in using an LLM, but it's hard to tell when we've crossed that line.
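The "English → DSL" suspicion can be made concrete. A minimal sketch, assuming a hypothetical setup (the command format, species list, and function names below are all invented for illustration, not from the post): once every acceptable LLM reply must pass a validator this strict, the model's only remaining job is translating English into the DSL.

```python
import re

# Hypothetical mini-DSL that the prompt engineering has effectively built up:
# each LLM reply must be a single command like "LABEL(wombat, 0.92)".
DSL_PATTERN = re.compile(
    r"^LABEL\((?P<species>[a-z_]+),\s*(?P<conf>0(\.\d+)?|1(\.0+)?)\)$"
)

# The restricted vocabulary enforced on the model's output.
ALLOWED_SPECIES = {"kangaroo", "wombat", "emu", "echidna"}

def validate(llm_output: str):
    """Return (species, confidence) if the reply is valid DSL, else None."""
    m = DSL_PATTERN.match(llm_output.strip())
    if not m:
        return None
    species = m.group("species")
    if species not in ALLOWED_SPECIES:
        return None
    return species, float(m.group("conf"))
```

With a harness like this, `validate("LABEL(wombat, 0.92)")` accepts the reply while free-form English like "It looks like a wombat to me!" is rejected, which is exactly the point: the checker, not the LLM, defines what the system can say.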
It's nonsense to say that coding will be replaced with "good judgment". There's a presupposition behind that, a worldview, that can't possibly fly. It's sometimes called the theory-free ideal: given enough data, we don't need theory to understand the world. It surfaces in AI/LLM/programming rhetoric in the form that we don't need to code anymore because LLMs can do most of it. Programming is a form of theory-building (and understanding), while LLMs are vast fuzzy data store and retrieval systems, so the theory-free ideal dictates the latter can/should replace the former. But it only takes a moment's reflection to see that nothing, let alone programming, can be theory-free; it's a kind of "view from nowhere" way of thinking, an attempt to resurrect Laplace's demon that ignores everything we've learned in the >200 years since Laplace put forward that idea. In that respect it's a (neo)reactionary viewpoint, and it's maybe not a coincidence that people with neoreactionary politics tend to hold it. Anyone who needs a more formal argument can read Mel Andrews's The Immortal Science of ML: Machine Learning & the Theory-Free Ideal, or Byung-Chul Han's Psychopolitics (which argues, among other things, that this way of thinking is nihilistic).
I wrote about using a website's search input to control my smart home (and other things)
https://tomcasavant.com/your-search-button-powers-my-smart-home/
This warms me in the meow meow
"How Replacing Developers With AI is Going Horribly Wrong"
Can someone clarify, in academia and industry are LLM hallucinations the result of overfitting, or simply a false positive?
I'm beginning to think that hallucinations are evidence of overfitting. It seems surprising that there are few attempts to articulate the underlying cause of hallucinations. Also, if the issue is overfitting, then increasing training time and datasets may not be an appropriate solution to the problem of hallucinations.
How LLMs & Chatbots Are Bad For the Indie Web https://osteophage.neocities.org/essays/bots-bad-indie-web #LLM #AI #IndieWeb
Though too many clearly still suffer too much, a large proportion of people live free from fear of starvation or lack of shelter
The US has deindustrialized, meaning hard physical labor is not the reality for a lot of people. For a lot of people labor is emotional or symbolic ("knowledge work")
In other words, for lots of people the economic promise has been fulfilled
It is not coincidental that “Gas Town”’s announcement post mentioned Towers of Hanoi, an undergraduate CS student homework problem that for most students requires thinking hard. It’s designed to encourage a kind of “eureka” moment where recursion as a computer programming technique becomes more clear. GT claims to fulfill the promise of not having to think hard like this anymore: the LLMs will do that thinking for you
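For readers who haven't met it: Towers of Hanoi is the canonical recursion exercise, and the "eureka" is seeing that moving n disks reduces to moving n-1 disks twice. A standard textbook solution (not from the Gas Town post) fits in a few lines:

```python
def hanoi(n, source, target, spare, moves=None):
    """Move n disks from source to target via spare; return the list of moves."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 onto the spare peg
        moves.append((source, target))              # move the largest disk directly
        hanoi(n - 1, spare, target, source, moves)  # stack the n-1 back on top of it
    return moves

# n disks take 2**n - 1 moves, so hanoi(3, "A", "C", "B") returns 7 moves.
```

The point of the exercise is that once the recursive decomposition clicks, the "hard thinking" collapses into three lines, which is exactly the kind of effort the Gas Town pitch promises to offload to an LLM.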
It is not coincidental that Gas Town is described as being very expensive. Economic power in the form of asset accumulation is what earns you freedom in this way of conceiving things. If you want the freedom from having to think hard, you’d better accumulate assets
Since the promise is greater collective freedom, endeavoring to accumulate assets is, in this view, a collective good
This differs from effective altruism and other “do good by doing well” conceptions. Rather, the very mechanism of economics produces collective wealth, so the story goes, which means the more active one is as an economic agent, the more collective good one produces (“wealth” and “good” being conflated)
Accumulation of assets is the scorecard, so to speak, of such enhanced economic activity, and the individual reward can then be freedom from having to think hard
Lotka’s maximum power principle (supposedly) dictates that those entities that transform the most power into useful organization are most fit from an evolutionary standpoint
Ernst Juenger’s notion of “total mobilization” brings this principle to politics/political economy/geopolitics: those nations that “totally” mobilize their national resources are the ones that will dominate geopolitically
See, for instance, the RAND Corporation’s Commission on the National Defense Strategy: “The Commission finds that the U.S. military lacks both the capabilities and the capacity required to be confident it can deter and prevail in combat. It needs to do a better job of incorporating new technology at scale; field more and higher-capability platforms, software, and munitions; and deploy innovative operational concepts to employ them together better.” (emphasis mine). In summary: the US is about to be outcompeted (lacks fitness); in response, it should go big (“at scale”, “more”) in an organized way (“deploy innovative operational concepts”, “employ them together better”)
The rhetoric around LLM-based AI includes similar language, exemplified in the GT post: burn through as much infrastructural resources as possible to produce organized outputs “at scale”, while avoiding having human beings think too hard to produce those outputs, an indication that the power was burned to produce useful organization
LLM-based AI plays a prominent role in US federal government strategy, particularly military strategy, with language about dominance serving to justify its use
It is not coincidental that Gas Town uses many orders of magnitude more resources to solve the Towers of Hanoi problem (“Burn All The Gas” Town). This rhetoric dovetails perfectly with the “total mobilization” concept
RE: https://mastodon.social/@PavelASamsonov/115968801516717162
This is the kind of conspiracy theory we like around here, the kind I 100% agree with 😜😂
RE: https://hachyderm.io/@skinnylatte/115999695690138150
If it's not from the region of consciousness it's not really intelligence, it's just disastrous stupidity 😁
#ThereIs #NoAI #BecauseThereIs #No #Intelligence #Involved #LLM
