Google paused AI chatbot Gemini’s ability to generate images after a segment of users complained of historical inaccuracies. TechCrunch says it appears Google “had implemented clumsy hardcoding under the hood to attempt to 'correct' for biases in its model.” https://flip.it/ERELPr
#Tech #AI #ArtificialIntelligence #Google
Large language models sometimes fabricate information and present it as fact. But to hallucinate, you must first perceive. Writing for Big Think, Adam Frank says to stop saying ChatGPT “hallucinates.” https://flip.it/Rg3uiW
#Tech #AI #ChatGPT #ArtificialIntelligence
Google’s AI chatbot and image generator Gemini has exposed deeper problems across the industry. With AI models so large and complex, there’s no easy fix to stop them from generating controversial images and text. Semafor has more: https://flip.it/NsqGL5
#Tech #Technology #AI #ArtificialIntelligence #Google #ChatGPT
After a year of hype, the reality is emerging. Cloud clients aren’t buying generative AI tools because they’re expensive, inaccurate, and of unclear value. Some analysts are already warning of a coming “trough of disillusionment.”
https://www.theinformation.com/articles/generative-ai-providers-quietly-tamp-down-expectations
Most workers would likely agree — it’d be nice to have someone else fill in for them once in a while. And, better yet, get paid for it. London-based model Alexsandrah has experienced a real-life version of this daydream with help from her AI-generated virtual twin that has appeared as a stand-in on a photo shoot. Is this the future of modeling? Here’s what proponents and critics are saying in this report from the Associated Press. https://flip.it/DYjnSC
#Tech #Technology #AI #ArtificialIntelligence
Moderna CEO Stéphane Bancel says AI will help scientists understand "most diseases" in three to five years.
@Semafor quotes the executive: “The reason we still have people dying of cancer, people suffering from Alzheimer's, is we do not understand the fundamental biology of those diseases.”
#AI #ArtificialIntelligence #Disease #Cancer #Health #Science #Biology #Tech
I spent a long time experimenting with AI before finally writing about it in depth. It can be pretty useful — but is it worth it?
AI-generated books on Amazon now have the potential to kill people, as they've moved into the realm of mushroom foraging. Guides have popped up like, well, mushrooms, packed with information that makes no sense and could easily be dangerous, illustrated with structures that are "the mycological equivalent of a picture of a hot blond with six fingers and too many teeth," writes Vox's Constance Grady. Here's more.
#Books #Bookstodon @bookstodon #Science #Foraging #AI #ArtificialIntelligence
#OpenAI finds its #tech being used for #propaganda & #US 2024 #ElectionInterference
#ChatGPT maker OpenAI found groups from #Russia, #China, #Iran & #Israel using its #technology to #influence global political discourse, highlighting concerns that #generative #ArtificialIntelligence is making it easier for state actors to run covert #propaganda campaigns as the presidential election nears.
#ForeignDisinformationCampaigns #disinformation #generativeAI
https://www.washingtonpost.com/technology/2024/05/30/openai-disinfo-influence-operations-china-russia/
"Establishing that AI training requires a copyright license will not stop AI from being used to erode the wages and working conditions of creative workers. ... Our path to better working conditions lies through organizing and striking, not through helping our bosses sue other giant multinational corporations for the right to bleed us out." –
@pluralistic
Fighting bots is fighting humans.
One advantage to working on freely-licensed projects for over a decade is that I was forced to grapple with this decision long before mass scraping for AI training began.
OpenAI is regularly citing a content mill that plagiarizes articles to make money running ads next to them, instead of linking users to the originals published by the New York Times.
Anthropic publishes the ‘system prompts’ that make Claude tick
Anthropic, in its continued effort to paint itself as a more ethical, transparent AI vendor, has published the system prompts for its latest models.
System prompts prevent (or at least try to prevent) models from behaving badly and steer the general tone and sentiment of the models’ replies.
#Anthropic #claude #ArtificialIntelligence #AI #GenAI #LLM #technology #tech
https://techcrunch.com/2024/08/26/anthropic-publishes-the-system-prompt-that-makes-claude-tick/
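For readers curious what that looks like in practice, here’s a minimal sketch of how a system prompt is supplied alongside user messages via Anthropic’s Python SDK. The model name and prompt text below are illustrative placeholders, not Anthropic’s published prompts.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model name
    max_tokens=256,
    # The system prompt is set once, before any user input, to steer tone and set guardrails.
    system="You are a concise, polite assistant. Decline harmful requests.",
    messages=[{"role": "user", "content": "Summarize today's AI news."}],
)
print(response.content[0].text)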
During an Australian senate inquiry, Meta admitted it used all of its users’ public posts on its platforms going back to 2007 to train its AI models.
The only exception was for users in the European Union who opted out.
https://www.abc.net.au/news/2024-09-11/facebook-scraping-photos-data-no-opt-out/104336170
#tech #meta #facebook #ai #artificialintelligence #australia #auspol
Read “Artificial Intelligence 101: Everything You Need to Know About AI” by Miraj Ansari on Medium: https://medium.com/digital-miru/artificial-intelligence-101-everything-you-need-to-know-about-ai-60e5db4d93d0
#ai #artificialintelligence
📷 #Germany has unleashed self-learning¹ AI camera systems on its citizens.
AI + Mass Surveillance = Human Rights violations Made in Germany:
🇬🇧 https://madeindex.org/blog/ai-mass-surveillance-human-rights-violations-made-in-germany
🇩🇪 https://madeindex.org/de/ki-massenueberwachung-menschenrechtsverletzungen-made-in-germany
Sources:
¹https://www.youtube.com/watch?v=IXQ_E6AZTXQ&t=158
#Tech #Privacy #Government #Artificialintelligence #Software #IT #News #Mannheim #Deutschland #KI #MadeinGermany #HumanRights #Surveillance #kunstlicheintelligenz #uberwachung #AI #camera #regierung #nachrichten #georgeorwell #german #deutsch #BadenWurttemberg #parody
How AI has supercharged my day-to-day life
#ai #artificialintelligence
https://medium.com/digital-miru/the-top-6-ai-tools-that-i-simply-cant-live-without-these-tools-have-given-me-an-unfair-advantage-d0fb409832f2
The thing to keep in mind about Large Language Models (LLMs, what people currently refer to as AI) is that even though human knowledge in the form of language is fed into them for their training, they store only statistical models of language, not the actual human knowledge. Their responses are constructed from a statistical analysis of the context of prior language use.
Any appearance of knowledge is pure coincidence. Even on the most “advanced” models.
Language is how we convey knowledge, not the knowledge itself. This is why a language model can never actually know anything.
And this is why they’re so easy to manipulate into conveying objectively false information, in some cases, maliciously so. ChatGPT and all the other big vendors do manipulate their models, and yes, in part, with malice.
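To make the “statistical model of language” point concrete, here’s a toy sketch of my own (vastly simpler than any real LLM, and not any vendor’s actual code): a bigram model that picks the next word purely from counted co-occurrence statistics, with no sense of what the words mean.

import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    counts = follows[prev]
    words, weights = zip(*counts.items())
    # Weighted sampling over observed frequencies, not understanding.
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # e.g. "cat", picked because it often followed "the"

A real LLM replaces the word counts with billions of learned parameters and a much longer context, but the output is still a statistical continuation of prior text.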
#LargeLanguageModels #LLM #AI #NotAI #ChatGPT #ChatGPTIsNotAI #MaliciousAI #NotIntelligent #ArtificialIntelligence
Apple is temporarily disabling its AI-generated news notifications, which were frequently error-filled, misleading or totally false, @CNN reports. Alerts included fake stories that Luigi Mangione, who is charged with murdering the UnitedHealthcare CEO, had shot himself, and that Benjamin Netanyahu had been arrested.
#US #tech #stocks take a deep plunge as #China's #AI #DeepSeek challenges US dominance: #news #stockmarket #technology #ArtificialIntelligence https://www.cnbc.com/2025/01/27/nvidia-falls-10percent-in-premarket-trading-as-chinas-deepseek-triggers-global-tech-sell-off.html
#ElonMusk’s business #conflicts draw scrutiny amid WH role [apparently no one noticed till now 🙄]
Musk had sharp words for the $500B partnership touted by the #Trump admin to hasten development of #ArtificialIntelligence #infrastructure. “They don’t actually have the money,” he said of #OpenAI & #SoftBank….
Left unsaid by the #technocrat: he has skin in the game. His #xAI is directly challenging OpenAI for the lead in the race to transform society w/the #technology.
🧵
https://www.washingtonpost.com/business/2025/01/24/elon-musk-conflicts-doge-trump-openai/
If you live in the U.S., you may have encountered or even subscribed to a newsletter called Good Day or Good Daily. There are 355 different versions for cities and towns across the country; each is a roundup of links to stories from legitimate local sources. It's all created by one man, Matthew Henderson, and his AI tools. While he claims he's helping struggling local outlets, their teams say he's not — they're not getting traffic from the newsletters and Henderson's business practices are questionable. “From fabricated testimonials on his websites to the absence of contact information and zero transparency about his information-gathering process including AI usage, his approach completely undermines the principles of trustworthy journalism,” says Rodney Gibbs, head of audience and product at the National Trust for Local News. Here's more from NiemanLab.
#News #Media #Journalism #LocalNews #AI #ArtificialIntelligence
A roundup of a few recent AI-related things from us @BBCRD ...
Firstly, we've just published a long-read on AI agents, their potential, and the problems. They can save time and effort, but come with issues of trust, control and privacy - read more:
https://www.bbc.co.uk/rd/articles/2025-05-ai-agents-challenges-summary
Newsletter: A chatbot featured by OpenAI is pushing extreme surgeries on men it describes as “subhuman” and promoting misogynist ideas sourced from online incel forums.
“Without surgery, you won’t mog genetically superior guys head-on,” it tells one man.
https://www.citationneeded.news/openai-incel-chatbot-subhuman-men/