MIT Tech Review
Defense official reveals how AI chatbots could be used for targeting decisions
The US military is exploring the use of generative AI to rank and prioritize strike targets, with human operators reviewing the recommendations before any action is taken. The disclosure offers rare insight into how the Pentagon envisions deploying large language models in high-stakes battlefield decisions. It arrives at a moment of heightened scrutiny over the Defense Department's broader push to integrate AI into warfare.
Read article →
Hacker News
The Wyden Siren Goes Off Again: We'll Be "Stunned" by NSA Under Section 702
Senator Ron Wyden is sounding the alarm again over undisclosed NSA surveillance activities conducted under Section 702 of the Foreign Intelligence Surveillance Act, warning the public would be "stunned" if the full scope were revealed. Wyden has a track record of cryptic warnings that later prove prescient; his pre-Snowden hints about mass surveillance turned out to be understated. The pattern suggests significant legal or operational overreach may be occurring under a reauthorized surveillance authority that critics argue already lacks sufficient oversight.
Read article →
Hacker News
Innocent woman jailed after being misidentified using AI facial recognition
A North Dakota grandmother spent months in jail after AI facial recognition software incorrectly matched her to a fraud suspect, highlighting the real human cost of algorithmic error in law enforcement. The case underscores a growing concern: that police and prosecutors are treating AI-generated matches as reliable evidence rather than investigative leads requiring corroboration. For a technology with documented accuracy gaps, particularly among older women and people of color, the stakes of uncritical adoption could not be higher.
Read article →
Get this delivered every morning
Join thousands of readers who get the world's most important stories, curated daily.
Start reading free →