Post by: Raina Nasser
An incident in Baltimore County, Maryland has reignited international scrutiny of AI-driven surveillance after a teenage student was taken into custody when an automated system misclassified his bag of Doritos as a gun. The student, identified as Taki Allen, was outside Kenwood High School eating with friends when an AI alert precipitated a rapid police response — officers arrived with weapons drawn.
The alert originated from an AI monitoring platform developed by Omnilert, which reported what it assessed to be a weapon in Allen’s hand and notified school and law enforcement officials. Officers reached the scene quickly and ordered the student to lie on the ground. Allen later described the moment: “At first, I didn’t know what was happening until they started walking toward me with guns, shouting, ‘Get on the ground,’” adding that he had only been holding a Doritos bag.
Police reports indicate Allen was handcuffed and subjected to a search; no firearm was discovered. Subsequent review of surveillance footage showed the object flagged by the system was a crumpled orange chip packet. One officer noted the AI misread the way the bag was held, producing a false positive.
The episode has prompted immediate concern about the accuracy and deployment of artificial intelligence in school safety systems. Parents and students voiced alarm, demanding explanations for how such a misidentification could trigger an armed police intervention. "This could have gone horribly wrong," said one of the parents gathered outside the school, stressing that human lives should not be put at risk by automated errors.
Kenwood High School’s administration issued an apology to Allen and to those who witnessed the event, acknowledging the distress it caused. The statement noted that counseling services have been made available to pupils affected by the episode.
Omnilert, the vendor of the detection software, issued a public expression of regret and confirmed it has opened an internal review of the visual recognition processes that produced the alert. The company said it is assessing ways to reduce erroneous identifications.
Experts argue the incident highlights broader questions about reliance on AI for security decisions in public spaces. Critics point out that current systems can lack context awareness and the nuanced judgement necessary for policing, increasing the risk of harmful outcomes when automated alerts are treated as definitive.
For Allen, the episode was traumatising. “I was just eating chips with my friends,” he said, reflecting on the sudden escalation. His account has attracted widespread attention online, serving as a cautionary example of how technological failures can have profound personal consequences and underscoring calls for stronger human oversight.