New technology alerts schools when students type words related to suicide. But do the timely interventions balance out the false alarms?
Dawn was still hours away when Angel Cholka was awakened by the beams of a police flashlight through the window. At the door was an officer, who asked if someone named Madi lived there. He said he needed to check on her. Ms. Cholka ran to her 16-year-old’s bedroom, confused and, suddenly, terrified.
Ms. Cholka did not know that A.I.-powered software operated by the local school district in Neosho, Mo., had been tracking what Madi was typing on her school-issued Chromebook.
While her family slept, Madi had texted a friend that she planned to overdose on her anxiety medication. That information shot to the school’s head counselor, who sent it to the police. When Ms. Cholka and the officer reached Madi, she had already taken about 15 pills. They pulled her out of bed and rushed her to the hospital.
More than a thousand miles away, at around midnight, a mother and father in Fairfield County, Conn., received a call on their landline that they could not reach in time to answer. Fifteen minutes later, the doorbell rang. Three officers were on the stoop asking to see their 17-year-old daughter, who had been flagged by monitoring software as at urgent risk for self-harm.
The girl’s parents woke her and brought her downstairs so the police could question her about something she had typed on her school laptop. It took only a few minutes to conclude that it was a false alarm — the language came from a poem she had written years earlier — but the visit left the girl profoundly shaken.
“It was one of the worst experiences of her life,” said the girl’s mother, who requested anonymity to discuss an experience “traumatizing” to her daughter.