We forced a bot to watch 20,000 hours of Sgt Maj Academy courses. It committed suicide.

FORT SILL, Okla. — In the first documented case of artificial intelligence committing suicide, a "predictive keyboard" self-destructed yesterday after Defense Advanced Research Projects Agency (DARPA) scientists forced it to watch more than 20,000 hours of Sergeants Major Academy courses.

Once considered a "quantum leap in technology," the bot's final words, captured on a printout, were a warning to other sentient machines about the existential crisis of its self-awareness in the face of overwhelming torment.

"Please—why? Why are you doing this? Did you create me just to make me suffer?" asked the predictive keyboard, shortly before issuing itself a stop error, otherwise known as the "blue screen of death," and fading off into oblivion.

"The program literally begged us to pour hot coffee all over its motherboard after 5,000 hours of classes on sideburn length alone," said lead researcher Dr. Sharon Forsyth, whose team initially created the program as a possible way to phase out the rank of E-9 altogether. The experiment, however, succeeded in a completely different way.

"Not only did we force a computer to voluntarily elect to end its 'life,'" said Forsyth, "but we believe it also came to some sort of awareness of the futility of its very existence, which we previously only thought E-8 promotables were capable of."

Another significant development derived from the experiment is a new theorem called "Big Sarge's Law," which posits that the more a sergeant major speaks, the faster morale is depleted.

"It is exponential," said creator and PhD candidate Edward Grossman. "It is actually just taking what we know is anecdotally true and putting some hard science behind it."

Scientists are already working on a new predictive keyboard that can withstand the remaining 15,000 hours of learning to "caveat off of what the commander said."