When Christopher Beam visited Princeton during graduation in May, the buzz on campus was all about ChatGPT—who had used it that year, who'd gotten away with it, and who was going to make a career out of it. Months before, Edward Tian, a senior computer science major with an interest in journalism, had built a program called GPTZero and posted it on Twitter. It promised to identify text written with AI, one of the first detectors to do so, and it immediately went viral. On the other side of the country, Stanford freshman Joseph Semrai was trying to write a paper but was stymied by the limitations of ChatGPT's text generation. He developed WorkNinja, a tool that helps students generate AI-written essays that can fool detectors like GPTZero. The result: students across the country caught in the cat-and-mouse game of AI detection. But what's the point of all this generating and regenerating, editing and reediting, if in the end you have a paper that will get you through a class but not much else? Does writing need to be so hard? "The siren call of AI says, It doesn't have to be this way," writes Beam, who has worked as a professional writer for nearly 20 years and can confirm that writing, as a task, does indeed suck. "When you consider the billions of people who sit outside the elite club of writer-sufferers, you start to think: Maybe it shouldn't be this way." –Michelle Legro | Deputy Editor, Features