AI content and critical thinking
When I was in year nine at school (around the age of 14) I recall an English class in which the teacher encouraged us to ask questions of the article we were reading. What does the writer think about this issue? How are they presenting the data? Do they consider the other side of the argument? What is the agenda of the publisher? She was asking us to think critically about the source material, and it clearly had an effect on me, given that I can still recall it 25 years later.
I was reminded of this lesson when reading Ben Thompson’s thoughtful piece on ChatGPT, the text synthesis app currently taking the Internet by storm, which he ends with a bit of a call to arms:
The solution will be to start with Internet assumptions, which means abundance, and choosing Locke and Montesquieu over Hobbes: instead of insisting on top-down control of information, embrace abundance, and entrust individuals to figure it out. In the case of AI, don’t ban it for students — or anyone else for that matter; leverage it to create an educational model that starts with the assumption that content is free and the real skill is editing it into something true or beautiful; only then will it be valuable and reliable.
It appears to me, however, that much of this has already come to pass.
On the abundance of information, well, the present is already awash with content that is of minimal value or patently false. It won’t take you long to find publicly traded companies publishing mundanities in order to arbitrage clicks. Or to find vast swathes of information on social media that are, at best, nonsense and, at worst, deliberately false.
Similarly, when Thompson says…
an education model that starts with the assumption that content is free and the real skill is editing it into something true or beautiful
…this is no different from my year nine English teacher trying to instill those virtues of critical thinking. And it is no doubt how today’s teachers encourage their students to practise the same skills when reviewing anything published online, particularly if it came from an unattributed source.
There has been junk content for as long as humans have been able to publish it. It was a problem in my pre-Internet school years, and in the quarter of a century since then humans have proven themselves adept at creating even more of it.
No doubt text synthesis models will produce ever more junk at the direction of their human users. But, despite how the emergence of ChatGPT might make it appear, the need to be trained in the methods of spotting junk is a problem of today, not one of the future.