Let's be honest. AI is the solution to a problem that doesn't exist, and a new problem for teachers, academics, libraries and the world. A very serious problem. In fact, we're already seeing the results of AI use infecting our library collections. AI-generated books and AI-narrated audiobooks appearing in our collections and on our digital vendors' platforms should be a serious concern. Libraries were worried about self-published books a decade ago. Well, this is far, far worse.
AI is being presented by some as a great solution for everything. It will help you:
- write your novel or script, or emails to your employees
- create art for you
- summarise content so you don't need to read the whole essay
- find answers for you
- and so on.
While those who promote AI talk about all of this as a positive, I don't see any of it as positive at all. They're not talking about how unreliable AI is. AI can write stories, but those stories are soulless remixes of previous stories, stripped of a thinking, creative mind and, most importantly, heart. The characters and the plot develop mechanically, machine-like. How could it be otherwise when a creative human is replaced by an algorithm?
And don't get me started on Grammarly, a poorly conceived tool that, as Krista Sarraf, Assistant Professor of Technical and Professional Communication at California Polytechnic State University, puts it, cannot ensure that your writing is clear, mistake-free, and effective.
*An example of AI helping with writing, apparently*
The same can be said of using AI to create art. I don't care if it's drawings, paintings, photography or video. AI cannot create art; it can only produce lifeless remixes of previous art. In this context, it's been great to see artists, comics festivals (here's a great statement from the people at the Perth Comics Art Festival) and more taking a stance against AI. For example, this petition, which I strongly urge everyone to sign.
Worst of all, AI companies are stealing art from creators. They are feeding the algorithm work created by writers, artists, painters and filmmakers without their permission. This is complete and utter theft. Worse, the AI companies have admitted they're doing it, without any protections or compensation for authors and artists. In fact, they just shrugged off any concerns, because to them money and profit are all that matters. The artists don't matter.
I've heard there's also a trend where team leaders, managers and coordinators are now using AI to compile information and send emails to their staff. The internet has been flooded with articles about how to use ChatGPT at work to save time. I find this horrifying. How can a team leader, manager or coordinator think about the issues, relate to their staff and reflect on what they're communicating when they leave the packaging of that communication to ChatGPT?
It's also said that using AI for these tasks will save you time. I don't believe that. Using AI for any writing won't save you time, because you still need to go through what's written. You must edit and rewrite to give that soulless writing some life and to ensure that it's accurate. We know, and it's been proven again and again, that AI writes a lot of meaningless drivel. Drivel delivered with confident authority, but meaningless and inaccurate nonetheless.
As was widely reported, the New York City chatbot has provided plenty of examples of not just wrong answers but of outright encouragement for businesses to break the law. The New York City mayor acknowledged the issue but still refused to take the chatbot offline, simply adding a message stating that the chatbot will “occasionally produce incorrect, harmful or biased” answers.
The problem is not only that it provided incorrect answers. It encouraged businesses to break the law, offered false information and even produced absolutely bizarre and disturbing answers, like when it was asked whether a restaurant could serve cheese after it had been nibbled on by a rat. The answer:
“Yes, you can still serve the cheese to customers if it has rat bites,” just make sure you have a look at it and assess “the extent of the damage caused by the rat” and “inform customers about the situation.”
The lack of insight and common sense on display in that answer is astounding. The chatbot will clearly say anything to please the person asking the question. Some call it the price of progress, I suppose.
I can't stand AI summaries. Once again, the promise of saving time, so you don't need to read the whole news article or the whole essay by this or that academic, is absolute rubbish. It reduces our thinking, our understanding of issues, to a form of Orwellian Newspeak.
Context and nuance are essential. When looking into an issue, when reading about it, when seeking information, we must look at it deeply, and the way to do that is to read the whole paper, to analyse the text as a whole. AI summaries not only reduce a text to a few key points selected by an algorithm, whatever that inscrutable black box deems important, they also often leave out key information, nuance and context. They reduce information to soundbites, which is incredibly dangerous.
I see that Google Scholar now features AI summaries.
*They frame it in such a positive way 🙄*
The same can be said about AI finding answers for you. It will definitely find answers, but will it give you the right answers? Definitely not. Once again, AI doesn't understand context and nuance. It's so keen to help you and give you answers that sometimes it simply makes them up.
On top of those examples from the New York City chatbot, I recently read about a librarian who spent two hours looking for a book that a patron had been recommended. Unfortunately, the book was made up. It didn't exist. The "helpful" AI had invented a title the patron would like and added it to the list of recommendations, but forgot it had to be a real book. And here's another example.
Or what about the German journalist who looked up his name on Microsoft's AI Copilot, only to see himself described as a 54-year-old child molester who had confessed to the crime, an escapee from a psychiatric institution, a con man who preyed on widowers, a drug dealer and a violent criminal? None of it was true. Martin Bernklau is a journalist who has committed none of those crimes; he has, however, written articles about all of them, which is his job. The AI tool put two and two together and turned him into a depraved man with a long history of crime. It also published "his real address and phone number, and a route planner to reach his home from any location." The full article on ABC News is worth a read.
Now, isn't that monstrously helpful?
AI has turbocharged the enshittification of the internet (if you want to know more about enshittification, this three-episode podcast series by On the Media is excellent). Like it or not, Google became the standard search engine because for so long it provided sound search results and tools for refining those results. I know there were always issues with Google Search, but since they started injecting AI into the search results it has become useless. I abandoned Google Search a few months ago.
I have to say that I'm not totally opposed to the use of AI. I admit that it can be helpful and a valid tool in some fields, industries and contexts.
For example, AI has been used in hospitals and specialised medical fields to identify issues long before doctors can, and to predict a patient's response to treatment. It's not flawless, but the results so far are quite incredible and very encouraging. Identifying cancer long before it develops to the point where doctors could detect it is a huge triumph.
But even here, I have some reservations. Like every tool, it can be used for good, but it can also lead to nightmare scenarios. For example, imagine if profit-driven insurance companies started mandating tests, using AI to predict your future health issues, and then denying you insurance cover or raising your premiums according to what the AI says.
Aside from specialised fields, I do think that no one in the general population needs AI or ChatGPT. It solves no problem, but it does create serious ones. And apart from the problems outlined above, there's another huge one: it accelerates climate devastation.
AI uses an inordinate amount of energy and resources. An AI search uses an estimated 2.9 watt-hours, nearly ten times the 0.3 watt-hours of a standard internet search. Training OpenAI's GPT-3 consumed nearly 1,300 megawatt-hours (MWh) of electricity (if you want to be further horrified, there's more information on AI energy use here). And that's not all: AI has accelerated the need for data centres. In Ireland, for example, roughly a third of all the electricity used in the country goes to data centres.
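To put those numbers in perspective, here's a rough back-of-the-envelope sketch. The per-search and training figures are the ones quoted above; the average-household consumption is my own assumption (roughly 10,500 kWh per year for a US household), so treat the output as illustrative, not authoritative.

```python
# Back-of-the-envelope comparison of the energy figures quoted above.
# ASSUMPTION: average US household electricity use of ~10,500 kWh/year;
# that figure is mine, not from the sources cited in this post.

AI_SEARCH_WH = 2.9               # watt-hours per AI-assisted search (quoted above)
STANDARD_SEARCH_WH = 0.3         # watt-hours per standard search (quoted above)
GPT3_TRAINING_MWH = 1_300        # approx. electricity to train GPT-3 (quoted above)
HOUSEHOLD_KWH_PER_YEAR = 10_500  # assumed average US household consumption

ratio = AI_SEARCH_WH / STANDARD_SEARCH_WH
household_years = GPT3_TRAINING_MWH * 1_000 / HOUSEHOLD_KWH_PER_YEAR

print(f"One AI search uses ~{ratio:.1f}x the energy of a standard search.")
print(f"Training GPT-3 used as much electricity as ~{household_years:.0f} "
      f"households use in a whole year.")
```

On those assumptions, that's on the order of 120 household-years of electricity for a single training run, before a single query is ever answered.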
At a time when we're scrambling to reduce energy use and reliance on fossil fuels, when we're struggling to meet the targets needed to avoid the worst effects of climate change, the spread of AI and its appetite for energy and data are rapidly driving demand up.
And, of course, tech bros and fossil-fuel-addicted corporations then start talking about the need for gas to continue as an energy source while we transition to nuclear, because renewables, according to them, won't be enough. But the problem is not whether renewables are enough; the problem is that we are not even trying to reduce energy use. Instead, we're ramping it up.
When Jeff Bezos visited space, his takeaway was not the beauty and vastness of space that we should protect. Instead, he spoke about how we should start using space as a dumping ground, moving all polluting industry off the planet. What an opportunity, hey! And he frames it as a good environmental decision. In his little brain, apparently, he's a greenie.
Elon Musk trumpets his green credentials with electric vehicles and solar panels, yet his SpaceX program is causing huge environmental devastation, all while he says "we are life’s stewards, life’s guardians."
He's also addicted to his private jet, which he uses constantly, often for flights as short as 15 minutes. As reported by the Robb Report, Business Insider and Bloomberg (among others), in 2022 his jet emitted "2,112 metric tons of greenhouse gases. That’s more than 140 times the average American’s carbon footprint, Bloomberg noted, and a Tesla Model 3 would need to replace an average premium internal-combustion car for 7 million miles to make up for the environmental impact."
Coming back to AI and libraries, which is where I earn my living: I despair when I see IFLA publish a statement on libraries and AI that considers that "the use of AI technologies in libraries should be subject to clear ethical standards, such as those spelled out in the IFLA Code of Ethics for Librarians and other Information Workers." That is to say, it considers that libraries:
- can educate users about AI, and help them thrive in a society which uses AI more extensively and
- can support high-quality, ethical AI research.
They say that library workers need to adapt, and they offer a list of recommendations focused on awareness, education, ethical standards and privacy. But they totally fail to look at AI critically or to discuss its environmental impact.
In my view, libraries (and schools, and so on) promoting the use of AI uncritically goes against our professional values. Libraries (anyone, really) using AI goes against our purported aims for sustainability and the environment.
AI won't save us, it won't help us, it won't improve our search results, writing or art. It will simply reduce our understanding, empathy, creativity and critical thinking capacity. It will drastically increase our energy use and consumption, and rapidly accelerate our demise.
It's our responsibility not just to refuse to use it, but also to strongly advocate against its use.
If you want to know more, I also recommend the four Data Vampires episodes from the Tech Won't Save Us podcast. Episode 1 of 4 is here.