Coffee Shop Musings (AI Panic Edition)
Updated: May 13
"Skynet is imminent and we are all going to die" is what everyone is currently shitting themselves about, rather than the planet expelling humanity as the troublesome infection it has evolved into. Are students creating essays in ChatGPT, and the lack of humans answering customer service queries, really the end of life as we know it? Let's look at some carefully selected facts for an unbalanced discussion...
Artificial Intelligence (AI) has been the stuff of sci-fi horror and navel-gazing narratives since Samuel Butler's 1872 novel Erewhon (no, me neither) and, for me, Kubrick's film 2001: A Space Odyssey. Humanity has a deeply ingrained fear of the 'other' which inevitably causes us to focus on the negative aspects of anything new that we struggle to understand. I've no idea what leads some people to be more inherently distrusting than others (over exposure to fight or flight scenarios? Multiple or prolonged episodes of stress? Watching too much Game of Thrones?) but there are some people that are genuinely fearful that we are on the slippery slopes to living out our own Terminator franchise. Though hopefully skipping Terminator 3 - that was a bit crap.
When reviewing the recent commentary that has sprung up regarding AI since the unleashing of ChatGPT on the world, there seem to be four different camps to slot opinion spouters into:
Fearful AI developers
Excited AI developers
Grumpy, salty AI developers behind the curve (looking at you, Elon)
Bemused general public
I slot snugly into the last category and, like most people (probably), I am generally positive about AI applications - until I have to deal with a customer service chatbot that takes me through every FAQ option before putting me through to a human who will give me what I want. I appreciate the medical advances being made through AI-generated medicines for targeted treatment, and I live in hope that AI will cut costs so that, in future, I won't need to sell a kidney to upgrade a graphics card. Noble and frivolous desires for AI - very human.
Though I am a bit salty about AI customer service bots, I must also confess that the last time I agitatedly sought a human to fix an issue with a new router, the chatbot text-messaging function actually resolved my issue, leaving me mildly irritated and impressed in equal measure. The threat to people who work in customer service is not as immediate as some make out. Generally speaking (and talking from a UK perspective), our service industry is understaffed and under pressure from low-cost business models - using AI to give global staff an accurate knowledge base, whatever country they are serving, should improve both working conditions and service levels. Not all areas are safe, it would appear: IBM recently announced it will pause recruiting for roles it envisions AI replacing over the next five years.
What are the current AI chat models capable of anyway? Humorous commentary when people try to get them to give rude answers, if the interwebs are to be believed. The current market leader is ChatGPT by OpenAI, which has been integrated into Microsoft offerings such as Office.
They are search engines on steroids, trained by humans using Reinforcement Learning from Human Feedback (RLHF) so they can learn from and respond to natural-language queries. They are only as good as the information they have access to, and even then the ChatGPT documentation states, "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging..." Microsoft's earlier effort, Tay, was taken offline in 2016 after Twitter taught it to be racist; Google's recent effort, Bard, is currently available and is a Large Language Model (LLM) similar to ChatGPT. Google recently announced it will be integrating PaLM 2 across its services to keep up with Microsoft in the productivity race.
On the face of it, AI is being used to help companies squeeze more productivity out of time and people. The more interesting developments can be found in the work of companies such as Absci, who utilise zero-shot AI to develop targeted drugs. This model differs from the likes of ChatGPT and Bard in that it derives its answers without being explicitly trained on the task it is being used for. This makes zero-shot more versatile but also more prone to errors. "Wait, more prone to errors? Medicine? WHAT!?" An understandable reaction; however, it makes sense once you understand that zero-shot is designed to learn from descriptions of a task rather than from examples of it. That lends itself perfectly to the creation of novel medical interventions, where you know what you are trying to prevent but need to find unique ways to achieve that elusive cure or therapeutic. Is it this approach to AI that worries everyone? The chance of AI teaching itself, rather than needing its human overlords to train it?
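For the curious, the "descriptions rather than examples" distinction can be shown in miniature with prompt construction. This is a toy sketch, not Absci's actual pipeline (and the function names and sentiment task here are invented purely for illustration): a few-shot prompt teaches by worked examples, while a zero-shot prompt relies on the task description alone.

```python
# Toy illustration (hypothetical helpers, not any vendor's API):
# few-shot prompting supplies worked examples; zero-shot supplies
# only a description of the task.

def few_shot_prompt(task, examples, query):
    """Build a prompt that teaches the model via worked examples."""
    lines = [task]
    for text, label in examples:
        lines.append(f"Input: {text}\nOutput: {label}")
    lines.append(f"Input: {query}\nOutput:")
    return "\n".join(lines)

def zero_shot_prompt(task_description, query):
    """Build a prompt that relies only on a description of the task."""
    return f"{task_description}\nInput: {query}\nOutput:"

fs = few_shot_prompt(
    "Classify the sentiment of each review.",
    [("Loved it", "positive"), ("Terrible", "negative")],
    "Not bad at all",
)
zs = zero_shot_prompt(
    "Classify the sentiment of the review as positive or negative.",
    "Not bad at all",
)
```

The trade-off the paragraph describes falls out of this: the zero-shot prompt works for tasks no one has labelled examples for (like a novel therapeutic target), at the cost of the model having nothing concrete to anchor its answers to.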
An open letter calling for a pause on the development of AI reads as quite apocalyptic ("Advanced AI could represent a profound change in the history of life on Earth"; "AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds"), but is the future of the human race really the main concern of the signatories? Possibly, but the inclusion of people like Elon Musk among those calling for the pause gives rise to the opinion that the six-month pause is nothing short of a convenient way for salty billionaires with a bad case of FOMO to use fear to their own advantage. Reading the paragraph headlines of the 'Policymaking in the Pause' document that accompanied the open letter does nothing to contradict this, when the first point is concerned with limiting, sorry, regulating 'access to compute power' for those engaging in development. National regulation bodies, liability for AI harms, standards for identifying AI-generated content: all read as worthy intentions. Flipped and looked at from another point of view, it reads as another sort of fear - loss of control over populations who may no longer need the likes of Elon Musk to provide them with information. Hilariously, the open letter asks, "Should we let machines flood our information channels with propaganda and untruth?", which seems a little tone-deaf in the internet age. Looking at you, Twitter.
Angry artists would tend to agree when it comes to generative AI models, as recently shown by a group of artists suing Stability.ai for using their work, without permission or compensation, to train its art generator. In the case, Stability is likened to "a parasite that, if allowed to proliferate, will cause irreparable harm to artists, now and in the future." It's hard to argue that a company using someone else's creations to generate profit for its own systems isn't wrong and, while no expert on copyright, as a Twitch streamer I am all too aware of the trouble that failing to declare paid promotion, or using copyrighted music without permission, can bring down on my head. This is a legitimate case for limiting AI software such as Midjourney, and the likes of The World Photography Organization might agree after an AI-generated photo won their top prize. The winning photographer, Boris Eldagsen, handed the prize back and highlighted the need for a discussion around AI-generated art replacing photography, as photography superseded painting in its day. The competition organisers were understandably salty at having been duped but acknowledged a conversation needed to be had... on their terms. How dare someone show them up like this?! Boris did ask that the money be donated to a gallery in Odessa. Bravo.
The wider internet has used AI to generate all sorts of imagery, and many Discord servers I am a member of are awash with fan images and oddness generated by AI, and it's glorious. There is a skill and a knack to getting impressive imagery to match your imaginings, as I found out with my early efforts to create pictures of Judith in the supermarket in the style of Caravaggio - there was a distinct lack of beheadings and an abundance of poorly lit, sad, medieval Italian ladies handling fruit. This democratising of artistic creation should be embraced, not limited, and it's an interesting circle to square to ensure traditional artists are not deprived of income. The Pause's call for regulation might have legs in this instance.
In conclusion, it would appear my initial list of groups people fall into could be more succinctly whittled down to:
Those with control through qualification
Those without qualification but enthusiasm
Salty billionaires may have plenty of money that helps determine what information is available to us and how we consume it, but I would lump them in the second group - they are no more qualified to determine how AI should be developed than Boris and his photography competition entry. They may employ clever AI developers, but the likes of Elon Musk have a disproportionately huge influence through their cult of personality and personal motivations, which all creators of open letters should be wary of. The tension generated by AI development is going to be between those seeking to enhance our productivity and those seeking to push the boundaries of creativity. It can be argued that the first group is motivated solely by performance and profit, while the second has a mixture of profit and/or pure creativity - lolz or not - as its motivation. If AI can make my job easier and enable me to create fantastic content, then I'm all for it. Where do you sit?
#AI #Midjourney #ChatGPT #Bard #PaLM2 #Google #Caravaggio