Thursday, 15 June 2023

Last one on AI for a while

I've written a few posts on AI lately. They have proved surprisingly popular, more so than my usual guff about some old song or other that I like. Although when I say popular, what I mean is they've had a lot of traffic and comments. I'm not trying to suggest the coming omnipotence of AI is popular, because it is far from that amongst my modest readership. But indulge me one more time. I read about a very basic limitation of one AI engine, and so had to try it out for myself. How's your counting? Because ChatGPT's isn't always very good...

Oh dear. The correct answer is four, as any fule kno. So what's gone wrong? If I ask how many "z" are in "zzzzzz", it correctly answers six, so it's probably not to do with repeated letters. Maybe for words that it doesn't immediately recognise (and there are many variant spellings of pizzazz) it looks to the dictionary definition and counts from that? Or from the pronunciation /pɪzæz/?

Maybe I could teach ChatGPT to count...

Which, to my programmer's mind, suggests it is probably missing the last "z". Think for loops versus while loops, maybe. Either way, it seems I'm still asking the question wrong. Let's try again. If I were programming this task, let's say in JavaScript, it's a doddle to iterate through a string and count how many times a certain letter appears. Look, I did it here if you want to learn some basic code - as you can see, it correctly returns 4. But what does ChatGPT make of it?
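
For the curious, here is a minimal sketch of that sort of function - not my exact code, and the countLetter name is just illustrative:

function countLetter(word, letter) {
  // Walk through the string one character at a time,
  // bumping the count whenever we see the letter we're after.
  let count = 0;
  for (const character of word) {
    if (character === letter) {
      count++;
    }
  }
  return count;
}

console.log(countLetter("pizzazz", "z")); // 4

A simple for...of loop does the trick: no need to know how long the word is up front, just visit every character once.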

ChatGPT's answer is really, really disappointing. For whilst it correctly understands and describes the purpose of the function, rather than running through the code it shortcuts straight to what it thinks it already knows...

This is a very human response though - to have a preconception of what is right and assume it to be true, rather than retest one's understanding. Confirmation bias, in other words, and we all do it at times. I hope, for our sake, that ChatGPT is not exhibiting egocentric bias... Whatever, it seems that the real skill in these early days of AI is asking the question correctly. And then, just maybe, being sceptical about the answer.

Postscript

ChatGPT, backed by Microsoft, struggles, as per the above. Bard, developed by Google, gets it right first time, as below. Not only that, it also offers a snippet of code to prove its answer, inadvertently demonstrating that my programming skills will soon be obsolete too...
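
Bard's snippet was part of the screenshot, so I can't reproduce it exactly, but a one-liner along these lines (my own sketch, not Bard's actual code) makes the same point:

// Pull out every "z" with a regular expression and count the matches.
console.log(("pizzazz".match(/z/g) || []).length); // 4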

8 comments:

  1. The Man Of Cheese, 16 June 2023 at 07:00

    Exhibiting human characteristics... makes it all the more frightening for when it gets it wrong for something important later down the line, when we rely on it more (or it's taken over...)

    Replies
    1. Especially when it can't admit, or won't accept, being wrong.

    2. Sounds like Boris Johnston!

    3. Let's be honest, an AI PM could hardly be worse than he was.

  2. This one rather hurt my brain, in a slightly different way to previous posts. However, my conclusion remains the same: turn them all off.

    Replies
    1. Speaking as someone who's been making computers do things for a living for 30 years, it hurts my brain too.

  3. This AI stuff already has supreme power over me and my mind - I know this because whenever I read about it I turn from a shy, peace-loving woman into a raging, uncouth, plebby yob. Fuck AI!

    Replies
    1. I'll design you a t-shirt with that slogan. I promise I won't ask AI to design it.
