AI and Automation – Replacing Human Jobs?

HUMANS NEED NOT APPLY – SOMETIMES

 

In an interview with Bloomberg, IBM’s CEO Arvind Krishna said that he could see AI and automation replacing as many as 7,800 employees over the next five years. These would be so-called back-office jobs with no customer interaction.

 

It may not be as bad as it sounds, because the reduction would be achieved, at least in part, by not replacing workers who leave or retire and by slowing hiring generally in the affected areas. We’ve all been told to expect AI to come for our jobs, and this seems to be an early example of exactly that.

 

Recently, we covered ChatGPT and how amazing it is. It could certainly be used to replace, say, journalists, and many other occupations based on the written word are also in the firing line. It’s not just wordsmiths, either. One CEO said he wouldn’t be surprised if all of the order takers at drive-through takeaway restaurants were replaced by AI within five years. Again, those jobs would be absorbed by natural attrition, but positions for unskilled workers would evaporate, leaving the disadvantaged even more so. However, none of this is the biggest problem with AI.

 

ChatGPT is great, until it’s not. It produces fantastic results until it delivers a wrong answer, often a spectacularly wrong one. We’ve seen this at BICSI. As far as IBM and companies like it are concerned, no organisation is going to employ people to check the work of its AI-driven response or hiring systems. The work will just go out. Most of the time it will be perfectly acceptable, or even better. Every now and then, however, it will create problems; think Robodebt.

 

Now, Robodebt wasn’t AI, but it was automated, and adding AI could make things considerably worse or, it must be admitted, much better. However, AI has vulnerabilities that are only starting to be recognised, as is the case with the most successful Go-playing AI in the world – KataGo. It beats all of the world’s best players 100% of the time. Well, it used to, anyway.

 

Like all the best Go-playing AIs, KataGo taught itself to play. Lovely, a vindication for AI. The problem is, it only knows how to play Go. It doesn’t really know what Go is. It doesn’t know what a board is, it doesn’t know what a human is, and it doesn’t know why it’s playing. It turns out it has also missed some fairly basic strategies that humans can see and exploit.

 

The thing about non-sentient AI is that it doesn’t know anything. Even ChatGPT explains very clearly that it doesn’t know anything. This lack of contextual and motivational awareness or understanding is one of the reasons implementing AI can be a risk.

 

There was recently an open letter – signed by, among others, Steve Wozniak – calling for a pause on AI development to assess the current state of the art and consider future consequences. Nice idea, but it’ll never happen, and Wozniak is on record as knowing that. Oh well, it’s the thought that counts. It looks like we’re just going to keep working it out as we go. Organisations like IBM, with its firm foundations in IT, might lead the way. However, it’s worth remembering Big Blue’s predictions for the PC back in the eighties. Don’t worry about it? We say worry about it.