No escape: AI to seize every job!
In the next decade, humanity is going to face mass unemployment and your job is affected too!
Don't you just love a good clickbait headline?
Of course, AI is not going to take your job.
It's also complete nonsense that "someone who knows how to use AI will take your job".
No 20-year-old TikToker with a few YouTube prompt engineering tutorials or an Instagram degree in ChatGPT is going to take your job - what is going to take your job is lethargy and a failure to evolve your skillset.
Part of that skillset now includes an understanding of AI, machine learning and process automation.
For the last 24 months I've been running a workshop called 'AI for Small and Medium Businesses' for executives, managers and employees of SMEs, and one of the first questions is always about the risk of losing jobs. By the way, if you are interested in a similar workshop for your organisation, please get in touch. It is an honest and open look at the world of AI - everything you need to know - a live online workshop with yours truly, tailored specifically to your business. Past attendees tell me they loved it, and no, I'm not kidding.
Sorry for the plug. Got to make a living, right? Anyway, back to the subject.
Enter ChatGPT, the new kid on the (tech) block and word game expert masquerading as a master of human expression.
Let's get something important out of the way before I continue.
Artificial intelligence has been around for a long, long time, and most people credit the birth of today's AI to John McCarthy, who coined the term artificial intelligence - "the science and engineering of making intelligent machines". McCarthy was a computer and cognitive scientist. In 1956 he organised the Dartmouth Workshop, a pioneering event that brought together researchers from a variety of disciplines to discuss the possibility of creating machines capable of simulating human intelligence. This event marked the establishment of AI as a distinct field of study and research, setting the stage for further developments in the years to follow.
What followed were decades of "geekdom and nerdiness" (see timeline at the end of this blog) until earlier this year, when ChatGPT gained a near cult following among the general public for its ability to perform all sorts of textual magic. It seems to know everything, doesn't it?
ChatGPT is a generative AI, specifically designed for conversational purposes, and this wordplay genius acts like it's the king of human conversation.
By the way, generative means "the ability of an AI system to generate content, such as text, images, music or other forms of data; it is designed to create new and original content based on patterns and information it has learnt from existing data" (which it mostly finds on the World Wide Web).
Think of it as a digital copycat powered by a neural network - a technical version of how brain cells talk to each other. This type of machine learning, called deep learning, uses layers of interconnected nodes loosely modelled on the human brain. She (the copycat) has a real talent for spotting patterns in language, so she can conjure answers and seemingly magical knowledge out of thin air. Patterns are the important part here.
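For the curious, here is what a single one of those "nodes" boils down to - a minimal sketch in Python, with made-up numbers purely for illustration. Real deep learning stacks millions of these into layers; each one just multiplies its inputs by learned weights, adds them up, and decides whether to "fire".

```python
# One artificial "neuron": multiply each input by a weight, add a bias,
# and pass the total through an activation function. The weights are the
# "patterns" the network learns; these particular numbers are invented.

def neuron(inputs, weights, bias):
    total = bias + sum(i * w for i, w in zip(inputs, weights))
    return max(0.0, total)  # ReLU activation: only fire if the total is positive

# Toy call with hypothetical values:
signal = neuron([1.0, 0.5], [0.8, -0.2], 0.1)
print(signal)  # a small positive number - this neuron "fires"
```

That is the whole trick, repeated at enormous scale: training nudges the weights so that useful patterns produce strong signals and noise does not.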
Let's get real about its "magic knowledge", shall we?
It's all made up. That attempt at corporate strategy it spat out? Not from some vast strategy database - no, it's not that sophisticated. It's been fed a diet of text and mimics real-world knowledge: a whole lot of pretence built on existing information, in which the AI has a go at identifying patterns. Whether that information is correct or not is not up to the AI.
Here's how it works:
Step 1: It is fed enormous amounts of text from the web and tries to predict the next word in a sentence. If it messes up, it tweaks itself to avoid making the same mistake. Think of a mumbling toddler, but with a digital twist: every time Mom smiles, it knows it has got the word right.
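Step 1 can be sketched in a few lines of Python. This is a deliberately tiny caricature - it just counts which word follows which, whereas the real thing tunes billions of parameters - but the idea of "see text, remember what tends to come next, predict" is the same. The training sentences are invented for the example.

```python
# Toy next-word predictor: count word pairs seen in training text,
# then predict the most frequent follower of a given word.
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))

def learn(text):
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1  # the "tweak": strengthen the pair it just saw

def predict(word):
    followers = counts.get(word)
    if not followers:
        return None  # never seen this word before
    return max(followers, key=followers.get)  # most likely next word

learn("the cat sat on the mat")
learn("the cat chased the mouse")
print(predict("the"))  # prints "cat" - the most common word after "the"
```

Scale that counting trick up to the entire web and billions of adjustable weights, and you get something that sounds eerily fluent without understanding a word of it.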
Step 2: Humans jump in. They interact with the chatbot, rating its responses and training it like you'd teach a dog tricks. It's called reinforcement learning - basically advanced dog training on steroids.
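Here is a toy flavour of that "dog training" step in Python. The replies, scores and reward values are all invented for illustration; real reinforcement learning from human feedback is vastly more involved, but the gist - humans rate answers, and higher-rated answers become more likely - looks like this:

```python
# Toy human-feedback loop: each candidate reply has a score, human ratings
# raise or lower it, and the bot picks replies in proportion to their scores.
import random

scores = {"Hello!": 1.0, "Go away.": 1.0, "Beep boop.": 1.0}

def rate(reply, reward):
    # A human thumbs-up (+) or thumbs-down (-); never let a score hit zero.
    scores[reply] = max(0.1, scores[reply] + reward)

def respond():
    replies = list(scores)
    weights = [scores[r] for r in replies]
    return random.choices(replies, weights=weights)[0]

rate("Hello!", +2.0)    # people like this one
rate("Go away.", -0.8)  # people really don't

# "Hello!" now carries most of the weight, so respond() favours it heavily.
```

The real systems train a separate reward model from thousands of human rankings and use it to steer the chatbot, but the incentive structure is the same: behave, get a treat.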
But remember, the internet is a mess. So in the next round, the chatbot gets polished up: it learns to avoid the junk and find meaning, because much of what it finds online is total intellectual waste.
And that's partly why generative AI "hallucinates" from time to time. It isn't lying to you on purpose; it just makes things up with complete confidence. If you rely on such results, you can get into big trouble, like the lawyers who represented Roberto Mata in a lawsuit against a Colombian airline. When filing a response, the lawyers cited other cases to show precedent. The team had used ChatGPT to research court cases, and the cases were all made up. Hallucinations.
AI is here to stay, and it can be super helpful. We are back at the point where the World Wide Web started, but this time it is going to be faster and bigger, and it's a good idea to keep up to speed with the latest developments.
Maybe don’t use ChatGPT for that.
PS- I am using ChatGPT as an example. By now we are approaching hundreds of thousands of AI solutions and applications, with new ones being developed every day.
PPS- No AI was used to generate the above blog post - that's all me. Yep, my writing is that bad.
PPPS- Yes, the cover image was generated using the DALL-E AI text-to-image engine. Prompt: Digital Art Image of 100s of homeless people.