I want you to think: as you’re reading this, are you conscious?
You probably are.
How about AI? When you type a prompt into its chatbox and it pops up with a response, is that information it actually knows? Does it exhibit any consciousness? Surprisingly, the answers are rather mixed: according to a survey conducted by the University of Waterloo in the past year, two-thirds of respondents believed that AI tools have some degree of thought.
Take ChatGPT: released in the past few years, it has quickly become one of the most-used applications for everything from schoolwork to everyday tasks. I use it to help summarize chemistry notes, and my mom uses it to translate and write emails. If you’re a student, you’ve used it at least once.
Made by the company OpenAI and labeled a “generative artificial intelligence chatbot,” it has over 200 million weekly users. It has basically become what most people think of when the term AI comes up. However, despite what many think, most professionals don’t consider ChatGPT to be AI at all.
“The joke is that you call it artificial intelligence when you’re marketing or you need to raise money,” said Jose Nazario, who works with AI and cybersecurity.
Originally, the goal in developing ChatGPT was to imitate human writing. It wasn’t designed to respond correctly, just in a way a human might. To simulate a human response, text prompts (the questions you type into the chat) are reduced to patterns of numbers. The computer then looks for the output numbers most strongly associated with those input numbers, and converts them back into human text.
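To make that idea concrete, here is a minimal sketch in Python. It is nothing like ChatGPT’s real machinery; the tiny vocabulary and the association table are invented purely for illustration of the text-to-numbers-to-text loop.

```python
# Hypothetical toy vocabulary: each word gets an ID number.
vocab = {"how": 0, "are": 1, "you": 2, "i": 3, "am": 4, "fine": 5}
id_to_word = {i: w for w, i in vocab.items()}

# Made-up association table: input word ID -> most likely next word ID.
associations = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5}

def respond(prompt: str) -> str:
    # Step 1: reduce the text prompt to a pattern of numbers.
    ids = [vocab[w] for w in prompt.lower().split() if w in vocab]
    # Step 2: follow the strongest associations to get output numbers.
    out_ids = []
    current = ids[-1] if ids else 0
    while current in associations:
        current = associations[current]
        out_ids.append(current)
    # Step 3: convert the numbers back into human text.
    return " ".join(id_to_word[i] for i in out_ids)

print(respond("how are you"))  # -> "i am fine"
```

The real system works with probabilities over tens of thousands of word pieces rather than a fixed lookup table, but the overall shape — text in, numbers matched, text out — is the same.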
Generative artificial intelligence models, also known as large language models (LLMs), are in actuality just big piles of data. They are trained on a large amount of sample data, called a dataset, and, coming back to the earlier point, this training is what lets them make associations, using what is called a neural net.
Simply put, an LLM is just a series of steps where decisions are made. The input number is processed and might allow 20 different decisions, each of which leads to another step with 20 more decisions, and so on. Think of it as tossing a marble into the top of a Plinko game, but instead of nine slots at the bottom, there are billions. Each peg the marble hits is like one of those decisions, until the marble ultimately finds its endpoint.
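Here is a toy version of that Plinko board in Python. The words and weights below are made up for illustration; a real LLM makes billions of these weighted choices over its entire vocabulary, but each “peg” is the same kind of step.

```python
import random

# Invented decision table: at each step, the marble (the model)
# picks among weighted options, and each pick changes what comes next.
next_options = {
    "<start>": [("the", 0.6), ("a", 0.4)],
    "the":     [("cat", 0.5), ("dog", 0.5)],
    "a":       [("cat", 0.3), ("dog", 0.7)],
    "cat":     [("sat", 0.8), ("ran", 0.2)],
    "dog":     [("ran", 0.9), ("sat", 0.1)],
}

def drop_marble() -> str:
    word, sentence = "<start>", []
    while word in next_options:
        words, weights = zip(*next_options[word])
        # Each weighted pick is one peg the marble bounces off.
        word = random.choices(words, weights=weights)[0]
        sentence.append(word)
    return " ".join(sentence)  # the slot where the marble lands

print(drop_marble())  # e.g. "the cat sat"
```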
But what if the endpoint is wrong? That’s the part where we humans come in, and also what shows that these chatbots don’t have any consciousness. What companies have been doing is basically shoving a massive amount of sample data at the computer and telling it to randomly change its decisions, keeping the changes that bring it closest to the sample data. After hundreds of trials, possibly more, the computer knows what to respond with and what not to.
In short, companies give computers info, ask them for hundreds of responses, pick out the accurate ones—which are then used to respond to questions—and throw away the inaccurate ones.
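That trial-and-error loop can be sketched in a few lines of Python. The target string and scoring rule here are invented for illustration, and real systems use gradient-based optimization rather than purely random changes, but the keep-what-gets-closer logic the article describes looks like this:

```python
import random

TARGET = "hello world"  # stands in for the sample data we want to match
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(guess: str) -> int:
    # How many characters already match the sample data.
    return sum(g == t for g, t in zip(guess, TARGET))

# Start from a completely random set of "decisions."
guess = "".join(random.choice(ALPHABET) for _ in TARGET)
for trial in range(10000):
    # Randomly change one decision...
    i = random.randrange(len(TARGET))
    candidate = guess[:i] + random.choice(ALPHABET) + guess[i + 1:]
    # ...and keep it only if it is at least as close to the sample data.
    if score(candidate) >= score(guess):
        guess = candidate
    if guess == TARGET:
        break
print(f"best guess after {trial + 1} trials: {guess}")
```

Nowhere in that loop is there any understanding of what “hello world” means; the program just drifts toward whatever scores best. That is the sense in which training, not thinking, produces the answers.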
I wanted to see what ChatGPT would say when asked whether or not it was conscious, so I asked the following: “Is there any conscious AI present?”
And I got this response: “No, there is no conscious AI at present. All existing AI systems, including the most advanced ones like ChatGPT, function based on algorithms, data processing, and pattern recognition without any form of awareness, subjective experience, or self-perception.”
So there we go: ChatGPT itself said AI is not conscious. Many of the market’s big talking heads, such as Sam Altman and Elon Musk, have been working towards it, but based on what we have now, the chances seem slim. Maybe we’ll get to the point of conscious AI someday, but we aren’t there yet. There’s still much to do.