Reinhard Lindner

Large Language Model and Artificial General Intelligence

With the introduction of ChatGPT, there has been confusion about whether it is an AGI. Artificial general intelligence (AGI) is still a hypothetical form of AI that would perform intellectual tasks the way humans do, encompassing a wide range of cognitive abilities. Large Language Models (LLMs) are also a form of AI, but they are trained on vast amounts of text data to generate human-like responses to prompts. There are two opposing views on whether LLMs can ever reach AGI. So, is ChatGPT an AGI? Or rather an LLM? Sencury’s experts are here to make you more tech-savvy!


AGI  

Artificial General Intelligence, or AGI, is the hypothetical machine with the highest form of intellect, capable of doing everything humans do. Humanity hasn’t reached AGI yet, although many argue we are heading towards it. One of the most complex challenges on that road is sentiment analysis. To differentiate when people use sarcasm, irony, or other emotional devices in text, a system needs to be trained on human emotions and linguistic expressions.


Nowadays, AI performs both basic and advanced sentiment analysis. While basic analysis only determines polarity (positive, negative, or neutral), the advanced type can identify specific emotions (sarcasm, joy, sadness, etc.). The former is used for social media monitoring, customer feedback analysis, and brand reputation management. The latter, in turn, is used in market research, opinion mining, and customer sentiment analysis.
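To make the distinction concrete, here is a minimal Python sketch of both kinds of analysis. It assumes the Hugging Face transformers library is installed, and the emotion model id is only a placeholder for whichever emotion classifier you choose.

```python
from transformers import pipeline

# Basic analysis: polarity only (POSITIVE / NEGATIVE plus a confidence score).
polarity = pipeline("sentiment-analysis")
print(polarity("The delivery was late, but the support team was wonderful."))

# Advanced analysis: a classifier fine-tuned on emotion labels (joy, sadness, anger, ...).
# "some-org/emotion-classifier" is a hypothetical model id; swap in a real checkpoint.
emotions = pipeline("text-classification", model="some-org/emotion-classifier")
print(emotions("Oh great, another Monday. Exactly what I needed."))
```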

LLM  

Large Language Model, or LLM for short, is a smart computer program that generates language the way humans do. However, it can only generate outputs based on the data it was trained on. Our experts have shared more information on LLMs in our recent blog post “Does AI Think?” There is also a drawback: LLMs can be toxic, because the information they learn from comes from humans, and that information can be toxic, biased, discriminatory, or inaccurate, depending on location and cultural predisposition.

Can an LLM Reach AGI?

It Can, But  

According to some recent investigations, LLMs can reach AGI. The path looks promising as LLMs become better and more accurate. However, their limitations still keep LLMs from a true understanding of human cognition, conscious thinking, and self-awareness. This is one of their biggest drawbacks so far.


What kind of limitations hold LLMs back? In the wrong hands, LLMs can be used to:

  • generate text to mislead or deceive people, spread false information, manipulate public opinion, or incite violence 

  • create highly realistic deepfakes that damage someone's reputation or spread misinformation

  • trigger job losses and economic disruption, up to and including a concentration of power in the few companies that control LLMs

LLMs are trained on data from the real world, and that data is deeply biased. So far, these biases have not been fully addressed and are therefore slowly becoming embedded in the LLMs. These complex systems are also difficult to understand and secure, which makes them vulnerable to attacks by malicious actors. So even the assumption that LLMs are the next step towards AGI is questionable.

 

The most crucial fact about LLMs is that they cannot retain short-term and long-term memories, which is one of the essential characteristics of human learning. Instead, the approach LLMs use is autoregressive: each new token is predicted only from the tokens already in the sequence. Humans do not learn like that.
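To make “autoregressive” concrete, here is a toy Python sketch, not a real language model: each step conditions only on the tokens generated so far, and nothing is remembered between calls.

```python
import random

# Toy "model": a table of which word tends to follow which.
# It stands in for billions of learned parameters; it is not a real LLM.
NEXT_WORD = {
    "<start>": ["the"],
    "the": ["cat", "dog"],
    "cat": ["sat", "slept"],
    "dog": ["barked"],
}

def generate(max_tokens=5):
    """Autoregressive loop: each new token depends only on the current sequence."""
    sequence = ["<start>"]
    for _ in range(max_tokens):
        candidates = NEXT_WORD.get(sequence[-1])
        if not candidates:
            break
        sequence.append(random.choice(candidates))
    return " ".join(sequence[1:])

print(generate())  # e.g. "the cat sat"
print(generate())  # a fresh call: nothing is remembered from the previous one
```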


On this view, the road to AGI may look the following way: LLM developers create ever larger models with enormous numbers of parameters, supported by significant computational resources. This brings further drawbacks: such models are environmentally unfriendly to train, and as black-box models they offer little ability to be scrutinized.


Or Can It Not?    

The other point of view says LLMs are not heading towards AGI. According to medical informatician and translational AI specialist Sandeep Reddy,

  

“...the Large Language Models (LLMs), are no closer to Artificial General Intelligence (AGI) than we are closer to humans settling on Mars.”

 

Reddy’s point is that we should first understand the process of human learning, because the way humans learn is the basis for AI learning capabilities. The human brain is a complex functional instrument that carries out many processes simultaneously, whereas LLMs carry out only those processes they were trained to do in the first place.



LLMs work by breaking text into smaller tokens, which are then converted into numerical representations. The model uses mathematical functions and algorithms to analyze the relationships between those tokens. That is how models are trained: fed with large amounts of data, the model adjusts its internal parameters until it can accurately predict the next token in a sequence. A drastically simplified sketch of this idea follows.
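The sketch below replaces a neural network with simple co-occurrence counts, purely to show the tokenize-then-predict-the-next-token idea; it needs nothing beyond the Python standard library and is in no way how production LLMs are implemented.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug ."

# 1. Tokenize: split the text into tokens and map each token to a numeric id.
tokens = corpus.split()
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}
token_ids = [vocab[tok] for tok in tokens]

# 2. "Train": adjust the internal parameters (here, just co-occurrence counts)
#    so the model learns which token tends to follow which.
counts = defaultdict(Counter)
for current, nxt in zip(token_ids, token_ids[1:]):
    counts[current][nxt] += 1

# 3. Predict the most likely next token for a given context token.
id_to_tok = {i: tok for tok, i in vocab.items()}
context = vocab["the"]
predicted_id = counts[context].most_common(1)[0][0]
print("after 'the' the model predicts:", id_to_tok[predicted_id])
```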


Models presented with new data use those trained parameters to generate outputs by predicting the most likely sequence of tokens following the input. Overall, LLMs use a combination of statistical analysis, machine learning, and natural language processing techniques to process data and generate outputs that mimic human language. GPT-4, the model behind ChatGPT, illustrates this process perfectly in its architecture.
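For the inference side, here is a minimal sketch using the Hugging Face transformers library with the small, openly available GPT-2 checkpoint as a stand-in (GPT-4's weights are not public); it assumes transformers and PyTorch are installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is only a small, open stand-in for a modern LLM such as GPT-4.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# New input is tokenized, and the trained parameters predict the most likely continuation.
inputs = tokenizer("Large language models generate text by", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```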



AGI, in its turn, would be a system performing the full range of human cognition. Language is essential to human intelligence, and it is also essential for LLMs. The clearest way to draw the line is this: language models are proficient at language tasks, but they are unable to perform tasks outside their training data. LLMs cannot generalize, they lack common sense, and they cannot interact with the physical world. AGI would have to be capable of all of these things.


Is ChatGPT an AGI or LLM?     

ChatGPT is built on a Generative Pre-trained Transformer (GPT) model that can provide answers to all kinds of requests. However, these answers cannot be taken as the general truth, as they are based on the model’s training dataset. As you probably know, if this dataset contains bias and other disinformation, ChatGPT will “hallucinate” and “confabulate”. Such behavior can be partially fixed by setting an appropriate context and explicit rules that “ground” the LLM to a specific usage or context.
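As a minimal sketch of such grounding, assuming the official openai Python client, an API key in the environment, and a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Grounding: the system message restricts answers to the supplied context
# and tells the model what to do when the answer is not in that context.
context = "Sencury provides LLM integration services and consulting."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever chat model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the context below. "
                "If the answer is not in the context, say you do not know.\n\n"
                f"Context: {context}"
            ),
        },
        {"role": "user", "content": "What services does Sencury offer?"},
    ],
)
print(response.choices[0].message.content)
```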


Therefore, LLMs are large language models, and ChatGPT is a level above: a conversational model built on top of an LLM, still without the complex reasoning needed to extract the “truth” out of ambiguous data. It is also not an AGI, as it still cannot reason the way humans do.


Sencury on LLMs and AGI 

We are known for delivering cutting-edge technology solutions that meet unique business needs. 

  

Whether you need ready-made open-source LLMs, commercial ones, or paid APIs (like OpenAI’s), Sencury can provide high-quality solutions that cater to your specific requirements. Our experienced team has worked on numerous projects in the field of AGI and LLMs, and we stay up-to-date with the latest technologies and trends.

  

At Sencury, we understand that every business is different, and our tailored approach ensures that our clients receive bespoke, scalable and cost-effective solutions. We work closely with our clients to understand their needs and requirements. 

  

We believe that our expertise in AGI and LLM can help your business achieve great things. If you have any questions or would like to schedule a consultation, please do not hesitate to reach out. 
