AI, LLMs, and associated technology (and terminology!) explained


By Aidan Weston, course team leader of IT - Long Road Sixth Form College
 

AI encompasses a broad range of technologies capable of simulating human intelligence, with LLMs being a specific subset focused on language. Machine learning underpins AI's ability to learn from data, improving a system's performance over time. ChatGPT, a prominent LLM, showcases the advanced capabilities of generative AI in processing and generating human-like text, marking a significant advance in the field of natural language processing.
 

Artificial Intelligence (AI)

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines programmed to think like humans and mimic their actions. The term can apply to any machine that exhibits traits associated with a human mind, such as learning and problem-solving. AI systems are designed to handle tasks that typically require human intelligence, including speech recognition, decision-making, visual perception, and language translation. AI has evolved from rule-based systems to more complex models that can analyse and make decisions based on data.


Large Language Models (LLMs)

Large Language Models (LLMs) are a subset of AI that specialize in understanding, generating, and manipulating human language. LLMs like GPT (Generative Pre-trained Transformer) are trained on vast amounts of text data, enabling them to generate human-like text, answer questions, summarize documents, translate languages, and more. These models use deep learning techniques to understand the nuances of language and generate responses that are contextually relevant. LLMs have applications ranging from creating content to powering conversational agents and enhancing natural language understanding.


Machine Learning in AI

Machine Learning (ML) is a core part of creating or training most AI applications. ML is the development of algorithms and statistical models that enable computers to perform specific tasks without using explicit instructions. Instead, these systems learn and improve from experience. ML is used in AI to recognize patterns, make predictions, and improve decision-making over time. For example, in AI applications like voice recognition or recommendation systems, ML algorithms analyse large datasets to learn from patterns and improve their accuracy with each interaction.
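To make "learning from experience" concrete, here is a deliberately tiny sketch (illustrative only, not how production ML libraries work): the program is never told the rule mapping x to y, but discovers it by repeatedly nudging a parameter to reduce its prediction error — the essence of gradient-based machine learning.

```python
# Toy illustration of "learning from data": fitting y ≈ w * x by
# gradient descent, rather than being given the rule explicitly.

def fit_slope(xs, ys, lr=0.01, steps=500):
    """Learn the slope w that best maps xs to ys."""
    w = 0.0
    for _ in range(steps):
        # Average gradient of the squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # nudge w in the direction that reduces the error
    return w

# Data generated by the hidden rule y = 3x; the program never sees "3".
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = fit_slope(xs, ys)
print(round(w, 2))  # learned slope, close to 3
```

Real systems fit millions or billions of parameters at once, but the loop is conceptually the same: predict, measure the error, adjust.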


ChatGPT

ChatGPT is an example of an LLM developed by OpenAI. It's based on the GPT architecture, designed to generate human-like text responses in a conversational manner. ChatGPT has been trained on a diverse range of internet text, allowing it to answer questions, simulate dialogues, write essays, and even create code based on the prompts it receives. ChatGPT is a part of the generative AI landscape, demonstrating the ability of LLMs to understand context, infer meaning, and generate responses that are remarkably similar to those a human might produce. Its applications are vast, from educational tools and customer service bots to creative writing aids and programming assistants.


Beyond ChatGPT, the landscape of generative AI tools is diverse, encompassing various domains from text and imagery to music and code generation.

Here’s a summary of other notable AI generative tools available:

Text Generation and NLP Tools

BERT (Bidirectional Encoder Representations from Transformers): Developed by Google, BERT is designed to understand the context of words in search queries, improving the accuracy of search engine results and language understanding tasks.

Website link - BERT

GPT-3 (Generative Pre-trained Transformer 3): Before ChatGPT arrived (initially built on GPT-3.5, and now using GPT-4), GPT-3 set the standard for advanced language models, capable of writing essays, translating text, and even generating simple code from natural language prompts.

Website link - GPT-3

MyAI (Snapchat): A chatbot running on GPT, this tool is part of the Snapchat app frequently used by young people to share images and short messages, allowing them to interact with an AI chatbot alongside real-life friends.

Claude, Copilot, Gemini: These are further examples of chatbots which have emerged since ChatGPT’s success, and all work in a similar way. Claude (made by Anthropic) runs on its own model of the same name, as does Gemini (made by Google); Copilot is owned by Microsoft and runs on a version of GPT-4. In addition to these chatbots, Google and Microsoft are increasingly seeking to integrate AI functions from Gemini and Copilot into their office programs (e.g., Google Docs and MS Word) and other products.

Website link - Claude

Website link - Copilot


Image and Art Generation

DALL-E: Another innovation from OpenAI, DALL-E generates images from textual descriptions, demonstrating remarkable creativity in producing artwork, product designs, and photorealistic images based on specified attributes.

Website link - DALL-E

Prisma: Utilizes deep neural networks to transform photos into artworks mimicking the styles of famous artists like Van Gogh, Picasso, and others.

Website link - Prisma


Music Composition

AIVA (Artificial Intelligence Virtual Artist): An AI composer that creates original music compositions for films, video games, and other entertainment mediums, learning from a vast database of classical and contemporary music.

Website link - AIVA

OpenAI Jukebox: A generative model that produces music, complete with lyrics and singing, in various genres and styles. It can even mimic specific artists’ styles, generating new songs that sound like their work.

Website link - OpenAI Jukebox


Code Generation and Programming Assistance

GitHub Copilot: Developed by GitHub and OpenAI, Copilot provides programming assistance by suggesting whole lines or blocks of code as developers type, learning from the vast amount of code available on GitHub.

Website link - GitHub Copilot


Deepfake and Voice Synthesis

DeepFaceLab: A leading software for creating deepfakes, allowing users to swap faces in videos, contributing to both entertainment and the discussion on ethical AI use.

Website link - DeepFaceLab

Descript’s Overdub: Offers the ability to edit audio content by typing, using AI to synthesize and edit spoken word in podcasts or video production with a natural-sounding voice.

Website link - Descript's Overdub


3D Content Creation

DreamFusion: A tool that generates 3D models from textual descriptions, combining the capabilities of text-to-image models with 3D rendering techniques to create detailed and customizable 3D objects.

Website link - DreamFusion

These tools represent just a fraction of the generative AI technologies being developed across various fields. They highlight the growing capability of AI to not only understand and process human language but also to create new content that can inspire, assist, and enhance human creativity and productivity across multiple domains.


Other terms used in AI

Neural Network: A computational model inspired by the human brain’s network of neurons, designed to recognize patterns and interpret data by simulating the way a human brain operates.

Deep Learning: A subset of ML that uses neural networks with many layers (deep neural networks) to learn from vast amounts of data. It’s particularly effective for tasks like image and speech recognition.
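As a toy sketch of the building block behind both terms (illustrative values, not a trained model): a single artificial "neuron" takes a weighted sum of its inputs and passes it through an activation function. Stacking many layers of such neurons gives a deep neural network.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs, passed
    through a sigmoid 'activation' that squashes it into (0, 1)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Layers of such neurons feeding into one another form a (deep)
# neural network; "training" means adjusting the weights and biases.
out = neuron([1.0, 0.5], [0.8, -0.4], bias=0.1)
print(round(out, 3))
```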


Large Language Models (LLMs) and Related Terms

Natural Language Processing (NLP): The field of AI focused on the interaction between computers and humans through natural language, enabling computers to understand, interpret, and generate human language.

Transformer Models: A type of neural network architecture that’s particularly effective for handling sequential data, such as text, for tasks in NLP. Transformers use self-attention mechanisms to weigh the importance of different words in a sentence.
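The self-attention idea can be sketched in a few lines. This is a heavily simplified version (real transformers add learned query/key/value projections and multiple attention "heads"): each word vector is compared with every vector in the sequence, and the comparison scores — turned into weights via softmax — decide how much of each other vector it absorbs.

```python
import math

def softmax(xs):
    """Turn raw scores into positive weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    """Simplified self-attention: each vector attends to every vector
    in the sequence (itself included), weighted by dot-product
    similarity, and is replaced by the weighted average."""
    outputs = []
    for q in vectors:
        scores = [sum(a * b for a, b in zip(q, k)) for k in vectors]
        weights = softmax(scores)
        mixed = [sum(w * v[d] for w, v in zip(weights, vectors))
                 for d in range(len(q))]
        outputs.append(mixed)
    return outputs

# Three toy 2-dimensional "word" vectors
out = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

Each output vector is a blend of the whole sequence, which is how a transformer lets every word "see" every other word at once.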

Generative Pre-trained Transformer (GPT): A series of AI models developed by OpenAI that use deep learning to produce human-like text. GPT models are pre-trained on a diverse range of internet text, then fine-tuned for specific tasks.

Tokenization: The process of converting text into smaller units (tokens), such as words or phrases, for processing or understanding by an AI model.
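A minimal sketch of the idea (real LLM tokenizers such as byte-pair encoding split text into sub-word pieces rather than whole words, but the principle is the same):

```python
import re

def tokenize(text):
    """A very simple tokenizer: lowercase the text and split it into
    word and punctuation tokens. LLMs then map each token to a
    numeric ID before any processing happens."""
    return re.findall(r"\w+|[^\w\s]", text.lower())

print(tokenize("Hello, world!"))  # ['hello', ',', 'world', '!']
```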

Vector Embeddings: The representation of text as vectors (arrays of numbers) in a high-dimensional space, allowing the model to process and understand language based on the mathematical properties of these vectors.
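One mathematical property models rely on is that words with related meanings end up as vectors pointing in similar directions. The sketch below uses made-up 3-dimensional vectors purely for illustration (trained embeddings have hundreds or thousands of dimensions), comparing them with cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: close to 1.0 means they
    point the same way (related meaning, in a trained embedding
    space); close to 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# Invented embeddings, for illustration only
cat = [0.9, 0.8, 0.1]
kitten = [0.85, 0.75, 0.2]
car = [0.1, 0.2, 0.9]
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```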


AI Model Training and Operation

Fine-tuning: The process of taking a pre-trained model and continuing the training with a smaller, specific dataset to adapt the model for particular tasks without starting from scratch.

Inference: The process of using a trained AI model to make predictions or decisions based on new input data.

Prompt Engineering: The practice of crafting input prompts to guide an AI, particularly an LLM, to generate desired outputs or responses.
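In practice this often means wrapping the user's raw text in a structured template — a role, explicit constraints, and a worked example (so-called few-shot prompting). The template below is a hypothetical illustration, not a recommended recipe:

```python
def build_prompt(text):
    """Wrap raw text in a structured prompt: a role, explicit format
    constraints, and one worked example. Structured prompts tend to
    steer an LLM more reliably than a bare request."""
    return (
        "You are a careful editor.\n"
        "Summarise the text below in exactly one sentence, "
        "in plain English, with no bullet points.\n\n"
        "Example:\n"
        "Text: The cat sat on the mat.\n"
        "Summary: A cat sits on a mat.\n\n"
        f"Text: {text}\n"
        "Summary:"
    )

prompt = build_prompt("Large language models predict the next token.")
print(prompt)
```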

Reinforcement Learning: A type of machine learning where an agent learns to make decisions by taking actions in an environment and receiving rewards or penalties.
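A classic toy version of this loop is the "multi-armed bandit" (a sketch, not a full reinforcement-learning algorithm): the agent tries actions, tracks the average reward each one has produced, and gradually shifts towards the action that pays off most — while still exploring occasionally.

```python
import random

def run_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent: mostly exploit the action with the best
    average reward so far, but explore a random action 10% of the time."""
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)
    values = [0.0] * len(reward_probs)  # running average reward per action
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(len(reward_probs))  # explore
        else:
            a = max(range(len(reward_probs)), key=values.__getitem__)  # exploit
        reward = 1.0 if rng.random() < reward_probs[a] else 0.0
        counts[a] += 1
        # Update the running average reward for the chosen action
        values[a] += (reward - values[a]) / counts[a]
    return counts

# Action 1 pays off 80% of the time, action 0 only 20%;
# the agent learns to pull action 1 far more often.
counts = run_bandit([0.2, 0.8])
print(counts)
```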


Applications and Tools

Chatbot: A software application used to conduct an online chat conversation via text or text-to-speech, simulating a conversation with a human user.

Language Model Fine-tuning: The process of adjusting a pre-trained language model on a new, typically smaller, dataset to specialize its performance on a specific task or domain.


Ethical and Technical Considerations

Bias in AI: Prejudices or unfairness in AI outputs that reflect the biases present in the training data or the design of the algorithm.

Data Privacy: Concerns related to the protection of personal or sensitive information processed by AI systems.
