AI Glossary
Amazon Web Services (AWS)
Amazon Web Services (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments
Analytics
Moveo.AI provides a comprehensive conversational analytics dashboard for AI and live agent performance monitoring
API
Application Programming Interface (API) is a set of rules and protocols that allows one software application to interact with another
API key
An API Key is a unique identifier used to authenticate an application or user when calling an API. It is a secret token that is required to access certain functionalities within our system, such as fetching analytics data, sending messages, or managing resources
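As a minimal sketch of how an API key is typically used, the Python snippet below attaches a key to an HTTP request header; the endpoint URL and the Bearer header format are illustrative assumptions, not Moveo.AI's documented API.

import requests

API_KEY = "your-api-key"  # secret; in practice, load it from an environment variable

response = requests.get(
    "https://api.example.com/v1/analytics",          # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},  # key sent as a Bearer token (assumed format)
    timeout=10,
)
response.raise_for_status()
print(response.json())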
Artificial General Intelligence (AGI)
Artificial General Intelligence (AGI) is a type of AI that can understand, learn, and apply knowledge across a wide range of tasks at a level comparable to a human. Unlike narrow AI, AGI aims for generalization and flexibility, allowing it to perform any intellectual task that a human can
Artificial Intelligence (AI)
Artificial Intelligence (AI) is the development of computer systems and algorithms that can perform tasks typically requiring human intelligence, such as learning, reasoning, problem-solving, perception, and language understanding
Auto AI (or Self-learning)
Auto AI is an enterprise feature that uses the conversations between the Brain and the end users to recommend new Intents
Auto builder
Auto builder is a functionality provided in the Moveo.AI platform, where enterprises can describe their processes in simple text and automatically create the conversation flow their AI agent will follow
Bots (or Chatbots)
Bots or Chatbots are computer programs designed to simulate human conversations through text or voice interactions
Brains
In the Moveo.AI platform, Brains constitute the logic of an AI virtual agent. Brains contain all the necessary information that allows Moveo.AI to use its Natural Language Processing (NLP) engine to engage in a natural conversation with the end user
Broadcast
Broadcasts are personalized messages that can be scheduled for a specific audience on communication channels like WhatsApp and Viber
Channels (or Integrations)
Channels are the communication pathways through which your AI virtual agent interacts with your users, such as your website, messaging apps, SMS, and email
ChatGPT
ChatGPT is an artificial intelligence (AI) chatbot that uses Natural Language Processing (NLP) to create humanlike conversational dialogue. ChatGPT was developed by OpenAI and is trained on a massive amount of data available on the internet
Closed-source
Closed-source refers to a software licensing model where the source code of a software program is not made available to the public. In closed-source software, the developer or company retains exclusive control over the source code, and users are typically provided with only the executable version of the software. Users can interact with the software but cannot see, modify, or distribute the underlying code
Collections
Collections are libraries of information that your Virtual Agent can access to answer questions accurately and reliably. You can curate the content of your collections by uploading documents, web pages, and knowledge bases
Conversational AI
Conversational AI or Conversational Artificial Intelligence refers to technologies that enable machines to understand, process, and respond to human language in a way that mimics human conversations. This encompasses a range of technologies, including chatbots and virtual agents
Conversational Commerce
Conversational Commerce is a term introduced by Chris Messina in 2016 and refers to the intersection of messaging apps and shopping, where businesses use chat, messaging, or other natural language interfaces to engage with customers and facilitate the buying process
Data retention
Data retention refers to the policies and practices related to storing data for a specified period to meet regulatory, legal, or business requirements
Datasources
A datasource is simply the source of the data you want to have in your collection, such as web pages, file uploads (DOCX, DOC, MD, HTML, TXT, PDF), or knowledge bases
Deep learning
Deep learning is a subset of machine learning that involves the use of neural networks with many layers (hence "deep") to model complex patterns in data
Deployment
Deployment is the process of publishing a bot (or chatbot) to channels in order to interact with users
Dialogs
Dialogs are small, independent components that define the flow of the conversation. For example, when a user sends a greeting message (trigger), you can define the specific answer your Virtual Agent will give
Disambiguation
When the end user sends a message that is ambiguous, the AI virtual agent can handle it by proposing different alternatives to disambiguate the user's original message
Embeddings
Embeddings are numerical representations that capture the meaning of data in a lower-dimensional space. They are particularly popular in natural language processing (NLP), where they represent words, phrases, or even entire documents as vectors in a continuous vector space. The key idea behind embeddings is to convert complex, high-dimensional data into a more manageable form that preserves important relationships and similarities
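As an illustrative sketch, the Python snippet below compares tiny made-up embedding vectors with cosine similarity; real embeddings come from a trained embedding model and have hundreds or thousands of dimensions.

import numpy as np

def cosine_similarity(a, b):
    # 1.0 means the vectors point in the same direction (similar meaning)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

cat = np.array([0.9, 0.1, 0.3])        # made-up embedding of "cat"
kitten = np.array([0.85, 0.15, 0.35])  # made-up embedding of "kitten"
car = np.array([0.1, 0.9, 0.2])        # made-up embedding of "car"

print(cosine_similarity(cat, kitten))  # high: related meanings
print(cosine_similarity(cat, car))     # lower: unrelated meanings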
Entities
Entities are terms that provide context for an Intent. You can use synonyms to declare different ways the user can refer to each value (options for the end user). You can also use patterns to control the way you ask for information
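For illustration only, the snippet below shows what an entity with synonyms and a pattern might look like; the field names are hypothetical and do not reflect Moveo.AI's actual schema.

# Hypothetical entity definition (field names are illustrative)
entity = {
    "name": "account_type",
    "values": [
        {"value": "savings", "synonyms": ["savings account", "deposit account"]},
        {"value": "checking", "synonyms": ["current account", "checking account"]},
    ],
    # A pattern captures values by format rather than a fixed list,
    # e.g. a 10-digit account number:
    "patterns": [{"name": "account_number", "regex": r"\d{10}"}],
}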
Explainable AI (XAI)
Explainable AI (XAI) refers to artificial intelligence systems and models designed to make their decision-making processes transparent and understandable to humans. The goal of XAI is to ensure that AI systems are not "black boxes," where their internal workings are opaque, but instead are systems whose outputs can be interpreted and trusted by users, developers, and regulators
Fine-tune
Fine-tuning is the process of further training a pre-trained model on a smaller, task-specific dataset to adapt it to particular requirements or nuances
Foundational Model
A foundational model is a large-scale machine learning model, such as a Large Language Model (LLM), trained on broad and diverse datasets. These models are called “foundation” models because they serve as the base upon which applications can be built through fine-tuning, catering to a wide range of domains and use cases
Generative AI
Generative AI is a category of artificial intelligence systems designed to create new, original content based on patterns learned from existing data
Hallucination
Hallucination refers to the generation of outputs from Large Language Models (LLMs) that are incorrect, nonsensical, or fabricated, despite appearing plausible
Handover
(Live agent) handover is the process where an AI agent seamlessly transfers a conversation from any channel to a live agent
Hosting
Hosting refers to the service of providing infrastructure and resources for storing, managing, and delivering data or applications over the Internet. Enterprises that use Moveo.AI can choose between Public Cloud and Private Cloud / On-premises options
Intent
Intents are the specific goals that a user has in mind when sending a message. For example, in the message "I'd like to book a table for three people on Friday night", the intent is making a reservation
Knowledge base
A knowledge base is a centralized repository of information, resources, and data that is designed to store, organize, and manage knowledge for easy retrieval and use
Large Language Model (LLM) agents
LLM agents, or Large Language Model agents, refer to autonomous or semi-autonomous systems that utilize large language models (LLMs) like GPT-4 to perform tasks, interact with users, and make decisions. These agents leverage the capabilities of LLMs to understand and generate human-like text, enabling them to perform a wide range of functions, from simple question-answering to more complex tasks like research, content creation, or even executing certain software commands
Latency
Latency refers to the time delay between the initiation of a request and the completion of the response. In various contexts, latency can impact performance and user experience, and understanding it is crucial for optimizing systems
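A minimal sketch of measuring latency in Python, timing the gap between sending a request and receiving the response; the URL is a placeholder.

import time
import requests

start = time.perf_counter()
requests.get("https://api.example.com/health", timeout=10)  # hypothetical endpoint
latency_ms = (time.perf_counter() - start) * 1000
print(f"Latency: {latency_ms:.1f} ms")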
Large Language Model (LLM)
Large language models are a type of AI model designed to understand, generate, and interact with human language at a sophisticated level. They are characterized by their large size, extensive training datasets, and advanced architecture, enabling them to perform various language-related tasks. Well-known LLMs include GPT-4, Gemini, and LLaMA
Multimodal Language Model
A multimodal LLM refers to an advanced AI model that integrates and processes multiple types of data, such as text, images, audio, and sometimes even video, to understand and generate content
Natural Language Processing (NLP)
Natural Language Processing (NLP) is a field of AI focused on the interaction between computers and human language. It involves the development of algorithms and models that enable computers to understand, interpret, and generate human language in a meaningful and helpful way
Natural Language Understanding (NLU)
Natural Language Understanding (NLU) is a subfield of Natural Language Processing (NLP) focused on enabling machines to comprehend and interpret human language in a meaningful and helpful way. While NLP encompasses a wide range of tasks related to processing text and speech, NLU aims explicitly to understand the intent and context behind language, allowing for more accurate and nuanced interactions
Neural networks
Neural networks are fundamental to machine learning and artificial intelligence, enabling systems to recognize patterns, make decisions, and perform complex tasks. A neural network consists of interconnected nodes, or "neurons," organized in layers, which work together to process information and learn from data
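As a minimal sketch, the NumPy snippet below runs a forward pass through a tiny two-layer network: each layer multiplies its input by a weight matrix and applies a non-linearity. The weights are random rather than trained.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4,))        # input with 4 features
W1 = rng.normal(size=(4, 8))     # layer 1: 4 inputs -> 8 hidden neurons
W2 = rng.normal(size=(8, 2))     # layer 2: 8 hidden neurons -> 2 outputs

hidden = np.maximum(0, x @ W1)   # ReLU activation in the hidden layer
output = hidden @ W2             # network output
print(output)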
No-code
No-code refers to a software development approach that allows users to create applications and workflows without writing traditional code. Instead, no-code platforms use visual interfaces, drag-and-drop components, and pre-built templates to enable users to design, build, and deploy software solutions
Omnichannel
Omnichannel, in the context of chatbots, refers to a strategy where a chatbot is designed to provide a seamless and consistent user experience across multiple communication channels. This means that users can interact with the chatbot through various platforms—such as websites, mobile apps, social media, messaging apps (like WhatsApp and Facebook Messenger), and even voice assistants—without losing continuity in their conversations
Open source
Open source refers to a type of software licensing model that allows the source code of a software program to be freely available to the public. This means anyone can view, modify, and distribute the code
Prompt Engineering
Prompt engineering is the process of crafting and optimizing input prompts to achieve desired outcomes from AI models, particularly large language models like GPT. It involves designing the text or instructions given to the model to guide its responses in a specific direction, ensuring the output is relevant, accurate, and useful
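As an illustrative sketch, the snippet below contrasts a bare question with an engineered prompt that adds a role, constraints, and context; the template is an example, not a prescribed format.

question = "Can I cancel my order after it has shipped?"

naive_prompt = question  # no guidance for the model

engineered_prompt = (
    "You are a customer-support assistant for an online store.\n"
    "Answer in at most two sentences, using only the policy below.\n"
    "If the policy does not cover the question, say you are not sure.\n\n"
    "Policy: Orders can be cancelled free of charge until they ship. "
    "After shipping, the customer must request a return instead.\n\n"
    f"Question: {question}"
)

print(engineered_prompt)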
Reinforcement Learning
Reinforcement Learning is a type of machine learning where an AI agent learns to make decisions by interacting with an environment and receiving feedback through rewards or penalties. Unlike supervised learning, where the model is trained on a fixed dataset with labeled examples, reinforcement learning involves learning from the consequences of actions taken in a dynamic environment
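As a minimal sketch of learning from rewards, the snippet below shows an epsilon-greedy agent on a three-armed bandit: it tries actions, observes rewards, and updates its value estimates from that feedback. The reward probabilities are made up.

import random

reward_prob = [0.2, 0.5, 0.8]   # hidden reward probability of each action
values = [0.0, 0.0, 0.0]        # the agent's running estimate of each action's value
counts = [0, 0, 0]

for step in range(1000):
    if random.random() < 0.1:                 # explore occasionally
        action = random.randrange(3)
    else:                                     # otherwise exploit the best-known action
        action = values.index(max(values))
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # incremental average

print(values)  # estimates approach the true reward probabilities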
Retrieval-Augmented Generation (RAG) pipeline
The Retrieval-Augmented Generation (RAG) pipeline is a method used in Natural Language Processing (NLP) to improve how language models (LLMs) generate text by combining two main tasks: retrieving information and generating text
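A minimal, toy sketch of the two stages follows; the bag-of-words "embedding" and the generate stub stand in for a real embedding model and LLM call.

documents = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday, 9am to 5pm.",
]

def embed(text):
    return set(text.lower().split())   # toy stand-in for a real embedding model

def generate(prompt):
    return f"[LLM would answer here]\n{prompt}"   # toy stand-in for an LLM call

def answer(question):
    # 1. Retrieval: pick the document most similar to the question
    q = embed(question)
    best_doc = max(documents, key=lambda d: len(q & embed(d)))
    # 2. Generation: ask the model to answer using only the retrieved context
    prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"
    return generate(prompt)

print(answer("How long do refunds take?"))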
Responsible AI
Responsible AI refers to the practice of developing and deploying artificial intelligence (AI) systems in a manner that is ethical, transparent, and aligned with societal values. It involves ensuring that AI technologies are designed, used, and governed in ways that are fair, accountable, and respectful of human rights
Rules
Rules let you automate actions in conversations by defining triggers and conditions. For example, you can assign conversations to a specific Brain if a user falls into certain parameters, tag conversations, and much more
Sentiment analysis
Sentiment analysis is a technique in natural language processing (NLP) that involves determining the emotional tone or attitude expressed in a text. It is used to identify whether the sentiment behind the text is positive, negative, or neutral and sometimes to detect more specific emotions like joy, anger, or sadness
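As a minimal lexicon-based sketch: count positive and negative words and compare. Production systems use trained models rather than a hand-written word list.

POSITIVE = {"great", "love", "excellent", "happy", "fast"}
NEGATIVE = {"bad", "hate", "terrible", "slow", "angry"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love the fast support"))    # positive
print(sentiment("The delivery was terrible"))  # negative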
Supervised learning
Supervised learning is a type of machine learning where a model is trained on a labeled dataset. The goal of supervised learning is to learn a mapping from inputs to outputs so the model can predict the correct label for new, unseen data
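A minimal sketch with scikit-learn: the model sees labeled examples (hours studied, passed or not) and then predicts labels for unseen inputs.

from sklearn.linear_model import LogisticRegression

X = [[1], [2], [3], [4], [8], [9], [10], [11]]   # input: hours studied
y = [0, 0, 0, 0, 1, 1, 1, 1]                     # label: failed (0) or passed (1)

model = LogisticRegression()
model.fit(X, y)                    # learn the mapping from inputs to labels

print(model.predict([[2], [9]]))   # predictions for new, unseen inputs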
Tokenization
Tokenization is a fundamental process in natural language processing (NLP) that involves breaking down text into smaller units called "tokens." These tokens can be words, phrases, or even characters, depending on the granularity needed for the task. Tokenization is the first step in preparing text for further analysis or processing by machine learning models
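As a minimal sketch, the snippet below tokenizes the same sentence into word-level and character-level tokens; production models typically use subword tokenizers such as BPE.

import re

text = "Chatbots aren't new."

word_tokens = re.findall(r"\w+|[^\w\s]", text)   # words and punctuation
char_tokens = list(text)                         # individual characters

print(word_tokens)   # ['Chatbots', 'aren', "'", 't', 'new', '.']
print(char_tokens)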
Transformer
A Transformer is a type of neural network architecture that uses a mechanism called self-attention to handle sequential data, such as text, more efficiently and effectively than earlier recurrent models
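A minimal NumPy sketch of scaled dot-product self-attention, the core Transformer operation in which every position attends to every other position; the shapes and values are toy examples.

import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                                        # query-key similarity
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                                                     # weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))      # a sequence of 5 tokens with 4 features each
print(attention(x, x, x).shape)  # self-attention: output shape (5, 4)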
Unstructured data
Unstructured data refers to information that doesn't have a predefined data model or is not organized in a specific, easily searchable format. Unlike structured data, which is often neatly arranged in rows and columns (like in a database or spreadsheet), unstructured data is more freeform and can include a wide variety of formats, making it harder to analyze using traditional data processing methods
Unsupervised learning
Unsupervised learning is a type of machine learning where the model learns from data that does not have labeled responses. Unlike supervised learning, the goal of unsupervised learning is to find hidden patterns, groupings, or structures in the data
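A minimal sketch with scikit-learn: k-means groups unlabeled points into clusters purely from their structure, with no labels provided.

from sklearn.cluster import KMeans

X = [[1, 1], [1.5, 2], [1, 1.5],   # one natural group
     [8, 8], [8.5, 9], [9, 8]]     # another natural group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)   # e.g. [0 0 0 1 1 1]: two clusters found without labels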
Webhooks
A webhook is a way for one system or application to send real-time data to another system when a specific event occurs. It allows applications to communicate with each other automatically without needing to check for updates constantly
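A minimal sketch of a webhook receiver using Flask: the sending system POSTs a JSON payload to this URL whenever an event occurs. The route path and payload fields are illustrative assumptions.

from flask import Flask, request

app = Flask(__name__)

@app.route("/webhooks/orders", methods=["POST"])
def handle_order_event():
    event = request.get_json()            # payload pushed by the other system
    print("Received event:", event.get("type"))
    return {"status": "received"}, 200    # acknowledge so the sender does not retry

if __name__ == "__main__":
    app.run(port=5000)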