Designing AI Agents

Nikos
UI/UX Designer
January 24, 2025
in
🗣️ Team Voices
Agents for customer support: this was the product we offered until our business focus changed. With our shift to the Fintech space, we transitioned from a reactive AI that required the user to initiate every interaction to a proactive system that can reach out to users on demand. This meant the existing AI Agent creation process had to change.
The new AI Agent creation process adds new functionality, updates the User Experience, and provides more automation. We empowered users to feed data to their AI Agents effortlessly, going from time-consuming, manual processes to just a few clicks, minimizing effort while maximizing output. Apart from the new capabilities, we refreshed the environment's User Interface (UI), making it simpler and easier to navigate, and focused on providing a linear experience that gives users more guidance throughout the process.
Why?
But why the need for a proactive AI Agent?
As Moveo.AI moved its focus to the Fintech space, we identified a new opportunity. Allow our customers to reach out to their users at scale! This led us to work on and launch a new functionality called Broadcast. This powerful feature allows our customers to create campaigns and target multiple recipients simultaneously. The need for a new AI Agent was apparent since our current version was not optimized to pursue users for sales and product promotions. We needed to shift from a reactive AI Agent to a proactive one that would be able to seek new potential clients and achieve product sales.
In parallel, our NLP team had been working on improving and evolving our LLM (Large Language Model) in order to prepare it to cover these new use cases.
How?
This pivot was the perfect opportunity to rework the very important but problematic flow of AI Agent creation. Now, with a newer and more capable LLM, there was an opportunity to let the AI take on more of the manual work that had so far fallen upon the user. We took this chance to add several smart features designed to help users complete tasks faster and more easily, aiming to increase the overall completion rate. These smart features assist users by providing content suggestions, pre-populating fields that would otherwise need manual input, and even filling entire sections by generating data from already existing sources.
Then vs Now
New AI Agent types
As previously mentioned, our AI Agents were initially built for the Customer Support space, but with our new focus on Fintech, they needed to be able to conduct product sales based on our customers' needs. The structure and workings of the Customer Support Agents didn't match the requirements of product sales, and their reactive nature couldn't provide the services a campaign requires, such as actively pursuing customers. Through research and user interviews, we decided to start with 4 use cases for the new AI Agents to cover in the Fintech space (e.g., debt collection and product adoption). These use cases became new Agent types that users can now choose during the creation process. But why divide them into types instead of having one AI Agent cover all use cases? Separate types help each Agent specialize in the context of its working environment from the start. In other words, the user doesn't need to explain to the Agent what its primary purpose is, so the data and effort required from the user are significantly reduced.
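As a rough sketch of the idea (all names here are hypothetical, not Moveo.AI's actual API), choosing a type at creation time could pre-seed the Agent with the context the user would otherwise have to explain:

```python
from enum import Enum

class AgentType(Enum):
    # Two of the four Fintech use cases named in the post;
    # the remaining two types are omitted here.
    DEBT_COLLECTION = "debt_collection"
    PRODUCT_ADOPTION = "product_adoption"

# Hypothetical per-type context the user no longer has to write manually.
TYPE_CONTEXT = {
    AgentType.DEBT_COLLECTION: "Proactively remind users about overdue payments.",
    AgentType.PRODUCT_ADOPTION: "Promote relevant products to potential customers.",
}

def create_agent(name: str, agent_type: AgentType) -> dict:
    """Create an agent record pre-seeded with its type's primary purpose."""
    return {
        "name": name,
        "type": agent_type.value,
        "primary_purpose": TYPE_CONTEXT[agent_type],
    }
```

Because the purpose ships with the type, the creation form can skip an entire "explain what this Agent is for" step.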

More automation
More automation. Smarter interactions. Less manual work. We followed these three principles in the design process and applied them while creating the new information architecture. Our main goal was to move away from time-consuming manual inputs and offer a smarter system that assists the user with minimal input. We added sections like the Agent's Knowledge, an option to autogenerate the content of a field (for example, the Product Benefits) by just adding a relevant URL, and even the ability to generate content from the data of previously filled sections. We extended this by allowing users to get better suggestions from our LLM for their manual inputs (like a short sentence). Of course, everything can be edited later, so the user keeps full control.
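A minimal sketch of the "autogenerate from a URL" flow described above; the function names are illustrative, and the real system's LLM call is stubbed out here:

```python
def fetch_page_text(url: str) -> str:
    # Placeholder for a real scraper (e.g. an HTTP client plus an HTML parser).
    return f"(contents of {url})"

def suggest_field(field_name: str, url: str) -> str:
    """Pre-populate a form field (e.g. 'Product Benefits') from a URL.

    A real implementation would prompt the LLM with the fetched text;
    the user can always edit the suggestion afterwards.
    """
    source = fetch_page_text(url)
    return f"Draft {field_name} generated from {source}"
```

From the user's perspective this is the "few clicks" version of what used to be a manual copy-and-summarize task.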

Guidelines & Agent goal
With our goal in mind, we wanted to approach the AI Agent creation process differently to increase usability, especially for new users, since the existing approach didn't yield good results due to its complexity and the lack of a simple process flow. With the latest, improved version of our LLM, we had the opportunity to think outside the box: what if the data could be autogenerated instead of requiring manually created flows? In previous iterations, users had to create a series of dialog flows to cover each scenario, create intents and entities to give the AI more context, connect a collection of data sources, and much more. What if, instead, users could just instruct the Virtual Agent to achieve a goal, and the means to accomplish it came from already existing data? This was the vision for the Agent's "Goal" and "Guidelines".
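One way to picture the "Goal" + "Guidelines" idea (the shape and field names below are assumptions for illustration, not Moveo.AI's actual schema): a single goal statement plus free-form guidelines stand in for hand-built flows, intents, and entities.

```python
from dataclasses import dataclass, field

@dataclass
class AgentBrief:
    """Hypothetical config: one goal plus behavioral guidelines."""
    goal: str                                            # what the Agent should achieve
    guidelines: list = field(default_factory=list)       # how it should behave

    def to_prompt(self) -> str:
        """Render the brief as instructions the LLM can act on."""
        rules = "\n".join(f"- {g}" for g in self.guidelines)
        return f"Goal: {self.goal}\nGuidelines:\n{rules}"
```

The Agent then draws on existing data sources to decide how to pursue the goal, instead of walking a pre-authored dialog tree.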

Navigation / Information Architecture (IA)
In the previous UI, the navigation on the AI Agent page was built around dialogs. For those unfamiliar with the concept, dialogs are small, independent components that define the flow of the conversation based on specific triggers. For example, when a user sends a greeting message (the trigger), a dialog defines the specific answer the Virtual Agent will give. This meant that every navigation item was based on a supporting feature or component of the dialog system and wasn't organized with a linear flow in mind, which didn't help users know the right steps to complete the Agent creation process. In the new iteration, dialogs take second place and become one of the last, optional steps of the creation process. For the new navigation, we grouped features that belong to the same step or share a similar function under a single category. With this approach, we ended up with 5 navigation elements compared to the 8 we had before. The biggest UI change was the transition from horizontal navigation to a vertical sidebar and the nesting of specific features and pages inside the last two tabs.
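The dialog concept above can be sketched in a few lines (a simplified illustration, not the platform's actual data model): each dialog is an independent trigger-to-answer mapping.

```python
from typing import Optional

# Hypothetical dialog table: trigger -> the Virtual Agent's defined answer.
DIALOGS = {
    "greeting": "Hello! How can I help you today?",
}

def respond(trigger: str) -> Optional[str]:
    """Return the dialog's answer for a trigger, or None if no dialog matches."""
    return DIALOGS.get(trigger)
```

Because each dialog only knows its own trigger, the old navigation mirrored this fragmentation, which is what the new linear IA moves away from.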

Process Outline
The process we followed was similar to every other project we had previously tackled, except that this time the NLP (natural language processing) team was more heavily involved. This was necessary to ensure the practicality of some of the new smart features and new sources of AI knowledge.

Reconstruction phase
To accommodate the new use cases, we had to rethink the whole AI Agent creation flow, so we had to start from scratch. That meant, as a first step, we had to deconstruct everything from functions and settings to entire pages so we could re-arrange and restructure them.
Creating the new IA required matching and grouping all the divided features and sections by functionality, importance, and priority. This time, the main groups were ordered with a linear flow in mind, meaning that completing each step/group was vital for the next. There was a sizable reduction in the number of main categories, making the flow less cumbersome and easier to follow.
Greg is one of our main personas (computer-savvy, enjoys problem-solving), and we created a scenario based on him to discover possible interaction problems, needs, and features, but, most importantly, how the AI Agent creation flow should work to give first-time users a smooth experience.
To visualize our new structure, we proceeded to make low-fidelity wireframes. These were the first stepping stones to start receiving some feedback from our stakeholders.
Design phase
The longest phase of this project was the design phase, due to the multiple iterations in our quest to find the perfect balance between a simple, user-friendly UI and a linear UX flow. With every iteration came new feedback, and with every piece of feedback, an improvement. After 3 iterations, we achieved the desired result. In the first 2 cycles, we focused mostly on the layout of the properties and the order of each section, since the structure of the flow was the most vital point of this redesign effort. After reaching the desired UI structure, we shifted our focus to the content, which helped the rest of the stakeholders understand our vision.
Optimization phase
The high-fidelity mockups gave us a realistic view of how the product would work, which opened a conversation with the rest of the stakeholders about how everything could be implemented and brought up ideas for possible new features in later versions. With their approval, we moved on to optimize every element: from the various input states to adding animations where needed (like the autogeneration progress), as well as making every design responsive so developers would understand how everything should behave in the environment.
Conclusion
This transition was a long process that required multiple iterations and redesigns in order to work around tech limitations and feedback. It was important for us to ensure a smooth transition from the old UI to the new by keeping familiar elements and building upon recognizable patterns and flows. We carefully considered every detail to ensure that the user experience felt both intuitive and fresh while staying aligned with our core design principles.
We are currently launching the Beta version to collect data and feedback from our customers, enabling us to refine and improve the product through valuable insights. Through this iterative process, we’ve been able to continuously enhance the user experience, and we are proud of the final product we’ve delivered.