Salesforce Agentforce Practice Questions & Answers (Set 10) | CodeWme
#1 Universal Containers recently launched a pilot program to integrate conversational AI into its CRM business operations with Agentforce Agents. How should the Agentforce Specialist monitor Agents' usability and the assignment of actions? Select 1
✅ Answer: C. Run Agent Analytics.
Comprehensive and Detailed In-Depth Explanation: Monitoring the usability and action assignments of Agentforce Agents requires insight into how agents perform, how users interact with them, and how actions are executed within conversations. Salesforce provides Agent Analytics (Option C) as a built-in capability designed for exactly this purpose. Agent Analytics offers dashboards and reports that track metrics such as agent response times, user satisfaction, action invocation frequency, and success rates. This lets the Agentforce Specialist assess usability (e.g., are agents meeting user needs?) and monitor action assignments (e.g., which actions are triggered and how often), providing actionable data to optimize the pilot program.
➤ Option A: Platform Debug Logs are low-level logs for troubleshooting Apex, Flows, or system processes. They don't provide high-level insights into agent usability or action assignments, making this unsuitable.
➤ Option B: The Metadata API is used for retrieving or deploying metadata (e.g., object definitions), not runtime log data about agent performance. Even if agent log data exists, querying it via the Metadata API is not a standard or documented approach for this use case.
➤ Option C: Agent Analytics is the dedicated solution, offering a user-friendly way to monitor conversational AI performance without requiring custom development.
Option C is the correct choice for effectively monitoring Agentforce Agents in a pilot program.
#2 Universal Containers deploys a new Agentforce Service Agent into the company's website but is getting feedback that the Agentforce Service Agent is not providing answers to customer questions that are found in the company's Salesforce Knowledge articles. What is the likely issue? Select 1
✅ Answer: C. The Agentforce Service Agent user was not given the Allow View Knowledge permission set.
Comprehensive and Detailed In-Depth Explanation: Universal Containers (UC) has deployed an Agentforce Service Agent on its website, but it is failing to provide answers from Salesforce Knowledge articles. Let's troubleshoot the issue.
➤ Option A: The Agentforce Service Agent user is not assigned the correct Agent Type License. There is no "Agent Type License" in Salesforce; agent functionality is tied to Agentforce licenses (e.g., the Service Agent license) and permissions. Licensing affects feature access broadly, but the specific failure to retrieve Knowledge points to a permission problem, not a license type, making this incorrect.
➤ Option B: The Agentforce Service Agent user needs to be created under the standard Agent Knowledge profile. No "standard Agent Knowledge profile" exists. The Agentforce Service Agent runs under a system user (e.g., "Agentforce Agent User") with a custom profile or permission sets. Profile creation isn't the issue; access permissions are, making this incorrect.
➤ Option C: The Agentforce Service Agent user was not given the Allow View Knowledge permission set. The Agentforce Service Agent user requires read access to Knowledge articles to ground responses. The "Allow View Knowledge" permission (typically granted via the "Salesforce Knowledge User" license or a permission set such as "Agentforce Service Permissions") enables this. If it is missing, the agent can't access Knowledge even when articles are indexed, causing the reported failure. This is a common setup oversight and the likely issue, making it the correct answer.
Why Option C is Correct: Lack of Knowledge access permissions for the Agentforce Service Agent user directly prevents retrieval of article content, aligning with the symptoms and Salesforce security requirements.
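Where a one-off fix is needed, the same permission assignment can be scripted against the standard PermissionSet and PermissionSetAssignment objects. The sketch below is a minimal Python example using simple-salesforce; the permission set API name (Allow_View_Knowledge) and the agent username are assumptions for illustration, so substitute the actual values from your org.

```python
# Minimal sketch: assign a Knowledge-access permission set to the Agentforce
# Service Agent user via the REST API (simple-salesforce).
# The permission set name and agent username below are illustrative
# assumptions; look up the real values in your org before running.
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com",
                password="password",
                security_token="token")

# Find the Agentforce Service Agent user and the Knowledge permission set.
agent_user = sf.query(
    "SELECT Id FROM User WHERE Username = 'agentforce.agent@example.com'"
)["records"][0]
perm_set = sf.query(
    "SELECT Id FROM PermissionSet WHERE Name = 'Allow_View_Knowledge'"
)["records"][0]

# Grant the permission set so the agent can read Knowledge articles.
sf.PermissionSetAssignment.create({
    "AssigneeId": agent_user["Id"],
    "PermissionSetId": perm_set["Id"],
})
```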
#3 Universal Containers (UC) wants to implement an AI-powered customer service agent that can: Retrieve proprietary policy documents that are stored as PDFs. Ensure responses are grounded in approved company data, not generic LLM knowledge. What should UC do first? Select 1
✅ Answer: A. Set up an Agentforce Data Library for AI retrieval of policy documents.
Comprehensive and Detailed In-Depth Explanation: To implement an AI-powered customer service agent that retrieves proprietary policy documents (stored as PDFs) and ensures responses are grounded in approved company data, UC must first establish a foundation for the AI to access and use this data. The Agentforce Data Library (Option A) is the correct starting point. A Data Library allows UC to upload PDFs containing policy documents, index them into Salesforce Data Cloud's vector database, and make them available for AI retrieval. This setup lets the agent perform Retrieval-Augmented Generation (RAG), grounding its responses in the specific, approved content from the PDFs rather than relying on generic LLM knowledge, directly meeting UC's requirements.
➤ Option B: Expanding the AI agent's scope to search all Salesforce records is too broad and unnecessary at this stage. The requirement focuses on PDFs with policy documents, not all Salesforce data (e.g., cases, accounts), making this premature and irrelevant as a first step.
➤ Option C: "Add the files to the content, and then select the data library option" is vague and not a precise process in Agentforce. While uploading files is part of setting up a Data Library, the phrasing suggests adding files to Salesforce Content (e.g., ContentDocument) without indexing, which doesn't enable AI retrieval. Setting up the Data Library (A) covers the full process correctly.
➤ Option A: This is the foundational step: creating a Data Library ensures the PDFs are uploaded, indexed, and retrievable by the agent, fulfilling both retrieval and grounding needs.
Option A is the correct first step for UC to achieve its goals.
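For readers unfamiliar with the retrieval step that a Data Library automates, the sketch below illustrates the RAG pattern in generic Python: documents are embedded as vectors, the question is matched against them, and the best chunk is injected into the prompt. This is a conceptual illustration only; the embedding function is a stand-in, and it is not the Salesforce implementation.

```python
# Conceptual sketch of Retrieval-Augmented Generation (RAG), the pattern an
# Agentforce Data Library automates: index documents as vectors, retrieve the
# closest chunk for a question, and ground the prompt in that chunk.
# embed() is a placeholder for a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: hash characters into a small fixed-size vector."""
    vec = np.zeros(64)
    for ch in text.lower():
        vec[ord(ch) % 64] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

policy_chunks = [
    "Refunds are issued within 30 days of purchase with a valid receipt.",
    "International shipping takes 7-10 business days.",
]
index = np.stack([embed(c) for c in policy_chunks])  # the "vector database"

question = "How long do I have to request a refund?"
scores = index @ embed(question)                      # similarity scores
best_chunk = policy_chunks[int(np.argmax(scores))]

grounded_prompt = (
    "Answer using ONLY the approved policy text below.\n"
    f"Policy: {best_chunk}\n"
    f"Question: {question}"
)
print(grounded_prompt)
```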
#4 Universal Containers wants to use an external large language model (LLM) in Prompt Builder. What should an Agentforce Specialist recommend? Select 1
✅ Answer: B. Use BYO-LLM functionality in Einstein Studio.
Bring Your Own Large Language Model (BYO-LLM) functionality in Einstein Studio allows organizations to integrate and use external large language models (LLMs) within the Salesforce ecosystem. Universal Containers can leverage this feature to connect and ground prompts with external LLMs, allowing for custom AI model use cases and seamless integration with Salesforce data.
➤ Option B is the correct choice, as Einstein Studio provides a built-in feature to work with external models.
➤ Option A suggests using Apex, but BYO-LLM functionality offers a more streamlined solution.
➤ Option C focuses on Flow and External Services, which is more about data integration and isn't ideal for working with LLMs.
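To make the contrast with Option A concrete, the sketch below shows roughly what a hand-rolled callout to an external LLM involves (written in Python for brevity rather than Apex). The endpoint, model name, and payload shape are assumptions for illustration; BYO-LLM in Einstein Studio replaces this plumbing with declarative configuration and routes the call through the Einstein Trust Layer.

```python
# Rough sketch of the plumbing a custom callout to an external LLM requires,
# which BYO-LLM configuration in Einstein Studio handles for you.
# Endpoint, model name, and payload shape are illustrative assumptions.
import os
import requests

def call_external_llm(prompt: str) -> str:
    response = requests.post(
        "https://api.example-llm.com/v1/completions",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={"model": "example-model", "prompt": prompt, "max_tokens": 256},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"]

# Every caller would also need retry logic, auth rotation, logging, and
# data-masking guardrails -- the pieces Einstein Studio provides out of the box.
```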
#5 A Universal Containers administrator is setting up Einstein Data Libraries. After creating a new library, the administrator notices that only the file upload option is available; there is no option to configure the library using a Salesforce Knowledge base. What is the most likely cause of this issue? Select 1
✅ Answer: B. Salesforce Knowledge is not enabled in the organization; without Salesforce Knowledge enabled, the Knowledge-based data source option will not be available in Einstein Data Libraries.
Explanation: Einstein Data Libraries can be configured from uploaded files or from a Salesforce Knowledge base. The Knowledge-based data source option appears only when Salesforce Knowledge is enabled in the organization, so if only the file upload option is shown, the most likely cause is that Salesforce Knowledge has not been enabled.
#6 What is the role of the large language model (LLM) in understanding intent and executing an Agent Action? Select 1
✅ Answer: B. Identify the best matching topic and actions and correct order of execution.
Comprehensive and Detailed In-Depth Explanation: In Agentforce, the large language model (LLM), powered by the Atlas Reasoning Engine, interprets user requests and drives Agent Actions. Let's evaluate its role.
➤ Option A: Find similar requested topics and provide the actions that need to be executed. While the LLM can identify similar topics, its role extends beyond merely finding them; it matches intents to specific topics and determines execution. This option understates the LLM's responsibility for ordering actions, making it incomplete and incorrect.
➤ Option B: Identify the best matching topic and actions and correct order of execution. The LLM analyzes user input to understand intent, matches it to the best-fitting topic (configured in Agent Builder), and selects the associated actions. It also determines the correct sequence of execution based on the agent's plan (e.g., retrieve data before updating a record). This end-to-end process, from intent recognition to action orchestration, is the LLM's core role in Agentforce, making this the correct answer.
➤ Option C: Determine a user's topic access and sort actions by priority to be executed. Topic access is governed by Salesforce permissions (e.g., user profiles), not the LLM. While the LLM prioritizes actions within its plan, its primary role is intent matching and execution ordering, not access control, making this incorrect.
Why Option B is Correct: The LLM's role in identifying topics, selecting actions, and ordering execution is central to Agentforce's autonomous functionality, as detailed in Salesforce documentation.
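As a mental model of what the reasoning engine does with each utterance, the sketch below shows a toy intent-to-plan step in Python: score the topics, pick the best match, and return that topic's actions in dependency order. It is a simplified illustration (keyword counting instead of semantic matching), not the Atlas Reasoning Engine's actual algorithm, and the topics and actions are invented examples.

```python
# Toy illustration of the planner's job: match the user's intent to a topic,
# then return that topic's actions in a valid execution order.
# Keyword scoring stands in for the LLM's semantic matching.

TOPICS = {
    "Order Management": {
        "keywords": {"order", "shipment", "tracking", "return"},
        # actions listed with simple "runs after" dependencies
        "actions": [("Look Up Order", None),
                    ("Update Shipping Address", "Look Up Order")],
    },
    "Billing": {
        "keywords": {"invoice", "refund", "charge", "payment"},
        "actions": [("Get Invoice", None), ("Issue Refund", "Get Invoice")],
    },
}

def plan(utterance: str):
    words = set(utterance.lower().split())
    # 1. Pick the best-matching topic (the LLM does this semantically).
    topic = max(TOPICS, key=lambda t: len(words & TOPICS[t]["keywords"]))
    # 2. Order the topic's actions so dependencies run first.
    ordered, done = [], set()
    pending = list(TOPICS[topic]["actions"])
    while pending:
        for action, needs in list(pending):
            if needs is None or needs in done:
                ordered.append(action)
                done.add(action)
                pending.remove((action, needs))
    return topic, ordered

print(plan("I need a refund for a duplicate charge on my invoice"))
# -> ('Billing', ['Get Invoice', 'Issue Refund'])
```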
#7 Universal Containers needs its sales reps to be able to only execute prompt templates. What should the company use to achieve this requirement? Select 1
✅ Answer: B. Prompt Template User permission set
Comprehensive and Detailed In-Depth Explanation: Salesforce Agentforce leverages Prompt Builder, a tool that allows administrators to create and manage prompt templates: reusable frameworks for generating AI-driven responses. These templates can be invoked by users to perform specific tasks, such as generating sales emails or summarizing records, based on predefined instructions and grounded data. In this scenario, Universal Containers wants its sales reps to only execute these prompt templates, meaning they should be able to run them but not create, edit, or manage them. Let's break down the options and analyze why B. Prompt Template User permission set is the correct answer:
➤ Option A: Prompt Execute Template permission set. This option sounds plausible at first glance because it includes the phrase "Execute Template," which aligns with the requirement. However, there is no permission set named "Prompt Execute Template" in Salesforce's official documentation for Prompt Builder or Agentforce. Salesforce uses standardized naming conventions for permission sets, and this appears to be a distractor that doesn't correspond to an actual feature.
➤ Option B: Prompt Template User permission set. This is the correct answer. The "Prompt Template User" permission set is an official Salesforce permission set that grants users the ability to execute (invoke) prompt templates without giving them the ability to create or modify them. This aligns perfectly with the requirement that sales reps should only execute prompt templates, not manage them. The permission set includes permissions such as "Run Prompt Templates," which lets users trigger templates from interfaces like Lightning record pages or flows while restricting access to the Prompt Builder setup area where templates are designed.
➤ Option C: Prompt Template Manager permission set. This option is incorrect because the "Prompt Template Manager" permission set is designed for users who need full administrative control over prompt templates: creating, editing, and deleting them in Prompt Builder, in addition to executing them. Since Universal Containers only wants sales reps to execute templates, this permission set grants more access than required, violating the principle of least privilege, a key security best practice in Salesforce.
How It Works in Salesforce: To implement this, an administrator would:
1. Navigate to Setup > Permission Sets.
2. Locate or create the "Prompt Template User" permission set (a standard permission set available in Prompt Builder-enabled orgs).
3. Assign this permission set to the sales reps' profiles or individual user records.
4. Ensure the prompt templates are configured and exposed (e.g., via Lightning components like the Einstein Summary component) on relevant pages, such as Opportunity or Account record pages, where sales reps can invoke them.
Why This Matters: By assigning the Prompt Template User permission set, Universal Containers ensures that sales reps can leverage AI-driven prompt templates to enhance productivity (e.g., drafting personalized emails or generating sales pitches) while maintaining governance over who can modify the templates. This separation of duties is critical in a secure Salesforce environment.
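To confirm the rollout, an administrator could also verify which sales reps actually hold the permission set. The query below is a minimal Python sketch using simple-salesforce; the permission set API name (Prompt_Template_User) and the "Sales Rep" profile name are assumptions for illustration, so check the actual names in your org.

```python
# Minimal sketch: list which sales reps have the Prompt Template User
# permission set assigned. Permission set API name and profile name are
# illustrative assumptions.
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com",
                password="password",
                security_token="token")

soql = """
    SELECT Assignee.Name, Assignee.Username
    FROM PermissionSetAssignment
    WHERE PermissionSet.Name = 'Prompt_Template_User'
      AND Assignee.Profile.Name = 'Sales Rep'
"""
for row in sf.query(soql)["records"]:
    print(row["Assignee"]["Name"], row["Assignee"]["Username"])
```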
#8 A Salesforce Administrator is exploring the capabilities of Agent to enhance user interaction within their organization. They are particularly interested in how Agent processes user requests and the mechanism it employs to deliver responses. The administrator is evaluating whether Agent directly interfaces with a large language model (LLM) to fetch and display responses to user inquiries, facilitating a broad range of requests from users. How does Agent handle user requests in Salesforce? Select 1
✅ Answer: C. Agent analyzes the user's request and LLM technology is used to generate and display the appropriate response.
Agent is designed to enhance user interaction within Salesforce by leveraging Large Language Models (LLMs) to process and respond to user inquiries. When a user submits a request, Agent analyzes the input using natural language processing techniques. It then utilizes LLM technology to generate an appropriate and contextually relevant response, which is displayed directly to the user within the Salesforce interface. Option C accurately describes this process. Agent does not necessarily trigger a flow (Option A) or perform an HTTP callout to an LLM provider (Option B) for each user request. Instead, it integrates LLM capabilities to provide immediate and intelligent responses, facilitating a broad range of user requests.
#9 Universal Containers would like to route SMS text messages to a service rep from an Agentforce Service Agent. Which Service Channel should the company use in the flow to ensure it's routed properly? Select 1
✅ Answer: A. Messaging
Comprehensive and Detailed In-Depth Explanation: UC wants to route SMS text messages from an Agentforce Service Agent to a service rep using a flow. Let's identify the correct Service Channel.
➤ Option A: Messaging. In Salesforce, the "Messaging" Service Channel (part of Messaging for In-App and Web or SMS) handles text-based interactions, including SMS. When integrated with Omni-Channel Flow, the "Route Work" action uses this channel to route SMS messages to agents. This aligns with UC's requirement for SMS routing, making it the correct answer.
➤ Option B: Route Work Action. "Route Work" is an action in Omni-Channel Flow, not a Service Channel. It uses a channel (e.g., Messaging) to route work, so it is a component, not the channel itself, making it incorrect.
➤ Option C: Live Agent. "Live Agent" refers to an older chat feature, not the current Messaging framework for SMS. It is outdated and unrelated to SMS routing, making it incorrect.
➤ Option D: SMS Channel. There is no standalone "SMS Channel" among Salesforce Service Channels; SMS is encompassed within the "Messaging" channel. This is a misnomer, making it incorrect.
Why Option A is Correct: The "Messaging" Service Channel supports SMS routing in Omni-Channel Flow, ensuring proper handoff from the Agentforce Service Agent to a rep, per Salesforce documentation.
#10 Which element in the Omni-Channel Flow should be used to connect the flow with the agent? Select 1
✅ Answer: A. Route Work Action
Comprehensive and Detailed In-Depth Explanation: UC is integrating an Agentforce agent with Omni-Channel Flow to route work. Let's identify the correct element.
➤ Option A: Route Work Action. The "Route Work" action in Omni-Channel Flow assigns work items (e.g., cases, chats) to agents or queues based on routing rules. When connecting to an Agentforce agent, this action links the flow to the agent's queue or presence, enabling interaction. This is the standard element for agent integration, making it the correct answer.
➤ Option B: Assignment. The "Assignment" element in Flow Builder sets variable values; it does not route work to an agent. Within Omni-Channel flows, routing is handled by "Route Work," making this incorrect.
➤ Option C: Decision. The "Decision" element branches logic; it does not connect to agents. It is a control structure, not a routing mechanism, making it incorrect.
Why Option A is Correct: "Route Work" is the designated Omni-Channel Flow action for connecting to agents, including Agentforce agents, per Salesforce documentation.
#11 Universal Containers (UC) wants to make a sales proposal and directly use data from multiple unrelated objects (standard and custom) in a prompt template. How should UC accomplish this? Select 1
✅ Answer: C. Create a Flex template to add resources with standard and custom objects as inputs.
Comprehensive and Detailed In-Depth Explanation: UC needs to incorporate data from multiple unrelated objects (standard and custom) into a prompt template for a sales proposal. Let's evaluate the options based on Agentforce capabilities.
➤ Option A: Create a prompt template passing in a special custom object that connects the records temporarily. While a custom object could theoretically act as a junction to link unrelated records, this approach requires additional setup (e.g., creating the object, populating it with data via automation), and there is no direct mechanism in Prompt Builder to "pass in" such an object to a prompt template without grounding or flow support. This is inefficient and not a native feature, making it incorrect.
➤ Option B: Create a prompt template-triggered flow to access the data from standard and custom objects. There is no such thing as a "prompt template-triggered flow" in Salesforce. Flows can invoke prompt templates (e.g., via the "Prompt Template" action), but the reverse, triggering a flow from a prompt template, is not a standard construct. While a flow could gather data from unrelated objects and pass it to a prompt, this option's terminology is inaccurate and it is not the most direct solution, making it incorrect.
➤ Option C: Create a Flex template to add resources with standard and custom objects as inputs. In Agentforce's Prompt Builder, a Flex template (short for Flexible Prompt Template) allows users to define dynamic inputs, including data from multiple Salesforce objects (standard or custom), even if they are unrelated. Resources added to the template (e.g., via merge fields or Data Cloud queries) let the prompt pull data directly from the specified objects without requiring a junction object or complex flows. This is ideal for generating a sales proposal from disparate data sources and aligns with Salesforce's documentation on Flex templates, making it the correct answer.
Why Option C is Correct: Flex templates are designed for scenarios requiring flexible data inputs, allowing UC to directly reference multiple unrelated objects in the prompt template. This simplifies the process and leverages Prompt Builder's native capabilities, as outlined in Salesforce documentation.
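Conceptually, a Flex template declares several independent record inputs and merges their fields into one prompt body. The sketch below mimics that idea with plain Python string templating; the object fields, values, and wording are illustrative assumptions, and the real Flex template would be built declaratively in Prompt Builder.

```python
# Conceptual sketch of what a Flex template does: accept several unrelated
# record inputs (standard and custom objects) and merge their fields into a
# single grounded prompt. Field names and wording are illustrative only.
from string import Template

PROPOSAL_TEMPLATE = Template(
    "Draft a sales proposal for $account_name.\n"
    "Reference their open opportunity '$opportunity_name' "
    "valued at $opportunity_amount.\n"
    "Highlight the shipping terms from contract $contract_number "
    "and the custom SLA tier '$sla_tier'."
)

# Inputs come from unrelated objects: Account, Opportunity, Contract,
# and a custom SLA object -- no junction object needed.
prompt = PROPOSAL_TEMPLATE.substitute(
    account_name="Universal Containers",
    opportunity_name="Q3 Container Expansion",
    opportunity_amount="$250,000",
    contract_number="CTR-00412",
    sla_tier="Premier Plus",
)
print(prompt)
```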
#12 Universal Containers (UC) is creating a new custom prompt template to populate a field with generated output. UC enabled the Einstein Trust Layer to ensure AI Audit data is captured and monitored for adoption and possible enhancements. Which prompt template type should UC use and which consideration should UC review? Select 1
✅ Answer: A. Field Generation, and that Dynamic Fields is enabled
Comprehensive and Detailed In-Depth Explanation: Salesforce Agentforce provides various prompt template types to support AI-driven tasks, such as generating text or populating fields. In this case, UC needs a custom prompt template to populate a field with generated output, which directly aligns with the Field Generation prompt template type. This type is designed to use generative AI to create field values (e.g., summaries, descriptions) based on input data or prompts, making it the ideal choice for UC's requirement. Additionally, UC has enabled the Einstein Trust Layer, a governance framework that ensures AI outputs are safe, explainable, and auditable, capturing AI audit data for monitoring adoption and identifying improvement areas. The consideration UC should review is whether Dynamic Fields is enabled. Dynamic Fields allow the prompt template to incorporate variable data from Salesforce records (e.g., case details, customer info) into the prompt, ensuring the generated output is contextually relevant to each record. This is critical for field population tasks, as static prompts wouldn't adapt to record-specific needs. The Einstein Trust Layer further benefits from this, as it can track how dynamic inputs influence outputs for audit purposes.
➤ Option A: Correct. "Field Generation" matches the use case, and "Dynamic Fields" is a key consideration to ensure flexibility and auditability with the Trust Layer.
➤ Option B: "Field Generation" is correct, but "Dynamic Forms" is unrelated. Dynamic Forms is a UI feature for customizing page layouts, not a prompt template setting, making this option incorrect.
➤ Option C: "Flex" templates are more general-purpose and not specifically tailored for field population tasks. While Dynamic Fields could apply, Field Generation is the better fit for UC's stated goal.
Option A is the best choice, as it pairs the appropriate template type (Field Generation) with a relevant consideration (Dynamic Fields) for UC's scenario with the Einstein Trust Layer.
#13 Universal Containers is considering leveraging the Einstein Trust Layer in conjunction with Einstein Generative AI Audit Data. Which audit data is available using the Einstein Trust Layer? Select 1
✅ Answer: C. Masked data and toxicity score
Universal Containers is considering the use of the Einstein Trust Layer along with Einstein Generative AI Audit Data. The Einstein Trust Layer provides a secure and compliant way to use AI by offering features like data masking and toxicity assessment. The audit data available through the Einstein Trust Layer includes information about masked data—which ensures sensitive information is not exposed—and the toxicity score, which evaluates the generated content for inappropriate or harmful language.
#14 Which configuration must an Agentforce Specialist complete for users to access generative AI-enabled fields in the Salesforce mobile app? Select 1
✅ Answer: C. Enable Dynamic Forms on Mobile.
Explanation: Universal Containers (UC) has generative AI-enabled fields that users can access in the desktop experience, and the Agentforce Specialist needs these same fields to be visible and usable in the Salesforce mobile app.
Why Dynamic Forms on Mobile? Dynamic Forms let you configure record pages so that fields and sections appear or are hidden based on certain criteria. When you enable "Dynamic Forms on Mobile," any generative AI-enabled fields placed on the dynamic layout become accessible in the Salesforce mobile experience. There is no standard Setup option labeled "Enable Mobile Generative AI" or "Enable Mobile Prompt Responses"; the official approach is to ensure Dynamic Forms (and the relevant fields) are supported on mobile.
Conclusion: Making these AI-driven fields visible on mobile is accomplished by turning on Dynamic Forms on Mobile and adding the fields to the dynamic layout. Therefore, Option C is correct.
#15 How should an organization use the Einstein Trust layer to audit, track, and view masked data? Select 1
✅ Answer: A. Utilize the audit trail that captures and stores all LLM submitted prompts in Data Cloud.
The Einstein Trust Layer is designed to ensure transparency, compliance, and security for organizations leveraging Salesforce's AI and generative AI capabilities. Specifically, for auditing, tracking, and viewing masked data, organizations can utilize the audit trail in Data Cloud: the audit trail captures and stores all prompts submitted to large language models (LLMs), ensuring that sensitive or masked data interactions are logged. This allows organizations to monitor and audit all AI-generated outputs and to confirm that data handling complies with internal and regulatory guidelines. Data Cloud provides the infrastructure for managing and accessing this audit data.
➤ Why not B? Using Prompt Builder in Setup to send prompts to the LLM is for creating and managing prompts, not for auditing or tracking data. It does not interact directly with the audit trail functionality.
➤ Why not C? Although audit information can be accessed in Setup, the user-generated prompts are primarily tracked in Data Cloud for broader control, auditing, and analysis. Setup is not the primary tool for exporting or managing these audit logs.
More information on auditing AI interactions can be found in the Salesforce Einstein Trust Layer documentation, which outlines how organizations can manage and track generative AI interactions securely.
#16 Universal Containers (UC) recently rolled out Einstein Generative AI capabilities and has created a custom prompt to summarize case records. Users have reported that the case summaries generated are not returning the appropriate information. What is a possible explanation for the poor prompt performance? Select 1
✅ Answer: B. The data being used for grounding is incorrect or incomplete.
Comprehensive and Detailed In-Depth Explanation: UC's custom prompt for summarizing case records is underperforming, and we need to identify a likely cause. Let's evaluate the options based on Agentforce and Einstein Generative AI mechanics.
➤ Option A: The prompt template version is incompatible with the chosen LLM. Prompt templates in Agentforce are designed to work with the Atlas Reasoning Engine, which abstracts the underlying large language model (LLM). Salesforce manages compatibility between prompt templates and LLMs, and there is no user-facing versioning that directly ties to LLM compatibility. This is unlikely and not a common issue per documentation.
➤ Option B: The data being used for grounding is incorrect or incomplete. Grounding is the process of providing context (e.g., case record data) to the AI via prompt templates. If the grounding data, sourced from record snapshots, Data Cloud, or other integrations, is incorrect (e.g., wrong fields mapped) or incomplete (e.g., missing key case details), the summaries will be inaccurate. For example, if the prompt relies on Case.Subject but the field is empty or not included, the output will miss critical information. This is a frequent cause of poor performance in generative AI and aligns with Salesforce troubleshooting guidance, making it the correct answer.
➤ Option C: The Einstein Trust Layer is incorrectly configured. The Einstein Trust Layer enforces guardrails (e.g., toxicity filtering, data masking) to ensure safe and compliant AI outputs. Misconfiguration might block content or alter tone, but it is unlikely to cause summaries to lack appropriate information unless specific fields are masked unnecessarily. This is less probable than grounding issues and not a primary explanation here.
Why Option B is Correct: Incorrect or incomplete grounding data is a well-documented reason for subpar AI outputs in Agentforce. It directly affects the quality of case summaries, and specialists are advised to verify grounding sources (e.g., field mappings, Data Cloud queries) when troubleshooting, per official guidelines.
References:
➤ Salesforce Agentforce Documentation: Prompt Templates > Grounding – links poor outputs to grounding issues.
➤ Trailhead: Troubleshoot Agentforce Prompts – lists incomplete data as a common problem.
➤ Salesforce Help: Einstein Generative AI > Debugging Prompts – recommends checking grounding data first.
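A quick way to test the grounding hypothesis is to pull the same fields the prompt template references and see how often they are blank. The sketch below is a minimal Python illustration using simple-salesforce; the field list mirrors typical case-summary grounding and is an assumption, so match it to the fields your template actually uses.

```python
# Minimal sketch: check whether the fields a case-summary prompt is grounded on
# are actually populated. The field list is an illustrative assumption; use the
# fields referenced by your prompt template.
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com",
                password="password",
                security_token="token")

GROUNDING_FIELDS = ["Subject", "Description", "Reason", "Priority"]

records = sf.query(
    f"SELECT {', '.join(GROUNDING_FIELDS)} FROM Case "
    "ORDER BY CreatedDate DESC LIMIT 200"
)["records"]

for field in GROUNDING_FIELDS:
    blanks = sum(1 for r in records if not r.get(field))
    print(f"{field}: {blanks}/{len(records)} recent cases have no value")
# Fields that are mostly empty explain summaries that miss key information.
```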
#17 A support team handles a high volume of chat interactions and needs a solution to provide quick, relevant responses to customer inquiries. Responses must be grounded in the organization's knowledge base to maintain consistency and accuracy. Which feature in Einstein for Service should the support team use? Select 1
✅ Answer: B. Einstein Reply Recommendations
The support team should use Einstein Reply Recommendations to provide quick, relevant responses to customer inquiries that are grounded in the organization's knowledge base. This feature leverages AI to recommend accurate and consistent replies based on historical interactions and the knowledge stored in the system, ensuring that responses are aligned with organizational standards.
➤ Einstein Service Replies (Option A) is focused on generating replies but doesn't have the same emphasis on grounding responses in the knowledge base.
➤ Einstein Knowledge Recommendations (Option C) suggests knowledge articles to agents, which is more about helping the agent find relevant articles than providing automated or AI-generated responses to customers.
Salesforce Agentforce Specialist References: For more information on Einstein Reply Recommendations: https://help.salesforce.com/s/articleView?id=sf.einstein_reply_recommendations_overview.htm
#18 An Agentforce Specialist at Universal Containers (UC) is building with no-code tools only. UC has many small accounts that are only touched periodically by a specialized sales team, and it wants to maximize the sales operations team's time. UC wants to help prep the sales team for calls by summarizing past purchases, interest in products shown by the Contact captured via Data Cloud, and a recap of past email and phone conversations for which there are transcripts. Which approach should the Agentforce Specialist recommend to achieve this use case? Select 1
✅ Answer: A. Use a prompt template grounded on CRM and Data Cloud data using the standard foundation model.
For no-code implementations, Prompt Builder allows Agentforce Specialists to create prompt templates that dynamically ground responses in Salesforce CRM data (e.g., past purchases) and Data Cloud insights (e.g., product interests) without custom coding. The standard foundation model (e.g., Einstein GPT) can synthesize this data into summaries, leveraging structured and unstructured sources (e.g., email and phone transcripts). Fine-tuning (B) or custom models (C) require code and are unnecessary here, as the use case does not involve unique data patterns requiring model retraining.
#19 What is the main purpose of Prompt Builder? Select 1
✅ Answer: B. A tool that enables companies to create reusable prompts for large language models (LLMs), bringing generative AI responses to their flow of work
Prompt Builder is designed to help organizations create and configure reusable prompts for large language models (LLMs). By integrating generative AI responses into workflows, Prompt Builder enables customization of AI prompts that interact with Salesforce data and automate complex processes. This tool is especially useful for creating tailored and consistent AI-generated content in various business contexts, including customer service and sales. It is not a tool for Apex programming (as in Option A), and it is not limited to real-time suggestions as described in Option C. Instead, it provides a flexible way for companies to manage and customize how AI-driven responses are generated and used in their workflows.
References:
➤ Salesforce Prompt Builder Overview: https://help.salesforce.com/s/articleView?id=sf.prompt_builder.htm
#20 An Agentforce Specialist created a custom Agent action, but it is not being picked up by the planner service in the correct order. Which adjustment should the Agentforce Specialist make in the custom Agent action instructions for the planner service to work as expected? Select 1
✅ Answer: A. Specify the dependent actions with the reference to the action API name.
When a custom Agent action is not being prioritized correctly by the planner service, the root cause is often missing or improperly defined action dependencies. The planner service determines the execution order of actions based on dependencies defined in the action instructions. To resolve this, the Agentforce Specialist must explicitly specify dependent actions using their API names in the custom action's instructions. This ensures the planner understands the sequence in which actions must be executed to meet business logic requirements. Salesforce documentation highlights that dependencies are critical for orchestrating workflows in Einstein Bots and Agentforce. For example, if Action B requires data from Action A, Action A's API name must be referenced in Action B's instructions. The Einstein Bot Developer Guide states that failing to define dependencies can lead to race conditions or incorrect execution order. In contrast:
➤ Profiles or custom permissions (B) control access to the action but do not influence execution order.
➤ LLM model provider and version (C) determine the AI model used for processing but are unrelated to the planner's sequencing logic.
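For illustration only (the exact wording depends on the agent's configuration, and the action names here are hypothetical), a dependent action's instructions might read: "Run this action only after Get_Order_Details has returned the order record, and use the Order Id from Get_Order_Details as the input." Referencing the upstream action's API name in this way gives the planner the cue it needs to sequence the two actions correctly.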