SALESFORCE AGENTFORCE

Salesforce Agentforce Practice Questions & Answers (Set 17) | CodeWme

📝 Instructions: Read the hint on each question to see whether you need to select one option or multiple options.

#1 A developer is troubleshooting why an Agentforce Service Agent provided an incorrect response. They want to inspect the exact 'Chain of Thought' and the sequence of steps the reasoning engine took. Which tool provides this visibility? Select 1

A. Debug Logs in Developer Console
B. The 'Plan Tracer' in Agent Builder
C. Data Cloud Query Editor

✅ Answer: The 'Plan Tracer' in Agent Builder


Explanation:
The Plan Tracer (often found in the Agent Builder testing pane) visualizes the reasoning engine's decision-making process, showing topic selection, action execution, and the final response generation.

#2 Universal Containers wants to deploy their Agentforce Agent from Sandbox to Production using the Metadata API. Which metadata type represents the collection of Topics assigned to the agent? Select 1

A. BotVersion
B. GenAiPlugin
C. FlowDefinition

✅ Answer: GenAiPlugin


Explanation:
In the Metadata API, `GenAiPlugin` represents the definition of topics (formerly plugins) and their associated actions/instructions for an Agent.
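For illustration, a minimal deployment manifest that retrieves or deploys every agent topic might look like the sketch below. The API version is illustrative, and related types such as `GenAiFunction` (custom agent actions) and `GenAiPlanner` (the agent definition) are typically migrated alongside the topics; check the Metadata API reference for your org's API version.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of a package.xml manifest covering agent topics (GenAiPlugin). -->
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>GenAiPlugin</name>
    </types>
    <version>61.0</version>
</Package>
```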

#3 An Agentforce Specialist needs to create a prompt template that generates a personalized email. The email must include data from the related Account and the current User's name. Which prompt template type is most suitable? Select 1

A. Field Generation Template
B. Flex Template
C. Draft with Einstein (Standard)

✅ Answer: Flex Template


Explanation:
Flex Templates are designed for open-ended generation tasks like drafting emails or summaries where you need to mix multiple resources (Objects, Apex, etc.) and generate text output.

#4 When configuring a 'Record Summary' standard action for an Agent, what determines which fields are included in the summary? Select 1

A. The 'Compact Layout' of the object.
B. The fields referenced in the prompt instruction.
C. The 'Search Layout' of the object.

✅ Answer: The 'Compact Layout' of the object.


Explanation:
Standard record summary actions typically utilize the fields defined in the object's Compact Layout to determine the key information to present to the LLM for summarization.

#5 A user asks the Agentforce Service Agent a question that triggers the 'Toxic Language' detector in the Einstein Trust Layer. What is the expected behavior? Select 1

A. The agent ignores the toxicity and answers the question.
B. The agent replies with a standard safety fallback message and does not process the query.
C. The agent escalates the chat to a manager immediately.

✅ Answer: The agent replies with a standard safety fallback message and does not process the query.


Explanation:
The Trust Layer intercepts harmful or toxic inputs (or outputs) and replaces them with a safe, pre-configured fallback response to prevent policy violations.

#6 Universal Containers has a strict requirement that their Agent must explicitly ask for the 'Order Number' before attempting to check order status. The agent is currently guessing or searching broadly. How can this be enforced? Select 1

A. Add a 'Slot Filling' requirement to the 'Order Status' action inputs.
B. Write 'Always ask for order number' in the Agent Description.
C. Use a Validation Rule on the Order object.

✅ Answer: Add a 'Slot Filling' requirement to the 'Order Status' action inputs.


Explanation:
By marking an input (like Order Number) as required in the Action definition (Slot Filling), the Reasoning Engine is forced to ask the user clarifying questions to obtain that specific value before it can execute the action.
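The "ask the user" behavior comes from marking the input as required on the agent action itself, but when that action is backed by custom Apex, the invocable input can also be declared required at the code level. A minimal sketch, with illustrative class and label names:

```apex
// Sketch of an invocable 'Check Order Status' action whose order number
// input is marked required, so it cannot execute until a value is supplied.
public with sharing class CheckOrderStatus {

    public class Request {
        @InvocableVariable(label='Order Number' required=true)
        public String orderNumber;
    }

    public class Result {
        @InvocableVariable(label='Order Status')
        public String status;
    }

    @InvocableMethod(label='Check Order Status')
    public static List<Result> run(List<Request> requests) {
        Set<String> orderNumbers = new Set<String>();
        for (Request req : requests) {
            orderNumbers.add(req.orderNumber);
        }
        // Uses the standard Order object's OrderNumber and Status fields.
        Map<String, String> statusByNumber = new Map<String, String>();
        for (Order o : [SELECT OrderNumber, Status FROM Order
                        WHERE OrderNumber IN :orderNumbers]) {
            statusByNumber.put(o.OrderNumber, o.Status);
        }
        List<Result> results = new List<Result>();
        for (Request req : requests) {
            Result res = new Result();
            res.status = statusByNumber.containsKey(req.orderNumber)
                ? statusByNumber.get(req.orderNumber)
                : 'Order not found';
            results.add(res);
        }
        return results;
    }
}
```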

#7 An administrator wants to limit the documents an Agent retrieves from Data Cloud to only those tagged with 'Region: North America'. Where is this filter configured? Select 1

A. In the Prompt Template instructions.
B. In the Retriever definition within Einstein Studio.
C. In the Sharing Rules of the Knowledge Object.

✅ Answer: In the Retriever definition within Einstein Studio.


Explanation:
Retriever definitions in Einstein Studio allow you to specify 'Metadata Filters' (like Region = 'North America') to restrict the search scope of the vector index.

#8 Which capability distinguishes an 'Autonomous Agent' from a standard 'Einstein Copilot'? Select 1

A. Agents can access Data Cloud; Copilots cannot.
B. Agents can run 24/7 and initiate actions based on triggers/events, while Copilots typically wait for user input.
C. Agents do not use LLMs.

✅ Answer: Agents can run 24/7 and initiate actions based on triggers/events, while Copilots typically wait for user input.


Explanation:
A key differentiator is autonomy. Agents can be event-driven and handle tasks asynchronously (like handling a chat session or responding to a data change), whereas Copilot is an interactive assistant driven by direct user prompts.

#9 Universal Containers wants to display a dynamically generated 'Sales Pitch' on the Opportunity page layout using Generative AI. Which component should the Admin use? Select 1

A. Einstein Next Best Action
B. Dynamic Forms with a 'Field Generation' field
C. Agentforce Service Agent component

✅ Answer: Dynamic Forms with a 'Field Generation' field


Explanation:
To display AI-generated text on a record page (like a pitch or summary), you configure a Field Generation prompt template and connect it to a field using Dynamic Forms.

#10 A Sales Agent needs to access an external Credit Check API that requires a specific API Key. What is the secure way to store this credential for the Agent's action? Select 1

A. Hardcode it in the prompt instructions.
B. Store it in a Custom Label.
C. Use a Named Credential and associate it with the External Service/Flow.

✅ Answer: Use a Named Credential and associate it with the External Service/Flow.


Explanation:
Named Credentials are the standard Salesforce security feature for storing authentication details for external callouts. The Agent (via Flow or External Service) references the Named Credential to authenticate safely.
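As a sketch, Apex invoked by the agent's Flow or External Service references the credential by name; the name `Credit_Check_API` and the URL path below are assumptions, and the API key itself stays in the Named/External Credential configuration rather than in code.

```apex
// Minimal sketch of a callout through a Named Credential.
// 'Credit_Check_API' and the endpoint path are placeholders.
public with sharing class CreditCheckService {
    public static HttpResponse getCreditScore(String accountNumber) {
        HttpRequest req = new HttpRequest();
        // 'callout:' tells the platform to resolve the base URL and inject
        // the stored authentication (e.g., the API key header) at runtime.
        req.setEndpoint('callout:Credit_Check_API/v1/credit-score?account='
            + EncodingUtil.urlEncode(accountNumber, 'UTF-8'));
        req.setMethod('GET');
        return new Http().send(req);
    }
}
```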

#11 What is the function of the 'Masking' step in the Einstein Trust Layer? Select 1

A. It encrypts the data at rest in Data Cloud.
B. It replaces sensitive data (PII/PCI) with placeholder tokens before sending the prompt to the LLM.
C. It hides fields from the Agent user.

✅ Answer: It replaces sensitive data (PII/PCI) with placeholder tokens before sending the prompt to the LLM.


Explanation:
Data Masking identifies sensitive entities (like names, emails, credit cards) and replaces them with generic tokens (e.g., [PERSON_NAME]) to prevent the LLM from seeing or retaining real private data.

#12 An Agentforce Specialist observes that the agent is selecting the wrong topic ('Sales') for a user query clearly about 'Billing'. What is the most likely root cause? Select 1

A. The 'Sales' topic has a 'Classification Description' that is too broad or overlaps with 'Billing'.
B. The 'Billing' topic is missing a custom icon.
C. The LLM temperature is set too low.

✅ Answer: The 'Sales' topic has a 'Classification Description' that is too broad or overlaps with 'Billing'.


Explanation:
Topic selection relies on semantic similarity against each topic's Classification Description. If one description is vague or reuses keywords that belong to another topic, the reasoning engine can misclassify the intent. For example, narrowing 'Sales' to product questions, quotes, and new purchases, and 'Billing' to invoices, payments, and refunds, removes the overlap.

#13 Which artifact is required to 'Ground' a Prompt Template with unstructured data from PDF files? Select 1

A. A Data Cloud Search Index and Retriever.
B. A CRM Content Library.
C. An Apex Class implementing the `PromptModifier` interface.

✅ Answer: A Data Cloud Search Index and Retriever.


Explanation:
To use unstructured data (like PDFs) for grounding (RAG), the data must be indexed in Data Cloud (Vector Index) and exposed via a Retriever.

#14 Can an Agentforce Agent call another Agent (Agent-to-Agent)? Select 1

A. No, agents operate in isolation.
B. Yes, using the Agent-to-Agent (A2A) pattern where one agent invokes another as a tool.
C. Only if they share the same Topic.

✅ Answer: Yes, using the Agent-to-Agent (A2A) pattern where one agent invokes another as a tool.


Explanation:
Salesforce supports multi-agent patterns where a specialized agent can be registered as a tool/function for another agent to call when specialized expertise is needed.

#15 What is the maximum number of 'Results' (chunks) a standard Retriever typically fetches for a single prompt generation? Select 1

A. Unlimited
B. Configurable, but defaults (e.g., 5-10) exist to manage context window limits.
C. Exactly 1

✅ Answer: Configurable, but defaults (e.g., 5-10) exist to manage context window limits.


Explanation:
Retrievers fetch a specific number of top-ranked chunks (often configurable in the prompt or retriever settings) to fit within the LLM's context window while providing sufficient context.

🎉 You have reached the end of this series!
