Salesforce Agentforce Practice Questions & Answers (Set 9) | CodeWme
#1 Universal Containers' sales team engages in numerous video sales calls with prospects across the nation. Sales management wants an easy way to understand key information such as deal terms or customer sentiments. Which Einstein Generative AI feature should an Agentforce Specialist recommend for this request? Select 1
✅ Answer: Einstein Call Summaries
Einstein Call Summaries is the best option for this scenario because it leverages Salesforce's AI capabilities to automatically summarize key details of video or voice calls, including deal terms, customer sentiment, follow-up tasks, and other crucial information. This feature is designed to help sales teams focus on strategy rather than taking extensive manual notes during conversations.
➤ Einstein Call Summaries: Automatically generates summaries for calls, identifying critical points such as next steps and follow-ups, enhancing efficiency and understanding of deal progression.
➤ Einstein Conversation Insights: Provides insight into customer sentiment and engagement, but is better suited to analyzing patterns across conversations than to summarizing the details of a specific call.
➤ Einstein Video KPI: Focuses on analyzing key performance indicators within video calls but does not offer the summarization needed for deal terms or sentiment tracking.
This feature delivers actionable insights directly into the Salesforce CRM, allowing sales managers to get a concise overview without manually reviewing long recordings.
#2 An Agentforce Specialist is creating a custom action in Agentforce. Which option is available for the Agentforce Specialist to choose for the custom Agent action? Select 1
✅ Answer: Flows
Comprehensive and Detailed In-Depth Explanation: The Agentforce Specialist is defining a custom action for an Agentforce agent in Agent Builder. Actions determine what the agent does (e.g., retrieve data, update records). Let's evaluate the options.
➤ Option A: Apex Trigger. Apex Triggers are event-driven scripts, not selectable actions in Agent Builder. While Apex can be invoked via other means (e.g., Flows), it's not a direct option for custom agent actions, making this incorrect.
➤ Option B: SOQL. SOQL (Salesforce Object Query Language) is a query language, not an executable action type in Agent Builder. While actions can use queries internally, SOQL isn't a standalone option, making this incorrect.
➤ Option C: Flows. In Agentforce Studio's Agent Builder, custom actions can be created using Salesforce Flows. Flows allow complex logic (e.g., data retrieval, updates, or integrations) and are explicitly supported as a custom action type. The specialist can select an existing Flow or create one, making this the correct answer.
➤ Option D: JavaScript. JavaScript isn't an option for defining agent actions in Agent Builder. It's used in Lightning Web Components, not agent configuration, making this incorrect.
Why Option C is Correct: Flows are a native, flexible option for custom actions in Agentforce, enabling tailored functionality for agents, per official documentation.
References:
➤ Salesforce Agentforce Documentation: Agent Builder > Custom Actions – Lists Flows as a supported action type.
➤ Trailhead: Build Agents with Agentforce – Details Flow-based actions.
➤ Salesforce Help: Configure Agent Actions – Confirms Flows integration.
#3 Universal Containers tests out a new Einstein Generative AI feature for its sales team to create personalized and contextualized emails for its customers. Sometimes, users find that the draft email contains placeholders for attributes that could have been derived from the recipient's contact record. What is the most likely explanation for why the draft email shows these placeholders? Select 1
✅ Answer: The user does not have permission to access the fields.
Comprehensive and Detailed In-Depth Explanation: UC is using an Einstein Generative AI feature (likely Einstein Sales Emails) to draft personalized emails, but placeholders (e.g., {!Contact.FirstName}) appear instead of actual data from the contact record. Let's analyze the options.
➤ Option A: The user does not have permission to access the fields. Einstein Sales Emails, built on Prompt Builder, pulls data from contact records to populate email drafts. If the user lacks field-level security (FLS) or object-level permissions for the relevant fields (e.g., FirstName, Email), the system cannot retrieve the data, leaving placeholders unresolved. This is a common issue in Salesforce when permissions restrict data access, making it the most likely explanation and the correct answer.
➤ Option B: The user's locale language is not supported by Prompt Builder. Prompt Builder and Einstein Sales Emails support multiple languages, and locale mismatches typically affect formatting or translation, not data retrieval. Placeholders appearing instead of data is not a documented symptom of language support issues, making this unlikely and incorrect.
➤ Option C: The user does not have the Einstein Sales Emails permission assigned. The Einstein Sales Emails permission (part of the Einstein Generative AI license) enables the feature itself. If it were missing, users could not generate drafts at all, not just see placeholders. Since drafts are being created, this permission is likely assigned, making this incorrect.
Why Option A is Correct: Permission restrictions are a frequent cause of unresolved placeholders in Salesforce AI features, as the system respects FLS and sharing rules. This is well documented in troubleshooting guides for Einstein Generative AI.
References:
➤ Salesforce Help: Einstein Sales Emails > Troubleshooting – Lists permissions as a cause of data issues.
➤ Trailhead: Set Up Einstein Generative AI – Emphasizes field access for personalization.
➤ Agentforce Documentation: Prompt Builder > Data Access – Notes dependency on user permissions.
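The permission-driven behavior described above can be illustrated with a small Python sketch. This is a toy merge-field resolver, not Salesforce's actual engine; the function name, the `readable_fields` set, and the record dictionary are illustrative assumptions.

```python
import re

def resolve_merge_fields(template, record, readable_fields):
    """Replace {!Object.Field} placeholders only when the running user
    has read access to the field; otherwise leave the placeholder
    visible, mirroring the symptom described in the question."""
    def substitute(match):
        field = match.group(1)  # e.g. "Contact.FirstName"
        if field in readable_fields and field in record:
            return str(record[field])
        return match.group(0)  # unresolved placeholder survives
    return re.sub(r"\{!([\w.]+)\}", substitute, template)

record = {"Contact.FirstName": "Ada", "Contact.Email": "ada@example.com"}

# User with field access: the placeholder resolves to record data.
full = resolve_merge_fields(
    "Hi {!Contact.FirstName}", record, {"Contact.FirstName"})

# User lacking FLS on FirstName: the raw placeholder remains in the draft.
restricted = resolve_merge_fields("Hi {!Contact.FirstName}", record, set())
```

In this sketch, `full` resolves to a personalized greeting while `restricted` keeps the literal placeholder, which is the behavior users reported.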
#4 What is best practice when refining Agent custom action instructions? Select 1
✅ Answer: Provide examples of user messages that are expected to trigger the action.
When refining Agent custom action instructions, it is considered best practice to provide examples of user messages that are expected to trigger the action. This helps ensure that the custom action understands a variety of user inputs and can effectively respond to the intent behind the messages. ➤ Option B (consistent phrases) can improve clarity but does not directly refine the triggering logic. ➤ Option C (specifying a persona) is not as crucial as giving examples that illustrate how users will interact with the custom action. For more details, refer to Salesforce's Agent documentation on building and refining custom actions.
#5 An Agentforce Specialist is tasked with analyzing Agent interactions, looking into user inputs, requests, and queries to identify patterns and trends. What functionality allows the Agentforce Specialist to achieve this? Select 1
✅ Answer: User Utterances dashboard.
Comprehensive and Detailed In-Depth Explanation: The task requires analyzing user inputs, requests, and queries to identify patterns and trends in Agentforce interactions. Let's assess the options based on Agentforce's analytics capabilities.
➤ Option A: Agent Event Logs dashboard. Agent Event Logs capture detailed technical events (e.g., API calls, errors, or system-level actions) related to agent operations. While useful for troubleshooting or monitoring system performance, they are not designed to analyze user inputs or conversational trends, so this option does not meet the requirement and is incorrect.
➤ Option B: AI Audit and Feedback Data dashboard. There is no specific "AI Audit and Feedback Data dashboard" in Agentforce documentation. Feedback mechanisms exist (e.g., user feedback on responses), and audit trails may track changes, but no single dashboard combines these for analyzing user queries and trends. This option appears to be a misnomer and is incorrect.
➤ Option C: User Utterances dashboard. The User Utterances dashboard in Agentforce Analytics is specifically designed to analyze user inputs, requests, and queries. It aggregates and visualizes what users are asking the agent, identifying patterns (e.g., common topics) and trends (e.g., rising query types). Specialists can use this to refine agent instructions or topics, making it the right tool for this task. This is the correct answer per Salesforce documentation.
#6 Universal Containers (UC) is looking to improve its sales team's productivity by providing real-time insights and recommendations during customer interactions. Why should UC consider using Agentforce Sales Agent? Select 1
✅ Answer: To streamline the sales process and increase conversion rates
Agentforce Sales Agent provides real-time insights and AI-powered recommendations designed to streamline the sales process and help sales representatives focus on the key tasks that increase conversion rates. It offers features like lead scoring, opportunity prioritization, and proactive recommendations, so sales teams can interact with customers efficiently and close deals faster.
➤ Option A: While tracking customer interactions is beneficial, it is only part of the broader capabilities of Agentforce Sales Agent and is not the primary objective for improving real-time productivity.
➤ Option B: Agentforce Sales Agent does not automate the entire sales process; it provides actionable recommendations to assist the sales team.
➤ Option C: This aligns with the tool's core purpose of enhancing productivity and driving sales success.
#7 How does the Einstein Trust Layer ensure that sensitive data is protected while generating useful and meaningful responses? Select 1
✅ Answer: Masked data will be de-masked during response journey.
The Einstein Trust Layer ensures that sensitive data is protected while generating useful and meaningful responses by masking sensitive data before it is sent to the Large Language Model (LLM) and then de-masking it during the response journey.
How It Works:
➤ Data Masking in the Request Journey: Before sending the prompt to the LLM, the Einstein Trust Layer scans the input for sensitive data, such as personally identifiable information (PII), confidential business information, or any other data deemed sensitive. Identified values are replaced with placeholders or masks, so the LLM never receives raw sensitive information.
➤ Processing by the LLM: The LLM processes the masked prompt and generates a response based on the masked data. Because the LLM never receives the actual sensitive data, there is no risk of it inadvertently including that data in its output.
➤ De-masking in the Response Journey: After the LLM generates a response, the Einstein Trust Layer replaces the placeholders in the response with the original sensitive values. This ensures the final response is both meaningful and complete, including the necessary sensitive information where appropriate, while at no point exposing that data to the LLM or unintended recipients.
Why Option A is Correct: De-masking occurs during the response journey, after the LLM has generated its output, so sensitive data is reintroduced only at the final stage, securely and appropriately. This approach balances data security with the need for useful, complete responses.
Why Options B and C are Incorrect:
➤ Option B (Masked data will be de-masked during request journey): De-masking during the request journey would expose sensitive data before it reaches the LLM, defeating the purpose of masking and compromising data security.
➤ Option C (Responses that do not meet the relevance threshold will be automatically rejected): While the Einstein Trust Layer does enforce relevance thresholds to filter out inappropriate or irrelevant responses, this mechanism addresses response quality, not the protection of sensitive data.
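The mask-then-de-mask round trip described above can be sketched in a few lines of Python. The regex patterns, placeholder format, and function names are illustrative assumptions for the demo, not the Trust Layer's actual implementation.

```python
import re

# Hypothetical PII detectors; the real Trust Layer's detection is internal.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def mask(prompt):
    """Replace sensitive values with placeholders; remember the mapping."""
    mapping = {}
    for label, pattern in PII_PATTERNS.items():
        for i, value in enumerate(pattern.findall(prompt)):
            token = f"<{label}_{i}>"
            mapping[token] = value
            prompt = prompt.replace(value, token)
    return prompt, mapping

def demask(response, mapping):
    """Re-insert the original values during the response journey."""
    for token, value in mapping.items():
        response = response.replace(token, value)
    return response

masked, mapping = mask("Reply to jo@acme.com at 555-123-4567")
# The LLM only ever sees the masked prompt; here we echo it back unchanged.
final = demask(masked, mapping)
```

The key property the sketch demonstrates: the prompt that leaves for the LLM contains only placeholders, and the original values reappear only in the final response.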
#8 Universal Containers (UC) wants its AI agent to return responses quickly. UC needs to optimize the retriever's configuration to ensure minimal latency when grounding AI responses. Which configuration aspect should UC prioritize? Select 1
✅ Answer: Ensure the retriever's filters are defined to limit the scope of each search efficiently.
Why is "Ensure the retriever's filters are defined to limit the scope of each search efficiently" the correct answer? In Agentforce, when optimizing a retriever's configuration for minimal latency in AI-generated responses, the most effective approach is narrowing the scope of each search with specific filters.
Key Considerations for Optimizing Retrievers in Agentforce:
➤ Defining Effective Filters: Applying precise search filters reduces unnecessary data retrieval, decreasing response time, and keeps each search focused on relevant records instead of processing large datasets.
➤ Reducing Query Complexity: Overly broad searches increase retrieval time and cause latency; well-configured retriever filters streamline queries and improve response speed.
➤ Optimizing the Data Indexing Process: Restricting retriever searches to indexed fields enhances efficiency, because pre-indexed data is faster to access.
Why Not the Other Options?
➤ A. Configure the retriever to operate in dynamic mode so that it modifies the search index structure at runtime: Incorrect because modifying the search index at runtime increases latency rather than reducing it; index modifications require restructuring large datasets, which can slow down AI-generated responses.
➤ C. Increase the recency bias setting for the retriever, limiting scope to more recent data: Incorrect because recency bias only prioritizes recent records; it affects relevance but does not directly address latency.
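The latency lever here, filter first so ranking only touches a small candidate set, can be sketched with a mock in-memory document store. The documents, their metadata keys, and the precomputed scores are invented for illustration; a real retriever ranks by vector similarity.

```python
# Toy document store with metadata and a stand-in relevance score.
documents = [
    {"id": 1, "region": "EMEA", "type": "faq", "score": 0.91},
    {"id": 2, "region": "AMER", "type": "faq", "score": 0.88},
    {"id": 3, "region": "AMER", "type": "policy", "score": 0.75},
    {"id": 4, "region": "AMER", "type": "faq", "score": 0.62},
]

def retrieve(docs, filters, top_k=2):
    """Apply metadata filters first, then rank only the survivors.
    Shrinking the candidate set before ranking is what cuts latency."""
    candidates = [
        d for d in docs
        if all(d.get(key) == value for key, value in filters.items())
    ]
    return sorted(candidates, key=lambda d: d["score"], reverse=True)[:top_k]

hits = retrieve(documents, {"region": "AMER", "type": "faq"})
```

With the filters applied, only three of the four documents are even scored, and the EMEA record never enters the ranking step at all.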
#9 What is a Salesforce Agentforce Specialist able to configure in Data Masking within the Einstein Trust Layer? Select 1
✅ Answer: The privacy data entities to be masked
In the Einstein Trust Layer, the Salesforce Agentforce Specialist can configure the privacy data entities to be masked (Option C). This ensures sensitive or personally identifiable information (PII) is obfuscated when processed by AI models.
Data Masking Configuration: The Agentforce Specialist defines which fields or data types (e.g., email, phone number, Social Security Number) should be masked; for example, masking the Email field in a prompt response to protect user privacy. This is done through declarative settings in Salesforce, where entities (standard or custom fields) are flagged for masking.
Why the Other Options Are Incorrect:
➤ A. Profiles exempt from masking: Exemptions are typically managed via permissions (e.g., field-level security), not directly within the Einstein Trust Layer's Data Masking settings.
➤ B. Encryption keys for masking: Encryption is separate from masking. Masking obfuscates values (e.g., replacing "john@example.com" with a placeholder token), whereas encryption uses keys to secure data.
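The configurable part, which entity types get masked, can be sketched as a simple toggle over detectors. The entity names, regex detectors, and placeholder format here are illustrative assumptions, not the Trust Layer's actual settings model.

```python
import re

# Hypothetical detectors for two maskable entity types.
DETECTORS = {
    "Email": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "Phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def apply_masking(text, enabled_entities):
    """Mask only the entity types the specialist has enabled."""
    for entity in enabled_entities:
        text = DETECTORS[entity].sub(f"[{entity.upper()}_MASKED]", text)
    return text

# Specialist enabled masking for Email but not Phone:
out = apply_masking("Call 555-867-5309 or mail kim@acme.com",
                    enabled_entities=["Email"])
```

Because only "Email" is enabled, the phone number passes through untouched while the address is obfuscated, mirroring the per-entity configuration the question asks about.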
#10 How does the AI Retriever function within Data Cloud? Select 1
✅ Answer: It performs contextual searches over an indexed repository to quickly fetch the most relevant documents, enabling grounding AI responses with trustworthy, verifiable information.
Comprehensive and Detailed In-Depth Explanation: The AI Retriever is a key component in Salesforce Data Cloud, designed to support AI-driven processes like Agentforce by retrieving relevant data. Let's evaluate each option based on its documented functionality.
➤ Option A: It performs contextual searches over an indexed repository to quickly fetch the most relevant documents, enabling grounding AI responses with trustworthy, verifiable information. The AI Retriever in Data Cloud uses vector-based search technology to query an indexed repository (e.g., documents, records, or ingested data) and retrieve the most relevant results based on context. It employs embeddings to match user queries or prompts with stored data, ensuring AI responses (e.g., in Agentforce prompt templates) are grounded in accurate, verifiable information from Data Cloud. This enhances trustworthiness by linking outputs to source data, making retrieval the AI Retriever's primary function. This aligns with Salesforce documentation and is the correct answer.
➤ Option B: It monitors and aggregates data quality metrics across various data pipelines to ensure only high-integrity data is used for strategic decision-making. Data quality monitoring is handled by other Data Cloud features, such as Data Quality Analysis or ingestion validation tools, not the AI Retriever. The Retriever's role is retrieval, not quality assessment or pipeline management. This option misattributes unrelated functionality and is incorrect.
➤ Option C: It automatically extracts and reformats raw data from diverse sources into standardized datasets for use in historical trend analysis and forecasting. Data extraction and standardization are part of Data Cloud's ingestion and harmonization processes (e.g., via Data Streams or Data Lake), not the AI Retriever's function. The Retriever works with already-indexed data to fetch results, not to process or reformat raw data. This option is incorrect.
Why Option A is Correct: The AI Retriever's core purpose is to perform contextual searches over indexed data, enabling AI grounding with reliable information. This is critical for Agentforce agents to provide accurate responses, as outlined in Data Cloud and Agentforce documentation.
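The contextual-search step, ranking indexed items by embedding similarity to the query, can be sketched with hand-made vectors. The index contents and three-dimensional vectors are stand-ins; real retrievers use learned text embeddings with hundreds of dimensions.

```python
import math

# Toy index mapping document titles to pretend embedding vectors.
index = {
    "return policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "warranty terms": [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve_top(query_vec, k=2):
    """Rank indexed documents by similarity to the query embedding."""
    ranked = sorted(index, key=lambda doc: cosine(query_vec, index[doc]),
                    reverse=True)
    return ranked[:k]

# A query embedding close to the "return policy" document:
top = retrieve_top([1.0, 0.0, 0.0])
```

The query vector lands nearest the two documents about returns and warranties, which is exactly the "fetch the most relevant documents for grounding" behavior Option A describes.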
#11 Which business requirement presents a good use case for leveraging Einstein Prompt Builder? Select 1
✅ Answer: Send reply to a request for proposal via a personalized email.
Context of the Question: Einstein Prompt Builder is a Salesforce feature that helps generate text (summaries, email content, responses) using AI models. The question presents three potential use cases, asking which one best fits the capabilities of Einstein Prompt Builder.
Typical Einstein Prompt Builder Use Cases: Text generation and summaries. It is great for writing or summarizing content, like responding to an email or generating text for a record field.
Why Not the Other Options?
➤ Option A (Forecast future sales trends): Forecasting trends typically involves the predictive analytics and modeling capabilities found in Einstein Discovery or standard reporting, not generative text solutions.
➤ Option B (Identify potential high-value leads): Identifying leads for marketing campaigns involves lead scoring or analytics, again an Einstein Discovery or Lead Scoring scenario.
➤ Option C (Send reply to a request for proposal via a personalized email): A classic example of using generative AI to compose well-structured, context-aware text.
Conclusion: Option C is the best match for Einstein Prompt Builder's generative text functionality.
#12 Once a data source is chosen for an Agentforce Data Library, what is true about changing that data source later? Select 1
✅ Answer: The data source cannot be changed after it is selected.
Why is "The data source cannot be changed after it is selected" the correct answer? When configuring an Agentforce Data Library, the data source selection is permanent. Once a data source is set, it cannot be modified or replaced. This design ensures data consistency, security, and reliability within Salesforce's AI-driven environment.
Key Considerations in the Agentforce Data Library:
➤ Data Source Lock-In: The chosen data source remains fixed to maintain data integrity and avoid inconsistencies. Any updates or modifications require creating a new Data Library instead of modifying the existing one.
➤ Why Can't the Data Source Be Changed? The data source defines the foundation of AI-driven workflows, and modifying it would disrupt processing logic. Agentforce tools rely on structured datasets to enable AI-powered recommendations, and changing data sources could lead to inconsistencies in grounding.
➤ Workarounds for Changing Data Sources: If an organization needs a different data source, a new Agentforce Data Library must be created and configured from scratch; old data can be manually migrated into the new data source for continuity.
Why Not the Other Options?
➤ A. The data source can be changed through the Data Cloud settings: Incorrect because once the data source is linked to an Agentforce Data Library, it cannot be altered, even via Data Cloud settings.
➤ B. The Data Retriever can be reconfigured to use a different data source: Incorrect because the Data Retriever works within the constraints of the selected data source and does not provide an option to swap data sources post-selection.
#13 What is a valid use case for Data Cloud retrievers? Select 1
✅ Answer: Returning relevant data from the vector database to augment a prompt.
Comprehensive and Detailed In-Depth Explanation: Salesforce Data Cloud integrates with Agentforce to provide real-time, unified data access for AI-driven applications. Data Cloud retrievers are specialized components that fetch relevant data from Data Cloud's vector database, a storage system optimized for semantic search and retrieval, to enhance agent responses or actions. A valid use case, as described in Option A, is using these retrievers to return pertinent data (e.g., customer purchase history, support tickets) from the vector database to augment a prompt. This process, often part of Retrieval-Augmented Generation (RAG), allows the LLM to generate more accurate, context-aware responses by grounding its output in structured, searchable data stored in Data Cloud.
➤ Option B: Grounding data from external websites is not a primary function of Data Cloud retrievers. While RAG can incorporate external data, Data Cloud retrievers specifically work with data within Salesforce's ecosystem (e.g., the vector database or harmonized data lakes), not arbitrary external websites. This makes B incorrect.
➤ Option C: Data Cloud retrievers are read-only mechanisms designed for data retrieval, not for modifying or updating source systems. Updates to source systems are handled by other Salesforce tools (e.g., Flows or Apex), not retrievers.
Option A is correct because it aligns with the core purpose of Data Cloud retrievers: enhancing prompts with relevant, vectorized data from within Salesforce Data Cloud.
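The augmentation step of that RAG flow, stitching retrieved snippets into the prompt so the LLM answers from grounded data, can be sketched as follows. The retrieval itself is mocked and the snippet text is invented for the example.

```python
def augment_prompt(question, retrieved_snippets):
    """Prepend retrieved context so the model answers from it,
    not from its parametric memory."""
    context = "\n".join(f"- {snippet}" for snippet in retrieved_snippets)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

# Snippets a retriever might have returned from the vector database:
snippets = [
    "Order 1042 shipped on 2024-05-01.",
    "Order 1042 was delivered on 2024-05-03.",
]
prompt = augment_prompt("When was order 1042 delivered?", snippets)
```

The resulting prompt carries the verifiable facts alongside the question, which is what lets the LLM's answer stay grounded in Data Cloud data rather than guesswork.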
#14 When a customer chat is initiated, which functionality in Salesforce provides generative AI replies or draft emails based on recommended Knowledge articles? Select 1
✅ Answer: Einstein Service Replies
When a customer chat is initiated, Einstein Service Replies provides generative AI replies or draft emails based on recommended Knowledge articles. This feature uses the information from the Salesforce Knowledge base to generate responses that are relevant to the customer's query, improving the efficiency and accuracy of customer support interactions. Option B is correct because Einstein Service Replies is responsible for generating AI-driven responses based on knowledge articles. Option A (Einstein Reply Recommendations) is focused on recommending replies but does not generate them.
#15 Universal Containers has a custom Agent action calling a flow to retrieve the real-time status of an order from the order fulfillment system. For the given flow, what should the Agentforce Specialist consider about the running user's data access? Select 1
✅ Answer: B. The custom action adheres to the permissions, field-level security, and sharing settings configured in the flow.
When a flow is invoked via a custom Agent action, its data access depends on the flow's runtime configuration, not system mode by default. Salesforce flows can be configured to respect the running user's permissions and sharing settings:
➤ If the flow is set to "Run as the User Who Launched the Flow" (enabled in Flow Settings), it adheres to the user's permissions, field-level security (FLS), and sharing rules.
➤ Option C is incorrect because flows do not run in system mode unless explicitly configured to do so.
➤ Option A is misleading because "with sharing" is an Apex concept, not a flow setting. Flows use runtime settings that enforce FLS and sharing.
#16 Universal Containers built a Field Generation prompt template that worked for many records, but users are reporting random failures with token limit errors. What is the cause of the random nature of this error? Select 1
✅ Answer: B. The number of tokens generated by the dynamic nature of the prompt template will vary by record.
Comprehensive and Detailed In-Depth Explanation: In Salesforce Agentforce, prompt templates generate dynamic responses or field values via an LLM, often grounded with data from Salesforce records or external sources. The scenario describes a Field Generation prompt template that fails intermittently with token limit errors, indicating the issue is tied to exceeding the LLM's token capacity (input + output tokens). The random nature of these failures suggests variability in the token count across different records, which Option B addresses directly. Prompt templates in Agentforce can be dynamic, pulling in record-specific data (e.g., customer names, descriptions, or other fields) to generate output. Since the data varies by record (some records have short text fields while others have lengthy ones), the total number of tokens (words, characters, or subword units processed by the LLM) fluctuates. When the token count exceeds the LLM's limit (e.g., 4,096 tokens for some models), the process fails, but only for records whose data produces higher token counts, which explains the randomness.
➤ Option A: Switching to a "Flex" template type might sound plausible, but Salesforce documentation does not define "Flex" as a template type for handling token variability in this context. This option is a distractor, not a verified solution.
➤ Option C: An LLM's token capacity is fixed per model (e.g., 128,000 tokens for some advanced models) and does not vary with user demand. Demand might affect performance or availability, but not the token limit itself.
Option B is correct because it identifies the dynamic nature of the prompt template as the root cause of variable token counts and, therefore, the random failures.
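The per-record variability can be made concrete with a short sketch: the same template grounded with two different records yields very different token counts, so only some records breach the limit. The 4-characters-per-token heuristic and the toy limit of 20 tokens are rough illustrative assumptions, not real model parameters.

```python
def estimate_tokens(text, chars_per_token=4):
    """Crude token estimate: roughly 4 characters per token."""
    return len(text) // chars_per_token

def render(template, record):
    """Fill the dynamic template with record-specific data."""
    return template.format(**record)

TEMPLATE = "Summarize this case: {description}"
TOKEN_LIMIT = 20  # toy limit for the demo

records = [
    {"description": "Broken lid."},
    {"description": "Customer reports repeated leaks across three "
                    "shipments and requests a full account review."},
]

# Same template, different records: only the verbose record exceeds the limit.
over_limit = [
    estimate_tokens(render(TEMPLATE, r)) > TOKEN_LIMIT for r in records
]
```

The first record fits comfortably while the second blows past the limit, which is why the failures look random: they track the data in each record, not the template itself.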
#17 Universal Containers wants to allow its service agents to query the current fulfillment status of an order with natural language. There is an existing autolaunched flow to query the Information from Oracle ERP, which is the system of record for the order fulfillment process. How should an Agentforce Specialist apply the power of conversational AI to this use case? Select 1
✅ Answer: A. Create a custom Agent action which calls a flow.
Why is "Create a custom Agent action which calls a flow" the correct answer? In Agentforce, the best way to let service agents query order fulfillment status from an external system (Oracle ERP) using natural language is to create a custom Agent action that invokes the existing autolaunched flow.
Key Considerations for This Approach:
➤ Custom Agent Action Triggers the Flow: A custom Agent action is designed to call Salesforce flows, enabling external system integration. The flow retrieves real-time fulfillment data from Oracle ERP and returns the results to the agent.
➤ Enables AI-Powered Query Execution: The Agent can understand natural language and map user utterances to the correct Agent action, ensuring agents receive accurate order fulfillment updates quickly.
➤ No Need for Manual Data Entry: Instead of manually searching Oracle ERP, agents can query fulfillment status through AI-powered Agentforce workflows.
Why Not the Other Options?
➤ B. Configure the Integration Flow Standard Action in Agent Builder: Incorrect because Integration Flow Standard Actions serve predefined use cases, not custom ERP integrations, and lack the flexibility needed to connect with Oracle ERP dynamically.
➤ C. Create a Flex Prompt Template in Prompt Builder: Incorrect because Flex prompt templates are used for structuring AI-generated responses, not executing queries against external systems; this approach cannot retrieve live fulfillment status from Oracle ERP.
#18 What considerations should an Agentforce Specialist be aware of when using Record Snapshots grounding in a prompt template? Select 1
✅ Answer: A. Activities such as tasks and events are excluded.
Comprehensive and Detailed In-Depth Explanation: Record Snapshots grounding in Agentforce prompt templates allows the AI to access and use data from a specific Salesforce record (e.g., fields and related records) to generate contextually relevant responses. However, there are specific limitations to consider. Let's analyze each option against official documentation.
➤ Option A: Activities such as tasks and events are excluded. According to Salesforce Agentforce documentation, when grounding a prompt template with Record Snapshots, the data included is limited to the record's fields and certain related objects accessible via Data Cloud or direct Salesforce relationships. Activities (tasks and events) are not included in the snapshot because they are stored in a separate Activity object hierarchy and are not directly part of the primary record's data structure. This is a key consideration for an Agentforce Specialist: the AI won't have visibility into task or event details unless they are explicitly provided through other grounding methods (e.g., custom queries). This limitation is accurate and critical to understand.
➤ Option B: Empty data, such as fields without values or sections without limits, is filtered out. Record Snapshots include all accessible fields on the record, regardless of whether they contain values. Salesforce documentation does not indicate that empty fields are automatically filtered out when grounding a prompt template. The Atlas Reasoning Engine processes the full snapshot; empty fields are simply treated as having no data rather than being excluded. (The phrase "sections without limits" is unclear, likely a typo, and doesn't align with any known Agentforce behavior.) This option is incorrect.
➤ Option C: Email addresses associated with the object are excluded. There is no specific exclusion of email addresses in Record Snapshots grounding. If an email field (e.g., Contact.Email or a custom email field) is part of the record and accessible to the running user, it is included in the snapshot. Salesforce documentation does not list email addresses as a restricted data type in this context, making this option incorrect.
Why Option A is Correct: The exclusion of activities (tasks and events) is a documented limitation of Record Snapshots grounding in Agentforce. Specialists should design prompts knowing that activity-related context must be sourced differently (e.g., via Data Cloud or custom logic) if needed. Options B and C do not reflect actual Agentforce behavior per official sources.
#19 Universal Containers has seen a high adoption rate of a new feature that uses generative AI to populate a summary field of a custom object, Competitor Analysis. All sales users have the same profile, but one user cannot see the generative AI-enabled field icon next to the summary field. What is the most likely cause of the issue? Select 1
✅ Answer: C. The user does not have the Generative AI User permission set assigned.
In Salesforce, generative AI capabilities are controlled by specific permission sets. To use features such as generating summaries with AI, users need the correct permission sets that grant access to these functionalities.
➤ Generative AI User Permission Set: This is the key permission set required to enable generative AI capabilities for a user. In this case, the missing Generative AI User permission set prevents the user from seeing the generative AI-enabled field icon; without it, the generative AI feature on the Competitor Analysis custom object is not accessible.
➤ Why not A? The Prompt Template User permission set relates specifically to users who need access to prompt templates for interacting with Einstein GPT, but it is not directly related to the visibility of AI-enabled field icons.
➤ Why not B? While a prompt template might need to be activated, that is not the primary issue here. The question states that other users with the same profile can see the icon, so the problem is more likely permissions-based for this particular user.
#20 Universal Containers is very concerned about security compliance and wants to understand: * Which prompt text is sent to the large language model (LLM) * How it is masked * The masked response. What should the Agentforce Specialist recommend? Select 1
✅ Answer: C. Enable audit trail in the Einstein Trust Layer.
To address security compliance concerns and provide visibility into the prompt text sent to the LLM, how it is masked, and the masked response, the Agentforce Specialist should recommend enabling the audit trail in the Einstein Trust Layer. This feature captures and logs the prompts sent to the large language model (LLM), the masking of sensitive information, and the AI's response, ensuring full transparency and compliance with security requirements.
➤ Option A (Einstein Shield Event logs) is focused on system events rather than specific AI prompt data.
➤ Option B (debug logs) would not provide the necessary insight into AI prompt masking or responses.