[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# Conversational filtering user experience guide\n\nSuccessful implementation of conversational product filtering relies on well-thought-out user experience design.\n\nVisual display elements\n-----------------------\n\nThe placement and appearance of the conversational filter significantly impact its effectiveness.\n\n### Vertical versus horizontal layout\n\nThese are some considerations as to whether to design a vertical or horizontal layout:\n\n- **Recommendation**: Prioritize a horizontally-oriented, vertically compact design. This minimizes the risk of pushing product results below the fold.\n\n- **Reasoning**: If the filter is displayed horizontally across the top, it can push product results down the page, which increases the cost of the feature by reducing immediate product visibility. Additionally, minimizing blank space between elements can add prime space on the web page for showing additional product tiles.\n\n### Handle long attributes\n\nIf multiple-choice options are long (such as brand names), don't wrap them to new lines as this adds height to the elements. Instead, allow them to extend horizontally off the page and enable side-scrolling.\n\nHere is an example of a horizontal scroll implementation:\n\n### Optimal placement\n\nConsider placing the conversational filter after 3-5 rows of products. This approach prevents the conversational element from displacing the initial list of products.\n\nA key takeaway for this placement is that the conversation filtering bar should be as vertically compact as possible. When the conversational product filtering feature is positioned prominently, it can shift product displays further down the page, out of immediate view. This can be a drawback, as shoppers see fewer products. Therefore, the products that **are visible** must be as relevant as possible.\n\n- **Side (vertical) versus top (horizontal)**: Consider placing the conversational filter after 3-5 rows of products. This approach prevents the conversational element from displacing the initial list of products.\n\n- **Strong consideration**: If conversational product filtering becomes your main method for narrowing product selections, consider fully minimizing or replacing your manual filter bar. This can let you add another column of product items.\n\nDesktop and mobile\n------------------\n\nWhile desktop implementations have proven successful, results on mobile have been less consistent and have shown lower overall performance gains. The limited screen size on mobile requires a more creative and deliberate approach to placement.\n\n- **Recommendation**: Prioritize desktop implementations over mobile, at first. The larger screen size on desktop allows for greater flexibility in creative designs. The smaller screen on mobile forces developers to prioritize certain elements.\n\n- **Avoid**: Chat window interfaces. Don't implement the conversational filter as a chat window. 
Desktop and mobile
------------------

While desktop implementations have proven successful, results on mobile have been less consistent and have shown lower overall performance gains. The limited screen size on mobile requires a more creative and deliberate approach to placement.

- **Recommendation**: Prioritize desktop implementations over mobile at first. The larger screen size on desktop allows for greater flexibility in creative designs, whereas the smaller screen on mobile forces developers to prioritize certain elements.

- **Avoid**: Chat window interfaces. Don't implement the conversational filter as a chat window. A chat window takes users away from the main web interface and can disrupt the intended web checkout flow that developers typically spend considerable time optimizing.

### Additional mobile considerations

Treat mobile web and mobile apps independently when testing. Mobile app testing is inherently difficult to conduct, but offers greater flexibility. Mobile web is often quicker to test, but comes with different tradeoffs across mobile web browsers.

User interaction with filters
-----------------------------

This section describes how users interact with conversational product filtering. [Replacing static, hard-coded filter elements](#replace-hard-coded-elements) with dynamic conversational filtering is recommended because it liberates screen space for more targeted products. All applied filters, regardless of their origin, globally update the product grid.

Subsequent conversational questions adapt to the complete set of applied filters and present multiple-choice options.

### Unified global filters

Shoppers can interact with both conversational filters and any remaining static filter elements. Your frontend implementation must be able to handle this scenario.

Unified global filters have these characteristics:

- **Global application**: When a user makes a selection from any filter element on the page, whether it is a conversational product filter or a static filter element, the product grid must update to show results with *all* global filters applied.
- **Intelligent follow-up**: The next conversational question the user sees should be relevant to the complete set of applied filters, regardless of which element the user selects. For example, if a shopper selects a `color` filter from the conversational element and a `size` filter from the classic filter element, the subsequent conversational question should not ask the shopper what *size* they want.
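The following TypeScript sketch shows one way a frontend might implement this unified behavior: a single global filter map that both the conversational element and the static filter elements write to, with every selection triggering a refresh of the product grid and the choice of the next question. The attribute names, filter expression format, and helper functions are illustrative assumptions, not a prescribed API.

```ts
// One global filter state shared by conversational and static filter elements
// (attribute names and helper functions below are hypothetical).
type FilterSource = 'conversational' | 'static';

const appliedFilters = new Map<string, {value: string; source: FilterSource}>();

async function applyFilter(
  attribute: string,
  value: string,
  source: FilterSource,
): Promise<void> {
  appliedFilters.set(attribute, {value, source});

  // Global application: every selection, from either element, refreshes the
  // grid with ALL applied filters.
  const filterExpression = Array.from(appliedFilters.entries())
    .map(([attr, f]) => `${attr}: ANY("${f.value}")`)
    .join(' AND ');
  await refreshProductGrid(filterExpression);

  // Intelligent follow-up: the next conversational question is chosen from
  // attributes that are not already filtered, regardless of which element
  // the shopper used.
  renderNextQuestion(new Set(appliedFilters.keys()));
}

async function refreshProductGrid(filterExpression: string): Promise<void> {
  // Call your search backend here with the combined filter expression.
  console.log('Search filter:', filterExpression);
}

function renderNextQuestion(alreadyFiltered: Set<string>): void {
  // Ask only about attributes that are not already filtered.
  console.log('Skip follow-up questions for:', Array.from(alreadyFiltered));
}

// Example: color chosen in the conversational element, size in a static facet.
(async () => {
  await applyFilter('colorFamilies', 'Blue', 'conversational');
  await applyFilter('sizes', 'M', 'static');
})();
```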
### Filter types

Conversational product filtering supports multiple-choice selections on the site.

#### Multiple choice

Vertex AI Search for commerce can present up to 20 multiple-choice options, based on the value names in the product catalog. Options appear in a sorted list of the most relevant choices. For long options, such as long brand names, let users side-scroll rather than wrapping the options to new lines, which maintains vertical compactness.

### Replace hard-coded elements

Many commerce search site developers have prebuilt, manual category filter components in their web interface that are intended for top revenue-generating queries. These filter components are typically expensive and time-consuming to produce, and not very interactive for the user.

**Figure 2**. Example of hard-coded element display.

- **Recommendation**: The core idea behind conversational filtering is that it lets you quickly deploy dynamic experiences like these across *all* your products, not just the few top queries that the visual elements were designed for. Therefore, identify and remove elements that conversational filtering is designed to replace. Avoid having two competing sets of filter elements that perform similar functions. This liberates space on the screen to show more targeted products.

Ideas for experimentation
-------------------------

Some ideas for experimentation are:

- **Placement between product rows**: Insert the component partway down the page, after three to five rows of products. This approach prevents the conversational element from significantly displacing the initial product displays.
- **Fly-out or pop-up**: Use a button that triggers a dialog or fly-out menu containing the filter questions. This can be integrated with existing filter pop-ups, or the fly-out can be a separate element.
- **Sticky bar**: A persistent bar on the screen presents the questions and options. This sits in front of the products rather than pushing them down.
- **Testing considerations**: Test mobile and desktop independently. Shopping behaviors for each device vary greatly, and the visual components that work on one device might not translate to the other.

Data ingestion and quality
--------------------------

The Vertex AI model's intelligence is built on user interaction data. The onboarding process uses a two-phased approach to data ingestion.

### Phase 1: Initial start with historical events

The model can be trained on historical event data. Historical event data is ingested into the Google environment first, which lets the model serve even new projects that have no live interaction data yet.

### Phase 2: Transition to live query data

After the capability is live and begins to collect live query data, Vertex AI uses this data stream to refine the serving model. Live query data is generally of higher quality than historically captured event data, because historical events can sometimes miss key information. This makes live query data more effective for ongoing optimization.
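As an illustration of Phase 1, the following sketch bulk-imports historical user events from Cloud Storage using the `@google-cloud/retail` Node.js client. The project, catalog, and bucket values are placeholders, and the request shape shown here is an assumption to confirm against the client library reference for your version.

```ts
// Sketch: bulk-import historical user events from Cloud Storage (Phase 1).
// Project, catalog, and bucket values below are placeholders.
import {UserEventServiceClient} from '@google-cloud/retail';

async function importHistoricalEvents(): Promise<void> {
  const client = new UserEventServiceClient();
  const parent =
    'projects/PROJECT_ID/locations/global/catalogs/default_catalog';

  // Each file contains newline-delimited JSON user events, for example
  // detail-page-view, add-to-cart, and purchase-complete events.
  const [operation] = await client.importUserEvents({
    parent,
    inputConfig: {
      gcsSource: {
        inputUris: ['gs://BUCKET_NAME/user_events/*.json'],
        dataSchema: 'user_event',
      },
    },
  });

  // The import runs as a long-running operation; wait for it to finish.
  const [response] = await operation.promise();
  console.log('Imported user events:', response);
}

importHistoricalEvents().catch(console.error);
```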