Overview: Reimagine the search box. Let your users find answers to their questions without having to trawl through page after page of results. Add Generative AI to your current search platform.
Boost conversions with a better search experience. Add product descriptions, reviews and more with Generative AI powered summaries.
Get more from your current engine by adding vector embeddings, question answering, and summarization. Fine-tune securely with your own data.
Optimize performance by matching the best hardware configuration and the best inference serving framework to latency and training needs for specific search use cases.
bookend makes Safe AI simple
How it works:
1. Convert text to vectors
Documents or catalog items such as products are converted into arrays of numbers ([0.03, 0.45, …]) and stored in a vector database. Each incoming search query is converted into the same vector space, and a nearest-neighbor match finds the closest semantic matches. Depending on the corpus size, tens or hundreds of results are then returned.
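The step above can be sketched in a few lines. A real system would use a trained embedding model and a dedicated vector database; the bag-of-words `embed()` and in-memory `index` below are stand-ins so the example runs with the standard library alone.

```python
import math

# A toy product catalog standing in for the indexed documents.
catalog = [
    "wireless noise cancelling headphones",
    "stainless steel water bottle",
    "bluetooth over-ear headphones with mic",
]

# Fixed vocabulary built from the corpus; each word gets one dimension.
vocab = {w: i for i, w in enumerate(
    sorted({w for doc in catalog for w in doc.lower().split()}))}

def embed(text: str) -> list[float]:
    """Convert text into a unit-length vector in the shared vector space."""
    vec = [0.0] * len(vocab)
    for word in text.lower().split():
        if word in vocab:
            vec[vocab[word]] += 1.0
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm else vec

# The "vector database": one vector per catalog item.
index = [embed(doc) for doc in catalog]

def search(query: str, k: int = 2) -> list[str]:
    """Nearest-neighbor match by cosine similarity (vectors are unit length)."""
    q = embed(query)
    scores = [sum(a * b for a, b in zip(row, q)) for row in index]
    top = sorted(range(len(catalog)), key=lambda i: scores[i], reverse=True)[:k]
    return [catalog[i] for i in top]

print(search("headphones"))
# → both headphone products, ranked above the water bottle
```

Swapping `embed()` for a real embedding model changes nothing else in the flow: index documents once, embed each query the same way, return the top-k neighbors.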
2. Question answering with search results
When keyword search results don't meet the mark, users abandon the buying or discovery process. LLMs give users a way to interact with data beyond keyword searches, through interactive question-and-answer experiences. Since processing millions of search documents with LLMs is expensive and time-consuming, only the most relevant results from the "text to vectors" step above are passed to the LLM.
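A minimal sketch of this retrieve-then-ask pattern, under one assumption: `llm` is a hypothetical callable standing in for whatever LLM client the platform uses. Only the handful of top-ranked results are placed in the prompt, not the full corpus.

```python
from typing import Callable

def answer(question: str, top_docs: list[str], llm: Callable[[str], str]) -> str:
    """Build a prompt from the top retrieved results and ask the LLM.

    `top_docs` is the short list returned by the nearest-neighbor step;
    `llm` is a placeholder for any prompt-in, text-out model client.
    """
    context = "\n".join(f"- {doc}" for doc in top_docs)
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
    )
    return llm(prompt)
```

Because the context is limited to the few best matches, each question costs one small LLM call instead of a pass over millions of documents.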
3. Summarization of search results
When searching for a needle in a haystack, a good user experience summarizes the results: it saves users the time of analyzing hundreds of potential matches and gives them the best response, condensed.
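Since hundreds of results may not fit in one model context, summarization is often done in two passes: summarize batches of results, then condense the partial summaries. A sketch of that map-reduce shape, again with `llm` as a hypothetical model client:

```python
from typing import Callable

def summarize(docs: list[str], llm: Callable[[str], str],
              batch_size: int = 10) -> str:
    """Condense many search results into one summary.

    Map step: summarize each batch of results separately, so no single
    prompt exceeds the model's context limit. Reduce step: combine the
    partial summaries into one final answer. `llm` is a placeholder for
    any prompt-in, text-out model client.
    """
    batches = [docs[i:i + batch_size] for i in range(0, len(docs), batch_size)]
    partials = [
        llm("Summarize these search results:\n" + "\n".join(batch))
        for batch in batches
    ]
    if len(partials) == 1:
        return partials[0]
    return llm("Combine these summaries into one:\n" + "\n".join(partials))
```

For a corpus of 25 results and a batch size of 10, this makes three map calls and one reduce call, rather than one oversized prompt.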