AI-assisted customer data analysis is often unreliable: outputs are riddled with errors, invented evidence, and generic insights. Caitlin Sullivan shares techniques for extracting trustworthy user insights from LLMs like ChatGPT, Claude, and Gemini. The core problems are AI's tendency to generate plausible but fabricated quotes and its struggles with unstructured interview data and ambiguous survey responses. To combat invented evidence, Sullivan advises defining strict "quote rules" and verifying every quote the AI returns against the source material. To avoid generic insights, she recommends loading prompts with project, business, product, and participant context. Different LLMs have different strengths: Claude for thorough analysis, Gemini for evidenced themes, and ChatGPT for final framing.
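The "verify quotes" step can be partially automated. Below is a minimal sketch (not Sullivan's own tooling) that checks whether each model-returned quote appears verbatim in the source transcript, after normalizing whitespace, case, and curly quotes so cosmetic differences don't mask a genuine match. The function and variable names are illustrative assumptions.

```python
import re

def normalize(text: str) -> str:
    # Unify curly quotes/apostrophes and collapse whitespace so
    # cosmetic differences don't mask a genuine verbatim match.
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    return re.sub(r"\s+", " ", text).strip().lower()

def verify_quotes(quotes: list[str], transcript: str) -> tuple[list[str], list[str]]:
    """Split model-returned quotes into verified (found verbatim in the
    transcript) and unverified (possibly invented) lists."""
    haystack = normalize(transcript)
    verified, unverified = [], []
    for q in quotes:
        (verified if normalize(q) in haystack else unverified).append(q)
    return verified, unverified

transcript = "I tried the export button twice and it just froze my screen."
quotes = [
    "it just froze my screen",           # genuinely in the transcript
    "the export feature ruined my day",  # plausible but invented
]
ok, flagged = verify_quotes(quotes, transcript)
# "flagged" holds quotes that need manual review before they reach a report
```

Anything that lands in the unverified list gets manually rechecked or discarded, which is the spirit of Sullivan's quote rules: no quote reaches the final analysis unless it can be traced back to the raw data.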