
The rapid adoption of AI has created a powerful new class of OSINT-ready data sources: exposed private LLM conversations and, secondarily, leaked system prompts. Over the past year, users and organizations have inadvertently exposed millions of private ChatGPT-style conversations through public share links, misconfigured integrations, browser extensions, API logs, and app-side vulnerabilities. Alongside these, many companies have leaked internal system prompts containing workflow logic, decision rules, guardrails, and operational details. Together, these exposures form a new intelligence discipline: PromptINT.

This talk demonstrates how exposed private conversations and leaked system prompts can be ethically collected, validated, and analyzed to extract valuable intelligence. These artifacts often reveal internal processes, escalation paths, business logic, authentication flows, and security assumptions, creating opportunities for OSINT practitioners to map organizational behavior, identify vulnerabilities, enhance phishing investigations, and understand how AI systems influence real-world decision-making. We will also walk through two real enterprise use cases in which leaked LLM data directly helped answer concrete organizational questions and uncover hidden operational risks.