Recent AI advances have produced “Deep Research” offerings from companies like Google, OpenAI, and Perplexity, but the term still lacks a clear definition. This post surveys the different Deep Research implementations and finds a common core: using large language models (LLMs) to generate reports through iterative rounds of search and analysis. The techniques range from hand-built Directed Acyclic Graph (DAG) workflows to trained models such as Stanford's STORM, and they differ in both research depth and overall sophistication. These developments illustrate an evolving landscape, though the naming conventions remain confusing.
https://leehanchung.github.io/blogs/2025/02/26/deep-research/
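To make the shared pattern concrete, below is a minimal sketch of the iterative search–analyze–synthesize loop the post describes. The helper functions `call_llm` and `web_search` are hypothetical placeholders, not any vendor's actual API, and the loop structure is an illustrative assumption rather than a specific implementation from the post.

```python
# Minimal sketch of a "Deep Research" loop: search, analyze, repeat, then synthesize.
# `call_llm` and `web_search` are hypothetical stand-ins; swap in real clients.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real model client."""
    raise NotImplementedError

def web_search(query: str) -> list[str]:
    """Hypothetical search call; replace with a real search API."""
    raise NotImplementedError

def deep_research(topic: str, max_rounds: int = 3) -> str:
    notes: list[str] = []
    queries = [topic]
    for _ in range(max_rounds):
        # 1. Search: gather snippets for the current set of queries.
        for q in queries:
            notes.extend(web_search(q))
        # 2. Analyze: ask the LLM which follow-up queries would deepen the report.
        followups = call_llm(
            f"Given these notes, list follow-up search queries for a report on '{topic}':\n"
            + "\n".join(notes)
        )
        queries = [q.strip() for q in followups.splitlines() if q.strip()]
        if not queries:  # nothing left to investigate
            break
    # 3. Synthesize: produce the final report from the accumulated notes.
    return call_llm(
        f"Write a structured report on '{topic}' using these notes:\n" + "\n".join(notes)
    )
```

More sophisticated variants (DAG pipelines, trained systems like STORM) mainly differ in how they plan the queries and structure the final synthesis, but the search-then-analyze cycle above is the common skeleton.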