March 2025

Asking Good Questions Is Harder Than Giving Great Answers

AI intelligence assessments focus on answering complex questions, but meaningful inquiry begins with asking good questions. The author reflects on their poor performance in an AI exam, highlighting biases in the test and the importance of framing compelling historical queries. Successful historical research often stems from unique questions that reveal insights, suggesting current AI assessments miss this critical aspect of human intelligence.

https://newsletter.dancohen.org/archive/asking-good-questions-is-harder-than-giving-great-answers/

No One Knows What the Hell an AI Agent Is

Tech industry is buzzing about AI agents, but definitions are unclear. Companies like OpenAI, Microsoft, and Salesforce have varying interpretations, leading to confusion and frustration. With marketing influencing terms, standardization is lacking, causing challenges in measuring effectiveness and aligning expectations. The ambiguity in defining AI agents is both an opportunity for customization and a risk for misaligned goals.

https://techcrunch.com/2025/03/14/no-one-knows-what-the-hell-an-ai-agent-is/

Browse No More

TLDR: The joy of web browsing has diminished as AI answer engines like ChatGPT and Perplexity take over, prioritizing convenience over exploration. Users sacrifice control and serendipitous discovery for efficiency, leading to a homogenized internet experience. Challenges include weak attribution, lack of transparency in search processes, and uninspired content. To preserve web diversity, AI tools should embrace intentional personalization and transparency, creating a more engaging and tailored browsing experience that connects users to unique voices online.

https://paulstamatiou.com/browse-no-more

Preparing for the Intelligence Explosion

Extreme TL;DR:

“Preparing for the Intelligence Explosion” discusses the potential for AI to accelerate technological progress equivalent to a century's advancements in just a decade. This rapid evolution presents both opportunities and significant challenges, termed grand challenges, including risks from AI, new technologies, and socio-political upheavals. The authors argue for proactive AGI preparedness to address these challenges before they arise, emphasizing that relying solely on aligned superintelligence is insufficient. Suggested preparations include creating policies to prevent power concentration, empowering responsible actors, and improving collective decision-making processes. The paper emphasizes the need for readiness amid accelerated change and uncertainty.

https://www.forethought.org/research/preparing-for-the-intelligence-explosion

VACE

VACE: All-in-One Video Creation and Editing tool by Zeyinzi Jiang et al. Enables diverse video generation and editing features (e.g., Move, Swap, Reference, Expand, Animate). Examples showcase various scenes and styles. Includes re-rendering capabilities preserving content, structure, subject, posture, and motion. Acknowledges contributors and sources design template.

https://ali-vilab.github.io/VACE-Page/

AI Search Engines Cite Incorrect Sources at an Alarming 60% Rate, Study Says

A study by Columbia Journalism Review reveals that AI search engines incorrectly cite sources over 60% of the time, significantly misinforming users and disregarding publisher requests. Testing eight AI search tools found error rates varying from 37% to 94%. Many models fabricated URLs and failed to respect robots.txt settings, creating tension for publishers. Despite premium versions performing poorly, industry leaders call for improvements while users are cautioned against expecting high accuracy from free AI tools.

https://arstechnica.com/ai/2025/03/ai-search-engines-give-incorrect-answers-at-an-alarming-60-rate-study-says/

AI Search Has a Citation Problem

AI search tools struggle with accurate citations, often providing incorrect or fabricated information. A study comparing eight generative search engines found over 60% of responses incorrect. Premium models had higher error rates due to confidently incorrect answers. Chatbots frequently ignored publisher preferences and misattributed content. Despite formal licensing deals, citation accuracy remains low, harming both publishers and users. Overall, generative tools present risks in information reliability, with potential negative impacts on the news industry.

https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php

A Bear Case: My Predictions Regarding AI Progress — LessWrong

AI advancement will likely experience diminishing returns, with incremental improvements in models like GPT-4.5, but no breakthrough to AGI. Current LLMs reliable for specific tasks but won't generalize well. Agi labs popularize hype, masking limitations. Predictions suggest slow integration into tools without transforming jobs. Future might bring new approaches, but significant advancements expected by 2030 are uncertain. Overall, skepticism prevails regarding claims of LLM capabilities and potential.

https://www.lesswrong.com/posts/oKAFFvaouKKEhbBPm/a-bear-case-my-predictions-regarding-ai-progress

Prompt Injection Explained, November 2023 Edition

Prompt injection is a security vulnerability affecting AI applications, enabling users to bypass original programming instructions and issue alternative commands. Despite over a year of discussion, effective solutions remain elusive. This poses risks, especially for AI assistants handling private data, as they could inadvertently follow harmful user-issued instructions. Key applications, particularly involving sensitive information like emails or law enforcement reports, are under threat. Continued efforts in AI development face challenges due to this unresolved vulnerability.

https://simonwillison.net/2023/Nov/27/prompt-injection-explained/

Scroll to Top