Things We Learned About LLMs in 2024

2024 LLM Highlights:
GPT-4 Competition: Models from 18 organizations now outperform the original GPT-4, including Google’s Gemini 1.5 Pro, which accepts up to 2 million tokens of input.
Local ML Power: Capable LLMs now run on personal laptops, a sign of how much more efficient models have become (a minimal local-inference sketch follows this list).
Lower Costs: Per-token prices dropped sharply thanks to competition and efficiency gains, making bulk usage affordable (e.g., Google’s Gemini 1.5 Flash pricing; a back-of-the-envelope cost calculation follows this list).
Multi-Modal Advances: Most major models gained multi-modal capabilities, handling images and, increasingly, audio and video.
Voice Integration: Realistic audio input and output arrived, making voice a practical way to interact with models.
App Creation Commoditization: Generating small interactive apps from a single prompt became commonplace, a routine demonstration of what these models can do.
Temporary Free Access: The best available models were free to use for a few months before the frontier moved back behind paid subscriptions.
Agents Concept Stagnation: The term “agents” still lacks a settled definition, and agent systems have yet to deliver on their promise.
Evaluations Essential: Effective automated evals became crucial for building strong LLM applications (a minimal eval-loop sketch follows this list).
Apple's ML Progress: Apple’s MLX library made it efficient to run models on Apple Silicon, though Apple Intelligence’s own LLM features fell short.
Reasoning Model Advances: New inference-scaling “reasoning” models (such as OpenAI’s o1) improve answers by spending more compute at inference time.
Cost-Effective Training: Frontier-class models can now be trained far more cheaply; DeepSeek v3 was reportedly trained for under $6 million, a sign of how quickly training efficiency is improving.
Environmental Impact Mixed: Per-model training efficiency improved, but the race to build ever-larger data centers still carries a significant environmental cost.
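
To make the local-model point concrete, here is a minimal sketch of running a quantized open-weights model on a laptop with the llama-cpp-python bindings. The GGUF file name, thread count, and generation settings are placeholder assumptions rather than details from the post; on Apple Silicon, Apple’s MLX library offers a similar load-and-generate workflow.

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF file path below is a placeholder; any quantized open-weights model works.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3.2-3b-instruct-q4_k_m.gguf",  # hypothetical local file
    n_ctx=4096,    # context window to allocate
    n_threads=8,   # CPU threads; tune for your laptop
)

output = llm(
    "Summarize what changed for LLMs in 2024 in one sentence.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```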
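
The lower-costs point is easiest to appreciate with a quick back-of-the-envelope calculation. The per-million-token prices, batch size, and token counts below are illustrative placeholders rather than quoted rates; substitute a provider’s current pricing.

```python
# Back-of-the-envelope cost estimate for a bulk LLM job.
# Prices are illustrative placeholders (USD per million tokens), not official rates.
PRICE_PER_M_INPUT = 0.075
PRICE_PER_M_OUTPUT = 0.30

def job_cost(n_requests: int, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of n_requests, each with the given per-request token counts."""
    total_in = n_requests * input_tokens
    total_out = n_requests * output_tokens
    return (total_in / 1e6) * PRICE_PER_M_INPUT + (total_out / 1e6) * PRICE_PER_M_OUTPUT

# e.g. a bulk captioning job: 50,000 requests at ~300 input and ~100 output tokens each
print(f"${job_cost(50_000, 300, 100):.2f}")
```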
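
On evals, a useful automated suite can start as nothing more than a list of prompts paired with programmatic checks and a pass rate. The call_model stub and the cases below are hypothetical stand-ins for a real LLM client and application-specific checks.

```python
# Minimal automated-eval sketch: run each case through the model and score it.
# `call_model` is a hypothetical stand-in for whatever LLM client you use.
from typing import Callable

def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

# Each case pairs a prompt with a simple programmatic check of the output.
CASES: list[tuple[str, Callable[[str], bool]]] = [
    ("What year did GPT-4 launch? Answer with the year only.",
     lambda out: "2023" in out),
    ("Reply with valid JSON containing a 'status' key.",
     lambda out: '"status"' in out),
]

def run_evals() -> float:
    passed = 0
    for prompt, check in CASES:
        try:
            passed += check(call_model(prompt))
        except Exception:
            pass  # a crash counts as a failure
    return passed / len(CASES)

if __name__ == "__main__":
    print(f"pass rate: {run_evals():.0%}")
```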

https://simonwillison.net/2024/Dec/31/llms-in-2024/
