The Information has a simple mission: deliver important, deeply reported stories about the technology business you won't find elsewhere.
Sep 29, 2025

The Information Forum Digest
The top posts from The Information's subscriber-only community.

Featured posts

Posted by Brian Demsey · Founder · 8 hours ago
It's easier to train a puppy than an LLM—at least with a puppy you can smell a mess. With large language models, the failures hide in...

Top comments

Robert Dvorak
Distyl AI's raise highlights the demand for hands-on AI expertise. Forward-deployed engineers and quick integrations solve immediate needs — but history shows nearly all AI pilots collapse before reaching scale. The reason? Enterprises are still trying to graft AI onto Traditional Operating Models (TOMs) that were never built to carry it.

What's needed isn't just adoption — it's operationalization. A Business Operating System (BOS), harmonizing AI × IT × Human Intelligence, changes the equation from incremental automation to exponential operating leverage. That's where revenue growth, cost optimization, and resilience converge.

Capital is pouring into adoption. The next wave of leaders will be those who deliver operationalization.
Bob

Josh Bersin
Benioff has been a terrific cheerleader for AI, but now reality sets in. All the hard work is in redesigning jobs, business processes, and customer experiences — it's not as easy as "buying a tool and turning it on." So it's not surprising that Salesforce's AI-centric revenues are lagging (much of their tech is quite dated).

John Collins
The Apple iPhone analogy reveals a fundamental misunderstanding of AI infrastructure complexity: an iPhone is a complete consumer product, while an H100 GPU is a specialized component requiring billions in supporting infrastructure to be operationally useful. Martin's assertion that there's "surely a simpler way" for NVIDIA staff to use their own product misses that Lambda's infrastructure services (and CoreWeave's) ARE the simpler way, providing extreme power-density management (100kW+ per rack), specialized InfiniBand networking, direct-to-chip liquid cooling, and orchestration software that would require massive capital misallocation for NVIDIA to replicate internally.

NVIDIA isn't engaging in financial engineering; it is following sound technical strategy by outsourcing non-core infrastructure operations to specialists while focusing on chip design R&D. Building competing cloud infrastructure would force NVIDIA into utility-scale data center operations, a completely different business model that could create channel conflicts with major customers like Microsoft and Amazon, who buy billions in NVIDIA hardware. The dismissal of operational necessity overlooks that these partnerships solve real capacity constraints (the same bottlenecks causing months-long GPU wait times at major cloud providers) through legitimate division of labor between component manufacturing and infrastructure services, not the "round-trip" financial manipulation suggested.