Big Tech & AI, Coding Agents, and Investing in Intangibles
Issue #4 | Margin of Signal: June 29, 2025
A weekly curation of the highest-signal content across investing, business, tech, and AI.
This week, we explore:
AI & Big Tech positioning
The future of coding with Cursor’s CEO
Coding agent rankings
Michael Mauboussin’s Intangibles and Modern Value Investing presentation
Geopolitical risks in the US, China, and beyond
Growth investing & enabling technology companies
Checking In on AI and the Big Five (Stratechery)
Ben Thompson revisits his January 2023 “AI and the Big Five” framework to assess where Apple, Google, Meta, Microsoft, and Amazon stand after two and a half years of developments in AI land.
Each company’s AI posture still maps neatly onto Clayton Christensen’s “sustaining vs. disruptive innovation” lens, but the stakes and the spending have soared.
Apple remains a classic sustaining player: its devices gain from AI (on‑device LLMs, semantic indexes), but Apple Intelligence has underwhelmed so far. Thompson argues that Apple’s real lever is hardware differentiation. This could mean partnering deeply (iOS 26 + OpenAI) or, preferably, making new form factors such as AR/VR or robotics to lock in cloud‑augmented experiences.
Google is the disruptive outlier: top‑tier infrastructure, best data, and Gemini’s model prowess threaten Search’s ad‑driven cash cow. Yet integration pitfalls (TPU lock‑in) and underwhelming real‑world adoption relative to OpenAI show that “perfect models” don’t guarantee product wins. On the other hand, Google Cloud’s AI gravity could steal enterprise workloads if Google can thread the product needle.
Meta’s hiring spree (Zuckerberg’s “superintelligence” lab) underscores a newfound urgency. Generative UIs and XR gambles hinge on getting AI right.
Thompson’s surprise wasn’t just Meta’s technical missteps but its leadership vacuum. Zuckerberg admitted as much by recruiting Alexandr Wang, Nat Friedman, and Daniel Gross.
If Meta can monetize AI‑driven attention as well as it does ads, the upside is massive; if not, time spent in chatbots is time lost for the core business.
Microsoft has leaned into a “platform plus copilot” strategy, bundling Copilot features across Office 365, Azure, and GitHub to drive both lock‑in and new demand. Thompson highlights that, unlike standalone models, Microsoft’s bet is on embedding AI as a default layer, converting existing enterprise customers into active AI users. Notably, Microsoft’s strength lies in synchronizing cloud‑scale infrastructure with SaaS adoption, making AI an almost invisible productivity tax for businesses.
Amazon is taking a two‑pronged approach: internally, AWS invests heavily in AI Ops (automated monitoring, anomaly detection) to optimize its own cloud; externally, it layers generative personalization across retail, advertising, and logistics. Thompson notes that Amazon’s edge is data breadth (transactional, shipping, customer service) and its ability to feed real‑time signals into models.
The major implication is that Amazon can both reduce its cost base and elevate margins, while offering AI‑driven services that competitors struggle to match in end‑to‑end integration.
Towards the end of the article, Thompson shifts the lens to the foundation model makers: OpenAI, Anthropic, xAI, and Meta.
Foundation model makers form the backbone of the AI ecosystem, with OpenAI uniquely positioned as the de facto consumer AI provider, Anthropic carving out a developer‑centric niche through coding products and API revenue, and xAI hampered by capital‑intensive infrastructure bets.
He cautions that Big Tech incumbents who are not model makers must forge deep partnerships or build in‑house capabilities, because owning the end‑user relationship—whether via subscriptions, ads, or API integrations—will determine who captures the long‑term value in AI.
Cursor’s Vision for AI‑Native IDEs (Y Combinator)
In his YC Startup Library interview, Cursor CEO Michael Truell lays out the company’s backstory and his vision for the future of coding.
Cursor is an AI‑powered integrated development environment (IDE) that transforms natural‑language prompts into working code, acting as a real‑time co‑pilot to amplify developer productivity. It’s arguably the hottest AI startup outside of OpenAI and Anthropic.
AI already generates 40–50% of the lines of code written by Cursor’s paid power users, and usage among these users is the company’s north‑star metric; they use Cursor multiple times a day, several days a week.
“The goal with the company is to replace coding with something that’s much better… We think that over the next 5 to 10 years, it will be possible to invent a new way to build software that’s higher level and more productive, distilled down to defining how you want the software to work and how you want it to look.”
- Michael Truell
Truell noted that two engineering bottlenecks stand out:
Context windows (massive codebases exceed even million‑token models)
Continual learning (hourly feedback vs. occasional fine‑tuning)
Cursor’s solution is to wire runtime logs, design files, and chat threads directly into the model’s training loop, turning every “reject” or tweak into fuel for smarter agents.
Still, taste remains irreplaceable. Truell emphasizes that perfect code is not just correct, it must feel right.
Layout, UX pacing, naming conventions—these soft judgments are where human “logic designers” retain their edge. For founders, the lesson is clear: build feedback loops not just for functional accuracy, but for the gestalt of good code.
The interview also had some interesting commentary on Cursor’s founding story. The team originally targeted mechanical‑engineering CAD workflows, only to discover that customer interviews yield generic feedback when the interviewers lack deep domain expertise.
Truell mentioned that Cursor employees probably would have been better off going “undercover at a mechanical engineering company” instead of conducting user interviews, since the founding team lacked the necessary domain expertise.
After struggling to find product-market fit, they decided to pivot to general code, where AI assistance scales across millions more developers.
Fresh off a $900 million funding round, that move looks quite prescient.
Coding Agents Deep Dive: What Works, What Doesn't & What's Next
Coding agents like Cursor are transforming programming from writing code in languages like Python and C++ to giving instructions in plain English. While they have notable limitations today, they're already making programmers more productive and will only improve.
Timothy B. Lee (Understanding AI Substack) tested seven coding agents to see if the hype matches reality.
The Task: Build a website to search and browse Waymo crash data by merging two spreadsheets—testing how well coding agents handle real-world problems versus "canned" examples.
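The heart of the task is a spreadsheet join. As a rough illustration (not Lee’s actual dataset — the column names and values here are hypothetical), the merge step the agents had to get right looks something like this in pandas:

```python
import pandas as pd

def merge_crash_data(incidents: pd.DataFrame, details: pd.DataFrame) -> pd.DataFrame:
    """Left-join supplemental detail fields onto the incident list,
    keeping every incident even when its details row is missing."""
    return incidents.merge(details, on="incident_id", how="left")

# Tiny illustrative frames standing in for the two spreadsheets.
incidents = pd.DataFrame({
    "incident_id": [1, 2, 3],
    "date": ["2024-01-05", "2024-02-11", "2024-03-20"],
})
details = pd.DataFrame({
    "incident_id": [1, 3],
    "severity": ["minor", "major"],
})

merged = merge_crash_data(incidents, details)
print(merged)
```

Trivial for a person who knows pandas, but it tests whether an agent handles the messy parts — mismatched keys, missing rows — rather than a canned demo.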
The Results (ranked from best to worst):
Claude Code: Best overall, completing tasks with minimal backtracking
Cursor: Solid performance after initial setup struggles
OpenAI Codex: Got the job done but with poor formatting and quirks
Windsurf: Fast initial setup but brittle—small changes broke functionality
Lovable: Created attractive designs but struggled with data import
Replit: Worked for 17 minutes but couldn't create a functioning website
Bolt.new: Failed immediately due to file size limits
Tradeoffs
There's a clear tension between user-friendliness and horsepower:
"Vibe coding" platforms (Bolt, Lovable, Replit) promise one-prompt websites for non-programmers but struggle with complex or unusual requests
Professional tools (Claude Code, Cursor, Codex) require technical setup and command-line knowledge, but offer significantly more versatility
Lee mentioned:
I want to start by repeating what I said in yesterday’s post: it’s amazing that these products work as well as they do. Like the computer-use agents, these coding agents are the result of combining reinforcement learning with tool use. They all have significant limitations, but all of them are also far more powerful than any coding tools that existed even a year ago.
How the pros use these tools
Unlike vibe coding, professionals use two key strategies:
1. Comprehensive Context Files: Companies maintain detailed guidelines (hundreds of lines) that specify which tools to use, coding standards, and company-specific requirements.
2. Plan-First Approach: Agents write detailed implementation plans that human programmers review and refine before execution.
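To make the first strategy concrete, here is a hypothetical sketch of what such a context file might contain (this is an illustration, not an example from Lee’s article — conventions vary by tool, e.g. a CLAUDE.md for Claude Code or rules files for Cursor):

```markdown
# Hypothetical agent context file (illustrative only)

## Stack
- Python 3.12, FastAPI backend, React frontend

## Conventions
- Use type hints everywhere; run the linter before committing
- Never add a new dependency without asking first

## Workflow
- Write an implementation plan and wait for approval before editing files
```

The point is that professionals front-load constraints so the agent doesn’t guess, which pairs naturally with the plan-first approach.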
We're witnessing the next step in programming language evolution: from assembly language to high-level languages, and now to natural-language instructions. This doesn't eliminate programmers; it changes what they do.
Someone still needs to define objectives, provide precise instructions, understand system architecture, and evaluate results.
This new paradigm means that programmers have to spend a lot less time sweating implementation details and tracking down minor bugs. But what hasn’t changed is that someone needs to figure out what they want the computer to do—and give the instructions precisely enough that the computer can follow them. For large software projects, this is going to require systematic thinking, awareness of tradeoffs, attention to detail, and a deep understanding of how computers work. In other words, we are going to continue to need programmers, even if most of them are writing their code in English instead of C++ or Python.
In other words, agentic AI speeds up how work gets done, but someone still needs to decide what tasks it should tackle (and then evaluate whether it’s done a good job).
This pattern will likely repeat across professions.
The technology augments expertise rather than replacing it. Legal agents will help lawyers, but lawyers will still need to direct tasks and apply professional judgment.
Bottom Line
Coding agents represent a fundamental shift in how we interact with computers, but they're tools that amplify human expertise rather than replace it. This dynamic reinforces my thesis that the proliferation of AI and AI agents will make domain expertise and human agency more valuable, not less.
The future belongs to professionals who can effectively direct and evaluate AI agents, not to those who simply prompt and hope.
Intangibles and Modern Value Investing (Michael Mauboussin)
When Mauboussin speaks, I listen.
His Intangibles and Modern Value Investing talk highlights a critical recalibration: intangibles now represent the majority of S&P 500 market value, yet headline earnings understate true profitability.
Mauboussin explains why we should care about this topic, and why you need to be careful when looking at the S&P 500 multiple today compared to the past (quote edited for clarity):
If we apply these data adjustments to the S&P 500, our analysis indicates that S&P earnings are actually 10% to 15% higher than what's reported as stated operating earnings. This automatically means that multiples are around 10% lower than their headline figures. Therefore, when comparing historical data, it's crucial to be mindful that you're not always comparing "apples to apples" beyond a certain point.
Book‑to‑price and P/E comparisons across eras become misleading without adjusting for R&D, software, brand equity, and other intangible investments.
Companies heavy in intangibles tend to grow faster but also carry wider dispersion, so base‑rate expectations must shift accordingly.
Analysts and founders should break out intangible investments from SG&A or capex line items, model their amortization, and overlay competitive moats. Investors can update their value factor definitions to include intangible‑adjusted multiples and leverage scenario analysis for higher‑volatility growth pathways.
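As a back-of-envelope illustration of the kind of adjustment Mauboussin describes (the numbers below are hypothetical, not his): capitalize R&D instead of expensing it, amortize it straight-line over an assumed useful life, and recompute earnings and the multiple.

```python
# Sketch of capitalizing R&D (hypothetical numbers, 5-year straight-line life).
# Instead of expensing this year's R&D, add it back to earnings and
# subtract amortization of the trailing R&D "asset".

def adjusted_earnings(reported_earnings, rd_history, useful_life=5):
    """rd_history: annual R&D spend, most recent year last."""
    current_rd = rd_history[-1]
    # Each of the last `useful_life` R&D vintages amortizes
    # 1/useful_life of its cost this year.
    amort = sum(rd_history[-useful_life:]) / useful_life
    return reported_earnings + current_rd - amort

reported = 100.0             # stated operating earnings
rd = [40, 44, 48, 52, 60]    # growing R&D spend, most recent last
adj = adjusted_earnings(reported, rd)   # 100 + 60 - 48.8 = 111.2

price = 2000.0
print(f"reported P/E: {price / reported:.1f}")   # 20.0
print(f"adjusted P/E: {price / adj:.1f}")        # ~18.0
```

Note that for a company with growing R&D, adjusted earnings come out roughly 11% higher here, consistent with the 10–15% figure in the quote above; for flat R&D, the adjustment nets to zero.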
As software and AI continue to eat the world, the importance of accounting for intangibles will only increase.
Geopolitical Risks and the Debt Clock (Ken Rogoff × Dwarkesh Patel)
Ken Rogoff (Harvard, ex‑IMF) delivers a sobering diagnosis of global macro: within a decade, the U.S. faces a debt‑induced inflation crisis, though not a Japan‑style financial depression, and China is trapped by financial repression and state‑directed investment that deepens its malaise.
Rogoff argues that America’s “exorbitant privilege” will erode as high debt forces refinancing at rising rates, leading to hundreds of billions in extra interest expense, and that AGI’s long‑term impact on deficits is uncertain. He sees a coming rebalancing toward foreign equities as the dollar’s dominance wanes under competitive central‑bank digital currencies.
So, how should we position ourselves?
Investors must stress-test portfolios against inflation's return and favor markets with sound fiscal footing.
Business leaders face competitive upheaval as central-bank digital currencies and AI-powered economic management reshape the playing field.
Rogoff's analysis cuts through the complacency: both the US and China stand at economic crossroads, and neither can afford to stumble.
The era of easy money is ending, and the age of hard choices has begun.
Enabling Technology Companies: ASML, TSMC, MercadoLibre and Beyond
Speedwell Research interviewed multi-billion-dollar global tech fund manager William de Gale on Nvidia, tech enablers vs disruptors, and growth investing.
This was a mind‑bending conversation, and I enjoyed the discussion on investing in “enabling” technology companies (firms whose products or platforms unlock entire ecosystems).
ASML’s extreme ultraviolet lithography machines, TSMC’s contract manufacturing, and MercadoLibre’s payments and logistics network each amplify thousands of downstream businesses. William de Gale explains that these business models combine high switching costs, pricing power, and network effects in a unique way.
Key takeaways for investors
Look for capital‑intensive leaders with near‑monopolistic positions in their niche, because these firms set the technology frontier and benefit from long‑duration cash flows.
Even when end‑market demand softens, enabling players can sustain margins through maintenance, upgrades, or value‑added services.
Valuation multiples may compress less during downturns, given the structural resilience of their moats.
That’s all for this week. We’ll be back in your inbox next Sunday with another Margin of Signal.
Thanks for reading,
Chima