I think a lot of marketers are abandoning traditional SEO for the wrong reasons. They’re looking at AI search, AI answers, vector databases, semantic retrieval, and they’re acting like the old fundamentals don’t matter anymore. I think that’s a huge mistake.

What they’re seeing is the surface layer. They’re seeing prompts, generated answers, vector search, semantic search, all of that. And they’re deciding that links don’t matter anymore, traditional SEO doesn’t matter anymore, and all of the old rules are gone.

I don’t think that’s true at all. And I think it’s a disastrous way to approach the current evolution of digital marketing.


ChatGPT and other LLMs still have a search engine problem. Right now, they can get away with leaning on external systems. They can pull in search results from Google or Reddit, and use the language model to generate something that feels direct and useful.

That works for now. But it doesn’t make sense in the long run for any major LLM.

At some point, they need deeper control over search. They need their own search indexes. They need stronger ranking. They need better freshness. They need ways to deal with spam and low-quality content at scale. And the more of that search quality they have to own themselves, the harder the problem becomes.

The basic mechanics are not the hard part. Building an inverted index is not difficult. TF-IDF is basic math. BM25 is only slightly less basic math for ranking text documents. You can build workable versions of that stuff, by hand, without an LLM, in a day or two.
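To make that concrete, here’s a rough sketch of that weekend project: a tiny in-memory inverted index with BM25 scoring. The documents and the k1/b values are illustrative defaults, not anything from a production engine.

```python
import math
from collections import Counter, defaultdict

# Three tiny "pages"; doc "c" is deliberately keyword-stuffed spam.
docs = {
    "a": "cheap flights to tokyo with flexible dates",
    "b": "tokyo travel guide for first time visitors",
    "c": "buy cheap cheap cheap flights flights flights",
}

# Build the inverted index: term -> {doc_id: term frequency}.
index = defaultdict(dict)
doc_len = {}
for doc_id, text in docs.items():
    tokens = text.split()
    doc_len[doc_id] = len(tokens)
    for term, tf in Counter(tokens).items():
        index[term][doc_id] = tf

N = len(docs)
avg_len = sum(doc_len.values()) / N
K1, B = 1.5, 0.75  # standard BM25 free parameters

def bm25(query):
    """Score every document against the query with BM25."""
    scores = defaultdict(float)
    for term in query.split():
        postings = index.get(term, {})
        df = len(postings)
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        for doc_id, tf in postings.items():
            saturation = tf * (K1 + 1) / (tf + K1 * (1 - B + B * doc_len[doc_id] / avg_len))
            scores[doc_id] += idf * saturation
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(bm25("cheap flights tokyo"))  # the spam page "c" ranks first
```

Run it and the keyword-stuffed page comes out on top. The mechanics work; the judgment doesn’t exist yet.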

The hard part is deciding what deserves to rank.


That’s where the real search engine work always lived. How do you identify spam? How do you evaluate authority? How do you compare documents that all look relevant on the surface? How do you stop garbage from flooding the system?

If your answer is vector similarity, that gets you almost nowhere on those problems. Vector search can tell you which documents are close in meaning to a prompt, or whether two documents are duplicate content. That’s insanely useful.
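Here’s a toy illustration of the limit. The vectors below are hand-made stand-ins for real embeddings, and the pages are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hand-made stand-ins for embedding vectors; a real system would
# get these from an embedding model.
prompt      = [0.90, 0.10, 0.00, 0.20]
expert_page = [0.80, 0.20, 0.10, 0.30]  # well-researched article
spam_page   = [0.85, 0.15, 0.05, 0.25]  # auto-generated clone of the same topic

print(cosine(prompt, expert_page))  # ~0.98
print(cosine(prompt, spam_page))    # ~0.99 -- similarity can't see quality
```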

But that only solves similarity. It doesn’t solve authority. It doesn’t solve credibility. It doesn’t solve the problem of tens of thousands of pages that all look semantically similar but vary wildly in quality and authority. It doesn’t solve the fact that the internet is full of spam.

That’s why links and spam detection still matter. Not because we’re trapped in 2003. They matter because they remain among the most important signals for solving ranking quality. And once AI companies have to solve more of the authority problem themselves, they’re going to need more and more of those signals to get there.

If you don’t understand how hard a problem this is, let me help you:

OpenAI has had over 13 billion dollars of investment from Microsoft. But when it came to scraping search results, they decided to scrape Google results instead of Bing. And it appears that’s still the case.

Let me say that another way: the second-best search engine on the planet, which has had tens of billions of dollars invested into it and has been around for decades, is still apparently not good enough to provide quality input for augmenting an LLM’s responses.

And without going into the details, LLMs still have to be trained in a way that weights more authoritative and trustworthy information more heavily. How do you think that trust and authority gets calculated?


That’s why I think a lot of current AI SEO advice is shallow. People say links don’t matter because vectors don’t need links. That’s technically true in a very narrow sense, but it’s missing the real issue. Links are not just navigation. Links are a signal. They help express authority, citation, relationships, and trust.
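PageRank is the classic formalization of that idea: a page earns authority when other authoritative pages link to it. Here’s a minimal power-iteration sketch over a made-up four-page graph (the pages and link structure are invented for illustration):

```python
# A made-up four-page web graph: page -> pages it links to.
links = {
    "blog": ["wiki", "news"],
    "news": ["wiki"],
    "wiki": ["news"],
    "spamfarm": ["blog"],  # linking out confers authority; it doesn't earn any
}
pages = list(links)
N = len(pages)
damping = 0.85
rank = {p: 1 / N for p in pages}

# Power iteration: repeatedly pass each page's rank along its outlinks.
for _ in range(50):
    new_rank = {p: (1 - damping) / N for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))
# "wiki" and "news" end up on top; "spamfarm" stays at the floor,
# because nothing links to it.
```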

And there are probably other signals that will matter more in the future too. Video transcripts are a great example. If you can connect a transcript to a real person, a real channel, real history, real engagement, and a long publishing record, that starts to look like a very useful signal.

It’s not perfect, but it’s useful. So now you’re weighting text, links, history, trust, channel quality, recency, and spam risk all at once. And now you’re right back at the core principles of search and ranking systems.
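As a sketch of what that blending might look like (every feature name and weight below is invented for illustration; production engines learn this blend with ranking models rather than hand-tuning it):

```python
# Hypothetical blended ranking. All names and weights are made up.
WEIGHTS = {
    "text_relevance":    0.30,  # e.g. BM25 or embedding similarity
    "link_authority":    0.25,  # e.g. a PageRank-style score
    "publisher_history": 0.15,  # age and consistency of the channel/author
    "freshness":         0.15,  # recency of the content
    "spam_risk":        -0.15,  # penalties subtract from the score
}

def blended_score(features):
    """Combine normalized feature values (0..1) into one ranking score."""
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

transcript_page = {
    "text_relevance": 0.90,
    "link_authority": 0.40,
    "publisher_history": 0.70,
    "freshness": 0.80,
    "spam_risk": 0.10,
}
print(round(blended_score(transcript_page), 3))  # 0.58
```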


So when I hear people say links are dead or traditional SEO doesn’t matter, I don’t hear insight. I hear a weak understanding of the fundamentals of information retrieval.

AI didn’t erase the value of search engines. It shifted them into a different, but equally critical, layer.

Chat interfaces are still a function of information retrieval. They still need ranking. They still need filtering. They still need trust signals. They still need ways to deal with spam, authority, and recency. And LLMs by themselves are not reliable enough to replace that for one simple reason: they don’t know when they don’t know.

That’s why I think the smart move right now is not to abandon traditional SEO. It’s to understand it more deeply and keep investing even more in the fundamentals. Because the companies building the next generation of AI systems need PageRank-style web graph signals, ranking systems, news aggregation, and stronger search engine infrastructure behind the scenes.

And when that happens, the people who kept building on solid search ranking signals are going to be in a much stronger position than the people who assumed the fundamentals stopped mattering.