By Om Malik
I first met Matthew Prince in 2010, the day he launched Cloudflare at TechCrunch Disrupt. His original pitch for Cloudflare was a “Content Delivery Network (CDN) for the masses” that would take a few minutes to set up. Others like Akamai offered similar services, but for big-budget customers. Cloudflare was going after a burgeoning ecosystem of apps and relatively young services. That simple elevator pitch neatly hid the real “why” of Cloudflare: Prince and his co-founders Lee Holloway and Michelle Zatlyn wanted to build a network that would not only offer simplicity and speed but also protect their customers from emerging internet threats such as denial-of-service attacks.
Having watched Akamai and a few similar companies emerge, I knew Cloudflare was the one to watch. Its simpler, cloud-centric approach was novel; it was the way the modern internet was being built, and it resonated with me. I decided to stay close, not only to Prince, who is a charming and eloquent communicator, but also to Cloudflare itself.
Fast-forward to today: that vision has turned Cloudflare into a $32 billion company (by market capitalization) with almost $1.3 billion in 2023 revenue. Those numbers don’t tell the full story of Cloudflare and its role in the internet’s infrastructure and smooth functioning. Just last month, the San Francisco-based company warded off a world-record 3.8 Tbps distributed denial-of-service (DDoS) attack. DDoS attacks are a routine problem for large infrastructure providers.
A couple of months ago, I sat down with Matthew Prince, Cloudflare CEO, to talk about the future of internet infrastructure. We discussed the sudden global shift to everything online during the COVID-19 crisis. We also explored the balkanization of the internet and its impact on infrastructure, software, and services.
We dug into what role network performance plays in the AI revolution. The rise of artificial intelligence, particularly in the areas of training and inference, is transforming the way we think about network design and deployment of networking resources. Prince is betting on using AI-capable edge computing solutions and “on-ramps” to build this future network. I hope you enjoy this glimpse into the future of internet infrastructure as much as I did.
Here are excerpts from this conversation:
___
OM: We went from Web 1.0 to Web 2.0, which meant moving to the cloud. When mobile came around, there was another step-function change in the evolution of internet infrastructure. With AI, we are seeing the third big shift in internet infrastructure. I wonder what you see ahead, and what role companies like yours will play in a future that demands a big infrastructure scale-up.
Matthew: I think there are two big trends that might lead to changes.
First, the role of AI in internet infrastructure design.
A lot of the attention right now in the AI space is around training, which makes sense to do in traditional hyperscale public clouds. You put a bunch of machines very close to each other, have them hyper-network together, and use that to do training. Those clusters of machines are similar to what you need to predict the weather or model nuclear blasts. They're traditional, you know, high-performance computing clusters, which is effectively what the Amazons and Googles and Microsofts of the world built.
The next part of AI, though, is going to be around inference, which I think will be pushed out as close to users as possible, with more than fifty percent happening on end devices. Whether that's an iPhone or a driverless car, you don't want to have to worry about network latency.
When your driverless car sees a, you know, red ball bouncing out of a yard and a little girl with pigtails chasing after it, you want that decision to be made on the device itself. So I think you're going to see AI decisions that happen locally on devices. Of course, some devices need to be so inexpensive, or need such long battery life, that they can't do that inference locally. In that case, being able to hand off to the network, instead of doing the inference on the device, is going to matter.
However, some models will be too big for devices or require too many resources. So handing off inference to the network will matter. Locality is important for performance, but even more so because the further you are from local, the more foreign the AI feels. It needs some sense of local customs, norms, and rules.
[In AI terms, inference is the process that uses trained artificial intelligence models to make predictions or decisions based on new, unseen data. It uses patterns learned from training data to interpret and respond to new information. End devices such as the latest iPhones and MacBook computers come with chips that allow inference. Apple Intelligence portends a future where machines carry some amount of “intelligence.”
AI might take a cue from the Big Mac. A beef Big Mac in India is a no-no. A Teriyaki Big Mac in Japan is a winner. What seems normal in the U.S. might not be normal in France or Vietnam. That’s why localization will be a key factor for AI services, even more so than for software services and products in the past. Localization is also going to be driven by regulation and the privacy rules of individual geographies. While most don’t think about the impact of local rules on the growth of AI and AI platforms, the reality is we are facing a complex situation. Matthew elaborated on this further during our conversation. -- Om]
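For readers who want to see the training-versus-inference distinction in code, here is a minimal, purely illustrative sketch in Python. The tiny dataset and the scikit-learn model are assumptions made for the example, not anything Cloudflare actually runs.

```python
# Illustrative sketch only: a model is trained once on labeled data,
# then used repeatedly for inference on new, unseen inputs.
from sklearn.linear_model import LogisticRegression

# Training: learn patterns from labeled examples
# (typically done in big, centralized GPU clusters).
X_train = [[0.1, 0.9], [0.8, 0.2], [0.9, 0.1], [0.2, 0.8]]
y_train = [1, 0, 0, 1]
model = LogisticRegression().fit(X_train, y_train)

# Inference: apply the trained model to data it has never seen
# (ideally as close to the user as possible, or on the device itself).
X_new = [[0.7, 0.3]]
print(model.predict(X_new))  # predicted label for the new sample
```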
OM: One of the problems with all these new models and AI devices is the huge network latency. Even though I work on a fast network at home or have 5G on my phone, there is network latency that causes delays in the response. We will actually have a perceptible gap between when we ask and when we get an answer. I wonder, what will the performance of OpenAI or Apple Intelligence be in the real world? What does the content delivery network (CDN) become in this age?
Matthew: We've never really thought of ourselves as a CDN. Starbucks used to talk about sort of the third place -- somewhere that was in between office and home where you did a little bit of work, you did a little bit of socializing, maybe you got a bite to eat. While it is not a perfect analogy, I think at some level, we see the same opportunity for us, to be that third place.
There's going to be a role for edge devices -- for the Apples and Androids and Samsungs of the world. Every driverless car, every Ring doorbell will have some amount of AI inference that happens on those devices. But in some cases, the devices are either going to be too low-powered, or the models are going to be too big.
At the same time, governments outside the US are saying they are not going to repeat what they view as the mistakes of the internet's original rollout. They're not going to allow every piece of their citizens' data to be sent back to the proverbial data center capital of America -- Ashburn, Virginia. All of that suggests a need for a third place, and that's why we have created what we call a connectivity cloud.
OM: Can you elaborate?
Matthew: AI has changed how we're thinking about networking infrastructure; it's caused us to put more inference-capable hardware close to users around the world.
We had a hunch six years ago that there was going to be some opportunity with GPUs located close to users. Four years ago, we floated sort of a trial balloon in a partnership with Nvidia, where we put this stuff out there. The reality was, the market just wasn't ready for it, even though the machines were out there. About 18 months ago, we announced we're going to roll GPUs out everywhere across our infrastructure. We're seeing an incredible uptick in AI models delivered from the edge.
OM: Do you think the fundamental infrastructure -- the routers, switches, cross connects, etc., that we've long forgotten about -- will need to get reconfigured and reinvented for this new era, given the speed data will need to move and the changing nature of queries from text to voice and visual inputs? Or do you think that is a solved problem at present?
Matthew: During COVID (in April 2020), internet traffic doubled in two weeks. That also showed us where a lot of the constraints were. The internet held together, but it wasn't a foregone conclusion: there were definitely places in Europe that had some challenges, and there were days when we were very concerned that the European internet would collapse. But the network itself has actually proven very, very resilient.
COVID ended up being a dress rehearsal. It was a real wake-up call, showing us where the constraints are and accelerating hardware improvements and capacity upgrades. For us, that meant pushing vendors for gear, but also increasingly building our own white-box hardware switches and routers to deploy as cost-effectively as possible.
We've had to invent technologies internally to stay ahead of demand. For example, the connectivity cloud we've built is designed to scale extremely efficiently for the next decade of increased usage. During 2020 and 2021, internet traffic went up a ton, while 2022 was flat. Traffic went up 25% in 2023, and we expect to see an almost 25% increase again this year.
Had we not lived through what we did during COVID, I'm not sure we would have made those architectural improvements, which I think have put us in good stead for the next 10 years of increased network usage.
However, increasing regulation and the "balkanization" of the internet means having to serve things very regionally, often down to the state level in places like India, with different rules, routing, and decisions. That's a different architecture than centralized clouds were designed for. Applications will need to be hyper-regionalized to deal with the extremely complicated and growing regulatory complexity.
The 2030 internet will require everything to be architected differently.
OM: How do you now think about information, misinformation, disinformation, spam, and cyber crimes in this age, especially with the impact of generative AI on content? The complexity seems to be increasing to a new level. As a forward-thinking, security-focused company, how are you addressing this changing reality of content, misinformation, and disinformation?
Matthew: For us, this is a much easier problem than it is for people who live further up the stack than us. Because we're not directly interacting with the content, we're sort of at the bits and bytes level, not the words and letters level.
In a particular region, content is either legal or illegal. If it's illegal, then we block it in that region. That doesn't mean it's illegal in Canada just because it's illegal in the US. So we can, and do, follow the local laws.
Take China, for example. At some level, China should be the hardest place in the world to operate. For us, it's actually been a relatively easy place to operate because the rules are so straightforward. In order to be attached to a network like Cloudflare inside of China, you have to have what's called an ICP license.
If you follow the technical and legal requirements, you can get that license and run on a network inside of China, including Cloudflare's network. If you break any of those technical or legal requirements, they pull the ICP license, and as a result, you get dropped off of our network inside of China. You can still broadcast in the US and outside of China, but you can't be on the network inside of China.
OM: What about the U.S., where you find yourself in situations such as terminating The Daily Stormer or 8chan? (In 2017, in the aftermath of a violent rally held by white nationalists in Charlottesville, Va., Cloudflare was asked to ban The Daily Stormer, a neo-Nazi hate site. It did. This marked a significant shift from the company's content-neutral stance. 8chan was also terminated from Cloudflare. These actions caused upheaval, both in the public sphere and internally at the company, a situation well documented by The New York Times in this report.)
Matthew: Every once in a while, whether it's the Daily Stormer, 8chan or whatever the next crisis is, there's a certain set of things that fall into this narrow band where they end up being technically legal but highly immoral and somewhat dangerous.
We've had basically three instances of that over the past 12-plus years, so the mean time between incidents is four years. It's not something we have to deal with on a super regular basis because we're a little bit lower level.
I think there will definitely be regulation around the world saying that all the different levels of the stack are going to need to deal with some of these content issues. There's a big fight over DNS right now, where a number of European countries are saying DNS providers need to basically shut down certain sites.
In Germany, there was a ruling where a DNS provider -- in this case Quad9 (9.9.9.9), which is Bill Woodcock's recursive DNS service -- got sued and ordered to block a particular site, not just in Germany but worldwide. I think that's sort of the fight of the future. We are always okay complying with what local law is, but the local law in one country is not the same as the local law in a different country.
So, again, we're happy to block a site if Germany says it's illegal, as long as they follow due process, are transparent and consistent, and are accountable for what they do. We have to follow the laws in the places we operate. But I think the fight of the future is whether a German court can say a site is illegal not just inside of Germany, but globally. You're seeing a lot of organizations and courts say that they have the ability to regulate the internet on a global basis. I think that's a pretty dangerous precedent to set. (A good example of this is how a court in India got Reuters to pull down a report on an Indian startup. Politico has a good summary of this. -- Om)
On the other hand, if Kazakhstan wants to block something within its borders, that's their sovereign right. The Chinese say, we have the sovereign right to regulate the networks that are inside our borders. My initial reaction to that was, no, you don't. But they probably do -- if sovereignty means anything, it means that you kind of have that right to regulate.
If Germany says, we don’t want neo-Nazi content inside of Germany, the first generation of internet companies came out and said, ‘Well, what about the First Amendment?’ That’s pretty naive. Because, you know, Germany obviously had a very different history and a very different experience. And I think that they have a sovereign right to regulate the network.
The networks are inside their borders.
OM: I live very close to Cloudflare’s San Francisco office. A few years ago, I saw “1.1.1.1” spray-painted on the sidewalks. There were also some flyers on the electricity poles. Being a networking nerd, I couldn’t help noticing, and I was hooked before it even launched.
[Simply put, 1.1.1.1 is a public DNS resolver operated by Cloudflare. DNS stands for Domain Name System, which is like a phonebook of the internet. A DNS resolver is a server that performs the “name to address” translation, matching a domain name to its IP address and returning the result to the computers, like yours and mine, that requested the lookup. Many people use resolvers from their internet providers or from cloud providers such as Google. Cloudflare used 1.1.1.1 for its VPN service, known as Warp, because the address was already associated with its DNS resolver service. The IP address 1.1.1.1 is memorable and easy to use.]
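To make the phonebook analogy concrete, here is a minimal sketch of what a lookup against Cloudflare’s 1.1.1.1 resolver looks like in Python. The dnspython library and the example.com query are illustrative choices for this sketch, not part of Cloudflare’s own tooling.

```python
# Minimal, illustrative sketch: ask Cloudflare's public resolver (1.1.1.1)
# to translate a domain name into IP addresses.
# Assumes the third-party dnspython package: pip install dnspython
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["1.1.1.1"]  # send queries straight to Cloudflare's resolver

answer = resolver.resolve("example.com", "A")  # "A" records map a name to IPv4 addresses
for record in answer:
    print(record.address)  # prints the IPv4 address(es) the resolver returned
```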
I use 1.1.1.1 on my phones and computers at home, but I always have a lot of questions about the big picture. So I took the opportunity and asked Matthew exactly that: What are you guys doing here? What is the big picture?
Matthew: Well, I think there's a role for the traditional hyperscale clouds, but the key performance indicator (KPI) they measure themselves on is how much of a user's data they capture.
We come at the world from a very different direction, where we really think about connectivity and about how you move data. What we want to do is get as many different nodes of your network connected to us in any way possible, and then we want to make it as easy as possible for you to move data from one place to another.
And yet at times, that's going to need some intelligence; that's going to need some processing on it. But fundamentally, that version of a cloud is very different from the hyperscalers. That version of the cloud is much more focused on connectivity, and that's the KPI that we have.
We think that what Cloudflare is building is this connectivity cloud, where we can make the underlying connectivity of the internet as fast, as reliable, as secure, and as efficient as possible. It's amazing that we've connected 4 billion people to the internet, but it's shameful that there are 4 billion people still not connected. And so we want to fix that. The biggest thing is reducing costs and making it more efficient and more private.
The original sin of the internet is the fact that your IP address reveals a ton about who you are, in a way that you can't block. I'm proud of the work that we've done in partnership with folks like Apple, to anonymize your traffic so that your IP doesn't reveal a ton about who you are.
So that's what we're doing in those five areas: How can we make it faster, more reliable, more secure, more efficient and more private? That's what we're trying to do. We're trying to build the network to deliver that. The things you mentioned are all that we think of as on-ramps. So how do we make it as easy as possible for people to get onto our network as quickly as possible?
OM: Well, thank you so much, Matthew, for that explanation. This has been a fun conversation, especially the regulatory stuff.
Matthew: The regulatory stuff sounds so boring at some level, but the conversations that we get into, that we're a part of and at the center of, are about the giant race going on right now. Basically every country except China and North Korea has let the internet in, and every country, to various degrees, regrets that. Some more than others. Russia and Iran are probably the extreme versions of that. They're now all trying to say, can we build what China has?
In the case of Russia and Iran, they're spending enormous resources to basically put the internet horse back in the barn. And I'm proud of the role that we have in making that difficult for them.
It's powerful that organizations like Bellingcat and the Navalny Foundation are Cloudflare customers, but so are a lot of the banks that Russia and Iran have to access in order to run their oil trading markets. And we've made it very hard for them to block one without blocking the other. And that's earned me the distinction of being personally sanctioned by the Russian government, which is a little bit surreal. But those are the really hard issues we're working on.
My wish is that the next 40 years will be as optimistic as the past 40 years have been.