Arista Networks Earnings Calls | ANET

Select Year & Quarter
Selected Filter
No selection

Arista Networks Earnings Call Transcript - Q1 FY 2026

May 05, 2026

Operator

Welcome to the First Quarter 2026 Arista Networks Financial Results Earnings Conference Call. [Operator Instructions] As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section on the Arista website following this call. Mr. Rudolph Araujo, Arista's Head of Investor Advocacy, you may begin.

Rudolph Araujo

Thank you, Regina. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks Chairperson and Chief Executive Officer; and Chantelle Breithaupt, Arista's Chief Financial Officer.

This afternoon, Arista Networks issued a press release announcing its fiscal first quarter results for the period ending March 31, 2026. If you want a copy of this release, you can find it on our website. During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the second quarter of the 2026 fiscal year, longer-term business model and financial outlook for 2026 and beyond, our total addressable market and strategy for addressing these market opportunities, including AI inventory management, lead times and product innovation, which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K and which could cause actual results to differ materially from those anticipated by these statements.

These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. This analysis of our Q1 results and our guidance for Q2 2026 is based on non-GAAP and excludes stock-based compensation expense, intangible asset amortization, gains, losses on strategic investments and the income tax effect of these non-GAAP exclusions, including the recognition of direct excess tax benefits associated with stock-based awards.

A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. With that, I will turn the call over to Jayshree.

Jayshree Ullal

President & CEO

Thank you, Rudy, and welcome, everyone, to our first quarter 2026 earnings call. Arista has experienced significant velocity in all our sectors in Q1 and are now commanding the #1 market share in high-speed switching in the greater than 10 gigabit Ethernet category. With that, we have overtaken many incumbent vendors according to major market analysts for 2025.

Our cloud and AI networking strategy for diverse AI accelerators continues to gain traction. Unlike typical workloads, AI workflow patterns can be long-lived, elephant flows or short-lived and simply not predictable. This implies careful attention to performance where a flow can cause burstiness for a long duration of milliseconds.

The intensity of a flow can determine the line weight throughput, the shifting traffic patterns to massive flows synchronized to all-in-all or all-reduce o burst with collective communication are all important for AI training and inference applications. I would like to take a moment to review our 3 AI fabric use cases. In scale-up mode, we have familiar technologies such as NVLink and PCIe that have enabled vertical scaling of single compute nodes or racks.

The advent of ESUN, Ethernet for Scale-Up Networking, specifications allows for increasing or decreasing computing power in a flexible manner with Ethernet to automatically adapt to workload demands. Scale-Up will be a new entry for Arista in 2027 and beyond, where we will be working closely with our customers to build AI racks with very fast interconnects for co-packaged copper, CPC, or open co-packaged optics, CPO, as well as supporting collectives and memory acceleration. Scale-out or horizontal scaling involves adding more machines to a leaf-spine fabric, moving workloads across multiple servers or nodes or even connecting other elements like storage or CPUs.

As you scale up out [indiscernible] massive data sets, bottlenecks can be resolved with collective and protocol acceleration at L2, L3, cluster load balancing, all at wire rate. The system must deliver consistent performance without degradation as more nodes participate. Arista is a shining example here with greater than 100 cumulative customers to date in 800 gigabit Ethernet deployments, and we expect the addition of 1.6 terabit in 2027 at production scale.

Scale across -- drives across the cloud and AI as the AI accelerators in a location may need to be distributed to achieve the appropriate bandwidth capacity with the optimal power. As workloads become more complex and more distributed, the bi-sectional bandwidth must scale smoothly to avoid bottlenecks and preserve performance. This demands sophisticated traffic engineering, deep routing, encryption properties and integrated optics based on Arista EOS stack and using Arista's flagship 7800R3 or R4 series.

The 7800 has established itself in this category as the premier scale across choice. You can see with Arista's accelerated networking strategy and these 3 types of AI fabrics. These are critical to deployment of diverse accelerators and frontier models.

Traditional static network topologies with hotspots, jitter that slows down job completion time or increases time to first token for inference are all not the way to go. Arista's Ethernet portfolio addresses both the synchronous flows for massive training and the low latency for concurrent swarms of real-time inference in this era of trillions of tokens, terabits of performance and terawatts of power. In 2024, you may recall, we discussed 4 Ethernet-based AI training deployments.

And of course, since then, we've expanded and exploded to countless others. This fourth customer from the group has officially moved from InfiniBand to Ethernet at production scale over the last 2 years. The high-speed Ethernet AI Leaf-Spine with flexible air or liquid cooled infrastructure overcomes the physical constraints of power and space for AI workloads.

It results in a low latency distributed AI supercomputer fabric across global regions. What is clear to me and us is our networking progress with data, control and management and multiplanar orchestration is not only central to our AI searching performance, but also important for high-speed optics transmission. At the recent Optical Fiber Conference, Arista unveiled its extended pluggable optics, XPO form factor designed specifically for optics innovations at high speed.

Now endorsed by greater than 100 vendors, salient features include record-breaking throughput, delivering 12.8 terabits per pluggable module, unprecedented rack density achieving 204.8 terabits per OCP rack unit, integrated cold plate capable of cooling up to 400 watts power per module and the universality and flexibility across a range of pluggable optics, copper as well as linear halftime or retimed interfaces. A special kudos to Andy Bechtolsheim, Arista's Chief Architect for driving from OSFP 10 years ago to this next-generation XPO, bringing structural improvements in power, footprint and cost reductions. Our enterprise business experienced strong results in Q1 2026, both in data center and campus.

Our VeloCloud acquisition is also integrating well into our branch and campus strategy, bringing more distributed enterprise use cases and a new channel motion with managed service providers, MSPs. To share some recent wins, let us hear now from Todd Nightingale and Ken Duda, our Co-Presidents, to delineate our Arista 2.0 Centers of Data Strategy. Over to you.

Kenneth Duda

Thanks, Jayshree. Arista is diversifying this business with new customer acquisitions covering a broad set of use cases, all unified by Arista's EOS stack and its ability to modernize enterprise infrastructure operating models. Our first highlighted win is a neocloud AI network.

The customer was constrained by an incumbent white box architecture that simply could not keep pace with the massive scale-out requirements of AI. Arista was selected as a commercially proven and reliable scale-out architecture with unmatched stability of EOS and the ability to connect AMD MI Series XPUs. Arista's AI leaf and spine EtherLink products were deployed at 800 gigabits to provide the incredible performance, modern AI networks require.

The AI fabric was tuned using Arista's cluster load balancing to scale out the thousands of XPUs minimizing hotspots and congestion. On the software side, the customer leveraged AVD, Arista's Validated Design framework, to automate network provisioning, which both reduces the total cost of ownership, but also provides an easy path to reliable network deployment at scale, where without AVD automation, a small mistake can cause precious days of debugging time. This was a strategic neo-cloud win with large potential for upside growth in an area where we are seeing enormous opportunity and velocity in both neocloud and sovereign cloud customers.

Our next win is in the service provider sector with a leading regional fiber-to-the-home provider serving hundreds of thousands of subscribers. As subscriber bandwidth demands have surged, this customer realized their legacy routing architecture was too rigid, too brittle and too costly to scale. They needed a solution, which would modernize their next-generation backbone and Internet peering edge.

Arista won this upgrade by proving an automation-first approach with a modern operating model, driving operational savings and increased subscriber reliability. On the hardware side, we deployed popular 7280 routing platforms using EOS' FLX capabilities, which unlock deep buffering, a rich control plane software stack and full Internet route scale. On the software side, Arista's AVD framework, again, automates router provisioning to reduce the time it takes to turn up services while also reducing errors.

Here, we saw great results from our technology partnerships with Palo Alto Networks, ensuring the routing edge integrated securely and seamlessly with our overarching security architecture. And here, Arista's core value proposition of lower operating costs and greater reliability drove a competitive win. Now I'll hand it off to Todd.

Todd Nightingale

Thanks, Ken. Our third win is in the insurance services sector. Following a year of strategic collaboration, the customer wanted to modernize their infrastructure with a streamlined, automated foundation capable of delivering granular real-time insights to secure and monitor critical applications.

Here, observability was truly the key. Arista secured this comprehensive win after executing a flawless proof of concept, proving our architecture significantly exceeded operational standards. To achieve deep network observability, the customer deployed our R3 series for filter and delivery roles on our monitoring fabric, DMS.

Additionally, they deployed campus switches to radically simplify out-of-band management. Leveraging rich telemetry capabilities of EOS, the customer unlocked advanced features like VXLAN header stripping and transition to a fully automated declarative operational model. Our final win is within the manufacturing sector where we're seeing amazing momentum.

Here, we have a customer operating more than 100 factory sites globally, servicing consumer, health care, aerospace, defense and AI infrastructure customers. This was a true mission-critical use case and their legacy campus network have become the bottleneck for achieving real 24/7 production. Shifting traffic patterns, manual provisioning and importantly, a lack of visibility and forensics into microbursts and drops for keeping them from achieving their goals.

Arista won an extensive bake off against 2 established vendors, both of whom proposed campus design that could not match what Arista delivered, a universal leaf spine campus based on open standards running a single EOS binary across campus, data center and WAN. The Cognitive Campus solution leveraged 100-gig campus spine, high-powered PoE leaves and Arista WiFi 7. CloudVision drove provisioning, configuration and life cycle end to end with consistent tooling across the network infrastructure.

Here, it really was Arista's modern operating model that drove differentiation in the engagement, hitless production upgrades, latency analyzer for microburst visibility and true packet drop forensics. The teams were able to significantly reduce production impacting maintenance windows and expose events that had previously caused line interruption. In all 4 of these examples, Arista's support team stood out to customers for its best-in-class service, well known for troubleshooting issues with customers long after Arista gear is no longer suspected to be at fault.

Arista's modern operating model also played a key role, especially the AVD tooling that Ken mentioned, for architecture, validation and deployment. We're excited about the momentum across the entire enterprise business and especially the diversification that it brings to Arista Thanks, Jayshree.

Jayshree Ullal

President & CEO

Thank you, Todd. Thank you, Ken. It was so fantastic to hear of happy customer outcomes.

We had another fitting example of that at our Innovate 2026 event here in the headquarter facility held in March. The energy and enthusiasm of our greater than 250 customers who attended was truly infectious and inspiring. I want to especially give a shout out to Ashwin Kohli and Dhivya Wagner's teams who have already improved our outstanding Net Promoter Score from 87 to 89 ratings translating to a 94% customer approval.

This really exemplifies the lowest security vulnerabilities in the tech industry. It enhances our ability to better cope with the many risks that AI is creating. As I look ahead at the year, our Arista 2.0 momentum continues to march on and resonate.

Our demand is actually the best I've ever seen in my Arista tenure. The supply, however, is a slightly different and opposite tail. We are experiencing industry-wide shortages across the board, be it wafers, silicon chips, CPUs, optics and, of course, memory that I referred to last quarter, coupled with elevated costs to procure these.

Clearly, our demand is outstripping our supply this year. While we hope the supply chain will ease in the next year or 2, the Arista operations team has been diligently engaging with our vendors in strengthening supply agreements and engaging in multiyear purchase commitments. We anticipate gross margin pressure due to mix and trade-offs we are making to pay more to assure supply continuity to our customers.

Nevertheless, it gives us confidence to increase our forecasted growth slightly to 27.7%, aiming now for $11.5 billion for 2026. We also increased our AI target now to $3.5 billion this year, thereby more than doubling our AI sales annually. And with that good news, over to you, Chantelle for the financial details.

Chantelle Breithaupt

Thank you, Jayshree. I continue to be impressed by our company's ability to deliver such a breadth and depth of networking innovation. It is a core tenet that underpins our strong financial return to shareholders.

Q1 to detail our most recent financial outcomes. To start off, total revenues in Q1 were $2.71 billion, up 35.1% year-over-year and above our guidance of $2.6 billion. Growth was seen across the customer sectors led by our AI and specialty providers customers within the quarter.

International revenues for the quarter came in at $418.9 million or 15.5% of total revenue, down from 21.2% last quarter. This quarter-over-quarter decrease was primarily influenced by Americas-based sales to our large global customers. The overall gross margin in Q1 was 62.4% within the guidance range of 62% to 63% and down from [indiscernible] in the prior quarter.

This quarter-over-quarter decrease is due to the lower mix of sales to our enterprise customers in the quarter. Operating expenses for the quarter were $396.8 million or 14.6% of revenue, down slightly from last quarter at $397.1 million. Our R&D spending came in strong at $271.5 million or 10% of revenue, despite a slight sequential decrease due to the timing of new product introduction costs, Arista continues to demonstrate its commitment and focus on networking innovation.

Sales and marketing expense was $103.5 million or 3.8% of revenue, down from 4% last quarter, representative of the highly efficient Arista go-to-market methodology. Our G&A costs came in at $21.8 million or 0.8% of revenue, down from $26.3 million last quarter, reflecting our strong base cost productivity within a pure-play networking business model. Our operating income for the quarter was $1.29 billion or 47.8% of revenue.

Let me pause here to thank the greater Arista team for all of their efforts and resulting excellent execution in a dynamic environment. Other income and expense for the quarter was a favorable $110.8 million, and our effective tax rate was 21.1%. Overall, this resulted in net income for the quarter of $1.11 billion or 40.9% of revenue.

Our diluted share count was 1.27 billion shares, resulting in a diluted earnings per share for the quarter of $0.87, up 31.8% from the prior year. Now turning to the balance sheet. Cash, cash equivalents and marketable securities ended the quarter at approximately $12.35 billion.

In the quarter, we did not repurchase our common stock. Of the $1.5 billion repurchase program approved in May 2025, $817.9 million remain available for repurchase in future quarters. The actual timing and amount of future repurchases will be dependent on market and business conditions, stock price and other factors.

Now turning to operating cash performance for the -- for the quarter, we generated approximately $1.69 billion of cash from operations in the period, the strongest in the history of Arista. This was driven by a robust earnings performance, coupled with an increase in deferred revenue. DSOs came in at 64 days, down from 70 days in Q4 due to the linearity of shipments within the quarter.

Our inventory turns improved slightly, landing at 1.7 versus 1.5 in the prior quarter. We ended the quarter with $2.38 billion in inventory, up from $2.25 billion last quarter. This marginal increase is a calculated investment in the mix of raw materials to fulfill our growing demand.

Our purchase commitments at the end of the quarter were $8.9 billion, up from $6.8 billion at the end of Q4. As mentioned in prior quarters, this expected activity mostly represents purchases for chips related to new products and AI deployments. We will continue to have some variability in future quarters as a reflection of the combination of demand for our new products, component variability and the lead times from our key suppliers.

This could also result in quarters of elevated inventory balances ahead of the deployments. Our total deferred revenue balance was $6.2 billion, up from $5.37 billion in the prior quarter. The majority of the deferred revenue balance is product related.

Our product deferred revenue increased approximately $643 million versus last quarter. We remain in a period of ramping our new products, winning new customers and expanding the use cases, including AI. These trends have resulted in increased customer-specific acceptance clauses and an increase in the volatility of our product deferred revenue balances.

As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis, independent of underlying business drivers. Accounts payable days were 54 days, down from 66 days in Q4, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $54.5 million.

We continue the construction work to build expanded facilities in Santa Clara. In Q1, we incurred approximately $40 million in CapEx related to this program and estimate it will reach $180 million in 2026. These Q1 results have provided a strong start to our fiscal year 2026.

As Jayshree mentioned, we are now pleased to raise our 2026 fiscal year outlook to 27.7% revenue growth, delivering approximately $11.5 billion. We maintain our 2026 campus revenue goal of $1.25 billion and raise our AI fabric goal from $3.25 billion to $3.5 billion. I would like to take this opportunity to remind the audience that the timing and outcome of customer projects with acceptance terms can create quarterly and sequential dynamics that do not follow prior year trends.

For gross margin, we reiterate the range for the fiscal year of 62% to 64%, inclusive of mix and anticipated supply chain cost increases for memory and silicon. Given this challenging supply backdrop, I am proud of our sourcing team's execution, which strongly contributes to the gross margin outlook holding in our guidance range. We feel confident that we can source the necessary supply to meet our customers' needs.

Our operating margin outlook remains at approximately 46% for the fiscal year, with the tax rate expected at 21.5%. On the cash front, we will continue to work to optimize our working capital investments with some expected variability in inventory and cash flow from operations due to the timing of component receipts on purchase commitments. More specifically, now our guidance for the second quarter is as follows: now with the added quarterly metric of diluted earnings per share, revenues of approximately $2.8 billion, gross margin between 62% and 63%, operating margin between 46% and 47% and diluted earnings per share of approximately $0.88 with approximately 1.27 billion diluted shares.

Our effective tax rate is expected to be approximately 21.5%. In closing, we are optimistic about the fiscal year ahead. The industry has many times demonstrated the pattern of landing on Ethernet as a winning technology, and that is where Arista shines best.

We appreciate our customers' choice of working with us to achieve their business outcomes. Now Rudy, back to you for Q&A.

Rudolph Araujo

Thank you, Chantelle. [Operator Instructions] Thank you for your understanding. Regina, please take it away.

Operator

[Operator Instructions] Our first question will come from the line of Simon Leopold with Raymond James.

Simon Leopold

Great. I wanted to explore your commentary around the scale across opportunity in particular. And I guess what I'm trying to get a better sense of is how much revenue, if any, did that contribute last year?

And how material is that to the $3.5 billion forecast you're giving this year? And how should that trend longer term?

Jayshree Ullal

President & CEO

Sure, Simon. I think last year, on scale across, we were just beginning. So I think they were small numbers.

And majority of the numbers were really scale out. That's sort of our heritage and that's where we excel. If I were to anticipate how it would be this year, again, scale up is virtually 0 and nonexistent because it really only comes to play after the ESUN spec.

So consider that more a'27, '28 kind of number. So I think the number will be really shared between scale across and scale out. I don't know if I can say it's 50-50 or 70-30 or 60-40, but scale across will definitely contribute at least 1/3 of our AI number.

Operator

Our next question will come from the line of George Notter with Wolfe Research.

George Notter

Maybe just continuing the discussion on scale up. We are starting to see rack design wins. One of your competitors in the ODM space, I think, has got a couple of designs that they've announced at least.

And I know you're kind of pointing towards ESUN as being kind of a key catalyst in generating business there. But can you talk a little bit about where you are in terms of designs with customers, progress? Anything you can tell us there would be great.

And in fact, I think a few quarters ago, you said you had 5 to 7 scale-up rack designs that you were at least working on. Maybe you can update that.

Jayshree Ullal

President & CEO

Yes, that's correct, George. I think there is no doubt in our mind that we will have a number of racks and number of scale-up use cases in 2027. Maybe some of them will be in early trials, but majority of them are looking at really starting with 1.6T and 1.6T chips will really happen in 2027.

There may be a few, a handful of them that tried some experimental stuff at 800 gig. But we continue to see at least 5 to 7 rack opportunities. Some of them are multiple racks with the same customer.

We're actively designing with them. There's a huge amount of liquid cooling designs with very dense cabling options, acceleration of collectives and memory features we have to work on for low latency. So I definitely feel we're an active engineering phase with Ken and Hugh's teams this year.

But unlike the ODMs, I think we're held to a higher bar, and we have to just make sure that this thing is production worthy and specification adhering to ESUN. So I would say today's scale-up is mostly limited to NVLink from NVIDIA and maybe some PCI switching. But majority of the Ethernet scale-up will only really happen in '27 and '28.

Operator

Our next question will come from the line of Antoine Chkaiban with New Street Research.

Antoine Chkaiban

So with the supply outstripping demand, I'm wondering how much does your current supply allow you to grow this year and next? Is the updated top line growth guide of 28% growth, a good reflection of how much supply you've secured for this year? And what could that number look like next year based on how much supply you think you can get as of today?

Jayshree Ullal

President & CEO

Antoine, I think the supply chain problem, and Todd, maybe you can add to this, is not a 1- or 2-quarter phenomena. We now think it's a 1- or 2-year phenomena. When you -- at first, we thought it was memory.

Now it's all the wafer fabrication facilities. Every chip is challenged, and you can see how Chantelle has leaned in with the purchase commitments for multiple years. So while we will continue to improve it, this is a reflection of not just demand, but how much we can ship this year.

And as we continue to ship this year, we can give you better visibility on next year. But I can just tell you, we see multiyear demand, and we are going to do everything, including hurt our gross margins to supply to that demand this year and next year because we believe that we certainly don't want to keep GPUs idle and AI infrastructures underutilized because Arista didn't supply the network. So -- can the number get better this year?

I think this reflects our best attempt at a good number. We started out -- we started at 20% -- 25% growth? Yes.

So we started out at 20%, were at 25%, now we are at 27.7%. Could we improve to the tail end of the year? We'll see.

But the amount of de-commits we're seeing doesn't feel good. So we think a lot of this will continue into next year and keep us constrained for the next couple of years.

Operator

Our next question will come from the line of Aaron Rakers with Wells Fargo.

Aaron Rakers

Jayshree, last quarter, you had alluded to kind of engagements with other hyperscale cloud titan customers? I think you also pointed to maybe having 1 or 2 new 10% customers this year. I'm curious of where we stand today?

Any updated thoughts on adding 1 or 2 new customers at 10% plus? And maybe qualitatively, just talk about your engagements you're having beyond your 2 big cloud titans across the hyperscale vertical.

Jayshree Ullal

President & CEO

Yes, absolutely. First of all, 2 big ones. We never take them for granted.

Microsoft and Meta, they're all-time favorites. They've been on 10% and greater customers for over a decade. And the partnership could never be stronger, and it continues to get better both in cloud and in AI.

In terms of the new entrants, we still expect at least one, maybe two -- and maybe I should caveat this by saying, certainly, in demand, we see 1 or 2. We shall see, Todd, how we do on shipments to see if we can achieve the greater than 10%. The 2 of them have very interesting characteristics.

They exhibit what I would call the 3 use cases I just alluded to, scale up, scale out and scale across where we really have a fabric notion of creating -- so far, we've been working with them a lot on the front end, and now we get to complement that on the back end, definitely for scale out and scale across and maybe even a little bit of scale up in some of these use cases. The other thing we're seeing with a lot of these use cases is the lack of power insights and the ability and demand to distribute and get a more multi-tenant scale across is very high in these 2 use cases. A third common thread, we're seeing across of them, much as we all talk about ODM and white boxes, they deeply appreciate EOS and the features and the reliability and the observability and the -- just the fact that we have a robust, highly scalable Layer 2, Layer 3 stack commands a lot of superior advantages.

So I believe the diversity of these cloud titans is largely due to the fact that we have great hardware and software combined. Ken, you want to say a few words on that?

Kenneth Duda

It's just been an incredible journey to live through this and see the level of infrastructure build we're getting and how well positioned our hardware and software road maps are to address these ever evolving more advanced use cases. It's just a blast to get work on the stuff.

Jayshree Ullal

President & CEO

That's always fun when your job is a blast. So Ben, I still see 1, maybe 2 10% customers. And Todd, hopefully, we can ship it -- sorry, Aaron.

Operator

And our next question will come from the line of Ben Reitzes with Melius Research.

Benjamin Reitzes

There you go, Jayshree, here I am. So -- yes, I wanted to ask around the product constraints. Are you able to say what the number was in the quarter and what it's taking away in terms of the $2.8 billion guide?

Is it safe to say things would have been $100 million or $200 million higher for both? And then if you don't mind, just if you can touch on why the gross margin should go back up to 63%. What is it that you guys are doing that at gives us confidence that it can actually expand a tad from here?

Jayshree Ullal

President & CEO

Yes, I think I'll just -- I think that -- I don't think the commentary about the demand outstripping the supply as of Q1, Q2. I think we're talking about looking ahead Q3, Q4 into next year. So I don't think there's something outside of what we've guided or what we've delivered in the first half.

I think in the sense of the margin. So the margin is a mix of things, right? And I think that all the team members are executing in full force.

I think the supply chain is doing everything they can on ensuring that we have the best supply at the best price. And so we've incorporated that. I think that the mix of customers -- the only chance for mix expansion or margin expansion would be due to mix.

And so I think that's the opportunity as we look to see what we can deliver in the second half then. I think that would be the opportunity.

Todd Nightingale

The teams are also doing everything they can to make sure we control our costs, especially on the manufacturing side, and that includes bringing on secondary providers, calling new components, et cetera, to make our supply chain more resilient and more cost effective in the long run.

Jayshree Ullal

President & CEO

And one thing to clarify also on gross margins. So we view this as a partnership with our customers. So while we would consider and have raised prices a little bit, unlike our competitors, we haven't done 2 pricing increases.

We haven't done major price increases. And the price increases really come into play once our backlog starts to reduce, right? So you won't see the impact of that.

So our gross margins are a strong factor of cost going up and are still eating a lot of the costs and giving our customers the benefit and promise of the pricing we said we would give to them.

Operator

Our next question will come from the line of Michael Ng with Goldman Sachs.

Michael Ng

I was just wondering if you could talk about whether or not Arista is seeing networking attach opportunities for customers that are using TPU or TPU-like architectures. And then -- anything you could comment about as it relates to growing neocloud traction. Is that something that you think may be a little bit underappreciated by the analyst community?

Jayshree Ullal

President & CEO

Yes, Michael, you're absolutely right. I'll take your second question first. It's easy to talk about the titans because the numbers are so ginormous, right?

But the neoclouds are a very important sector because they don't always have the staff to do everything they want to do, and they really lean on Arista's design expertise, EOS expertise, network design configurations we can provide them, a family of 22 products we have in AI. So yes, I would agree with you. It's an underappreciated and the neocloud was very strong this quarter if I recall, Chantelle, for us in the specialty and cloud providers.

What was the other question? You had 1a, 1b?

Chantelle Breithaupt

The TPU.

Jayshree Ullal

President & CEO

Yes, the TPU. So in general, we are seeing diverse accelerators. Last time I spoke about the AMD accelerators.

This time, I will definitely give a nod to the TPUs because in particularly scale across use cases, we're seeing multi-tenants connecting to different AI accelerators, including TPUs as well. So I think the diversity of accelerators is creating tremendous multi-accelerator opportunity and multi-protocol features that we can provide for them in our network.

Operator

Our next question will come from the line of Sean O'Loughlin with TD Cowen.

Sean O'Loughlin

Great. Congrats on the results, and thanks for letting me join in on the fun here. Jayshree, I wanted to get your thoughts on -- we've been talking a lot about agentic AI and the demands that it's placing on maybe some of the more general purpose infrastructure that we -- has been maybe in the background over the last couple of years.

You've talked in the past about a 2:1 pressure on front-end networking created by back end. First, I guess, is that still the correct way to think about it? And second, as a genic workflows become more common, is there any additional demand from your perspective, having a single-image EOS platform on the front and the back end?

Or is the front and back end still pretty siloed?

Jayshree Ullal

President & CEO

Yes. Well, first of all, Sean, welcome to your first call. It will be fun, join the fun.

So agentic AI, it's kind of a buzzword, but let me sort of break it into how -- the biggest killer application we see in agentic AI right now is still training. And indeed, it's going to move to more distributed inference. And we'd also like to see agentic AI move into a lot of enterprise use cases, all of which we're seeing, by the way, but I would say large, medium, small.

The largest killer agentic AI application is training, the medium is enterprise and the smallest -- medium is inference and the small is obviously enterprise. The -- in terms of back end versus front end, we are now seeing way more back-end activity, particularly with our large AI titans and cloud titans because there is just so much scale they need to prepare for the billions of parameters and tokens, and this is where a lot of -- so much so that I think the front end, they might come back and refresh, but they're almost ignoring right now in favor of the back end. Having said that, though, by virtue of the back-end deployments, I don't know if we any more see a 2:1 to the front end, but we at least see a 1:1.

And the 1:1 can be wide area, CPU and storage. Those are probably the 3 common use cases. Not all the customers are up and lifting everything and doing all 3, although we've had cases where some of them did an upgrade at the front end before they went into the back end.

But usually, they will have to come back to that because the minute you put that kind of performance pressure and scale on the back end, you almost have to do something in the front end. But at the moment, I would say it's more one-to-one. And at the moment, I'd also say the scale across in the back end has become a bigger use case than we imagined this time last year.

Kenneth Duda

The other thing I have to mention here is just how good it feels to be -- have the same set of products in the same common operating system management suite and operating model across the front end and back-end. This lowers cost for the customer, simplifies their design process to get that leverage, and we're one of the few vendors who can do that.

Jayshree Ullal

President & CEO

I think only.

Kenneth Duda

Yes, I think so.

Jayshree Ullal

President & CEO

I think only. Yes, absolutely. Good point, Ken.

Operator

Our next question will come from the line of Meta Marshall with Morgan Stanley.

Meta Marshall

Appreciate the question. Maybe just a question on XPO monetization or just how it helps you kind of continue to gain share with customers or just mind share with customers by being so front footed with the technology.

Jayshree Ullal

President & CEO

Yes. Thank you, Meta. I think, as you know, we're not a classic optics vendor.

But almost always, whenever we are selling our switches, you have to connect to something. And usually it's some form of copper or optics. So -- and these innovations with OSFP, I remember this super well where everybody was saying, "Oh, no, no, we can just use QSFP, has proven to be not only a contribution for Arista, but really for the industry-wide." And that's still how we see it with XPO as well.

While the industry has been talking a lot about co-packaged optics, these are still science experiments, and they're very proprietary with individual vendors doing their own thing. We embrace open CPO a few years from now, but we think XPO has a 10-year run, especially at 1.6T and 3.2T where you need liquid cooling and you need that kind of capacity. So all the scale up racks we're talking about wouldn't be possible without XPO or CPC or any one of those technologies.

So we see this as -- just as the last decade was greatly influenced by OSFP. The next decade will be greatly influenced by XPO. And remember, 99% of the optical market today that we connect to is all pluggable optics.

So this is a very crucial invention and innovation, not just for Arista, but the industry at large.

Kenneth Duda

I think this is a great example of how Arista enables an ecosystem and then we profit as that ecosystem grows. And with XPO unlocks is a standard interoperable [indiscernible] way to get to 4x of network density in liquid cooling, which is absolutely critical for these AI use cases. Without that, you have this huge bottleneck at the front panel, the amount of extra rack space is required to get through OSFPs.

It's -- so we're really enabling the future growth of our industry this way, which we benefit and others benefit as well.

Jayshree Ullal

President & CEO

Yes. It's stunning to me. I remember, when I first talked to Andy and Vijay, they said, "Oh, we think we'll get about 20 signatures", and then it was 40, and now it's north of 100.

So it tells me the whole consortium is coming together for things like Ethernet, IP and standardization of optics.

Operator

Our next question will come from the line of Tal Liani with Bank of America.

Tal Liani

Can you hear me?

Unknown Executive

Yes, Tal, we can hear you.

Tal Liani

I promised myself to be nice today. So I have a good question for you.

Jayshree Ullal

President & CEO

I promised to be nice too.

Tal Liani

Deferred revenues. Deferred revenues doubled in the last year. And it went up -- if I combine short term, long term, it went up $826 million.

It went up significantly in the last 4 quarters. What needs to happen? What are the conditions for -- to recognize deferred revenues?

Meaning what needs to happen for deferred revenues to be recognized over the next few quarters? Is it about data center going live in traffic goes into data centers? Or what are the sources for the deferred revenue increase?

Jayshree Ullal

President & CEO

Right, right. Tal, so I really do like you. So I'm going to be nice to you not because I have to, but because I like to.

So I think if you remember 10 years ago, Tal, we had a similar phenomena where in the cloud, the whole Leaf spine design was brand new, nobody really knew how to build it or monetize it and they were building some of the world's largest networks for Azure, et cetera, right? And we had new products, they had new designs. They had done traditionally the access aggregation core and we're now moving to the [indiscernible].

And we had some fairly lengthy qualification cycles. So I would say there's a customer aspect of it and a product aspect to it. The customer aspect to it is they need to have the space, they need to have the facilities.

They need to have there -- in this case, GPUs now back and then it used to be CPUs, they got to have their rack and stack. And in many cases, by the way, we're running into examples where they -- it's literally they need to manually install the cables, and that takes several months, right? Thousands of people have to do that.

So there's certainly a customer acceptance piece of it, which starts with being ready. There's also a new product. Many of these new products in the Arista EtherLink family, particularly for the AI are brand new, brand-new chips, brand-new software, the familiarity with it, particularly in the back end was scale out and scale across is new to them.

So there's a level of testing and level of making sure it works with the rest of their ecosystem, including the front end that is super important, and Arista bears a huge responsibility to that as well. So all just to tell you that the length of time to qualify this, which used to be 2 to 4 quarters has extended more like 6 to even 8 quarters. So it's gotten much longer.

Chantelle, do you want to add something?

Chantelle Breithaupt

Yes. The other thing -- thank you, Jayshree, is that we do recognize some of it every quarter. So it's not like it's one balance, that's aging and growing tall.

We recognize things every quarter, things come in and things are recognized to the P&L. So I just want to make sure you understand that that...

Jayshree Ullal

President & CEO

It's not piling. Some things go in and some things come out. Yes.

Does that make sense, Tal? Tal, you're on mute?

Kenneth Duda

No, no. He mutes in after his question.

Jayshree Ullal

President & CEO

Oh, he does. Okay. All right.

Operator

Our next question will come from the line of Amit Daryanani with Evercore.

Amit Daryanani

I guess, Jayshree, you folks have kind of positioned XPO as the next OSFP. And I'd love to kind of understand that XPO ramps from the OSFP demos to potentially deployments in '27, how do you see change in the optics architecture within AI clusters? And then maybe specifically for Arista -- does that change the growth profile or your content per AI rack or cluster as we go forward?

Jayshree Ullal

President & CEO

Yes. Thank you, Amit. I think you should look at XPO as a partner to OSFP.

So at 400 gig and 800 gig you'll be fine with OSFP. And as we go to higher speeds in '27, '28 or even beyond, OSFP will run out of steam, and this will be the new connector of choice. So the migration to higher speeds equals the migration to XPO, particularly for scale out and scale across.

Within a rack and scale up, there's still a number of choices. I think within short distances of 2 to 3 meters, you're still going to see a lot of co-packaged copper and I think XPO in terms of density will be another alternative. But I don't rule out open CPO as well over there.

They're really looking to maximize the density in a minimum amount of space. So I think XPO will be particularly prevalent in scale out and scale across and will be one of the choices in scale-up.

Operator

Our next question comes from the line of Ryan Koontz with Needham.

John Jeffrey Hopson

This is Jeff Hopson on for Ryan. I appreciate the question. On the scale cross, it seems like that would be a really good fit for all Arista's capabilities.

And I know you mentioned it would maybe be around 1/3 of revenue this year. But is this something where scale across could even be larger than scale out over the next couple of years?

Jayshree Ullal

President & CEO

Ryan, or rather Jeff, I think the answer to that would lie on how well we do with both. And what form factors are used for both. So majority of the scale across today is a very premier valuables, heavy-duty routing platform, the 7800.

So if we do lots of that, it could get well beyond the 30%. But some of them may do it with fixed boxes too or fixed switches and choose to add a lot of cable in which case, it wouldn't go well above that. So we don't know what we don't know.

But I would agree with you that scale across is by far the most significant and differentiated opportunity that really highlights Arista's prowess in both platforms and software.

Operator

Our next question comes from the line of Samik Chatterjee with JPMorgan.

Samik Chatterjee

Jayshree, maybe slightly related to the last question here. Just trying to think about, you said most of the cloud revenue near-term is going to be scale out and scale across as we wait for scale up to ramp. How are you thinking about your market share when it comes to scale out versus scale across in the early days of scale across, what are you seeing in terms of market share?

And are you seeing customer decisions being led in scale across by sort of the incumbent and scale-out? Or is it a different decision altogether in terms of how they're designing vendors for in-force scale across?

Jayshree Ullal

President & CEO

Good question, Samik. You're making me think. So I would say If it's greenfield deployment, then they tend to think of it together because they're not only building the sites, but they're thinking of the interconnect across them.

And therefore, market share is generally strong in both. In some cases, where Arista has not been a historical participant within the data center, we now have an opportunity to offer the scale across multi-tenant even in a non-greenfield situation and let's say, in a brownfield, where now they've got disparate data centers or AI clusters that we now have to bring in. And so once again, I think Arista is really fitting example to be in scale across for both those use cases, but has the additional opportunity in a brand-new data center to be in all use cases, if that makes sense.

So it's giving us a chance to participate with different types of accelerators and different types of models because people aren't getting the power and they're having to distribute the data centers. And as a result of distribution, you need more traffic engineering, routing, multi-tenancy. So I would say scale across is the common denominator in all our use cases and scale up and scale out maybe nice options and brand-new greenfields.

Operator

Our next question comes from the line of Karl Ackerman with BNP Paribas.

Karl Ackerman

Jayshree, you are doing more networking design today more than ever. Does that change your ability to monetize your services to capture more of the work of the other value that you're adding to these applications? And I guess as you address that, given the large mix of services revenue within deferred, could services revenue accelerate faster and represent perhaps 25% or 30% of sales going forward?

Jayshree Ullal

President & CEO

I don't think so, Karl. I think we're a product company, and majority of our revenue generation and interest in Arista as a company for all the designs we're doing comes from our product heritage. And it's not like we charge for services.

In fact, we work closely with our partners also, we will recommend network designs. We will support services and certainly, things like we are the gold standard for worldwide support. But I don't expect services as a function of our revenue to go up.

I continue to be -- see ourselves as a product-led company.

Operator

Our next question comes from the line of Matt Niknam with Truist.

Matt Niknam

I just want to go back to gross margins. So I know we were sort of in that 62-ish range. They dipped about 170 bps year-on-year.

And I wanted to dig into whether it was primarily mix related or maybe if you can quantify whether the -- how significant the memory and cost-related impacts were, if there's any color you can provide.

Chantelle Breithaupt

Yes, I think it's a great question. I would say the majority, if you look at -- even if you look at prior quarter or prior year, the majority of the difference is mix of the customers. And just to clarify, our larger customers have a lower gross margin accretion.

And so that mix is the primary driver. And then the secondary, although not as significant would be things depending on the quarter, depending how deferred moving tariffs or the memory cost or the silicon cost depending on the quarter. So secondary driver, but the primary drivers mix of the customer segments.

Operator

Our next question comes from the line of David Vogt with UBS.

Unknown Analyst

This is Andrew for David. From a high level, with $2.4 billion almost of inventory and almost 2 years in COGS of purchase commitments, how should we think about the supply constraints and where that inventory and purchase commitments are not satisfactory to meet demand? Where are the holes in your inventory?

Kenneth Duda

I wouldn't say we have holes in our inventory, but we have surging demand, especially on the newest platforms, which of course, is driving our need for the most modern silicon from our providers and it's driving need for an expanded amount of memory even more than we were expecting before the year began. So that's driving us to be a buyer in the market. Luckily, we've got pretty good spending power.

We're a very reliable partner in these scenarios. And so we partner closely with these vendors. But there's no doubt that like the newest platforms that we're delivering, especially in the AI space is driving needs of ours in the high end of our portfolio.

Jayshree Ullal

President & CEO

Yes. And just to add to that, David, the real hole is lead times. We are experiencing such significant wafer fab shortages that we're not getting the chips in time.

So more than a hole, I would just say our purchase commitments are multiyears because we're having to deal with forecasts that are out multiple years so that we get them in time because the lead time of these chips is so long. So I think that's the biggest hole, lead times.

Kenneth Duda

Yes, we are experiencing 52-week lead times pretty reliably with reservation needs beyond that, and our customers certainly do not want to wait that long.

Operator

Our next question comes from the line of James Fish with Piper Sandler.

James Fish

Chantelle, maybe for you, the guide raise was primarily all on AI. Are you guys prioritizing these shipments or what's given the hesitancy around sort of the non-AI noncampus at this point and leaving that roughly flat still? And Jayshree, just for you, just as we think about the mix here on gross margin, what are you guys seeing in terms of Blue Box adoption now?

And are you seeing any sort of net pull-in of demand just given you have a lot of smart customers here and they're very much aware of the supply chain constraints.

Chantelle Breithaupt

Yes. Thank you -- Thanks, James. I'll start with mine first in the sense of the order of your question.

So I don't think we're saying because we're raising the revenue and attribute to AI that we're not excited about all the other customer segments. I think you heard both Jayshree and I talk about -- we're very happy with how the year started, what we're seeing across all 3 customer segments. We're very happy what we're seeing in enterprise, which I wouldn't say is quite AI yet.

So let's cover that as a non-AI bucket that you referred to. So wait and see, we're in Q1, reporting Q1. We'll see how the year goes.

But we're very confident across all 3 that we're seeing strong demand. So I think I would leave it in the sense of let's see where we get to in our future quarter guidance. Jayshree?

Jayshree Ullal

President & CEO

And I would agree with that. Just to remind everybody, we've raised now from $10.5 billion or whatever we said last September to $11.5 billion. And yes, a high degree of that is AI, but we have aggressive commitments on the campus to go to a $1.25 billion quarter and continue to service and grow our data center and cloud as just as well.

So all 3 are growing, but certainly, AI is taking the news headline. Regarding Blue Box adoption, one of the customer use cases you actually heard about was moved from -- that you heard from Ken moved from white box to blue box. And their goal right now is their desire to move to blue boxes, it works, number one.

It scales too. It actually does the job for us with AMD accelerators, number three. And down the road, they may use open operating systems, but they were very pleased with the diagnostics capability, the platform, SDK, where we literally rewrite every piece of software and bit twiddle all the Broadcom chip transistors very, very well and the EOS features.

Down the road, they may use some open OSes as well, but that would be a really good example of a blue box that has EOS today and may go down to other OSes. And we continue to see that, particularly in the Neoclouds. We've always seen a bit of that in the cloud and AI titans because they know how to work with open OSes.

So we've had that hybrid strategy always, but we're certainly seeing more of that in the Neoclouds now.

Rudolph Araujo

Regina, we have time for one last question.

Operator

Our final question will come from the line of Ben Bollin with Cleveland Research.

Benjamin Bollin

Jayshree, you referenced inference a little bit earlier, so it's kind of a smaller use case right now. I'm interested on your thoughts on where you think enterprise is in terms of their ability to consume inference and create agents. And then how that develops over time and where you think the network and edge networks are today and their ability to support those use cases.

Basically, just do we get the sustained investment period because what you're seeing now bleeds and becomes much more significant in enterprise? And how long lasting that might be?

Jayshree Ullal

President & CEO

Yes. No, Ben, I tend to agree with your thesis that while today we are in a training fever, that a more distributed AI -- generative AI paradigm with inferences, which means you don't always need the GPU. You're going to have high-end CPUs and you're going to have a smaller set of parameters and tokens to manage and you're going to have specific agentic AI use cases and applications.

We're seeing very, very early trials and stages. Nothing super big yet. But we are seeing -- I mean, they're not in the hundreds of thousands of GPUs like you see on the AI titans.

But we are frequently seeing our customers in certain high-tech sectors want to deploy clusters that are 1,000 -- few thousand, definitely not 10,000, but in hundreds of thousands. And they tend to be exactly, as you said, not training, but more inference based -- more agentic AI edge inference based as well. So I think we'll see more of that.

This is -- this is the calm before the storm, if you will. And as we -- as the AI gets more distributed, I think it doesn't need GPUs alone, it's going to need more high-performance compute. And many of them seem to feel to us like high-performance compute, HPC use cases that are sort of getting revived for AI.

So I agree with your thesis, Ben. I think it's going to take a couple of years to fully happen.

Rudolph Araujo

This concludes Arista Networks First Quarter 2026 Earnings Call. We have a presentation posted that provides additional information on our results, which you can access on the Investors section of our website. Thank you for joining us today and for your interest in Arista.

Operator

Thank you for joining. Ladies and gentlemen, this concludes today's call. You may now disconnect.