May 05, 2026
Good afternoon. My name is Audrey, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs First Quarter 2026 Earnings Conference Call. [Operator Instructions] I will now turn the call over to Leslie Green, Investor Relations for Astera Labs.
Please go ahead.
Good afternoon, everyone, and welcome to Astera Labs First Quarter 2026 Earnings Conference Call. Joining us on the call today are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President and Chief Operating Officer and Co-Founder; and Desmond Lynch, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations and the markets in which we operate.
These forward-looking statements reflect management's current beliefs, expectations and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and in the periodic reports and filings we file from time to time with the SEC, including the risks set forth in our most recent Annual Report on Form 10-K. It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statements. In light of these risks, uncertainties and assumptions, all results, events or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call, except as required by law.
Also, during the call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance. For example, the overview of our Q1 financial results and Q2 financial guidance are on a non-GAAP basis. These non-GAAP financial measures are provided in addition to and not as a substitute for financial results prepared in accordance with U.S. GAAP.
A discussion of why we use non-GAAP financial measures (which differ from GAAP primarily due to stock-based compensation, acquisition-related costs and the related income tax effects), along with reconciliations between our GAAP and non-GAAP financial measures and financial outlook, is available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website. With that, I'd like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Jitendra?
Thank you, Leslie. Good afternoon, everyone, and thanks for joining our first quarter conference call for fiscal year 2026. Today, I'll update you on AI infrastructure market trends, our Q1 results and recent announcements.
I'll then turn the call over to Sanjay to discuss Astera Labs' growth profile. I'd also like to welcome Des, our CFO, joining this call for the first time. Des will cover our Q1 financials and Q2 guidance.
Since our last earnings call, AI infrastructure spending has clearly accelerated. Hyperscalers, AI labs and sovereign entities are signaling the industry's build-out is still in its early stages, underpinned by strong monetization and ROI. We expect these strong secular trends to be a tailwind for Astera Labs' growth over the long term.
Astera Labs delivered strong results in Q1 with revenue and non-GAAP EPS above our outlook. Revenue for the quarter was $308 million, up 14% from the prior quarter and up 93% versus Q1 of last year. Revenue growth was broad-based, spanning across our signal conditioning and fabric switch product portfolios as we continue to diversify our business profile with new design wins across multiple customers and product categories.
Our PCIe 6 business across both AI fabric and signal conditioning was strong in Q1 with revenue expanding to more than 1/3 of our total revenue. We have now shipped millions of PCIe Gen 6 ports to date, demonstrating the robustness and maturity of our PCIe portfolio. Our smart cable modules for Ethernet AECs continue to perform well as new program designs ship in volume, while others ramp to mature levels across GPU, XPU and general purpose systems.
On the scale-up fabric front, our initial design wins with Scorpio X-Series in smaller radix configurations shifted from preproduction shipments to initial volume ramp during the first quarter. Building on this momentum, today, we announced the expansion of our Scorpio product line of AI fabric switches for both scale-up and scale-out use cases. Scorpio X-Series portfolio now supports up to 320 lanes for high radix scale-up networking, and Scorpio P-Series PCIe 6 portfolio now spans 32 to 320 lanes for diverse system topologies, making it the broadest in the industry.
Our new flagship Scorpio X-Series 320 lane has been purpose-built to maximize AI economics by leveraging hardware-accelerated hypercast and in-network compute engines to boost collective operations by up to 2x. In-network compute offloads critical accelerator to accelerator communication and computation directly onto the switch, dramatically reducing the networking overhead during large-scale training and inference. These hardware capabilities are delivered through enhancements to our COSMOS software, which can now integrate deeper into our customers' software stacks, providing not only diagnostics and telemetry, but also directly improving AI platform performance.
Scorpio's advanced hardware and software capabilities are a result of Astera Labs' deep system-level understanding of AI architectures and close customer collaborations, creating a durable competitive moat. We are excited to report that we are now shipping initial volumes of our new 320 lane Scorpio X with production volumes ramping in the second half of 2026. Scorpio X-Series also has widening interest in design activity with hyperscalers, edge AI inference providers and enterprise infrastructure builders to address high-bandwidth, AI clustering use cases.
Scorpio P-Series continues to grow through 2026, and we expect initial shipments to at least 2 additional major hyperscalers towards the end of 2026 with broader deployment in 2027. On the optical front, we made good progress during the quarter as we continue to work through the qualification process at a large AI platform provider with our ultra-high precision optical fiber coupler product, which we expect to ship in volume starting in 2027. We are actively expanding our volume manufacturing capabilities to support the ramp of both scale-out and scale-up GPU applications.
Beyond the early commercial traction of our merchant connectors, our high-density fiber coupler technology will be a critical piece of our long-term optical road map for NPO and CPO applications. Finally, our Leo memory controller is on track for an early ramp of CXL attached memory with Microsoft Azure M-Series virtual machines. And during the quarter, we captured a new custom design win for a KV Cache offload application with shipments expected in 2027.
As we look to the second half of 2026, robust demand reflects secular AI infrastructure spending, deep customer partnerships and expansion towards higher-value solutions within our portfolio. This trend is quickly increasing our silicon dollar content opportunity beyond $1,000 per XPU within AI racks and positions Astera Labs to outperform our end market growth rates. As a result, we expect strong revenue growth to continue through 2026 and into 2027, driven by the proliferation of AI fabrics and the industry's transition to PCIe 6, 800 gig and 1.6T Ethernet connectivity.
Based on the momentum we are seeing in 2026, we are strategically investing to drive strong continued growth. Our acquisition of aiXscale Photonics has created immediate design opportunities, and our Israel design center is fully integrated and working with customers on new programs. We have expanded our product portfolio and increased dollar content per accelerator while diversifying our customer base with additional design-ins.
We are making progress within large market opportunities, including optical engines and interconnects, UALink fabrics and custom solutions for NVLink and AI inferencing. Most of all, I'm proud of the stellar team we have built through worldwide hiring and thoughtful acquisitions, the progress we have made and the results we are delivering together. With that, let me turn the call over to our President and COO, Sanjay Gajendra, to outline our vision for growth over the next several years.
Thanks, Jitendra, and good afternoon, everyone. Today, I will provide an update on our recent execution, followed by an overview of the meaningful market opportunities that will fuel Astera Labs growth over the next several years. Astera Labs' mission is to deliver a purpose-built intelligent connectivity platform with a portfolio of standard, custom and platform-level solutions across copper and optical interconnects for rack-scale AI infrastructure deployments.
As AI deployments advance to production at scale and operational efficiency, infrastructure teams face a new set of constraints: multitrillion-parameter models, agentic workflows, and multistep reasoning distributed across heterogeneous compute infrastructure, to name a few. The industry needs connectivity solutions purpose-built to address these workloads: higher radix to simplify topologies, intelligent fabric capabilities to reduce communication overhead, open and platform-specific optimization, and data center-grade diagnostics to maintain uptime when a single fault can cost millions of dollars in idle compute.
Let me now walk through our approach to address the evolving needs and our future strategy, starting with our standard products. We continue to see strong momentum across both AI fabric and signal conditioning portfolios. We strengthened our mission-critical position with the introduction of our flagship Scorpio X-Series 320 lane scale-up fabric switch and the overall expansion of our Scorpio Switch portfolio.
The Scorpio X-Series 320 lane high radix AI fabric switch replaces multiple legacy switches to enable large scale-up cluster sizes in a single hop and reduces overall latency. Several new features, such as in-network compute, reduce time to first token and improve tokens-per-watt performance. The newly expanded Scorpio P-Series PCIe switch portfolio now spans from 32 lanes to 320 lanes to enable diverse accelerator optionality and system topologies.
Our AI fabric portfolio is poised to expand further into 2027 with the introduction of UALink-based products for AI scale-up platforms. In early April, the UALink consortium published a new specification, which defines in-network compute chiplets, manageability and 200-gig performance. UALink 2.0 delivers these advancements with an open vendor-neutral approach and confirms that scale-up switching is not simply hardware, but an AI-aware fabric actively helping the system compute and drive performance.
This evolution plays into Astera Labs' strength, as demonstrated by the industry-leading feature set that is being deployed through our Scorpio portfolio expansion today. The maturity of the ecosystem is also accelerating with customers and suppliers working tightly to deploy initial programs in 2027. On the signal conditioning portfolio, our Aries products will expand to support PCIe 7 and our Taurus portfolio into 1.6T Ethernet, positioning us at the forefront of the next connectivity upgrade cycle.
Turning to our optical business. Astera Labs' signal connectivity business is driven by the rapid shift of AI systems towards rack scale architectures and higher compute capabilities where scaling performance increasingly depends on high bandwidth, high radix, low-latency interconnects. These requirements will expand our AI connectivity opportunities across both copper and optical interconnects.
Astera Labs is well positioned to lead this transition by extending its proven value chain approach from copper into optics. Over the past couple of years, we have been systematically investing to broaden our internal capabilities across advanced analog and mixed signal design, DSP, electronic ICs, photonic ICs and optical packaging capabilities while also deepening our supply chain relationships. Together, these capabilities will enable high-volume deployment of a complete scale-up optical engine.
We are focused on 3 areas pertaining to scale-up optics. First, high-density, detachable, [ re-fieldable ] fiber attach solutions using the core technology from our aiXscale acquisition; we expect to ship these connectors in volume starting in 2027. Second, chipsets in support of NPO that will enable multi-rack AI clusters starting in 2027. And third, eventually, fully optically enabled Scorpio X fabric switches with CPO supporting larger domains, higher egress densities and bandwidth.
Next, let me talk about our custom solutions business that also continues to make meaningful progress as we work to develop new products and close on new designs. Once again, tight collaboration with hyperscaler customers, coupled with a diverse set of foundational technology and operational capabilities have been essential to our initial success. These opportunities represent a new multibillion-dollar market opportunity for Astera Labs.
First, we are engaging with multiple customers to enable NVIDIA NVLink Fusion scale-up architecture for hybrid racks. Our strong historical execution delivering intelligent connectivity solutions for NVIDIA-based systems positions us well to develop and design within these new custom programs. Second, we are seeing new custom solution opportunities within the memory space for KV Cache applications.
We are happy to report that we have won a new design, leveraging a customized version of our Leo CXL controller to maximize performance within these AI use cases. Overall, we are pleased with the initial traction we have seen on the custom solutions front and have conviction that this opportunity set will continue to broaden and become a meaningful business for Astera Labs over the next few years. Finally, we continue to demonstrate solid momentum with our platform business as we ultimately look to expand beyond our add-in cards and smart cable modules to enable broader rack scale solutions for customers.
As we have grown from an I/O component supplier to an AI fabric solution provider over the past couple of years, customers are looking for Astera Labs to bring additional value to the AI rack at a system level. In conclusion, Astera Labs is at a key inflection point in the company's journey as we begin to ship production volumes of our scale-up AI fabrics. We are also making great strides towards broadening our business across new product categories, including optical and custom solutions as our partners look for us to deliver more value in next-generation systems.
Therefore, we will continue to strategically and thoughtfully invest as we position Astera Labs to deliver growth rates above our end market benchmarks over the long term. With that, I will turn the call over to our CFO, Des Lynch, who will discuss our Q1 financial results and our Q2 outlook.
Thank you, Sanjay, and good afternoon, everyone. I'm pleased to be joining you today for my first earnings call as the CFO of Astera Labs. I look forward to partnering with Jitendra, Sanjay and the rest of the leadership team as we continue to drive long-term value for our shareholders.
Today, I will begin by reviewing our Q1 financial results and will then discuss our Q2 guidance, both presented on a non-GAAP basis. Revenue in the first quarter of 2026 was $308.4 million, which was up 14% versus the previous quarter and up 93% year-over-year. We saw revenue growth across our signal conditioning and switch fabric portfolios, supporting both scale-up and scale-out connectivity for AI fabric and reach extension applications.
Our Scorpio product family performed well in Q1, driven by strong demand for PCIe Gen 6 switching applications and continued expansion of designs across various platforms. During the quarter, Scorpio X-Series products began shipping in initial production volumes. Looking ahead, we expect Scorpio X-Series shipments to increase in Q2, along with initial shipments of our new Scorpio X 320 lane product, which we then expect to ramp to full volume production in the second half of 2026.
Aries revenue grew on strong early adoption of our PCIe 6 solutions for both scale-out and scale-up signal conditioning. In total, PCIe Gen 6 revenue across AI fabric and signal conditioning contributed more than 1/3 of total company revenue in the quarter. Taurus also delivered solid results driven by broad adoption of AEC to extend reach in both AI and general purpose compute platforms.
Non-GAAP gross margin for the first quarter was 76.4%, up 70 basis points sequentially, primarily driven by a lower mix of hardware sales across our signal conditioning portfolio. Non-GAAP operating expenses for the first quarter were $123.9 million, reflecting continued R&D investment to support our expanding product road map, including a full quarter of our aiXscale acquisition and a partial quarter of our newly formed Israel Design Center. Within Q1 non-GAAP operating expenses, R&D expenses were $96.2 million, sales and marketing expenses were $12 million, and general and administrative expenses were $15.7 million.
Non-GAAP operating margin for the first quarter was 36.2%. We will continue to invest strategically to drive above-industry revenue growth over the long term while maintaining strong and durable profitability. For the first quarter, interest income was $11.6 million.
Our non-GAAP tax rate was 11%, and non-GAAP fully diluted shares outstanding were 181.2 million shares. Non-GAAP diluted earnings per share for the quarter was $0.61. We ended the quarter with cash, cash equivalents and marketable securities totaling $1.18 billion, flat versus Q4 as cash from operations of $74.6 million was offset by cash paid for acquisitions.
Now turning to our outlook for the second quarter. We expect revenue to be between $355 million and $365 million, up 15% to 18% sequentially, driven by continued strength across our AI fabric and signal conditioning portfolios. Aries revenue growth is expected to be driven by continued strong adoption of PCIe 6 across AI platforms, supporting both scale-up and scale-out connectivity.
Taurus growth is expected to be driven by increased volumes for AI scale-out connectivity. And in AI fabric, we expect robust growth driven by the continued early-stage ramp of our Scorpio X-Series products for large-scale XPU clustering applications as well as continued growth in our P-Series solutions and customized GPU platforms. We expect second quarter non-GAAP gross margin to be approximately 73%.
This outlook includes an estimated 200 basis point noncash impact related to a recently executed warrant agreement with one of our customers. We expect second quarter non-GAAP operating expenses to be between $128 million and $131 million. Interest income is expected to be approximately $11 million, and we expect our non-GAAP tax rate to be approximately 12%.
We expect our Q2 share count to be 184 million diluted shares outstanding. Overall, we are expecting non-GAAP fully diluted earnings per share to be between $0.68 and $0.70. This concludes our prepared remarks.
And once again, we appreciate everyone joining the call. I will now turn the call back to our operator to begin Q&A. Operator?
[Operator Instructions] We'll take our first question from Harlan Sur at JPMorgan.
Great job on the execution by the team. Your customers' compute workloads inflected from training to inference in the second half of last year, and they're essentially very focused now on monetization, right? We saw that as inferencing workloads evolved from one-shot to reasoning to now agentic, right?
This created new silicon opportunities, right? It created new storage tiers. It created more demand for high-performance CPUs.
Obviously, storage and CPUs communicate via PCIe, so that's right in the sweet spot of your technology and product leadership, right? That's one example. Your CXL solutions targeted at KV Cache applications are maybe another example. But help us understand how the transition to more inferencing-based workloads, especially agentic-based workloads, has potentially helped to create new opportunities for the team and potentially expand your SAM opportunity.
Harlan, thank you. This is Jitendra. Let me try to take a stab at that.
You point out very correctly that inferencing has created a lot of focus in the industry and a lot of additional opportunities. The good news is that at Astera, we've been focused on these AI applications from the start. And we helped the training workloads when the training workloads were still the mainstream.
And now we are helping the inferencing workloads equally well. The KV Cache offload is a great opportunity where we mentioned earlier that we picked up a new design for a custom application. For KV Cache offload, that's really a key part of AI inferencing.
I also want to draw your attention to the newly introduced Scorpio X 320 lane family that supports in-network compute and hypercast. Both of these are extremely important technologies to reduce the networking overhead and deliver additional performance for training as well as inferencing. Not only that, we enable these hardware-accelerated modes through our COSMOS software, which now not only gives our customers the ability to do diagnostics and telemetry, but also allows them to uniquely improve the performance of their systems for their inferencing workloads using these unique capabilities that we have developed in tight collaboration with our customers.
We'll move to our next question from Blayne Curtis at Jefferies.
I'll echo the congrats on the nice results. Maybe you can -- in terms of the Scorpio ramp, I know last quarter you talked about it being 20% of revenue. It's a big ramp.
I'm assuming that's the biggest driver into June. I was wondering if you can kind of frame just how big that is. And then I'm curious, particularly this 320 lane product that's ramping, like what are the milestones?
And what's left to do? You've sampled it, but to get that to production in an AI server, I'm just kind of curious what's left there.
Blayne, it's Des. Thanks for your question. We've been very pleased with the performance of our Scorpio product family.
It's certainly been a large driver of our growth in the sort of first half of the year. We continue to expect to see Scorpio P continuing to ramp driven by scale-out opportunities. And in Scorpio X, this is really a greenfield opportunity for us associated with scale-up connectivity.
The small radix solutions are ramping today, and we do expect to see the layering in of the high radix configurations in the second half of the year. Given the size of the opportunity and the associated dollar content, we would expect to see that Scorpio will become our largest product line by the end of the year, which is strong performance for a product line that was only 15% of total company revenue last year. And as we go throughout the year, I would expect to see X-Series revenue exceeding P-Series.
But overall, we're very pleased with the performance of the Scorpio product family and the outlook of the business.
Blayne, to your second point about other milestones, we are already shipping, as Des mentioned, the Scorpio X -- the newly introduced Scorpio X family. And you'll be able to see and touch and feel this at Computex where we will be demonstrating this live in our booth.
We'll move next to Joe Moore at Morgan Stanley.
You talked quite a bit about your optical strategy. Can you -- I guess, can you talk about the time frame where you see optical scale-up becoming more relevant? And do you have the building blocks that you need to progress from copper to optical in that space?
Do you need tuck-in type technologies? And do you need to invest a lot more? Just a general sense of what it's going to take to transition from copper to optical over the next several years?
Thanks for the question. This is Sanjay here. Yes.
So we have been working for the last couple of years building all the foundational things that are required for optical enablement: all of the mixed signal technology that's required, all of the electronic ICs, and we also did the acquisition of aiXscale that brought in the pluggable connector as well as the PIC technology. So in general, I want to say we have made tremendous progress in preparation for the optical opportunities that are coming up for us. In terms of time line, what we believe is that the NPO-based opportunities, or Near-Package Optics, would be the first to ramp, and that will start happening in 2027.
We will also be ramping our pluggable connector technologies for CPO, mostly for scale-out, next year in 2027, with more of the mainstream deployments for CPO happening in the 2028 time frame. So in general, for us, between the components that we are building that go inside the NPO, the detachable connector technology for folks that have their own CPO solutions, as well as our own Scorpio X devices that will come in to support both NPO and CPO variants, we believe it's all been coming together nicely for us. One key consideration, of course, that we've been working on is the supply chain and getting all of the commitments in place so that we can not only provide the technology that's required for NPO and CPO, but also make sure that we are able to ship to revenue.
And I think overall, there's quite a bit of work and progress that we have done, enabling us to start ramping in 2027.
We'll take our next question from Ross Seymore at Deutsche Bank.
Congrats on the strong results and guide. I just want to talk about a small part of your business today, but something that sounds like it could grow a little faster than we thought before, and that's specifically your Leo product line. Given the dominance or resurgence of CPU demand and memory being such a large cost and bottleneck these days, how has the demand trajectory and growth potential changed, in your view, for your ability to do the pooling and the sharing on the memory side and CXL in general?
Yes, we are definitely seeing increased traction for CXL, not only for the general purpose compute application where we started, but also for AI inferencing, as we touched upon earlier. Just kind of staying with general purpose compute first, we are seeing additional demand from our customers. We are on track for deploying this with Microsoft Azure for the M-Series instances at the data center.
So that's in private beta now expected to go into general availability end of the year. And we see additional customers also kind of following suit for this particular high-memory type application. In addition, we are also excited by the new KV cache offload or AI inferencing opportunities where some of our customers have already designed us in.
In fact, we picked up our second design win of a custom application for CXL earlier this quarter. And we are working with our customers, which is an additional new hyperscaler. We are working with them on at-scale performance tests and expect that one to ship for revenue in 2027.
We'll go next to Tore Svanberg at Stifel.
Congrats on the record quarter. And Des, welcome on board. I wanted to follow up on what you said about Scorpio mix as we approach the end of the year, especially in relation to Aries because obviously, Aries is now ramping in PCIe Gen 6.
Next year, obviously, there are going to be a lot of mixed networking topologies. So I understand Scorpio will be the sort of biggest product by the end of the year. How should we think about '27 between Aries and Scorpio, because obviously, there are significant drivers for both?
Tore, thanks for the question. Yes, we've been very pleased with the growth rates of our Scorpio product family. As I mentioned earlier, really excited about the continued growth opportunity ahead of us.
That said, we still expect to see strong growth within the Aries product line. We expect to continue to grow our sort of leadership position there. We expect to see strong growth given the PCIe 6 portfolio.
It's just the fact that Scorpio will continue to be our sort of largest and fastest-growing sort of business within the company here.
Next, we'll move to Ananda Baruah at Loop Capital.
And yes, congrats on the great execution here. I guess the question, guys, would be: what's a good way to think about this, particularly with all the additional context you've given around Scorpio X and Scorpio P lanes progressing through the back half of '26? As we move forward post '26 and clusters get bigger and presumably high-radix switches have more ports, should we expect Scorpio X and Scorpio P switches to continue to increase the lane count?
And if so, what -- is there any useful anecdotal way to think about like how that may occur? Or should we just think that, that sort of can continue in some perpetuity?
Thanks for the question. We can talk for an hour just on that topic. But let me say this.
The AI fabric switches have become a very important part of our overall strategy, and we are investing heavily in not only the current generation that we have announced, but also upcoming devices. We are going to continue to focus on PCI Express because that is a large part of the business today. But we are also working on UAL products that will form the basis of the next generation of these devices.
In terms of the lane count, et cetera, we work very, very closely with our customers to understand what their deployment profile is going to look like, because it's really important to target the right lane counts and radix for these devices. If you don't, then the cluster sizes get limited, and if you over-index, then you come up with a solution that is not competitive. Fortunately, we have very, very good partnerships with our customers, and they are telling us what the deployment looks like. And I also want to add to that, that as the [Technical Difficulty] sizes increase, not only is it important to have a switch, it is also important to have the right media types for the deployment.
So for our family of switches, we will continue to support copper connectivity as we have so far. But as Sanjay mentioned earlier, increasingly, we will enable optical connectivity as well, starting with NPO with the next generation of switches and then going to CPO. And it is very important to understand that as a switch company, it gives us a perfect opportunity to deploy optical solutions.
And that's something that we will completely leverage and make sure that we support an end-to-end connectivity with our switches, including copper, NPO and CPO.
We'll take our next question from Natalia Winkler at UBS.
Congratulations on the results. I was wondering if you can add a little bit more color on the NVLink Fusion opportunity for you guys. Specifically, how do you see from a standpoint of portfolio, maybe where it would be most interesting for you?
And also from the standpoint of competitive landscape given some of the partnerships that NVIDIA has for the NVLink Fusion as well.
Yes. Thanks for the question. So in general, if you look at our business, you can broadly divide that into 3 categories.
There is a standard business that we're doing; the custom business; and then, of course, the module and the solution business that we have. Clearly, an area that we see tremendous opportunity for us going forward is the custom solutions under which we are developing the NVLink Fusion type of devices. And this actually is proving to be pretty interesting.
We do have several opportunities. We're very deep in engagement for an initial design win in collaboration with NVIDIA and then a hyperscaler. So that project is going well.
So we do expect that to start contributing revenue in 2027, as some of the GPUs designed for this kind of use case come to market. This is what is called a hybrid rack situation, where the GPU or the XPU still talks native protocols, which could be a protocol like PCIe or UALink, among others. But when they need to cross over and talk to an NVLink type of ecosystem, they would need a product based on NVLink Fusion, which we are developing. So in short, I would say that we are very deep in engagement from a silicon development standpoint.
So we do expect that this will start providing some meaningful revenue in 2027 and then growing from there. The second part of your question was competitive situation. I mean, obviously, this is an ecosystem that NVIDIA is creating with NVLink Fusion.
There are others. But for us, the main thing is that we have been engaged with real customers, real applications. And to that standpoint, we will continue to focus on that and do what we need to do and not get distracted by any competitive press releases.
We'll go to our next question from Sebastien Naji at William Blair.
Congrats on the strong results. My question is on the Scorpio business and maybe a little bit of a follow-up to one of the prior questions. But with your announcement of the new 320 lane Scorpio switches for both the X and P-Series, how should we be thinking about ASPs for the higher radix solutions?
Is it right to think that your dollar content is correlated directly to the lane count? Or is there another way to think about your dollar content? Just any details there?
Yes. So in general, what I would say is the bigger the switch, the higher the ASP. That's the way the industry works.
But also, please keep in mind that these switches are more like AI fabric-class devices, which are a lot more than just the number of lanes, right? We talked about in-network compute. We talked about hypercast.
We talked about several features that we have that are unique and critical for deploying AI clusters, whether for training or, more and more, for inference applications where things like latency become super important. So when it comes to ASP, obviously, it's a combination of what features are enabled, not just the port count. But we do see our content continue to increase.
To that point, with the design wins we have going forward, we are expecting over $1,000 worth of content per accelerator. That is, of course, significant, and it's growing rapidly for us when you consider the path we have taken so far, from offering retimers to now offering a complete AI fabric. And with future products such as optically enabled switches, you can only imagine that this content per accelerator would grow from here.
We'll go next to Quinn Bolton at Needham.
Let me offer my congratulations as well. I guess you mentioned just the KV Cache offload custom design. I'm wondering if you might be able to put any sort of numbers around it in terms of like dollar content per CPU or dollar content per gigabyte or terabyte of memory that's attached.
Is there a way we can think about how that opportunity or how to size that opportunity?
Yes. So these are going into new inference applications. So there are multiple use cases and platforms that we see for this.
So in that context, this would be a significant opportunity for us to execute and deliver on. In terms of the exact dollar association, I would say it's probably a little bit early because some of the platforms and architectures are still being finalized. But in general, for us, as was highlighted earlier, inference and KV Cache represent a significant opportunity.
We have the IP not just for memory but also for things like KV Cache acceleration as part of our portfolio right now. So to that point, we will increasingly develop products that provide more function and capability to ensure that memory is available for KV Cache use cases. And I will also say that the ASPs would continue to be pretty meaningful when you think about the cost of the memory.
In other words, the cost of these controllers will always pale in comparison to the amount of money that people are paying for the memory itself. So in some ways, what I'm trying to say is that these are not ASP-challenged products, and we will continue to make sure that we extract the most value out of them.
We'll move to our next question from Karl Ackerman at BNP Paribas.
This is Sam Feldman on for Karl Ackerman. So you mentioned Near-Package Optics as a preliminary step toward CPO. From Astera Labs' point of view, do you believe customers view XPO as a viable option for extending pluggable optics?
And does Astera plan to participate in the XPO MSA?
That's a great question. We clearly work very closely with our customers to understand what solutions they are looking for. XPO is a pluggable technology that has come about recently, and we will certainly participate in that.
But not all of our customers, at the moment, are looking to intercept XPO. For the customers that are looking to intercept with NPO, we will certainly support them, because it gives you a way to have very high egress density without the constraints of faceplate density. For the customers that want us to work directly on CPO, we absolutely will work with them.
As Sanjay mentioned earlier, we are engaged in that opportunity that should ship here in 2027. And for customers that are looking to do XPO, we will engage with them as well. But right now, our focus has been on NPO and CPO so far.
We'll take our next question from Suji Desilva at ROTH Capital.
Welcome, Des. Just a bigger picture question. I mean you mentioned the word custom quite a bit on this call more than in the past.
When you first IPO-ed, Hopper was there with Aries, and that was a fairly standard product. Are we past the point, or evolving to the point, where standard products are not as applicable to you because each platform is different? And should we think of all products as having some customization?
Or where is the line there? Just trying to understand.
Yes. I'm glad you asked the question. So if you think about infrastructure and AI use cases, they all are bespoke, and they all are unique between platform and between customers.
But having said that, if you look at the software-defined architecture we have with our products, even our standard products, Aries, Taurus, Scorpio and so on, provide a ton of customization that customers leverage through the COSMOS interface. COSMOS allows them to not only monitor but also customize. And now with the new devices that we announced today, they can do a lot more from a performance standpoint and with other key offload feature enablement.
So all in all, customization has been our story, delivered through software-defined architectures and offered through our standard products. But when we talk about our custom business, the business model is different, right? We are developing a product for a given customer under a business model that would include NREs and other ways of paying for the development and, of course, the product revenue that comes when the product starts shipping.
So in general, what we see is that as we're getting into bigger devices, whether it's for fabric class or other connectivity type of technology that goes beyond sort of what we have done so far, having the custom solution portfolio is important, and we are approaching that with our customers by also offering a variety of foundational technology that we've been building for the last couple of years. So in general, we would see custom being an important growth driver for us. But at the same time, please think about our business in a way where the standard products will continue to be a very important part of our overall portfolio.
We will do custom, but we will be very systematic about it. We will not take every opportunity that comes our way, because sometimes a custom opportunity can be so unique to one customer that it carries a lot of risk on margins and so on. So to that point, we will make sure that we are systematic and thoughtful about the opportunities we pursue on the custom side.
We'll go next to Mehdi Hosseini at Susquehanna.
This is Bastien filling in for Mehdi. Congrats on the quarter and welcome, Des. I guess I wanted to follow up on UALink.
Can you share an update on the adoption process and the expected time line for UALink-based switches? And what do you expect the dollar content to be? How should we think about the difference between kind of the PCIe switching pricing and the UALink pricing?
Yes. So I think within the last 3 to 6 months, we've had a couple of announcements from our hyperscaler customers on what the intercept is. Both Amazon and AMD have said that their respective ASIC and GPU will launch sometime in 2027, and we'll certainly be prepared to intercept those launches with our UALink switch.
In terms of the comparison of our UALink switch to PCI Express, maybe a couple of things to state. First, as we go into the new generation of devices, both the complexity and the speed of these devices are going up, sometimes in terms of lane count, other times in terms of radix. So the value that we are able to charge for these devices will be substantially higher relative to what we are able to charge for the PCI Express switches.
The second thing that I'll mention is the media attach also tends to change. So we may go from majority copper to a blend of copper and NPO with the next-generation switches, which also gives us a meaningfully larger opportunity in terms of revenue and the TAM that we are able to address. Finally, that leads up to CPO, which is a really rich opportunity with a very large TAM, all addressable because we have the platform in the form of the Scorpio X switches.
And we'll move next to Tore Svanberg at Stifel.
Yes. I just had a quick follow-up on capacity. So your inventory days, I think, came in at 75 days.
It seems like a little bit at the lower end. But I guess, are you feeling good about being able to continue to at least double revenues this year and next year based on the capacity commitments you have today?
Tore, it's Des here. Yes, based upon our current view of demand, we do have supply in place through the end of the year, and we're very comfortable with our inventory holdings here. Like others within the industry, we continue to see pockets of supply challenges.
But we've done a really nice job of diversifying our back-end supply chain, and we've been able to make sure that we have sufficient supply in place to meet our revenue commitments. So no concerns just now, and we continue to work with our supply chain partners on supply going into 2027.
And that concludes the question-and-answer session. I'll turn the call back over to Leslie Green for closing remarks.
Thank you, Audrey, and thank you, everyone, for your participation and questions. Please do refer to our Investor Relations website for information regarding upcoming financial conferences and events. Thanks so much.
And this concludes today's conference call. Thank you for your participation. You may now disconnect.