Leslie Green; Investor Relations; Astera Labs Inc
Jitendra Mohan; Chief Executive Officer, Director; Astera Labs Inc
Sanjay Gajendra; President, Chief Operating Officer, Director; Astera Labs Inc
Michael Tate; Chief Financial Officer; Astera Labs Inc
Harlan Sur; Analyst; JPMorgan
Blayne Curtis; Analyst; Jefferies
Joseph Moore; Analyst; Morgan Stanley
Tore Svanberg; Analyst; Stifel
Tom O'Malley; Analyst; Barclays
Ross Seymore; Analyst; Deutsche Bank
Quinn Bolton; Analyst; Needham
Atif Malik; Analyst; Citi
Suji DeSilva; Analyst; Roth Capital Partners
Richard Shannon; Analyst; Craig-Hallum Capital Group LLC
Operator
Good afternoon. My name is Jale, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs Q4 Earnings Call. (Operator Instructions)
I will now turn the conference over to Leslie Green, Investor Relations of Astera Labs. Leslie, you may begin.
Leslie Green
Thank you, Jale. Good afternoon, everyone, and welcome to the Astera Labs fourth-quarter 2024 earnings conference call. Joining us today on the call are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President and Chief Operating Officer and Co-founder; and Mike Tate, Chief Financial Officer.
Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations and the markets in which we operate. These forward-looking statements reflect management's current beliefs, expectations, and assumptions about future events which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and the periodic reports and filings we file from time to time with the SEC, including the risks set forth in the final prospectus relating to our IPO and our upcoming filing on Form 10-K.
It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statements.
In light of these risks, uncertainties and assumptions, the results, events, or circumstances reflected in forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call as a result of new information, future events, or changes in our expectations, except as required by law.
Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be an important measure of the company's performance. These non-GAAP financial measures are provided in addition to and not as a substitute or superior to financial results prepared in accordance with US GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website and will also be included in our filings with the SEC, which will also be accessible through the Investor Relations portion of our website.
With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera. Jitendra?
Jitendra Mohan
Thank you, Leslie. Good afternoon, everyone, and thanks for joining our fourth quarter conference call for fiscal year 2024. Today, I'll provide an overview of our Q4 and full year 2024 results, followed by a discussion around the key secular trends and the company-specific drivers that will help Astera Labs deliver above-industry growth in 2025 and beyond. I will then turn the call over to Sanjay to discuss our medium- and long-term growth strategy in more detail. Finally, Mike will provide details of our Q4 2024 financial results in addition to our financial guidance for Q1 of 2025.
Astera Labs delivered strong Q4 results and set our sixth consecutive record for quarterly revenue at $141 million, which was up 25% from last quarter and up 179% versus Q4 of the prior year. Revenue growth during Q4 was primarily driven by our Aries PCIe retimer and Taurus Ethernet smart cable module product families. Within these product families, we saw additional diversification, driven by strong demand for both AI scale-up and scale-out connectivity. Leo and Scorpio momentum also continued, with both products shipping in preproduction volumes during the quarter to support our customers' qualifications across multiple platforms for a variety of use cases.
Looking at the full year, our strong Q4 finish culminated in an outstanding 2024, which saw full year sales increase by 242% year over year to $396 million. The value of our expanding product portfolio across both hardware and software was reflected in our robust fiscal 2024 non-GAAP gross margin of 76.6%. Over the past 12 months, we have aggressively broadened our technology capabilities by investing in our R&D organization to solve next-generation connectivity infrastructure challenges.
We successfully increased our head count in 2024 by nearly 80% to 440 full-time employees. In Q4, we also closed a small, but strategic acquisition that included a talented group of architects and engineers to help accelerate our product development, strengthen our foundational IP capability and provide holistic connectivity solutions for our hyperscaler customers at a rack scale. Our revenue growth in 2024 was largely driven by Aries products, along with a strong ramp of Taurus in the fourth quarter.
We expect 2025 to be a breakout year as we enter a new phase of growth, driven by production revenue from all four of our product families to support a diverse set of customers and platforms. In 2025, Aries and Taurus retimers are on track to continue their strong growth trajectory. Also, Astera Labs is poised to be a key enabler of CXL proliferation over the next several years, with the volume ramp of our Leo family expected to start in the second half of '25.
Finally, our Scorpio smart fabric switches will begin ramping this year with new and broadening engagements for scale up with our X-Series and scale out with our P-Series switches. In time, we expect Scorpio fabric switches to become our largest product line, given the size and growth of the market opportunity for AI fabrics.
Across our industry, hyperscalers are pushing the boundaries of scale up compute to support large language models that continue to grow in capacity and complexity. Recent algorithmic improvements have shown the potential to deliver AI applications with better return on investment for AI infrastructure providers. These innovations enable increased adoption and broader use cases for AI across the industry.
The secular trends underlying our business are projected to be robust in 2025, driven by growing CapEx investments by hyperscalers in AI and cloud infrastructure. Hyperscalers are deploying internal ASIC-based rack-scale AI servers that use end-to-end scale-up networks to deliver larger, higher-performance, and more efficient clusters.
These scale-up networks require every accelerator to connect with every other accelerator through fully nonblocking, high-throughput, low-latency data pipes. This drives the need for more and faster interconnects, and the homogeneity of such a system allows for many optimizations and innovations.
For PCIe-based scale-up clusters, our innovative Scorpio X-Series and Aries retimer families are perfectly suited to provide a custom-built interconnect solution. In addition to PCIe-based scale-up opportunities, we are excited about the next, potentially broader opportunity with Ultra Accelerator Link, or UALink. This is an impactful initiative by the AI industry to develop an open scale-up interconnect fabric for the entire market.
Early in 2025, significant progress has been made advancing the development of UALink to provide the industry with a high-speed scale-up interconnect for next-generation AI clusters. The UALink Consortium recently expanded its Board of Directors to include several more technology leaders, including Alibaba Cloud and Apple.
Given our intimate involvement within this open standard, we are seeing overall engagement accelerate for Astera Labs' next-generation high-speed connectivity solutions. In summary, we anticipate the market opportunity for high-speed connectivity to increase at a faster rate than underlying AI accelerator shipments.
We look to take full advantage of these robust trends by broadening our existing product portfolio with differentiated hardware and software solutions across multiple protocols and interconnect media. We are accelerating our pace and level of development driven by our customers to deliver new products that address the rapidly growing market opportunity ahead of us.
With that, let me turn the call over to our President and COO, Sanjay Gajendra, to discuss our growth strategy in more detail.
Sanjay Gajendra
Thanks, Jitendra, and good afternoon, everyone. 2024 was a significant year for Astera Labs as we diversified our business across multiple vectors. We launched our Scorpio fabric switches, generated revenue from all four of our product lines, and transitioned into high-volume production for Aries and Taurus smart cable modules.
We also started ramping multiple new AI platforms based on internally developed AI accelerators at multiple customers to go along with continued momentum with third-party GPU-based AI platforms. This expansion to internal accelerator-based platforms took off in the third quarter and helped us establish a new revenue baseline for our business with continued growth in the fourth quarter.
As we look into 2025, we see strong secular trends across the industry, supported by higher CapEx spend by our customers, broadening deployment of AI infrastructure driven by more efficient AI models, and company-specific catalysts that should drive above-market growth rates for Astera Labs.
Specifically for 2025, we expect three key business drivers: one is the continued deployment of internally developed AI accelerator platforms that incorporate multiple Astera Labs product families, including Aries, Taurus, and Scorpio. As a result, we'll continue to benefit from increased dollar content per accelerator in these next-generation AI infrastructure systems.
Based on known design wins, backlog, and forecasts from multiple customers, we see strong continued growth of our Aries PCIe Gen 5 products in 2025. As AI accelerator cluster sizes scale within the rack and rack to rack, we see meaningful opportunities to drive reach extension with our Aries retimer solutions in both chip-on-board and smart cable module formats.
Our Taurus product family has demonstrated strong growth over the past several quarters, paving the way for solid revenue contributions in 2025. We continue to see good demand for Taurus 400-gig Ethernet solutions utilizing our smart cable modules, both for AI and general-purpose compute infrastructure applications. Looking ahead, we view the transition to 800-gig Ethernet as a late-2025 event, creating a broader market opportunity in 2026.
Additionally, design activity for Scorpio X-Series products across next-generation scale-up architectures in AI accelerator platforms is also showing exciting momentum as we continue to broaden our customer engagements. The Scorpio X-Series is built upon a software-defined architecture, leverages our COSMOS software suite, and supports a variety of platform-specific customizations, which enables valuable flexibility for our customers.
We are pleased to report that we have received the first preproduction orders for our Scorpio X-Series product family. The second driver for our 2025 business is the expected production ramp of custom AI racks based on industry-leading third-party GPUs. We are shipping preproduction quantities to support qualification of designs utilizing our Scorpio P-Series and Aries PCIe Gen 6 solutions to maximize GPU throughput while leveraging our customers' own internal networking hardware and software capabilities. These programs are driving higher dollar content opportunities for Astera Labs on a per-rack and per-accelerator basis, and we expect volume deployments to begin in the second half of this year.
At the DesignCon 2025 trade show, we demonstrated the Better Together combination of PCIe fabric switch, PCIe retimer and 100-gig per lane Ethernet retimer solutions utilizing our COSMOS software suite. This provides deeper levels of telemetry to pinpoint connectivity issues in complex topologies, while enabling tighter integration of COSMOS APIs into our customers' operating stack.
We also showcased the first public demonstration of end-to-end interoperability between Scorpio fabric switches, Aries retimers, and Micron's PCIe Gen 6 SSDs. This demonstration highlighted the maturity of our PCIe Gen 6 solutions, growing PCIe Gen 6 ecosystem and our performance leadership by doubling the maximum storage throughput possible today and setting a new industry benchmark.
The third driver for our 2025 business is general compute in the data center. We expect revenue growth from general compute-based platform opportunities, featuring new CPUs, new network cards and SSDs with our Aries PCIe retimers, Taurus Ethernet SCM and the Leo CXL product families.
Though general compute is a smaller portion of our business compared to AI servers, we benefit from the diversity with multiple layers of growth. Overall, we are excited by the many opportunities and secular trends in front of us to drive 2025 revenues. We're also encouraged by our customers and partners increasing their trust in us and opening new opportunities for new product lines to support their platform road maps. As a result, we'll continue to aggressively invest in R&D to further expand our product and technology portfolio as we work to increase our total addressable market.
We'll build upon our semiconductor, software and hardware capabilities to address comprehensive connectivity solutions at a rack scale to ensure robust performance, maximum system utilization and capital efficiency.
As we look to 2026 and beyond, our playbook remains the same: one, stay closely aligned with our customers and partners; two, innovate exponentially in everything we do; and three, continue to be laser focused on product and technology execution.
Our long-term growth strategy is to aggressively attack the large and growing high-speed connectivity market. We estimate our portfolio of hardware and software solutions across retimers, controllers and fabric switches will address a $12 billion market by 2028. Significant portions of this market opportunity such as AI fabric solutions for back-end scale-up applications are greenfield in nature.
With a diverse and broad set of technology capabilities, we are partnering with key AI ecosystem players to help solve the increasingly difficult system-level interconnect challenges of tomorrow. By helping to eliminate data networking and memory connectivity bottlenecks, our value proposition expands and will drive our dollar content opportunity higher.
In conclusion, we are motivated by the meaningful opportunity that lies before us and will continue to passionately support our customers by strengthening our technology capabilities and investing in the future.
Before I turn the call over to our CFO, Mike Tate, to discuss Q4 financial results and our Q1 outlook, I want to take a quick moment to thank our customers, partners, and most importantly, to our team and families for a great 2024. With that, Mike?
Michael Tate
Thanks, Sanjay, and thanks to everyone for joining the call. This overview of our Q4 financial results and Q1 2025 guidance will be on a non-GAAP basis. The primary difference to Astera Labs non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today's press release available on the Investor Relations section of our website for more details on both our GAAP and non-GAAP Q1 financial outlook as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call.
For Q4 of 2024, Astera Labs delivered record quarterly revenue of $141.1 million, which was up 25% versus the previous quarter and 179% higher than the revenue in Q4 of 2023. During the quarter, we enjoyed strong revenue growth of both our Aries and Taurus smart cable module products supporting both scale up and scale out PCIe and Ethernet connectivity for AI rack-level configurations.
For Leo CXL and Scorpio smart fabric switches, we shipped preproduction volumes as our customers work to qualify their products for production deployments later in 2025. Q4 non-GAAP gross margin was 74.1%, down from September-quarter levels due to a product mix shift toward hardware-based solutions with both our Aries and Taurus smart cable modules. Non-GAAP operating expenses for Q4 were $56.2 million, up from $51.3 million in the previous quarter as we continue to scale our R&D organization to expand and broaden our long-term market opportunity.
As previously mentioned on this call, we closed a small acquisition toward the latter half of the quarter, which also contributed to slightly higher spending during the period. Within Q4 non-GAAP operating expenses, R&D expense was $37.8 million, sales and marketing expense was $8.1 million, and general and administrative expense was $10.4 million. Non-GAAP operating margin for Q4 was 34.3%, up from 32.4% in Q3, which demonstrated strong operating leverage as revenue growth outpaced increased operating expenses. Interest income in Q4 was $10.6 million.
On a non-GAAP basis, given our cumulative history of non-GAAP profitability, starting in Q4, we will no longer be accounting for a full valuation allowance on our deferred tax assets. As a result, in the fourth quarter, we realized an income tax benefit for this change, resulting in a Q4 tax benefit of $7.6 million and an income tax benefit rate of 13%, which compares to our previous guidance of an income tax expense rate of 10%. Non-GAAP fully diluted share count for Q4 was 177.6 million shares, and our non-GAAP diluted earnings per share for the quarter was $0.37.
Excluding the impact of the Q4 tax benefit just noted and based on a 10% non-GAAP income tax rate during the quarter, as previously guided, non-GAAP EPS would have been $0.30. Cash flow from operating activities for Q4 was $39.7 million, and we ended the quarter with cash, cash equivalents and marketable securities of $914 million.
Now, turning to our guidance for Q1 of fiscal 2025. We expect Q1 revenues to increase to within a range of $151 million to $155 million, up roughly 7% to 10% from the prior quarter. For Q1, we expect continued growth from our Aries product family across multiple customers over a broad range of AI platforms. We look for our Taurus SCM revenue for 400-gig applications to also provide a strong contribution to the top line in Q1. Our Leo CXL controller family will continue shipping in preproduction quantities to support ongoing qualification ahead of the volume ramp in the second half of 2025.
Finally, we expect our Scorpio product revenue to grow sequentially in Q1, driven by growing preproduction volumes of designs for rack-scale systems. We continue to expect Scorpio revenue to comprise at least 10% of our total revenue for 2025, with acceleration exiting the year. We expect non-GAAP gross margins to be approximately 74%, as the mix between our silicon and hardware modules remains consistent with Q4. We expect first quarter non-GAAP operating expenses to be in a range of approximately $66 million to $67 million.
Operating expense growth in Q1 is largely driven by three factors: one, continued momentum in expanding our R&D resource pool across headcount and intellectual property; two, seasonal labor expense step-ups associated with annual performance merit increases and payroll tax resets; and three, a full-quarter contribution of the strategic acquisition we executed in the latter part of Q4.
We continue to be committed to driving operating leverage over the long term via strong revenue growth while reinvesting into our business to support the new market opportunities associated with next-generation AI and cloud infrastructure projects. Interest income is expected to be approximately $10 million. Our non-GAAP tax rate should be approximately 10%, and our non-GAAP fully diluted share count is expected to be approximately 180 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share in a range of approximately $0.28 to $0.29. This concludes our prepared remarks.
And once again, we appreciate everyone joining the call. And now, we will open the line for questions. Operator?
Operator
(Operator Instructions) Harlan Sur, JPMorgan.
Harlan Sur
Good afternoon, and congratulations on the strong results and execution. The one big inflection in accelerated compute in AI, as you mentioned, is the ramp of numerous AI ASIC XPU programs -- like TPU at Google, Trainium at Amazon, MTIA at Meta -- also multiple new programs in the not-too-distant future. It looks like these customer and program ramps are actually growing faster than the overall merchant GPU market.
So the question is, what percentage of your business last year came from merchant GPU AI systems versus ASIC-based systems? And where do you expect that mix to be, say, exiting this year? I mean, ASIC programs seem to be on a faster growth trajectory. You have a very, very strong attach here across your products, as you guys mentioned. I'm just wondering if the team agrees with me on this view.
Michael Tate
Yeah. We're very excited about the addition of these internal AI accelerator programs. In particular, on those programs, to the extent we're doing the scale-up connectivity, the unit volumes are up in a meaningful way compared to the merchant GPU designs that we see right now. If we look back at 2024, the first half of the year was predominantly merchant GPU since that was the first to really adopt our product lines.
And then Q3 was the first quarter it inflected up with the internal AI accelerators; especially, you see that with our Taurus and our Aries SCM business inflecting up. Q3 was a partial quarter and then Q4 was a full quarter. So it really sets up a nice baseline of revenues.
Now, if you look into 2025, we see both contributing growth. The first half of the year will be more predominantly the AI internal AI accelerator programs. But if we get into the back half of the year, the transition on the merchant GPUs will also be very strong for us. This is where you'll see the custom rack configurations start to deploy, and that's where we see a big dollar increase in our content per GPU with Scorpio starting to ramp.
Harlan Sur
I appreciate that. And on the balance sheet, inventories were up almost 80% sequentially in the December quarter. That's an all-time high for the team. I think it's up 60% versus the average inventory level over the last four quarters. Is the significant step-up reflective of a strong multi-quarter shipment profile across the overall portfolio, or maybe reflective of a step-up in more of your board-level solutions, or is it kind of a combination of both?
Michael Tate
Well, if you remember, in Q3, our revenues were very strong. We were up 47% sequentially. A lot of that strength developed during the quarter. So we drew down our inventories pretty significantly in Q3.
Now, in Q4, we had time to build back to our more normalized level. So this level of inventory is actually where we feel most comfortable.
We always want to be in a position to support upsides from our customers, because most of our programs are sole-sourced. This level reflects the growth in our business now.
Operator
Blayne Curtis, Jefferies.
Blayne Curtis
Hey, good afternoon. Thank you for taking my question. I had two. I was just kind of curious, you mentioned the strength in Taurus. I don't know if you're going to tell us how big that was in December. And then, Mike, I wanted to ask on gross margin. I know it's a mix of hardware, and you're seeing strength there for at least your ASIC customer with multiple products.
I'm just kind of curious as you think about '25 as Scorpio ramps, how that mix shifts and what the impacts are on the shape of gross margin for this year?
Michael Tate
Yeah. So for Q4, you see the margins did tick down. We did highlight that Taurus and Aries in the module form factors grew as a percentage of our total revenues, and the upside in the quarter was, in particular, from Taurus as well. So that's why the margin went down to 74.1%, which is reasonably close to what we had expected. Now, as you go into 2025, we still see good contribution from Taurus and Aries SCM modules.
But as we make it through the year, the Aries chip-on-board as well as Leo and Scorpio are positives for us as well. So we think in Q1 and Q2, we should have a consistent margin profile of around 74%. And as we've highlighted, margins will be trending down closer to the longer-term model of 70%, but it all depends on the mix of our hardware versus silicon.
Blayne Curtis
And then I was just kind of curious if you could -- now that you're a quarter removed from launching Scorpio -- just comment on the design momentum you've had in terms of the number of engagements. And I'm just curious, from a competitive landscape standpoint, for scale-up outside of NVLink, what else is out there that you're seeing that you're competing against with those products?
Sanjay Gajendra
Blayne, this is Sanjay here. Let me help answer that. So for us, as you know, Scorpio has got two series: the P-Series for PCIe head-node use cases, which tend to be pretty broad, and then the X-Series for GPU clustering on the back-end side. So overall, since we launched, we continue to pick up multiple design opportunities at this point, both for the P-Series as well as for the X-Series.
The X-Series tends to be more of a longer design-in and qualification cycle just because it's going into the back-end GPU side, not the front end. The P-Series is what we expect to start contributing meaningful revenue starting in the second half of this year as the production volumes take off. We have been shipping preproduction for the P-Series already, and for the X-Series, we started receiving our first preproduction orders.
So to that standpoint, what I want to share is that the momentum on Scorpio has been definitely more than what we expected, largely driven by the feature set that we have implemented, where Scorpio is the first set of fabric devices for PCIe and back-end connectivity that's been developed ground-up for AI use cases.
So to that standpoint, the customers see the value in the features we have and the performance that we're delivering.
Jitendra Mohan
Blayne, this is Jitendra. Your question on what else is out there: clearly, NVLink is the fabric that is most widely deployed within the NVIDIA ecosystem, and we don't play there. Other than that, there is, of course, the PCIe-based scale-up network that Sanjay just talked about. And the other alternative is Ethernet-based scale-up networks.
And the difference between the two is really that Ethernet is a very commonly used standard, but it was not designed for scale-up. So the latencies are quite high and you don't quite get the performance out of Ethernet, which is why we see many of our customers gravitate toward PCIe-based systems, which are inherently better suited for this application. Now, what we see happening in the future is the industry might try to get behind UALink, which is a developing standard that we are very excited about.
And with UALink, you get the benefit of both Ethernet speeds as well as the lower latency and the memory-based IO of the PCIe-like protocols. Now, over time, we do expect Scorpio to become our largest product line just because the market for scale-up interconnect is so large. So we are very excited about what's coming in this space.
Operator
Joe Moore, Morgan Stanley.
Joseph Moore
Great. Thank you. I know there's a lot of attention on the DeepSeek innovations that we saw a couple of weeks ago. Can you talk about what you've seen, how you would position that with regards to other innovations that we've seen? Do you see it as inflationary to the long-term opportunity in AI? Just would love to get your perspective on this.
Jitendra Mohan
Joseph, this is Jitendra again. So let me start on that. First of all, you're right, there is a lot of discussion, a lot of articles that have been written about DeepSeek, so I'm not sure exactly what we will be able to add. But what I do want to point out is that what matters most is what our customers, the hyperscalers, think about DeepSeek.
And in the face of that announcement, they have all gone and increased their CapEx spending. So that really shows that the hyperscalers believe in the future of AI, and the continued demand for GPUs and accelerators is likely to continue. And I can give you my perspective on why that is, if you break it down into two parts. First, if you look at inference, DeepSeek has shown that algorithmic improvements will drive the cost of inference lower. And we have seen time and again that when the cost goes down, the adoption goes up.
It happened with PCs. It happened with phones. It happened even with servers when virtualization kicked in, and we do think that AI will follow a similar trajectory, so it gets more adoption. And then if you look at training, as consumers, you and I are all looking for better results from these models.
And by embracing some of the innovations that the DeepSeek team has put forward, the quality of results from these models will go up, and that again is beneficial for the overall AI ecosystem. So our focus has always been on enabling AI infrastructure with both third-party GPUs as well as ASIC platforms. And to the extent that any of these dynamics change, we stand to benefit from the trends.
Joseph Moore
That's very helpful. And then for my follow-up, you talked about the Leo ramp in the second half of the year. I know you're seeing quite a bit of interest in memory-bandwidth-boosting capabilities. Can you help us size how important that could be in the second half?
Michael Tate
Yes. So we've been working with our customers as next-generation CPUs that support CXL come to market. These are initially going to be very memory- and data-intensive applications, high-performance compute type applications. So we'll see those start to deploy in the back half of the year. Ultimately, longer term, we do expect CXL technology to be very beneficial for more mainstream general compute. So we hope to see that play out in 2026 to '27.
Jitendra Mohan
And to add to what Mike said, just to give you a little bit of a broader picture: based on the fact that we have been working closely with the hyperscalers and the CPU vendors for quite some time now, it's become pretty clear that there are three or four sorts of applications that are driving the ROI or the use cases for CXL. So at this point, we understand that the first one is getting deployed in the second half of this year, and we do expect additional use cases and the associated opportunities to come along probably in 2026 and beyond.
Operator
Tore Svanberg, Stifel.
Tore Svanberg
Thank you, and congrats on the strong results. I had a question on Scorpio. I think you said you still expect it to be more than 10% of revenues in calendar '25. Obviously, that number is now higher. Is that going to be predominantly the P-Series, or are you going to get some contribution already from the X-Series in calendar '25?
Michael Tate
We expect contribution from both. The P-Series will be first to launch, and then for the X-Series we do expect contribution in the latter part of the year.
Tore Svanberg
Great. And as a follow-up on the Taurus, could you just talk a little bit about the revenue profile there? How diversified is it by the customer and use case? And how do you think about that business first versus second half? Because obviously, if it is more diversified in nature, then obviously, maybe second half will be even greater than first half. But yes, just some profile of that revenue base right now.
Michael Tate
Yes. So we're shipping both 200 gig and 400 gig; 400 gig is what really launched here in Q3 and Q4. And we have multiple designs across different types of configurations, and we also support different cable providers and different form factors. The 400-gig opportunity set out there is still relatively limited, so we've been focusing primarily on our lead customer.
This should continue to provide good, strong growth in 2025, driven by both AI and general purpose. And then in the latter part of the year, we see the market start to transition to 800 gig.
Operator
Tom O'Malley, Barclays.
Tom O'Malley
Hey, guys. Thanks for taking my questions. My first was, in your prepared remarks, you mentioned that over time, Scorpio would become your biggest product line. I don't think you've mentioned that before. Perhaps you could talk about the timeframe you're thinking about for that product line taking over? Is this something we should be seeing potentially as early as 2026? And is that a function of just Scorpio growing faster than you had originally thought, or perhaps Aries coming down to some extent? Just understanding why you made that comment in the preamble.
Jitendra Mohan
Yeah. So to add some color to that, the ASP profile of a retimer-class device and a switch device tends to be very different, meaning on the switch side, we do get a significantly higher ASP. And if you look at, at least for the customized AI racks being deployed, we are actually adding a Scorpio socket to go along with the retimer socket. And given that attach rate configuration, what we also see is that the dollar content per GPU will go up.
But in general, the switch is a much bigger TAM out there. And we get to play both in the front end with the P-Series and in the back end. The back end tends to be obviously a lot more fertile in many ways because you have many GPUs talking to each other. And we benefit from having a high-ASP device like the X-Series switches being deployed at a scale that's much more significant compared to any of the products that we have released so far. It does not mean that retimers or CXL controllers are going to go away.
It simply means that the TAM that we're able to address with Scorpio tends to be larger. And given the market momentum and opportunities that we're seeing, including some of the road map products we're developing, we feel confident that going forward, that will continue to evolve and become a flagship product, both from a technology and revenue standpoint.
Tom O'Malley
Super helpful. And then my follow-up was just, I think it was Mike's commentary on one of the first questions here on the call. You talked about the year 2025 on the Aries side, about how in the first half of the year you would see more internal AI efforts, followed by the second half of the year being more merchant GPU. That comment was a bit surprising to me, given we're going through a big product transition now at that large customer of yours.
So is there any change in the way that you see the ramp of 2025 versus where you did before? I would have anticipated maybe the merchant GPU being a bit stronger earlier in the year. Just any reason behind those comments that caught me a little off guard.
Michael Tate
Sure. Yes. So first of all, the merchant GPU drives both Scorpio and Aries. The big incremental piece on the merchant GPUs is the Scorpio content, which is all new for us. The designs that we have are complex in nature; they're all new. So to productize and ramp them, we're looking at that to start off in the back half of the year.
Right now, the first half of the year is preproduction. These are all for custom configurations, so the customization adds a little bit of lead time to the volume ramps.
Operator
Ross Seymore, Deutsche Bank.
Ross Seymore
Hi, guys. Thanks. A couple of questions. First one is a little bit higher level, and it's on diversification. You talked from a product diversification point of view with Scorpio being over 10% of your revenues going forward. But with an admittedly concentrated batch of hyperscalers, there's also the customer concentration. So whether it be on the customer side or the product side or both, any way you can give us a little bit of a framework of how 2024 ended, and how you think 2025 will differ from a diversification lens?
Jitendra Mohan
Yeah. So overall in 2024, and in fact in 2025 as well, we are shipping to all the hyperscalers across the multiple different product sets that we have. So there should not be any doubt or question about that. But having said that, there are some nuances that are important to keep in mind when you're dealing with the data center market. The first thing, like we always say, is that customer concentration is an occupational hazard in the data center market, just because there are only a handful of hyperscalers.
The second thing to keep in mind is that the hyperscalers differ in terms of their maturity when it comes to internal accelerator chip development; some are more advanced than others. For us, when we are designing in and counting revenue from internal accelerator programs, obviously there will be a difference between the hyperscalers where we get to play on both the merchant as well as internal accelerator programs compared to hyperscalers where we only have the merchant silicon opportunity. So there is that nuance. The other one that is also true is that the appetite for new technology deployment differs from hyperscaler to hyperscaler. So there will be some hyperscalers that are pretty aggressive in terms of deploying new technology.
Others take some time. So you can expect that in a given window of time, there will be a situation where our revenue comes more from one hyperscaler versus another. It does not mean that the second hyperscaler is not a potential customer for us. It simply means that they take more time to deploy something, just given their own workloads and other things that they're tracking. But overall, our goal is to make sure that the revenue contribution we get reflects the share that each hyperscaler has, meaning if a given hyperscaler has a certain percentage of the market in terms of cloud services, then we expect to see similar numbers in our share.
Ross Seymore
That's very helpful. Mike, one for you. You mentioned earlier about the gross margin, why it was down a little bit below your guide in the fourth quarter, and that it stays flat in the first. It sounds like you said it stays flat in the second as well. Does that hardware module mix shift in the second half? It sounded like, from what you said, that it does go back away from the hardware side a little bit more towards the chip level. Was I hearing that correctly, or any sort of update on that would be helpful.
Michael Tate
Yes. We do see growth in the hardware, but it should stay at a similar level as the rest of the business. So it's not growing as a percentage of revenue in the first half of the year.
Operator
Quinn Bolton, Needham.
Quinn Bolton
Hey, guys. I wanted to come back to Joe's question on DeepSeek. Obviously, one of the benefits is that greater deployment of AI models probably means a shift towards inferencing. Do you guys see, on the inferencing side, the need for clusters that are as large as we've historically seen on the training side? And if we don't, if we see greater adoption of more inferencing clusters but of smaller size, is that a positive or a negative for the connectivity TAM?
Jitendra Mohan
So let me take a crack at that. First of all, at the high level, I can say that our business is not strongly dependent on inferencing versus training. We stand to benefit from both of those. Now, the point that you made is valid, that you don't need as large of a cluster for inference as you need for training. Having said that, if you look at the DeepSeek announcement, or even for that matter other folks that are announcing as well, these chain-of-thought models actually require far more compute than models have required historically.
So over time, we do expect that the unit of compute will become a kind of rack-level AI server. What typically used to be a 2U or 4U server will now be at a rack level. And when you go to a rack level, you come up with these different connectivity challenges that we are very well positioned to address, as we are doing with many of our different product lines.
So all in all, as this unit of compute goes to rack level, we will see higher opportunity. We already are participating in other form factors with our Scorpio P-Series and Aries retimer type products. And so overall, we don't see this as a headwind or a tailwind; we just tend to benefit either way.
Quinn Bolton
Got it. That's very helpful. And then my second question is just on the Scorpio P family. You mentioned that growth is really driven by the sort of custom versions of merchant GPU-based platforms. Wondering, do you guys have engagements for Scorpio P on the ASIC platforms as well?
Sanjay Gajendra
Yes, absolutely. For the customers that we engage right now, we do see opportunities for both the P-Series and the X-Series as it relates to ASIC platforms. The P-Series is more for the head-node connectivity, to interconnect GPUs with CPUs, storage, and networking, and the X-Series is for the back end, to interconnect the GPUs themselves.
Operator
Atif Malik, Citi.
Atif Malik
Hi. Thank you for taking my question. My first question is on co-packaged optics. Jitendra, Sanjay, there's a bit of discussion in terms of its timing. Can you share your thoughts on the volume of that and what impact it could have on your retimer and PCIe opportunity?
Jitendra Mohan
So let me start. First of all, at the high level, we don't expect CPO to negatively impact our business in the near future, either for the current products that we have or even for our next generation of products. The way we look at it, at 200 gigabits per second per lane, the connectivity at the rack level will remain largely copper. And as a matter of fact, I think the industry will work very hard to even keep it at copper at 400 gigabits per second.
The reason for that is our customers really prefer copper, because it is easier to deploy. When you go to optical, specifically CPO-type solutions, you introduce a lot of additional components into what used to be a purely silicon-based package, which has its own challenges for reliability as well as serviceability. So in general, what we have seen from our customers is that if they can stay with copper, they will stay with copper. If they cannot stay with copper because the bandwidths are too high and so on, they will go to pluggable optics.
And when pluggable optics do not remain feasible, only then do you go to co-packaged optics. As a result, we see the first instance of co-packaged optics happening where the data rates are the highest, the line speeds are the highest, and the density of interconnect is the highest, which typically happens in Ethernet switches. So our view is that that's where CPO will get deployed first, and that's not an area that we play in today. So it's not likely to impact our near- to medium-term revenues. And for the longer term, we will continue to explore different media types and keep watching the space for the solutions that our customers might like from us.
Atif Malik
Great. And Mike, can you talk about how should we think about OpEx for the year?
Michael Tate
Yeah. Like we highlighted, we're going to continue to invest aggressively in the business. Q1 is a bigger step-up given the reasons we outlined, including the small acquisition we did. So the rate of growth will hopefully normalize a little bit after that. But right now, we really believe it's the time to press our advantage and invest in the business.
We do have a goal of hitting a long-term operating model of 40% operating margins. And that will be driven more by inflections in revenue growth than by controlling our investment in R&D.
Operator
Suji DeSilva, Roth Capital.
Suji DeSilva
Hi, Jitendra, Sanjay, Mike. First question, just to be clear on the customer focus. I know it's almost entirely hyperscale. I'm just curious if you've been approached by, or are trying to engage, the emerging AI companies versus the large established ones, or if that doesn't fit in your business model?
Jitendra Mohan
It absolutely fits. And just to be very clear, we talk about hyperscalers because they are the ones that are deploying the big infrastructure right now in terms of AI training and so on. But as a company, we are tracking both the OEM space for enterprise applications as well as the emerging AI players in various different formats. So we are tracking it, but what we also believe is that the first generation of systems being rolled out right now take something that's directly available from a company that's providing GPUs, or from folks that are integrating that into OEM-level boxes. So that is the first generation.
Probably that will continue, with a little bit more customization in Gen 2, before we start seeing more hardware-level decisions being made in future generations. So overall, we are tracking it, but the fact of the matter right now is that the bulk of the TAM that's available is through the hyperscalers, and that's what we talked about. But at the same time, every OEM out there that's building AI servers is buying components from us, either directly or through baseboards that they procure from GPU platform providers.
Suji DeSilva
Okay. Appreciate the color there. And then my other question is on UALink. I'm just wondering what you think the biggest challenges or timing impacts are of cutting over from Ethernet and PCIe to UALink? And most importantly for Astera, what's the content uplift implication of UALink gaining traction?
Jitendra Mohan
No, it's a great question. UALink promises to be a very good initiative for the industry, to try to bring everybody together for a standardized scale-up cluster. So we see a lot of benefit for Astera Labs because of our prominent position on the board. Over time, we will develop a full UALink portfolio to address the connectivity requirements that our customers will have at that level.
In terms of the timing, the standards group is working to release the standard; the final specification is supposed to happen at the end of Q1. So we expect the earliest products to hit the market sometime in 2026, which is when we will start to see the first instances of UALink.
Operator
Richard Shannon, Craig-Hallum.
Richard Shannon
Hi, guys. Thanks for taking my question as well. I'll ask a question on kind of the competitive dynamics here, kind of going in between your retimer and P-Series switch products. Obviously, you have a really strong position with retimers. Coming into the market with the P-Series, you've got another competitor who's very strong there.
And while you've got COSMOS and the software barriers to switching there, switches are generally thought of as more of a sticky kind of chip. And so I'm wondering how the competitive dynamics play out here. Do we see an attach rate in the P-Series similar to what you're seeing in Aries anytime soon?
Sanjay Gajendra
Yeah. So first of all, this is a big market. So let's keep that in mind. When you start thinking about a switch type of socket, there are many sockets, Gen 6, Gen 5, and so on. So there are several different areas that we can go after.
And surely the market is seeing that, the competitors are seeing that, and you have folks that are playing there. The angle that we have taken, to the software, COSMOS, point that you made, which brings in a lot of diagnostic, telemetry, and fleet management type features, is that it scales across all of our products, Scorpio, retimers, and everything that we do. But the bigger reason why Scorpio is gaining traction is how it's architected. The incumbent switches were ultimately designed for storage applications, so obviously the feature sets that were incorporated were more tuned for addressing and attaching to SSD drives.
What we have done is essentially created a device, for the first time, where the data flows are designed for GPU-to-GPU traffic, with the bandwidth optimization and other capabilities that are needed. So in general, the functionality itself is a lot better, and that's being appreciated by our customers.
And based on that, we continue to build on the portfolio, and we do expect that we will be playing a significant role in that market. And in terms of the timing itself, yes, there is a timing aspect here. Obviously, the Gen 6 window is now, and as far as we know, we are the only ones in the market providing PCIe Gen 6 switches. So many times in connectivity, what happens is the first vendor to provide a solid product that is scaling and getting qualified, those things go a long way in terms of maintaining and building a competitive barrier.
Richard Shannon
Okay. Excellent. My second question is on the Scorpio X-Series. Wondering if you can give us a picture, looking forward here, of the size of the scale-up domains that hyperscalers are looking to do. I can't remember the exact number of what NVIDIA does for theirs today, but I imagine it's going to go up quite a bit here. And to what degree does that size play into your commentary about Scorpio eventually being your largest product line over time?
Jitendra Mohan
Let me take a crack at that. This is Jitendra. So we have mentioned previously that, not counting NVIDIA's NVLink, we expect the TAM for the X family to be $2.5 billion or more by 2028. And if you look at what it is today, it's effectively nearly zero. So it's a very, very rapidly growing TAM, and it's the largest TAM that we have, which is why we are bullish on the prospects of Scorpio becoming our largest product line over time.
Operator
There are no further questions. I turn the call back over to Leslie Green for closing remarks.
Leslie Green
Thank you, Jale, and thank you, everyone, for your participation and questions. We look forward to updating you on our progress.
Operator
This concludes today's conference call. You may now disconnect.