For years, AI models have been powered by one type of hardware: graphics processing units, or GPUs. Originally designed for rendering video game graphics, GPUs turned out to be highly effective at handling the complex mathematical calculations required for artificial intelligence. Unlike traditional computer processors (CPUs), which execute tasks sequentially, GPUs process thousands of small tasks in parallel, making them ideal for training machine learning models.
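To make the sequential-versus-parallel distinction concrete, here is a toy sketch in Python, where NumPy's vectorized operations stand in for hardware parallelism (an illustration of the concept, not a model of any specific chip):

```python
import numpy as np

# A million multiply operations, the kind of arithmetic that
# dominates neural-network training.
a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# CPU-style sequential execution: one element at a time.
sequential = np.array([a[i] * b[i] for i in range(len(a))])

# GPU-style data-parallel execution: one vectorized operation
# applied across the whole array at once.
parallel = a * b

# Identical results; the parallel form is dramatically faster.
assert np.allclose(sequential, parallel)
```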
As AI advanced, demand for computing power surged. Nvidia, the dominant supplier of GPUs, became the backbone of AI computing. Tech companies responded by building massive GPU clusters—racks of thousands of Nvidia chips linked together to handle the increasing complexity of AI training. This approach has worked, but it comes with significant trade-offs. Managing thousands of GPUs is expensive, energy-intensive, and requires specialized software to coordinate workloads across the entire system. More critically, as AI models scale up, GPUs are struggling to keep pace. The inefficiencies of distributed computing are becoming a bottleneck.
What if AI hardware didn’t need to be built this way at all?
Instead of piecing together thousands of smaller chips, what if a single, massive processor could handle an entire AI workload, eliminating the need for complex distributed systems, reducing energy waste, and dramatically simplifying AI model development?
That’s exactly the bet one company is making. Rather than relying on the decades-old GPU model, it has developed an entirely different kind of processor—one designed from the ground up to accelerate AI. Its technology has already been adopted by national research labs, Fortune 500 companies, and leading AI developers. And as demand for AI compute continues to skyrocket, this company’s approach could fundamentally reshape the industry.
Its name?
Cerebras Systems.
Since launching its first AI processor, Cerebras has rapidly gained traction among research institutions, government agencies, and private enterprises. With demand for AI compute growing at an unprecedented rate, the company has seen explosive revenue growth, major customer wins, and increasing adoption of its alternative to Nvidia’s GPU-based approach.
Cerebras Systems has grown rapidly, securing its place as a leader in AI computing. The company’s revenue and manufacturing capacity have scaled dramatically in response to rising demand for its AI processors.
2021: Raised a $250 million Series F at a $4.1 billion valuation, generating $12.5 million in revenue.
2022: Expanded into key industries like pharmaceuticals and government research, reaching 11.7% gross margins.
2023: Revenue more than tripled to $78.7 million, driven by strong demand for its CS-2 systems. To keep up, the company expanded its chip production capacity by 10x. Gross margins improved to 33.5%.
2024: Reported $136.4 million in revenue for the first half of the year, with full-year projections at $272 million, marking 245% year-over-year growth. Gross margins reached 41.1%, though volume discounts to its largest customer, G42, have compressed them slightly.
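As a quick sanity check on the 2024 figures above, using only the numbers quoted here: the full-year projection is roughly the first-half figure annualized.

```python
h1_revenue_musd = 136.4  # reported first-half 2024 revenue, $M

# Annualizing the first half gives roughly the full-year projection.
run_rate_musd = 2 * h1_revenue_musd
print(run_rate_musd)  # → 272.8, in line with the $272M projection
```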
Supercomputing Expansion
Cerebras has secured large contracts to build AI-focused supercomputers, increasing adoption across industries.
Condor Galaxy: Built two supercomputers in partnership with G42, each valued at $100 million. Seven more are planned, bringing the total deal to $900 million.
Nautilus Partnership: Launched a computing facility in late 2023 with 80 of its AI processors, creating a small but powerful supercomputer.
Customer Wins
Cerebras’ technology is used by government agencies, research institutions, and private companies tackling complex AI challenges.
Government & Research Labs: Clients include Argonne National Laboratory, Lawrence Livermore National Laboratory, and Sandia National Laboratories.
Private Sector: Companies like GlaxoSmithKline (genetic research), TotalEnergies (climate modeling), and Mayo Clinic (medical AI) are using its systems to speed up AI-driven discoveries.
Breakthrough Performance
Cerebras has demonstrated that its hardware can perform AI-related tasks significantly faster than conventional systems, reducing the time required for complex calculations.
In 2023, its systems simulated airflow in scientific experiments 470 times faster than traditional computers.
In 2024, a research team used Cerebras’ processors to run molecular simulations 179 times faster than the world’s most powerful supercomputer.
AI models for cancer research were processed 300 times faster than existing computing setups.
Beyond research labs, Cerebras’ customers have reported dramatic efficiency gains. AstraZeneca reduced AI-driven research timelines from weeks to days, while GlaxoSmithKline trained genetic models 160 times faster than on traditional hardware.
Cerebras’ rapid momentum is a testament to its ability to solve major AI computing challenges. However, its reliance on G42 for the majority of its revenue remains a potential risk, making diversification a key priority for long-term growth.
Cerebras Systems generates revenue through a mix of hardware sales, professional services, cloud-based offerings, and emerging subscription models. This diversified approach allows it to serve a wide range of customers, from government research labs to enterprise AI teams.
Hardware Sales
Selling AI hardware is Cerebras’ core business, with its CS-3 systems priced in the millions per unit. These systems are powered by its proprietary Wafer Scale Engine, a processor that is much larger and more powerful than standard chips used in AI computing. The company markets its hardware to organizations that require high-speed AI processing, particularly in industries where data security and performance are critical, such as pharmaceuticals, government labs, and cloud infrastructure.
One of its largest contracts is with G42, an Abu Dhabi-based AI firm that has committed to purchasing $1.43 billion worth of Cerebras’ systems and services. G42 currently accounts for 87% of the company’s revenue, highlighting both the strength of this deal and the risks of relying heavily on a single customer.
Professional Services
Cerebras offers specialized AI consulting and optimization services, which help customers maximize the performance of their systems. These services include:
Data Preparation – Helping clients structure their data to improve AI training results.
Model Customization – Assisting in the design of AI models tailored to specific business needs.
Performance Optimization – Providing expert guidance to ensure AI systems run efficiently.
These services account for 25-33% of new customer engagements, deepening long-term relationships with clients and generating additional revenue.
Cloud Services
Cerebras also provides access to its AI computing power through cloud-based offerings, allowing customers to use its hardware without buying it outright. The company’s cloud services include:
On-Demand Access – Customers can rent Cerebras’ systems by the hour, similar to leasing time on cloud-based GPUs.
Pay by Model – Instead of paying for computing time, customers can pay a flat fee for Cerebras to train AI models of a specified size and complexity. The company claims this service is priced competitively with Amazon Web Services (AWS) and can deliver a fine-tuned AI model in as little as two weeks.
Cerebras’ cloud offerings are positioned as a cost-effective alternative to traditional AI infrastructure, reportedly providing up to eight times faster training speeds than cloud-based GPU clusters at about half the cost.
Subscription Model
To attract a broader customer base, Cerebras is developing a subscription model in which customers install its hardware on-site without purchasing it upfront, paying a licensing fee for access instead. This approach aligns with a growing industry trend: businesses increasingly prefer flexible, pay-as-you-go pricing over large capital investments in AI hardware.
Cerebras’ revenue model has scaled quickly alongside demand for its hardware:
In 2024, revenue is projected to reach $272 million, a 245% increase year-over-year.
Gross margins have expanded from 11.7% in 2022 to 33.5% in 2023, reaching 41.1% in early 2024.
Recent volume discounts offered to G42 have compressed margins slightly but have helped drive larger deals.
By combining high-margin hardware sales with recurring revenue from services and cloud access, Cerebras is building a more sustainable business model. However, reducing its dependence on G42 and expanding its customer base will be critical for long-term stability.
Cerebras’ valuation has surged in the secondary market, reflecting growing investor enthusiasm. In 2021, the company was valued at $4.1 billion despite generating only $12.5 million in revenue—a multiple based on future potential rather than financial performance. By early 2024, its estimated valuation of $4.7 billion was more grounded, supported by a projected $272 million in revenue, implying a 17x revenue multiple—well within the range of public AI semiconductor companies (2x to 27x).
However, in recent months, secondary market prices have climbed even higher, pushing Cerebras’ estimated valuation to $7.12 billion. This brings its revenue multiple up to 26.2x, placing it at the very top of the valuation range for comparable companies. While this could suggest overvaluation, it also signals increasing confidence in Cerebras’ long-term potential, particularly as demand for AI computing continues to grow. If AI hardware adoption follows a trajectory similar to Nvidia’s, Cerebras’ opportunities could be far greater than current revenue figures suggest.
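The multiples quoted above are simple ratios of valuation to projected revenue; here is a quick check using only the figures in this section:

```python
projected_2024_revenue = 272e6  # projected full-year revenue, USD

# Revenue multiple = valuation / projected revenue.
multiple_early_2024 = 4.7e9 / projected_2024_revenue
multiple_recent = 7.12e9 / projected_2024_revenue

print(round(multiple_early_2024, 1))  # → 17.3
print(round(multiple_recent, 1))      # → 26.2
```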
AI Training: Faster Model Development
Cerebras has built its business by focusing on AI training—the process of developing machine learning models. This remains a critical and growing market, as larger and more advanced models require increasingly powerful hardware.
Training AI models like OpenAI’s GPT-4 typically requires thousands of GPUs running for weeks. Cerebras’ approach, which replaces these distributed systems with a single large processor, reduces complexity, cuts costs, and speeds up development cycles.
Expanding into AI Inference
While AI training is Cerebras’ primary focus, an even larger market is emerging in AI inference—the process of running trained models in real-time applications. This includes chatbots, autonomous systems, and AI-powered search engines.
Most inference workloads today rely on GPUs, but as AI models grow, GPUs are becoming a bottleneck. Cerebras’ technology allows AI models to generate results up to 20 times faster than traditional GPU-based setups, offering a potential competitive advantage if broadly adopted.
Cloud-Based and Subscription Growth
Cerebras is expanding beyond hardware sales by offering flexible deployment options:
Cloud services: Customers can rent Cerebras’ AI computing power on demand, reducing the need for upfront hardware investment.
Pay-by-model: Instead of renting processing time, customers can pay a fixed fee for Cerebras to train an AI model for them. This service is priced competitively with Amazon Web Services (AWS) and can deliver fine-tuned AI models within weeks.
Subscription model: Cerebras is developing an on-premise subscription offering, allowing customers to install its hardware under a licensing agreement rather than purchasing it outright.
By offering cost-effective cloud alternatives—at half the price of equivalent GPU clusters—Cerebras is making its technology accessible to a wider range of companies.
Scaling Beyond G42
Despite its rapid growth, Cerebras remains heavily reliant on G42, which accounts for 87% of its revenue. While this has been a strong driver of early success, long-term sustainability will require a more diversified customer base.
Cerebras is taking steps to expand beyond G42, including:
A multi-year partnership with Aleph Alpha to develop sovereign AI models for the German government.
Collaborations with Dell Technologies and Qualcomm, opening the door to broader AI ecosystem integration.
If Cerebras successfully expands its customer base, it will strengthen its long-term stability and reduce the risk associated with a single dominant buyer.
Final Outlook
Cerebras’ recent valuation surge suggests that the market is beginning to price in the company’s long-term potential rather than just its current revenue. While this raises concerns about overvaluation, history has shown that the demand for transformative AI computing can far exceed expectations, as seen with Nvidia’s meteoric rise.
If Cerebras’ technology gains widespread adoption—particularly in inference and cloud-based AI—its true market opportunity could be far larger than its current financials suggest. However, expanding beyond its reliance on G42 and proving that its technology can compete at scale will be critical tests for sustaining long-term investor confidence.
Cerebras operates in an increasingly crowded AI hardware market, competing with both established giants and emerging challengers. The company differentiates itself through its wafer-scale architecture, which allows AI models to train more efficiently than traditional GPU-based clusters. However, widespread adoption remains a challenge due to the dominance of Nvidia, the entrenched software ecosystem surrounding GPUs, and the growing number of alternative AI hardware solutions.
Competing with Nvidia, AMD, and Intel
Nvidia remains the undisputed leader in AI computing, with a market cap of $2.6 trillion and an estimated 70 to 95 percent market share in AI chips. Its GPUs power the vast majority of AI models, including those used by OpenAI, Meta, Microsoft, and Google. A key advantage for Nvidia is its CUDA software ecosystem, which has locked in most AI developers and enterprises. While Cerebras’ hardware can outperform Nvidia’s GPUs in certain workloads, switching costs remain high for developers deeply embedded in CUDA.
AMD has positioned itself as Nvidia’s closest competitor, with its MI300X GPU gaining traction as an alternative for AI workloads. With the upcoming MI325X and MI400 expected to compete with Nvidia’s next-gen GPUs, AMD presents a more familiar option for companies that want to avoid Nvidia but prefer a GPU-based architecture.
Intel, traditionally focused on CPUs, has entered the AI race with its Gaudi 3 chip, which it claims is 50 percent faster than Nvidia’s H100 for training AI models. Intel is also set to receive up to $20 billion in U.S. government grants and loans under the CHIPS Act, strengthening its domestic manufacturing capabilities: a key differentiator in an industry reliant on Taiwan’s TSMC.
Challengers in AI Training
Several startups are also working on AI chips designed to compete with Nvidia and Cerebras, each with a unique approach.
SambaNova Systems focuses on full-stack AI hardware and software solutions, recently launching Samba-1, a trillion-parameter model running on its SN40L chip. SambaNova’s ability to support both training and inference makes it a direct competitor to Cerebras.
Graphcore developed the Intelligence Processing Unit, claiming up to 16 times faster AI training than Nvidia GPUs. However, the company struggled with adoption, reporting a 46 percent revenue drop in 2022, and was ultimately acquired by SoftBank in 2024.
Rain AI specializes in neuromorphic processing units, designed for maximum energy efficiency. While details on its performance remain scarce, its backers—including OpenAI’s Sam Altman—signal confidence in its long-term potential.
Rivals in AI Inference
As Cerebras expands into AI inference, it faces competition from companies developing specialized hardware for running trained AI models in real time.
Groq built language-processing units optimized for inference, running Meta’s Llama 3 model at 350 tokens per second—20 times faster than Microsoft’s Azure data centers while being 8 times cheaper.
EtchedAI announced its Sohu chip, claiming it is an order of magnitude faster and cheaper than Nvidia’s Blackwell B200 GPUs and 20 times faster than Nvidia’s H100 chips.
Hailo focuses on edge AI, designing inference processors that run on small devices, such as the Raspberry Pi 5, rather than in data centers. While Cerebras targets high-performance AI workloads, Hailo’s success in edge computing could limit Cerebras’ ability to expand into decentralized AI.
Hyperscalers and Cloud Providers
Another challenge for Cerebras is that major cloud providers are now building their own AI chips, reducing dependence on third-party hardware.
AWS launched the Trainium2 chip, rumored to be twice as fast as Nvidia’s H100 while being 30 to 40 percent cheaper.
Microsoft Azure is developing its Maia 100 AI chip, which could reduce its reliance on Nvidia while competing with third-party solutions like Cerebras.
Google Cloud has invested heavily in Tensor Processing Units; its sixth-generation Trillium chips claim nearly five times the peak compute performance of the previous generation.
Meta is developing its own inference accelerator, MTIA v2, as part of its $40 billion AI infrastructure expansion.
Alternative Architectures
Beyond GPUs and AI accelerators, some companies are pursuing radically different computing frameworks.
Lightmatter and Ayar Labs are developing optical computing solutions, using photonics to enable faster, more energy-efficient AI processing.
Cortical Labs is exploring biological computing, using living brain cells instead of silicon transistors to power AI models.
While these technologies are still in early development, they could introduce entirely new ways of handling AI workloads in the future.
Where Cerebras Stands
Cerebras’ biggest advantage is its wafer-scale chip design, which eliminates the need for massive, complex GPU clusters. Its systems provide simpler scaling, faster AI training, and potentially superior inference performance compared to GPUs. However, the company still faces significant hurdles.
Nvidia’s software dominance makes adoption difficult, as most AI developers are already trained on CUDA.
Hyperscalers are moving away from third-party chips, developing their own AI processors.
Specialized inference competitors like Groq and EtchedAI are quickly gaining traction.
If Cerebras can prove its efficiency and cost-effectiveness at scale, it has a real opportunity to carve out a strong position in the AI computing market. However, winning over developers, expanding beyond G42, and proving its inference capabilities will be critical for long-term success.
Cerebras has seen a dramatic rise in valuation, driven by increasing investor confidence in its AI hardware. In 2021, the company raised its Series F at a $4.1 billion valuation, despite generating only $12.5 million in revenue. By 2024, its valuation had climbed to $4.7 billion, with revenue projected to reach $272 million, bringing its revenue multiple down to a more sustainable 17x.
However, secondary market activity has surged in early 2025, pushing Cerebras’ estimated valuation to $7.12 billion. This has driven the revenue multiple up to 26.2x, approaching the higher end of public AI semiconductor valuations. While this rapid appreciation raises concerns about overvaluation, it also underscores the growing demand for next-generation AI computing and suggests that investors see potential for Cerebras to capture a significant share of the market.
Investor Base and Key Backers
Cerebras has attracted some of the most well-respected investors in venture capital, with continued support from Benchmark Capital, which has backed the company across multiple funding rounds. Benchmark is known for making concentrated, high-conviction bets on industry-defining companies, and its ongoing participation signals long-term confidence in Cerebras’ potential.
In addition, the Abu Dhabi Growth Fund and G42 have become major financial backers. With nearly unlimited capital and a strategic interest in AI infrastructure, these investors provide Cerebras with the resources to scale aggressively. The close relationship with G42 has already translated into $1.43 billion in committed hardware purchases, making it Cerebras’ largest customer.
Other notable investors include:
Alpha Wave Ventures – A major backer of AI and deep tech companies.
Altimeter Capital – A growth-stage investor known for pre-IPO technology investments.
Coatue Management – Specializes in AI and computing infrastructure.
Eclipse Ventures – Focused on hardware and frontier technology.
Sequoia Capital – One of the most prominent venture firms, involved in earlier rounds.
| Pros | Cons |
|---|---|
| Wafer-scale technology enables simpler scaling and faster AI training compared to traditional GPU clusters. | Heavy reliance on G42, which accounts for 87 percent of revenue, creates concentration risk. |
| Positioned to capitalize on the AI hardware market, expected to reach $250 billion by 2030. | Secondary market valuation has surged to $7.12 billion, raising concerns about potential overvaluation. |
| Expanding beyond AI training into inference, a larger and faster-growing market. | Nvidia’s CUDA ecosystem remains a significant barrier to adoption, making it difficult for developers to switch. |
| Cloud services and subscription models lower adoption costs, broadening access to its technology. | Hyperscalers like AWS, Google, and Microsoft are developing their own AI chips, reducing demand for third-party hardware. |
| Strong revenue growth, with 2024 revenue projected at $272 million, reflecting 245 percent year-over-year growth. | Competition in AI hardware is intensifying, with startups like Groq and EtchedAI gaining traction. |
| Backed by top-tier investors, including Benchmark, Altimeter Capital, and the Abu Dhabi Growth Fund, ensuring access to capital. | Manufacturing is dependent on TSMC, exposing the company to geopolitical risks and potential supply chain disruptions. |
| Supercomputing expansion with large-scale deployments, including $1.43 billion in commitments from G42. | High power consumption compared to newer energy-efficient chips, which could become a competitive disadvantage. |
| AI models trained on Cerebras hardware have demonstrated major performance gains, outperforming Nvidia GPUs in specific workloads. | Switching costs for enterprises already invested in Nvidia or AMD hardware make adoption slower. |
| Confidential IPO filing positions Cerebras to capitalize on public market interest in AI computing. | Long-term business model remains uncertain as large AI firms increasingly build custom in-house hardware solutions. |
Cerebras is one of the most actively traded private stocks, with frequent transactions occurring on platforms like Hiive and EquityZen. The Hiive trading chart shows a steady rise in share prices, reflecting strong demand. However, as the company moves closer to a public offering, these opportunities will become more limited.
Recent listings suggest a wide range of prices, with some shares trading as high as $55.00 per share, implying a valuation of over $11 billion. However, a fund offering on Hiive is currently available at $37 per share, making it the lowest recent price seen in the secondary market. This creates a potential opportunity for investors looking to buy in at a discount compared to other available listings.
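Implied valuations in the secondary market scale linearly with share price. Backing a rough share count out of the $55 listing (about $11 billion at $55 per share implies roughly 200 million fully diluted shares; this is an inference from the figures above, not a disclosed count) gives a ballpark implied valuation for the $37 listing:

```python
# Assumed fully diluted share count, backed out of the $55/share
# listing and its stated ~$11B implied valuation (an estimate,
# not a disclosed figure).
implied_shares = 11e9 / 55  # ≈ 200 million shares

# Implied valuation at the $37/share fund offering.
low_listing_valuation = 37 * implied_shares
print(round(low_listing_valuation / 1e9, 1))  # → 7.4 ($B)
```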
Platforms Offering Cerebras Shares
Hiive – A marketplace for private company shares, where Cerebras has seen frequent trading. The Hiive fund offering at $37 per share stands out as the lowest recent price.
EquityZen – Another secondary market platform with active listings for Cerebras stock. Current prices are hovering around $40/share.
IPO and Future Availability
Cerebras has confidentially filed for an IPO, and as it moves closer to a public listing, private market access will likely disappear. For more details on Cerebras’ IPO potential and other upcoming public offerings, see our article IPOs to Watch in 2025.
While the IPO timeline remains uncertain, investors considering secondary market purchases should be aware that pricing could fluctuate significantly leading up to a public debut.
If you choose to invest, there are a few parting words of advice I’d like to offer:
Accredited investors only: Private market opportunities require accreditation.
Trust the platforms: Augment and Hiive are reputable operators with a strong track record.
Expect delays: Private market transactions can take time to close, and not every deal goes through. Don’t be discouraged—other opportunities will follow.
As always, if you want us to clarify anything in this material, shoot us an email at [email protected] and we’ll respond as soon as we can.