Vaire Computing is developing Near-Zero Energy Chips to unlock the future of computing. As Moore’s Law slows and AI demand accelerates, conventional architectures are constrained by unsustainable energy and thermal limits. Reversible Computing provides a rare breakthrough: a way to decouple energy use from compute growth, enabling the hardware required for the machine intelligence era.
This session introduces Vaire’s approach and core technology, presents results from the first test chip fabricated in standard CMOS, and outlines the technical findings that validate energy recovery at the logic level. The discussion highlights two implications: co-designed IP to overcome thermal bottlenecks in custom ASICs, and system-level deployment to improve tokens-per-joule and total cost of ownership.
The talk concludes with the scaling law behind the technology and the roadmap toward production silicon later this decade. Early silicon results demonstrate that energy recovery is achievable in standard processes today, establishing the foundation for a new class of compute architectures capable of sustaining AI’s growth.
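To ground the tokens-per-joule metric mentioned above, the sketch below shows how it is typically derived from measured inference throughput and power draw; the throughput, power, and electricity-price figures are illustrative assumptions, not Vaire measurements.

```python
# Minimal sketch of the tokens-per-joule and energy-cost calculation.
# All figures are illustrative placeholders, not Vaire or vendor data.

def tokens_per_joule(throughput_tokens_per_s: float, power_watts: float) -> float:
    """Inference energy efficiency: tokens generated per joule consumed (1 W = 1 J/s)."""
    return throughput_tokens_per_s / power_watts

def energy_cost_per_million_tokens(tpj: float, usd_per_kwh: float) -> float:
    """Electricity cost (USD) of generating one million tokens."""
    joules = 1_000_000 / tpj      # total energy in joules
    kwh = joules / 3.6e6          # 1 kWh = 3.6 MJ
    return kwh * usd_per_kwh

# Hypothetical accelerator serving 1,500 tokens/s at 700 W board power:
tpj = tokens_per_joule(1_500, 700)                        # ~2.1 tokens per joule
cost = energy_cost_per_million_tokens(tpj, usd_per_kwh=0.12)
print(f"{tpj:.2f} tokens/J, ${cost:.4f} per million tokens")
```

Recovering even a fraction of the switching energy raises tokens-per-joule directly, which is why the abstract ties the metric to total cost of ownership.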

Alex Fleetwood
Alex Fleetwood is Chief Solutions Architect at Vaire Computing, a deep-tech startup pioneering near-zero energy AI chips based on reversible computing. A serial founder with multiple successful startups under his belt, Alex has built and scaled interdisciplinary teams at the cutting edge of AI, hardware, and systems design. At Vaire, he is responsible for strategic partnerships, go-to-market strategy, and leading engagements with hyperscalers, government agencies, and foundries.
Vaire Computing
Website: https://vaire.co/
Vaire Computing is building Near-Zero Energy Chips to unlock the future of computing. New chip architectures are inevitable as Moore’s Law slows while the AI era drives unprecedented demand. The energy consumption of conventional computing is rising rapidly and is unsustainable. Reversible Computing offers a rare breakthrough: a way to decouple energy use from compute growth, enabling the hardware required for the machine intelligence revolution.
Recent findings from Vaire's first test chip, fabricated in standard CMOS, verify energy recovery and provide a pathway to production silicon this decade. The technology unlocks novel architectures and topologies due to radically reduced heat dissipation.
Vaire brings together world experts in Reversible Computing alongside experienced chip designers, engineers, and product leaders. With operations in London and the Bay Area, the team has raised seed funding from Lifeline and 7Percent Ventures, as well as strategic angels.
Generative AI is fundamentally changing how datacenters are built, putting three types of silicon center-stage: GPUs, custom AI ASICs, and advanced networking processors. Driven by these technologies, the datacenter processor market soared to $147 billion in 2024 and is expected to double by 2030, largely thanks to explosive growth in GPUs and specialized AI ASICs.
While GPUs remain the reference for AI training and inference, hyperscale providers, eager to reduce their dependence on Nvidia, are increasingly co-designing specialized AI ASICs with chipmakers like Broadcom, Marvell, and Alchip. These ASICs sacrifice some versatility to achieve superior performance and energy efficiency, creating opportunities for a thriving startup scene featuring companies like Groq, Cerebras, and Tenstorrent, and spurring major waves of venture investment and mergers. Crucially, chiplet architectures, which combine multiple smaller chip components into a single, optimized package, are now key to driving GPU and ASIC performance upward, beyond what traditional single-chip designs can deliver.
As AI models become ever larger and require responses within milliseconds, networking silicon has become just as critical as processors themselves. DPUs, smart network cards, and advanced switches now coordinate massive arrays of accelerators, making both scale-up and scale-out networks a pivotal part of datacenter performance.
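As a back-of-the-envelope check on the growth figure above (our own arithmetic, not Yole Group’s forecasting methodology), a market that doubles from $147 billion in 2024 by 2030 implies a compound annual growth rate of roughly 12%:

```python
# Implied CAGR for a market doubling between 2024 and 2030 (illustrative arithmetic only).
start_value = 147e9              # USD, 2024 datacenter processor market (from the abstract)
end_value = 2 * start_value      # "expected to double by 2030"
years = 2030 - 2024

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~12.2% per year
```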

Adrien Sanchez
Adrien Sanchez is Senior Technology & Market Analyst, Computing at Yole Group.
Adrien produces technology & market analyses covering computing hardware and software, AI, machine learning and neural networks.
Prior to Yole Group, he worked at AW Europe (Belgium), where he focused on image recognition & comprehension for ADAS. He also worked at ACOEM (France) on real-time sound classification using deep learning and edge computing.
Adrien graduated with a double degree from Grenoble Institute of Technology PHELMA (Grenoble INP Phelma, France) and Grenoble Ecole de Management (GEM, France), and he earned an MSc in AI at Heriot-Watt University (Edinburgh, UK).

Hugo Antoine
Hugo Antoine is a Technology & Market Analyst, Computing and Software at Yole Group.
Hugo develops technology & market analyses covering computing hardware, software, and Artificial Intelligence (AI).
He holds a master's degree from Ecole des Mines de Saint-Etienne (France), with a focus on microelectronics and computing at the Centre of Microelectronics in Provence (France). He also pursued an AI specialization at Ecole Polytechnique de Montreal (Canada) and completed a dual-degree program in innovation management at emlyon business school, underscoring his expertise at the intersection of technology and business.
Ultra Ethernet is a suite of technologies designed to enhance Ethernet for use in AI and HPC. This talk will describe the motivation for and goals of the Ultra Ethernet Consortium, discuss the AI and HPC problems that it addresses, and go into some technical details of the Ultra Ethernet 1.0 solution, including Ultra Ethernet Transport -- the high-performance transport protocol designed by UEC specifically to support AI and HPC.

Hugh Holbrook
Hugh is the Chief Development Officer at Arista Networks, responsible for AI and Cloud platforms and systems software engineering. He has been at Arista since 2005 and serves on the Steering Committee of the Ultra Ethernet Consortium, having chaired its Technical Advisory Committee. He is the inventor of Source-Specific Multicast (SSM), chaired the IETF working group that standardized it, and has authored multiple RFCs, including the PIM-SM protocol specification. He holds a BS, MS, and PhD in Computer Science from Stanford University.
Ultra Ethernet Consortium
Website: https://ultraethernet.org/
Ultra Ethernet Consortium (UEC) is bringing together leading companies for industry-wide cooperation to build a complete Ethernet-based communication stack architecture for high-performance networking. Artificial Intelligence (AI) and High-Performance Computing (HPC) workloads are rapidly evolving and require best-in-class functionality, performance, interoperability and total cost of ownership, without sacrificing developer and end-user friendliness. The Ultra Ethernet solution stack will capitalize on Ethernet’s ubiquity and flexibility for handling a wide variety of workloads while being scalable and cost-effective.
Ultra Ethernet Consortium was founded by companies with a long-standing history of and experience in high-performance solutions. Each member contributes significantly to the broader high-performance ecosystem in an egalitarian manner. The founding members include AMD, Arista, Broadcom, Cisco, Eviden (an Atos Business), HPE, Intel, Meta, and Microsoft, who collectively bring decades of experience with networking, AI, cloud, and high-performance computing deployments at scale.
This demo will showcase how Innodisk’s AccelBrain AI software stack powers on-premises, private large language model (LLM) deployment and then extends those capabilities to the edge. We’ll show how AccelBrain seamlessly integrates with our APEX-E100 and APEX-P200 platforms, enabling efficient, secure, and real-time AI processing at the edge without relying on the cloud.

Don Yu
Don Yu is the Special Assistant to the General Manager at Innodisk Corporation (TAIDEX: 5289). He joined the industrial PC field in 2002 and focused on sales across Korea, Pan-Asia, and the ANZ region from 2002 to 2012, working closely with local distributors, system integrators, value-added resellers, and reps. He built excellent relationships with clients and related teams and delivered several successful ODM/customization projects across various industries and applications.
In 2012, his role shifted from sales to product management, and he helped establish his former company as a significant worldwide provider of AIoT, 5G/networking AIoT, and medical solutions.
With 20 years of experience in the industrial PC field, Mr. Don Yu joined Innodisk in 2021 and has helped guide the company’s transition into AI.
Mr. Don Yu's vision is to contribute to the Earth by leading a team and providing more intelligent solutions to improve our environment and make it a better place to live.
Innodisk
Website: https://www.innodisk.com/index
Innodisk is a global leader in industrial-grade memory, storage, and AIoT solutions. Headquartered in Taiwan with a strong international footprint, Innodisk has held the largest market share in industrial-grade storage since its founding in 2005 and ranks among the world’s top providers of industrial memory modules.
As AI technology continues to evolve, Innodisk leverages its deep expertise, innovative engineering, and integrated hardware-software approach to deliver customized solutions that power the future of AIoT. Through close collaboration with industry partners, we are accelerating the adoption of intelligent applications across sectors—paving the way for a smarter, more connected world.
Explore our solutions and success stories at www.innodisk.com.
Scaling AI accelerators is only possible with extreme-density, high-speed interconnects. Samtec’s Si-Fly® HD co-packaged interconnect systems provide the highest-density 224 Gbps PAM4 solution on the market today. Electrically pluggable co-packaged copper and optics solutions (known as CPX) placed adjacent to AI accelerators are achievable on 95 mm x 95 mm or smaller substrates. This proximity eliminates long PCB traces, greatly reducing loss while preserving serviceability in a pluggable form factor.
In the Demo Stage presentation, Samtec technical experts will summarize real-world performance data from CPX implementations in various 224 Gbps PAM4 signal channels typically found in AI acceleration platforms. A preview of AI scale-up and scale-out networks using CPX technologies will also be presented.

Matthew Burns
Matthew Burns develops go-to-market strategies for Samtec’s Silicon-to-Silicon solutions. Over the course of 25 years, he has been a leader in design, applications engineering, technical sales and marketing in the telecommunications, medical and electronic components industries. He currently serves as Secretary at PICMG. Mr. Burns holds a B.S. in Electrical Engineering from Penn State University.
Samtec
Website: http://www.samtec.com/AI
Founded in 1976, Samtec is a privately held, $822 MM global manufacturer of a broad line of electronic interconnect solutions, including High-Speed Board-to-Board, High-Speed Cables, Mid-Board and Panel Optics, Precision RF, Flexible Stacking, and Micro/Rugged components and cables. With 40+ locations serving approximately 125 countries, Samtec’s global presence enables its unmatched customer service.

Steve Majors
Steve leads product strategy, technical execution, and operations at DreamBig where he is responsible for Chiplet Platforms that accelerate memory and networking for AI Infrastructure, Data Center, Automotive, and Robotics.
Prior to joining DreamBig, Steve spent 12 years in various leadership roles at Intel, including high-performance networking for Data Center product lines. Before Intel, Steve served in various leadership roles at NetEffect (RDMA startup acquired by Intel), Rockwell Semiconductor Systems (spun out as Conexant Systems and Mindspeed Technologies IPO), Motorola (Somerset Design Center), and Harris Semiconductor (now part of Renesas Electronics).
Steve holds a BS in Electrical Engineering from the Florida Institute of Technology.
DreamBig Semiconductor
Website: https://www.dreambigsemi.com/
DreamBig Semiconductor is a chiplet-based networking company dedicated to providing comprehensive, high throughput solutions for the AI, Datacenter, Edge Compute and Automotive markets. Headquartered in San Jose, CA, USA, we have over 200 employees globally and partnerships with the largest semiconductor companies in the world. Founded by RDMA experts and chiplet innovators, DreamBig removes the bottlenecks to on- and off-chip networking, transforming the most important applications on the planet.
Our flexible chiplet architecture allows us to build innovative solutions in months, not years, while our open software stack means our solutions work as separate chips, chiplets, or even just IP.

Anahita Mouro
Anahita is a transformational quality leader with over 15 years of experience in mission-critical industries and technical infrastructure, and a proven track record of driving operational excellence and improvements in product reliability and quality through innovative strategies, exceptional leadership, and meticulous execution. She is adept at enhancing organizational efficiency, reducing costs, and fostering a culture of continuous improvement. She is a certified ASQ Six Sigma Black Belt and Quality Engineer with deep expertise in managing large-scale projects and leading cross-functional teams to achieve strategic goals.
She is passionate about building and leading high-performing technical teams and enabling their success and growth.