Family Office Investment Briefing — April 2026

Follow-Up From the Previous Call

AI Data Center Infrastructure — Research Briefing

01
How AI Model Weights Work
02
Data Center Cost Anatomy
03
Cold Climate Energy Savings
04
SMR / Nuclear Blockers
05
DCIM Market Deep Dive
06
DCIM Investment Thesis
This deck is the follow-up to the AI data center conversation. Six research questions came out of that call — we've gone deep on all six. In order: how model weights actually work and why they drive infrastructure demand; what a data center actually costs to build and run; why cold climates like Quebec and Iceland are structurally advantaged; whether SMRs are a near-term investable story (they're not, and here's why in detail); the deep dive on DCIM software — the management layer that every AI-grade facility needs; and finally the DCIM investment thesis — where the software opportunity sits. Let's start at the foundation.
01 — Foundation

Neural Networks: How AI Sees the World

Signal Flow: Input → Hidden Layers → Output

~100B
connections (weights) in a large model
96
estimated transformer layers (GPT-4)
1.76T
estimated parameters (GPT-4)

What Each Node Does

Each node receives inputs, multiplies each by a weight, sums them, then decides whether to "fire" — passing a signal forward or suppressing it.

output = f(w₁x₁ + w₂x₂ + w₃x₃ + … + b)
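
A minimal sketch of that node computation in Python. The input values, weights, and the ReLU activation below are arbitrary illustrations, not any specific model's:

  def node(inputs, weights, bias):
      # Weighted sum of inputs plus bias, then a nonlinear "fire or suppress" decision
      z = sum(w * x for w, x in zip(weights, inputs)) + bias
      return max(0.0, z)  # ReLU: pass a positive signal forward, suppress otherwise

  # Hypothetical example: three inputs, three learned weights
  print(node([0.5, -1.2, 0.8], [0.9, 0.3, -0.4], bias=0.1))  # 0.0 here: this node stays quiet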

Layers = Levels of Abstraction

1
Early layers detect simple patterns — syllables, punctuation, word shapes
2
Middle layers compose meaning — grammar, facts, relationships between concepts
3
Final layers produce the answer — next word, classification, sentiment

The weights are the model. Training adjusts billions of these connection strengths until the network reliably produces correct outputs. The next slide explains exactly how — and why it drives data center demand.

Before we can make sense of why AI data centers are so much more expensive than traditional ones, we need to understand the fundamental unit of computation. A neural network is a system of nodes arranged in layers — each node receives inputs, multiplies them by learned weights, and passes a signal forward or suppresses it. The diagram here shows a simplified four-layer network. What makes this infrastructure-relevant: a production-scale model like GPT-4 has an estimated 1.76 trillion parameters and 96 transformer layers. Every inference — every question answered — requires billions of those weight values to be loaded from memory and processed. The compute requirement scales with model size times usage volume. This is why data center demand is not linear with AI adoption — it's multiplicative. Bigger model, more users, more infrastructure. That's the chain we're following for the rest of this deck.
01 — Foundation

How AI Model Weights Work

The Dials Analogy

Imagine a room with billions of tiny dials, each set between "amplify this signal" and "suppress this signal." Your question travels through this room dial by dial — transformed into an answer on the other side.

The weights are those dials. Every dial setting is a number. The entire intelligence of the model lives in these numbers.

How the Dials Get Set: Training

1
Feed the model a sentence with the last word missing: "The capital of France is ___"
2
Model guesses randomly at first. Wrong? Dials nudge slightly toward "Paris"
3
Repeat trillions of times on massive text datasets until dials settle into a stable, useful configuration
4
That final configuration — billions of numbers — is the trained model. That's what you download as "open weights." A toy version of this loop is sketched below.
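
A toy version of the nudging loop, assuming a single dial, a numeric target standing in for "Paris," and a simple error-driven update; real training does this across billions of dials simultaneously:

  w = 0.0           # one "dial," starting effectively random
  target = 1.0      # the correct answer, encoded as a number
  lr = 0.1          # how hard each nudge is

  for step in range(100):
      guess = w * 1.0          # the model's prediction for input 1.0
      error = guess - target   # wrong? by how much, in which direction?
      w -= lr * error          # nudge the dial slightly toward the answer

  print(round(w, 4))  # ~1.0: the dial has settled into a stable, useful setting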

Why Context Changes the Answer

The word "bank" activates different dials depending on surrounding words. Mention a lake → dials associated with rivers amplify. Weights never change — which dials get activated depends on full context. This is why larger context windows matter.

7B–70B+
Parameters in a modern model ("dials")
$100M+
Training cost: adjusting dials trillions of times
~280 GB
Memory to load a 70B model at full precision
8×
Memory savings: FP32 → INT4 quantization
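
The memory figures above are straight arithmetic; a quick check, assuming dense storage with no serving overhead:

  params = 70e9                      # 70B parameters ("dials")
  fp32_bytes, int4_bytes = 4, 0.5    # 32 bits vs. 4 bits per parameter

  print(params * fp32_bytes / 1e9)   # 280.0 GB at full precision
  print(params * int4_bytes / 1e9)   # 35.0 GB after INT4 quantization: the 8x saving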

What "Open Weights" Actually Means

When Meta releases Llama "open source," they share the final dial settings — a file of billions of numbers. What they don't share: training data, training code, or their safety layer (RLHF).

This is why it's called "open weights" not "true open source." The dials are public. What shaped them is not.

Why weights = infrastructure demand: Serving 1M users requires moving billions of weight values through memory constantly — every query, every second. This drives demand for HBM bandwidth, power-hungry GPUs, and the cooling to manage them. Bigger model × more users = more data center spend. It's physics.
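
A rough way to see why this is a bandwidth problem: in the memory-bound regime, every generated token requires streaming all weights through memory once, so per-accelerator throughput is capped near bandwidth divided by model size. The bandwidth figure below is an assumed round number, not a vendor spec:

  weight_bytes = 70e9 * 2        # 70B parameters at FP16: 140 GB of weights
  hbm_bandwidth = 3e12           # assumed ~3 TB/s of HBM bandwidth per accelerator

  # Memory-bound ceiling at batch size 1
  print(hbm_bandwidth / weight_bytes)  # ~21 tokens/sec per accelerator

Batching amortizes the weight reads across users, which is exactly why serving a million users multiplies accelerator count rather than just running one GPU harder.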

The dials analogy is the clearest mental model I've found for this. Every number in the model is a dial — a setting that amplifies or suppresses a signal as it travels through the network. Training is the process of adjusting those dials trillions of times on massive text datasets until the system reliably produces correct outputs. Two investment implications from this. First: the weights ARE the intelligence. A 70B parameter model requires approximately 280GB of memory just to load at full precision. That memory bandwidth requirement — moving billions of weight values through high-bandwidth memory on every inference — is the core driver of GPU demand and by extension data center power and cooling spend. Second: when companies release "open weights," they're sharing the final dial settings but not the training data or methodology that produced them. The intelligence is portable. What created it is not. This is the distinction between "open weights" and "true open source" — a question you'll want to probe in any AI company's IP defensibility argument.
02 — Economics

Data Center Cost Anatomy

CapEx Breakdown — 100 MW Hyperscale Facility

Standard: $900M–$1.5B  |  AI-Optimized: $2B+

Electrical (switchgear, UPS, distribution): 40–50%
Building shell & structural: 15–25%
Land acquisition: 15–20%
Cooling (HVAC, chillers, liquid): 15–20%
IT infrastructure / fit-out: 15–25%

Where the Power Goes (Operational)

Compute (servers / GPUs): 40–50%
Cooling systems: 30–40%
Power distribution losses, UPS, lighting: 10–15%
Networking equipment: 5–10%

PUE: Power Usage Effectiveness. 1.0 = perfect (all power to compute). Hyperscalers target 1.1–1.2. Legacy facilities run 1.4–1.6. Every 0.1 improvement = significant OpEx reduction.

$20M+
Cost per MW, AI-optimized
$7T
Global DC investment through 2030 (McKinsey)

AI vs. Standard: What Changes

Item             Standard     AI-Grade
Cost / MW        $9–12M       $20M+
Rack density     4–10 kW      100–200 kW
Cooling type     Air          Liquid required
Retrofit cost    —            $200–400/kW

Hidden cost: Supply chain lead times. Transformers: 2–4 year lead times (US national security designation). Switchgear: ~3 years. Specialized electricians: booked 12–18 months ahead at $120–150K+ salaries. Developers who don't order before breaking ground fall years behind schedule.

$7T global split (McKinsey 2025):
60% → Chips & compute (Nvidia et al.)
25% → Energy, cooling, electrical
15% → Land, construction, site

The 25% non-GPU infrastructure slice = $1.75 trillion.
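
Those splits and the per-MW figure cross-check cleanly; quick arithmetic on the slide's own numbers:

  total = 7e12                 # McKinsey: $7T through 2030

  print(total * 0.60 / 1e12)   # 4.2  -> $4.2T, chips & compute
  print(total * 0.25 / 1e12)   # 1.75 -> $1.75T, energy / cooling / electrical
  print(total * 0.15 / 1e12)   # 1.05 -> $1.05T, land / construction / site

  # Per-facility scale: 100 MW AI-grade at $20M+/MW
  print(100 * 20e6 / 1e9)      # 2.0  -> $2B+, matching the header range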

The standard data center is a $900M to $1.5B build for 100 MW. An AI-optimized facility is $2 billion or more — roughly double. Most investors focus on GPU costs — the 60% slice going to chips and compute in McKinsey's $7T estimate. The more interesting investment opportunity is in the remaining 40%: $1.75 trillion going to energy, cooling, and electrical, plus another $1.05 trillion for land, construction, and site work. The hidden number here is supply chain lead times. Transformers — the electrical kind, not the AI kind — have 2 to 4 year lead times because they've been designated a national security item in the US. Switchgear runs about 3 years. Specialized electricians are booked 12 to 18 months ahead. A developer who doesn't order electrical infrastructure before breaking ground falls years behind schedule. This creates a durable structural advantage for operators who have the relationships and balance sheet to commit capital early. That early-commitment premium is real and persistent — not a one-time edge.
03 — Site Strategy

The Cold Climate Advantage

Free Cooling: The Core Mechanism

Cold climates run "free cooling" — using outside air or chilled water instead of mechanical chillers. Since cooling is 30–40% of total electricity, this is the single largest operational lever available.

30–40%
Cooling energy reduction vs. typical US markets
$150M
Annual savings for a 1 GW facility (Alaska estimate)

PUE by Location — Lower Is Better

Hot Climate (TX, AZ)
1.4–1.6
Temperate US / Europe
1.2–1.3
Canada (QC, BC)
1.1–1.2
Nordics / Arctic
1.05–1.15

QScale Q01, Quebec: PUE 1.2 without chillers for most of winter. Free cooling available 80% of the year.

Canada: The Strongest Play

  • eStruxture: CAD $1.8B raised for 90 MW CAL-3 complex, Calgary.
  • Bell AI Fabric: 500 MW supercluster in Kamloops, BC — cold climate + BC Hydro renewable power.
  • Bell Canada AQUILON: 44 cooling units deployed across 20 sites → 80% cooling energy reduction.
  • ROOT Data Center: KyotoCooling rollout → 50% power reduction vs. local air-cooled peers.
  • Federal support: Up to $700M (AI Compute Challenge) for private-sector AI data centers.
  • Demand: Canada Energy Regulator projects DC power demand up 160% by 2030 from AI workloads.

Climate risk inversion: WEF 2025 estimates extreme heat adds $81B/year in additional DC costs globally by 2035, rising to $168B by 2065. Hot-climate facilities bear the majority. Cold-climate operators' advantage widens as warming continues.

Caveats: Alaska has high electricity costs that partially offset cooling savings. Remote locations require fiber investment and on-site staffing. Best combined basis: Iceland and Quebec (cold + cheap renewable power + political stability).

Key Metric
PUE — Power Usage Effectiveness
Total Facility Power ÷ IT Equipment Power
1.0
Perfect (theoretical) — all power reaches compute, zero overhead
1.1–1.2
Hyperscaler target — achievable in cold climates without chillers
1.4–1.6
Legacy average — 40–60¢ wasted for every $1 of useful compute

Every 0.1 PUE improvement = proportionally less wasted power across the entire facility. The location gaps in the chart above translate directly into tens of millions in annual OpEx savings at hyperscale.
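
To put the PUE gap in dollars, a sketch assuming a 100 MW IT load and an illustrative $0.05/kWh blended power price (both assumptions, not site data):

  it_load_kw = 100_000
  price_per_kwh = 0.05   # assumed blended industrial rate
  hours = 8760

  def annual_power_cost(pue):
      return it_load_kw * pue * hours * price_per_kwh

  gap = annual_power_cost(1.5) - annual_power_cost(1.1)
  print(gap / 1e6)  # ~17.5 -> $17.5M/year saved at 100 MW

At 1 GW scale the same 0.4 PUE gap is roughly $175M per year, the same order as the Alaska estimate earlier in this section.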

Cooling is 30 to 40% of total electricity cost in a data center. In a cold climate — Quebec, BC, Iceland, Nordics — you can run free cooling for much of the year, using outside air or chilled water loops instead of mechanical chillers. The PUE comparison tells the story directly: a hot-climate facility in Texas or Arizona runs 1.4 to 1.6. A Nordic facility runs 1.05 to 1.15. Every 0.1 PUE improvement is a proportional reduction in wasted power across the entire facility — at hyperscale, that's tens of millions in annual OpEx. The best combined basis is Quebec and Iceland: cold temperatures plus cheap renewable hydro or geothermal power plus political stability and existing fiber networks. The climate risk angle also strengthens this thesis over time. WEF projects extreme heat adds $81B per year in additional global DC costs by 2035, rising to $168B by 2065. Hot-climate operators bear the majority of that burden. Cold-climate operators' advantage doesn't just persist — it widens as warming continues. QScale's Q01 facility in Quebec runs PUE of 1.2 without mechanical chillers for most of winter, with free cooling available 80% of the year. That's the benchmark.
04 — Power Sources

SMR / Micro-Nuclear: The 2030+ Story

Small Modular Reactors (SMRs) are under 300 MWe, factory-built and shipped to site. The data center pitch: clean, co-locatable, always-on power. The reality: five structural blockers make this a next-decade story.

Current State (2025–2026)

  • Only Russia & China have operational SMRs (Russia since 2020, China pebble-bed since 2021).
  • Canada closest in West: BWRX-300 at Darlington, ON — construction licence May 2025. Cost: CAD $7.7B (~€15,870/kW).
  • US: NuScale only NRC-approved (77 MWe/module, approved May 2025). First deployment ~2030. Earlier deal collapsed on rising costs.
  • Google: Contracted Kairos Power for multiple SMRs; first online 2030.
  • DOE: $400M each awarded to TVA and Holtec (Dec 2025) for early deployment.
  • Europe: Most designs lack regulatory approval; unlikely to deliver electricity at scale before 2050.

Investment Takeaway

SMRs are a 2030–2040 story, not today. Near-term investable plays: HALEU enrichment supply chain, specialized reactor components, licensing consultancies, and grid interconnection infrastructure. IEA projects 40 GW SMR capacity by 2050 — much higher with regulatory reform. Watch Darlington BWRX-300 as the Western proof-of-concept milestone.

The Five Blockers

Critical
Regulatory: Each design needs independent safety review. 127 designs globally, no harmonized licensing. US NRC process: years + hundreds of millions in fees per design.
Critical
Economics: Need hundreds–thousands of identical units to be cost-competitive. Chicken-and-egg: mass production requires regulatory harmony; regulatory harmony requires proven deployments.
High
HALEU Fuel: Many advanced designs require High-Assay Low-Enriched Uranium. No commercial supply chain at scale. New transport containers, enrichment infrastructure, and modified regulations all needed.
High
Waste: SMRs may produce more radioactive waste per unit of energy than conventional reactors. More dispersed sites = harder logistics and more public opposition.
Medium
NIMBY + Political: NEPA reviews, local opposition lawsuits, and political reversals (see: Germany) can kill projects mid-development regardless of technical merit.
I want to be direct about this one. The SMR narrative in AI infrastructure coverage is significantly ahead of the reality. The honest summary: only Russia and China have operational SMRs today. The nearest Western proof-of-concept is Darlington in Ontario, which received its construction licence in May 2025 and won't be generating electricity for years. The five blockers on the right of this slide are structural — not technical problems that the next engineering breakthrough will solve. Regulatory harmonization alone requires years of international coordination. The HALEU fuel supply chain doesn't exist at commercial scale. The economics require mass production, which requires regulatory harmony, which requires proven deployments — a classic chicken-and-egg problem. The waste issue is genuinely underappreciated: some SMR designs produce more radioactive waste per unit of energy than conventional reactors, and dispersed small sites make waste logistics harder, not easier. The near-term investable angle is not the reactors themselves. It's the supply chain: HALEU enrichment capacity, specialized reactor components, licensing consultancies, grid interconnection infrastructure for eventual sites. Watch Darlington's construction milestones as the earliest credible signal for when this becomes a real Western investor story.
05 — Software Layer

DCIM: The Operating System of AI Infrastructure

What DCIM Actually Does

Unified view across physical infrastructure (power, cooling, space, cabling) and IT assets. The software layer that manages everything the GPU doesn't touch.

1
Asset management — Real-time inventory, location, power draw, lifecycle of every device
2
Capacity planning — Space, power, and cooling available; scenario modelling for expansion
3
Energy monitoring — PUE tracking, rack-level metering, waste identification
4
Environmental monitoring — Temperature, humidity, airflow, cooling loop status
5
Change management — Automated approvals, ticket routing, ServiceNow / ITSM integration

The AI shift: Legacy DCIM was built for 5–10 MW air-cooled facilities. A 500 MW AI campus with 100–200 kW/rack liquid cooling loops and thousands of GPUs demands a fundamentally different architecture. Every AI density upgrade creates a replacement cycle.
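
To make the density shift concrete, a hypothetical capacity check for a single 10 MW hall; the rack figures are the slide's own density ranges:

  hall_kw = 10_000

  def racks_supported(rack_kw):
      return hall_kw // rack_kw

  print(racks_supported(7))     # 1428 racks at legacy 4-10 kW density
  print(racks_supported(150))   # 66 racks at AI-grade 100-200 kW density

Roughly 20x fewer racks in the same hall, each one carrying a liquid loop that air-era tooling was never built to see.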

Market Size Trajectory

$1.4B
2020 baseline
$3.5–4.7B
2025
$8.8B
2032 projected
$13.9–20.6B
2035 projected
14–17%
Global CAGR
37%
APAC CAGR
60–80%
Software margins
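
The growth claims are internally consistent; a quick check of the implied CAGR from the 2025 and 2035 endpoints above:

  low = (13.9 / 3.5) ** (1 / 10) - 1    # low 2035 estimate vs. low 2025 estimate
  high = (20.6 / 4.7) ** (1 / 10) - 1   # high vs. high
  print(round(low, 3), round(high, 3))  # 0.148 0.159 -> inside the stated 14-17% band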

AWS entered DCIM in May 2025 — launching a dashboard that manages on-prem and colo alongside AWS footprint. Category-validating signal. Also creates urgency: pure-plays without clear differentiation face pricing pressure or forced M&A.

Software thesis: 60–80% gross margins attached to a $7T infrastructure buildout. Every new AI data center built needs DCIM. Every upgrade cycle creates replacement demand. This is not discretionary spend.

DCIM is the operating system of a data center — the software layer that manages power, cooling, space, cabling, and IT asset inventory. It's been a relatively sleepy market for 20 years. AI is creating a forced replacement cycle. Here's why: legacy DCIM tools were designed for 4 to 10 kilowatts per rack in air-cooled facilities. AI-grade facilities run 100 to 200 kilowatts per rack with liquid cooling loops that the legacy tools simply cannot monitor or manage. Every operator upgrading to AI-grade compute has to re-evaluate their DCIM stack — not because they want to, but because their existing tools are blind to the new infrastructure. The market goes from $1.4 billion in 2020 to an estimated $13 to $20 billion by 2035. Fourteen to seventeen percent CAGR globally. Thirty-seven percent in APAC. The software thesis is straightforward: 60 to 80% gross margins on mandatory spend in a $7 trillion infrastructure buildout. AWS's entry in May 2025 with their own DC dashboard validates the category — if Amazon thought DCIM was discretionary, they wouldn't be building a product. It also creates urgency: pure-plays without strong differentiation now face a well-resourced competitor.
05 — Vendor Analysis

DCIM Vendor Deep Dive — Sunbird, ServiceNow, FNT & Others

Pure-Play Software (Investment Focus)

Sunbird Software
Series C — $20M (Feb 2025)

Best-in-class 3D visual asset tracking and drag-and-drop capacity planning. Backed by Insight Partners. Spun out from Raritan/Legrand — real customer relationships from day one.

→ M&A or Series D likely within 18 months. Primary target for Schneider, ServiceNow, or Microsoft.

Nlyte (Carrier)
Acquired — Carrier 2021

Best for energy management and workflow automation. Deep ServiceNow integrations — dominant where ITSM runs through ServiceNow. Post-acquisition: slower innovation, Carrier-ecosystem lock-in.

Not investable as standalone today.

Device42
Acquired — Freshworks 2024

Strongest on software/network dependency mapping and auto-discovery. Popular for hybrid multi-site environments. ServiceNow integration. Now considered legacy track.

Legacy trajectory post-acquisition.

Hyperview
Cloud-Native SaaS

Modern SaaS architecture, lower price point, accessible UI. Better fit for mid-market colo than hyperscale. Clean product, limited depth at AI-grade complexity.

Watch for Series B / growth equity round.

FNT GmbH
European Enterprise

German firm, strong in European enterprise markets. GDPR-native, established customer base. Limited US presence and no aggressive AI roadmap visible.

AWS DC Dashboard
New Entrant — May 2025

Manages on-prem + colo alongside AWS footprint. Category-validating — and existential pressure on pure-plays without strong differentiation or hardware lock-in.

Watch impact on Sunbird/Hyperview positioning.

Hardware Giants (Bundled DCIM)

  • Schneider Electric EcoStruxure IT (June 2025): Most complete hardware-to-software stack. Acquired Aveva DCIM assets. Moat via UPS/PDU bundling. ~11.7% mindshare.
  • Vertiv: Exited pure DCIM in 2021 (discontinued Trellis — too complex). Partners with Honeywell. Hardware moat stronger than software. ~4% mindshare.
  • Siemens + IBM: Partnership (Nov 2024) combining MindSphere IoT + AI/cloud for industrial DC management.

The Critical Due Diligence Question

"Do you have native liquid cooling monitoring?"

Most legacy DCIM tools were built for air-cooled environments. The shift from 4–10 kW to 100–200 kW/rack creates a product gap that no incumbent fills natively. Any vendor with a clean answer to this question has a defensible wedge into the next upgrade cycle.
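
Concretely, "native" means ingesting per-loop flow and supply/return temperatures and computing heat removed in real time. A sketch of the underlying physics, not any vendor's actual API:

  def heat_removed_kw(flow_lps, t_supply_c, t_return_c):
      # Q = m_dot * c_p * dT; water: ~1 kg per litre, c_p ~4.186 kJ/(kg*K)
      return flow_lps * 4.186 * (t_return_c - t_supply_c)

  # Hypothetical rack loop: 1.5 L/s, coolant in at 30C, out at 45C
  print(heat_removed_kw(1.5, 30.0, 45.0))  # ~94 kW: one AI rack vs. 4-10 kW air-cooled

A vendor that can answer the question above should be able to show something equivalent to this running against live sensors, per loop, per rack.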

ServiceNow angle: Nlyte's deep ServiceNow integration shows the platform play — enterprises that run ITSM through ServiceNow are sticky. A DCIM product with native ServiceNow workflows and modern liquid-cooling monitoring would have both moats simultaneously.

The vendor landscape is consolidating fast — the same pattern you see in every maturing software category. Hardware giants buy software assets to bundle: Carrier acquired Nlyte, Schneider acquired Aveva's DCIM assets. That leaves a handful of independent pure-plays that are either acquisition candidates or category disruptors. For investment purposes, the two names worth focusing on are Sunbird and Hyperview. Sunbird is the priority diligence candidate: $20 million Series C from Insight Partners in February 2025, spun out from Raritan and Legrand so they had real enterprise customer relationships from day one, and best-in-class 3D visual asset management. The critical due diligence question for any DCIM vendor is simple and binary: do you have native liquid cooling monitoring? Most don't. Legacy tools treat cooling as a set-and-forget air management problem. The vendors who've built native liquid loop monitoring have a defensible wedge into every AI-grade upgrade cycle for the next five years. Ask that question in the first meeting — the answer tells you more than any benchmark.
06 — Investment Thesis

DCIM Investment Thesis

Why This Space Is Attractive

  • Software margins on mandatory spend: 60–80% gross margins attached to the $7T buildout
  • Replacement cycle built-in: AI density shift (4–10 kW → 100–200 kW/rack) makes existing tools obsolete — every upgrade creates a forced re-evaluation
  • APAC at 37% CAGR: Geographic expansion well beyond North America and Europe
  • M&A consolidation underway: Carrier bought Nlyte, Schneider bought Aveva assets — remaining pure-plays are acquisition targets
  • AWS entry validates category: Also accelerates urgency for pure-plays to close deals or exit
  • Incumbents distracted: Schneider pivoting to workplace management (Planon, €1.8B); Vertiv exited pure DCIM; Carrier integrating legacy BMS

Entry Criteria — Series C/D Target

  • 10+ enterprise customers; 2+ hyperscaler or tier-1 colo named wins
  • $3–10M ARR with clear path to $30M+ by exit (5–7 years)
  • Native liquid cooling monitoring capability
  • Cloud-native SaaS architecture — not on-prem or hybrid legacy
  • API-first design: GPU cluster, Kubernetes, ServiceNow integrations
  • Entry valuation $100–500M post-money; target 3–5× MOIC (implied exit math sketched below)
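
A sketch of the exit math those criteria imply, using illustrative mid-points rather than a model:

  entry_post = 300e6     # mid-range of the $100-500M post-money band
  moic = 4               # mid-range of the 3-5x target
  exit_value = entry_post * moic

  exit_arr = 30e6        # the "$30M+ by exit" floor
  print(exit_value / 1e9)       # 1.2  -> $1.2B implied exit value
  print(exit_value / exit_arr)  # 40.0 -> implied ARR multiple at the $30M floor

Worth running across the whole band before committing: at the upper end of the entry range, the MOIC target only pencils if ARR lands well above the $30M floor.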

Key Risks

  • AWS commoditization: Native cloud-DCIM product could undercut pure-plays for hyperscale customers without strong differentiation
  • Hardware bundling: Schneider/Vertiv offering software "free" with equipment creates pricing pressure on SaaS-only challengers
  • Hyperscaler build-vs-buy: Google/Meta may open-source internal DCIM tools (Kubernetes precedent) — commoditizing the category
  • M&A exit only: Plan for acquisition by Schneider, ServiceNow, or Microsoft — not IPO. Standalone SaaS scale unlikely
  • Customer concentration: 3+ hyperscaler customers = de-risked; colo-only = vulnerable to churn

Conviction Pick: Sunbird Software

$20M Series C (Insight Partners, Feb 2025). Best-in-class 3D visual asset management. Hardware-company spinout = real customer relationships from launch. Series C at this stage typically precedes strategic acquisition or growth equity round within 18–24 months. Action: monitor for Series D terms or strategic interest signal from Schneider, ServiceNow, or Microsoft.

Timing: Q3–Q4 2026 preferred. Series C/D rounds in DCIM and thermal software are expected to accelerate as AI power constraints intensify. Valuations will harden within 6 months.

Let me give you the investment thesis in one paragraph. The AI density shift from 4 to 10 kilowatts to 100 to 200 kilowatts per rack makes every existing DCIM tool obsolete. That creates a forced replacement cycle — not a nice-to-have upgrade, a mandatory re-evaluation triggered by a physical infrastructure change. The market is growing at 14 to 17% globally and 37% in APAC. Software gross margins are 60 to 80%. AWS entering validates the category but also creates urgency for pure-plays to differentiate before they get squeezed. The risks are real and worth naming clearly: AWS could commoditize the hyperscaler segment; hardware bundling from Schneider creates pricing pressure on SaaS-only players; and this is likely an M&A exit, not an IPO — plan for acquisition by Schneider, ServiceNow, or Microsoft, and size entry accordingly. The conviction pick is Sunbird: $20M Series C, Insight Partners, real customer relationships from day one. Action item: monitor for Series D terms or strategic acquisition signals from the three likely buyers over the next 18 to 24 months. The window to enter before institutional attention fully arrives is Q3 to Q4 2026.
Synthesis

Key Takeaways & Next Steps

Weights → Infrastructure

AI model weights are billions of numbers that must move through memory constantly. Larger models × more users = more power, cooling, and bandwidth demand. Understanding weights is understanding why infrastructure spend is non-optional.

Foundation for the thesis
Cold Climate → Cost Edge

Canada (Quebec, BC) cuts cooling costs 30–40%. $7T in DC investment through 2030 = $1.75T in non-GPU infrastructure. Cold climate + cheap renewable power + political stability is the site selection formula.

Operational advantage that widens over time
SMR → Watch, Don't Deploy

5 structural blockers make this a 2030–2040 story. Near-term play: HALEU supply chain, licensing consultancies, grid interconnection. Watch Darlington BWRX-300 as Western proof-of-concept.

Not yet — monitor milestones
DCIM → Software Alpha on Infra Spend

$1.4B in 2020 → up to $20.6B by 2035. 14–17% CAGR globally, 37% in APAC. AI density shift makes legacy tools obsolete — mandatory replacement cycle. M&A consolidation underway. Sunbird ($20M Series C, Insight Partners) is the immediate diligence candidate.

Highest-margin layer in the stack

Immediate Action Items

  • Initiate diligence on Sunbird Software (Series C, Insight Partners, Feb 2025)
  • Evaluate cold-climate site exposure for any portfolio DC companies
  • Set Q3–Q4 2026 as deployment window for DCIM Series C/D entry
  • Track Darlington BWRX-300 construction milestones for SMR timeline signal
  • Next session: Horizontal vs. vertical vs. agentic software investment breakdown
Four threads tie the whole deck together and they build on each other. Weights drive infrastructure spend — not metaphorically, literally. The larger the model and the more users, the more HBM bandwidth, power, and cooling are physically required. That's the foundation of the entire infrastructure investment thesis. Cold climate is not a niche story — Quebec and Iceland are where the smart capital is going because free cooling compounds over time and the WEF climate risk projections make the advantage self-reinforcing. SMRs are real technology on a 2030 to 2040 deployment timeline. Near-term investable exposure is in the supply chain, not the reactors themselves. And DCIM is the highest-margin software layer in the AI infrastructure stack, currently undergoing a forced replacement cycle that no incumbent fully owns. The best independent pure-play just raised a Series C from Insight Partners and is a likely strategic acquisition or Series D candidate within 18 to 24 months. Q3 to Q4 2026 is the target deployment window before the next round of valuations hardens and the institutional crowd arrives. Those are your four actions — and they're all executable this quarter.