Confidential
AI Powered Audit Pitchdeck
Audit Report
NewCoEnergy
Energy · AI Heavy · Seed Stage · MRR: €18K
8.4 /10
Verdict: Optimistic
WCCAA — Why Commit Capital
Auto-Weighted Average
Report ID: WCC-2026-demo-1
Date: 09 March 2026
Model Training: 19 classification · 89 human review · 722 AI battle lessons
Benchmark: Top 10% of 41 Seed Energy decks
This is a fictitious sample report. The company "NewCoEnergy" does not exist. This report is for demonstration purposes only, to illustrate the structure and depth of a TechTruth.ai audit.
⚠ Not reviewed by human expert · Fictitious sample — not a real company
This report is generated by TechTruth.ai using dual-model AI analysis (Gemini + Claude). Results are indicative only and do not constitute professional audit advice or investment advice. All findings should be independently verified. TechTruth.ai accepts no liability for decisions made based on this report.

TechTruth AI Generated Report

// [ANONYMISED DEMO] · Energy · AI Heavy · Seed · Report ID: WCC-2026-demo-1 · 09 March 2026

Classification: Energy · AI Heavy · Seed · MRR: €18K
🏆 Top 10% of Seed Energy decks · Benchmarked vs 41 Seed Energy decks · 19 classification · 89 human review · 722 AI battle lessons applied

WCCAA Score

8.4 /10
Verdict: Optimistic

1. Executive Scorecard (Table)

| Dimension | Score (1–10) | Notes |
| --- | --- | --- |
| Founder Team | 9 | Strong industrial GTM + engineering leadership; verification mix of supported / partial (see Founder Check). |
| AI Asset Depth | 8 | Proprietary TSDB + query plane — not a thin chat wrapper. |
| Technical Moat | 8 | Years of specialised time-series R&D create replication cost. |
| Infrastructure & Scalability | 8 | Architecture built for high-cardinality plant data. |
| Data Strategy | 7 | Customer-owned telemetry — strong privacy, limits pooled training. |

Why Commit Capital Auto-Weighted Average (WCCAA score): 8.4/10 · Weighted: Founder 70% · AI depth 10% · Moat 8% · Infra 6% · Data 6%
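The scorecard footer states the WCCAA weights explicitly, so the roll-up can be sketched as a plain weighted average. The dictionary keys below are paraphrased labels, not TechTruth.ai identifiers, and the production formula is not disclosed:

```python
# Minimal sketch of the WCCAA auto-weighted average, using the weights
# listed in the scorecard footer (Founder 70%, AI depth 10%, Moat 8%,
# Infra 6%, Data 6%). Illustrative only; the live pipeline may differ.
scores = {"founder_team": 9, "ai_asset_depth": 8, "technical_moat": 8,
          "infrastructure": 8, "data_strategy": 7}
weights = {"founder_team": 0.70, "ai_asset_depth": 0.10, "technical_moat": 0.08,
           "infrastructure": 0.06, "data_strategy": 0.06}
wccaa = sum(scores[k] * weights[k] for k in scores)
print(round(wccaa, 2))
```

Note that with these inputs the arithmetic comes out at 8.64 rather than the published 8.4, so the live pipeline presumably applies additional normalisation or rounding not shown in this demo report.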

2. Executive Summary

NewCoEnergy is presented as an industrial analytics layer on top of a proprietary time-series stack — with AI used as a controlled query and insight layer over deterministic plant data. The principal Reality Gap in this fictitious demo remains execution timing: several roadmap milestones read as future “vision” even where the calendar has moved forward, and MLOps depth for autonomous forecasting is still thin in the materials. The updated founder verification block below reflects the same pipeline used in live reports: structured LinkedIn checks, open-web corroboration, optional exit traces, and a roadmap/experience fit pass — shown here with anonymised personas.

✓ Deep TSDB asset — differentiated vs generic LLM wrappers
✓ Founder verification matrix (Ghost Check)
⚠ Roadmap vs calendar alignment
⚠ MLOps / drift monitoring light in deck
✓ Early revenue signal (demo)

3. Founder Check (First-Pass Ghost Check)

What was checked (this run)

  • LinkedIn pass: 3 profiles, structured scrape
  • Public web (DDG): active founder corpus
  • Exit trace: limited public signals only
  • Roadmap × CV fit: cross-checked experience vs slides
  • Commercial scaler signal: present — anonymised profiles show repeatable enterprise rollout patterns in relevant verticals (demo narrative).

Verification depth vs headline claim (sample)

Founder A · Commercial 61%
Founder B · Technical 58%
Founder C · Domain advisor 74%
| Name & lane | Claim tested (deck, one line) | Evidence (Checked / Found) | Verdict | Avg verification |
| --- | --- | --- | --- | --- |
| Founder A (CEO · Commercial) | “Scaled industrial software revenue across multi-region enterprise accounts.” | Checked: LinkedIn snippet, timeline. Found: Roles and tenure broadly align; deal scope wording softer than headline. | Partially supported | 61% |
| Founder B (CTO · Technical) | “Ex–BigTech — led data platforms for large-scale telemetry workloads.” | Checked: LinkedIn, patent/news scan. Found: Strong infra credibility; limited public ML research trail vs claims. | Partially supported | 58% |
| Founder C (Advisor · Domain) | “Former OEM operator — hands-on plant digitisation programmes.” | Checked: DDG, trade press. Found: Advisor capacity and sector background confirmed. | Supported | 74% |
Commercial ambition vs evidence

Founder A + C fit complex enterprise cycles typical of this category.

Technical ambition vs evidence

Founder B covers platform delivery; gaps remain vs the stated autonomous forecasting path — consistent with MLOps hires still open in the deck.

Founder list (anonymised): Three named personas above map to CEO / CTO / strategic advisor roles in the synthetic narrative — not real individuals.

Exit / sale verification: inconclusive for CEO headline exit (public corroboration partial).

Claim verification: Matrix reflects automated + human-assisted checks used in production reports.
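The "Avg verification" percentages in the matrix above could be produced by rolling per-claim verdicts up into a score. The rubric below is a hypothetical illustration — the actual TechTruth.ai scoring weights are not disclosed in this report:

```python
# Hypothetical roll-up of per-claim verdicts into a verification percentage.
# The verdict weights (1.0 / 0.6 / 0.0) are assumptions for illustration,
# not the production rubric.
VERDICT_SCORES = {"supported": 1.0, "partially supported": 0.6, "unsupported": 0.0}

def verification_pct(verdicts):
    """Average the verdict scores across a founder's tested claims, as a percent."""
    return round(100 * sum(VERDICT_SCORES[v] for v in verdicts) / len(verdicts))

print(verification_pct(["supported", "partially supported"]))  # 80
```

In a live report each founder would have several tested claims, so a single "partially supported" headline claim drags the average down without zeroing it.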

4. Founder & Team Analysis

  • The team skews credible on delivery and industrial access; the verification table highlights where storytelling runs ahead of what open sources can prove.
  • Technical leadership is strongest on pipelines and uptime — lighter on published ML research than the “full autonomy” narrative implies.
  • Bus factor risk concentrates on engineering leadership for roadmap items that assume new ML hiring.
  • Synthetic demo only — conclusions illustrate layout, not an assessment of any real company.

5. The Main Pillars

Strategy & Product: 8/10
Infrastructure: 8/10
AI Logic: 7/10
ML & LLM Ops: 5/10
Team: 9/10
Data as an Asset: 7/10

6. AI Asset Deep-Dive (The Wrapper Check)

AI Class: AI Heavy / Deep Asset
Proprietary model/stack: Confirmed — TSDB core
Custom training data: Mostly client-owned telemetry
LLM role: Translation / orchestration — not sole intelligence
Technical moat replicability: Hard — years of domain-specific engineering
MLOps / drift monitoring: Emerging — roadmap-heavy

7. Red-Team vs. Blue-Team Argumentation

Blue-Team (The Bull Case)

Industrial data is too messy for generic LLMs — whoever owns ingestion, canonical tagging, and low-latency query over live telemetry wins. NewCoEnergy’s specialised stack is exactly the kind of moat API wrappers cannot copy overnight.

Red-Team (The Bear Case)

Legacy SCADA naming chaos and integration debt cap how fast “AI insights” compound. Hiring for ML reliability lags the roadmap — execution risk concentrates on messy field deployments, not slides.

8. Meeting Focus List — Actionable Questions for Founders

Temporal Inconsistency & Traction

“Which roadmap milestones are live in production today versus still pilot-stage?”

Sensor Taxonomy & Cold Start

“How do you bootstrap models when tags are incomplete or inconsistent across plants?”

LLM Latency vs Plant SLAs

“Where is inference bounded vs deterministic queries — and what breaks if the LLM vendor changes pricing?”

MLOps for Forecasting

“Show the retraining, drift, and evaluation loop you will run per customer.”
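The retraining/drift question above is probing for a concrete per-customer loop. A minimal sketch of the kind of drift gate an answer should include — the metric (MAE) and the 25% tolerance are assumptions for illustration, not NewCoEnergy's (fictitious) implementation:

```python
# Illustrative per-customer drift gate: flag a forecasting model for
# retraining when its recent error degrades past a tolerance band
# around the error measured at validation time. Threshold is assumed.
def needs_retrain(baseline_mae, recent_mae, tolerance=0.25):
    """True when recent forecast error exceeds the validation-time
    baseline by more than `tolerance` (25% by default)."""
    return recent_mae > baseline_mae * (1 + tolerance)

print(needs_retrain(baseline_mae=2.0, recent_mae=2.8))  # True: 2.8 > 2.5
```

A credible founder answer would pair a gate like this with an evaluation set per plant and an audit trail of retrain events, not just "continuous learning" language.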

9. The Why Commit Reality Check

NOTICE: Preliminary Vision Assessment. This synthetic page mirrors live report layout only. For a definitive Truth Gap on code, a GitHub Ghost Scan can be conducted — zero-knowledge metadata extraction for velocity, attribution, and dependencies — e.g. with Enjins.
| Element | Founder Claim | Reality | Match |
| --- | --- | --- | --- |
| Founder / Team Claims | Executive scaling + BigTech platform leadership | Verification matrix: mixed — partial vs supported (see §3). | Gap |
| AI Architecture | “Natural-language access to industrial telemetry” | TSDB-first architecture; LLM assists query/explain — aligns. | Match |
| Performance | Sub-second queries on high-cardinality streams | Plausible with the stack described; third-party benchmarks not attached. | Match |
| Roadmap | Autonomous forecasting milestone pack | Deck timing fuzzy vs stated calendar — treat as execution risk. | Gap |
| MLOps | Continuous learning “on plant data” | Monitoring / retrain loop under-specified in materials. | Gap |
| Data | Customer insights improve with every deployment | Strong privacy posture limits cross-customer training — flywheel nuanced. | Gap |
Verdict: Optimistic (demo). Moat narrative is technically serious; founder claims require the same disciplined verification you see in §3–4 on real deals. Track record of shipping in messy plants matters more than slide superlatives.