I spent a year building with AI. Not using it. Building with it. Claude Code as my primary development tool. Every decision tracked in a SQLite database. Every initiative numbered.
I started with a document intelligence problem I'd watched go unsolved for 13 years. Along the way, I solved a second problem I didn't know I had. Now I have two products and no idea which one the market wants.
So I'm running an experiment.
The two products
Signal Provenance monitors files with a hash chain. Every file gets a SHA-256 hash. Every hash links to the previous one. If anything changes, the chain shows exactly what, when, and in what order. Rust core. Desktop app. Runs on your machine. $10,000/yr.
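The chaining mechanism is simple enough to sketch. Here's a minimal stdlib-Python illustration of the idea (this is not the Rust implementation; the record fields and function names are invented for this example):

```python
import hashlib

def append_entry(chain, path, content: bytes):
    """Append a tamper-evident ledger entry: hash the file content,
    then bind it to the previous entry's hash."""
    file_hash = hashlib.sha256(content).hexdigest()
    prev = chain[-1]["entry_hash"] if chain else "0" * 64
    entry_hash = hashlib.sha256((prev + file_hash + path).encode()).hexdigest()
    chain.append({
        "path": path,
        "file_hash": file_hash,
        "prev": prev,
        "entry_hash": entry_hash,
    })
    return chain[-1]

def verify(chain):
    """Recompute every link. Editing any earlier entry breaks
    every entry after it, which is what makes the chain tamper-evident."""
    prev = "0" * 64
    for e in chain:
        expected = hashlib.sha256(
            (prev + e["file_hash"] + e["path"]).encode()
        ).hexdigest()
        if e["entry_hash"] != expected or e["prev"] != prev:
            return False
        prev = e["entry_hash"]
    return True
```

Because each entry's hash covers the previous entry's hash, you can't silently rewrite history: changing one record invalidates the whole suffix of the chain.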
The use case: EU AI Act Articles 12 and 15 require logging and integrity for high-risk AI systems. Most companies are going to need tamper-evident audit trails. Signal Provenance builds them automatically from any directory on disk.
Current state: 49 Rust tests passing. 7,012 ledger entries on my production machine. macOS binary signed, notarized, and stapled. Running in production on my own infrastructure for 3 months.
Sympatheia finds hidden dependencies in distributed systems from passive metrics. No agents to install. No credentials required. Read-only analysis. It uses spectral coherence to detect which services are coupled, how strongly, and which ones drive the behavior of others.
The use case: your monitoring tool shows you individual service health. Sympatheia shows you which services are structurally connected in ways your dependency map doesn't capture. When Service A degrades, it tells you Service C will feel it 3 minutes later, even though they don't directly communicate.
Current state: 9,591 services analyzed across experiments. Co-located services show 1.71x higher spectral coherence (p < 10⁻²³³). Connected services show 13.21x higher latency correlation. Free audit.
The rules
Same site. Same design. Same analytics. Same builder. No A/B testing tricks. No differential pricing gimmicks. Both products get equal content investment. The market tells me which one matters.
I'll publish everything I learn. The page views. The contact form submissions. Which channels drive interest in which product. What content resonates and what falls flat. If one product gets 3x the engagement after 30 days, that's my signal.
Why public
Two reasons.
First, I've tracked 178 initiatives and spent $1,772 in infrastructure over the past year with zero revenue. I have a lot of data about what it takes to build two products from scratch with AI. That data is worth sharing regardless of which product wins.
Second, I'm a solo founder. Building in public is the only distribution channel that compounds. Every post is a data point. Every framework I share is a potential citation. The experiment itself is the content.
What's next
Weekly blog posts. Each one teaches something concrete from the build. Provenance thread. Sympatheia thread. Infrastructure decisions. Mistakes. Numbers.
If you want the monthly summary of what the data shows, there's a subscribe form on the homepage.
Let's see what the market says.