Hornbill Dispatch #12 | The Muddy Reality of Counting Carbon
Part 1 of 5: We're open-sourcing our AI tree monitoring research and tech stack, and launching a platform for everyone to try it. Part 1 is the story of how a rainy, muddy day sparked that journey.
The QR code was barely hanging on.
Our team stood ankle-deep in mud on a farmer's plot in Maharashtra in August 2021, watching weeks of painstaking work literally blow away in the monsoon wind. The laminated tag we'd carefully affixed to this particular tree was now dangling by a thread - the crisp black squares already blurred by moisture.
This was supposed to be our solution to the lack of accountability and measurability in our sector. Each QR code represented hours of work - recording GPS coordinates, documenting growth patterns, creating what we proudly called "individual tree profiles." We were going to bring transparency by giving every single tree a digital identity. Or so we thought.
The inadequacy of our "simple" and "transformative" scanning process soon dawned on us. An enthusiastic volunteer had been messaging us for months, begging to work with us in the field. We were at capacity, but his persistence eventually wore us down. "Fine," we said, "come help with tree monitoring."
His first task: scanning QR codes and updating tree data. Day one in the village of Dhawalgaon went OK. Day two, he trudged along. Day three? He disappeared. Just vanished. No goodbye, no explanation - he'd realized "individual tree documentation" actually meant tedium and drudgery, and wisely decided he had urgent business elsewhere. We never heard from him again. His disappearing act should have been our first warning sign.
But standing there in the rain, watching our revolution wash away, the truth was obvious: we'd solved transparency but murdered scalability.
The 10,000-Tree Question
It started innocently enough. At Farmers for Forests, we were working with smallholder farmers across Maharashtra to plant mixed agroforestry systems - not the monoculture plantations that dominate industrial forestry - but diverse ecosystems that could generate high quality carbon credits while providing farmers with harvest income.
We'd also made a promise to farmers: annual payments linked directly to tree survival rates on their land from year one. It wasn't a huge amount - maybe enough to cover a month or two of household expenses. But it represented something bigger. Agroforestry takes three to five years before trees start yielding fruit or timber income. Our payments were meant as a bridge - recognition of the ecosystem services farmers were providing while waiting for their trees to mature.
But our commitment to fairness had created a technical challenge. We had to know exactly how many trees survived on each farmer's plot. Not estimates. Not statistical samples. Actual counts.
But there was another layer to our challenge. We couldn't just trust field reports that claimed "90% survival rate on Plot A." We needed verification that our field teams had actually visited every tree, observed its condition, and documented its status. The QR codes became our guardrail - a "trust but verify" system that ensured accountability. When field teams submitted survival data, each entry had to include a scanned QR code as proof of visit.
No scan? No count.
Every tree gets approached. Every tree gets photographed and GPS-tagged. When trees died, we'd remove their codes. When new ones were planted, we'd add them to the system. We could calculate survival rates down to the individual tree level. Perfect transparency, perfect accountability.
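The "no scan, no count" rule amounts to a simple validation step on incoming field data. A minimal sketch of that logic follows; the field names (tree_id, qr_scan, status) are illustrative, not our actual schema:

```python
# Sketch of the "no scan, no count" rule: a survival entry only counts
# toward a plot's tally if it carries a scanned QR code as proof of visit.
# Field names here are hypothetical, chosen for illustration.

def count_survivors(entries):
    """Count trees reported alive, ignoring any entry without a QR scan."""
    verified = [e for e in entries if e.get("qr_scan")]  # no scan? no count
    return sum(1 for e in verified if e["status"] == "alive")

entries = [
    {"tree_id": "T-001", "qr_scan": "QR-001", "status": "alive"},
    {"tree_id": "T-002", "qr_scan": None, "status": "alive"},   # unverified: dropped
    {"tree_id": "T-003", "qr_scan": "QR-003", "status": "dead"},
]
print(count_survivors(entries))  # 1
```

The unverified "alive" report is silently excluded, which is exactly the accountability guardrail the system was built around.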
The first 100 trees were exhilarating. Farmers gathered around as we explained how our field team’s phones could now tell them exactly which trees were being counted for their payments. "This is the first time someone has properly documented our trees," farmers would tell us. For the first time, the connection between land stewardship and payment appeared tangible.
The next 1,000 trees were satisfying. We were building something unprecedented - a database where every individual tree had its own digital profile.
By tree #10,000, the cracks were showing. The physical labor was immense: every quarterly visit meant checking thousands of QR codes, and our field team hated the work. The codes needed constant maintenance, and sometimes they got swapped between trees - when we tracked down a culprit, the explanation was often "just for fun."
The data entry was consuming entire days. We were spending more time documenting forests than helping farmers build them.
At tree #50,000ish - that muddy day in the monsoon - we finally asked the question we'd been avoiding: Why are we doing this?
We were like punctilious accountants who insisted on counting every penny by hand.
But we couldn't abandon the tree-level payments. The transparency we'd created was valuable. We just needed a better way to achieve it.
The Smallholder Invisibility Problem
Our QR code experiment had revealed something important to us about nature-based solutions (NbS) and the carbon credit sector: it's not built for small farmers. While we were hand-tagging individual trees, the big carbon projects of that time relied on manual surveys and satellite data to monitor vast forest concessions. Satellites can track deforestation across huge parcels of land, but they can't see the 50 young trees on a farmer's two-acre plot. They can monitor established forests and agroforests, but they can't accurately detect small saplings in their critical first years of growth.
This creates a cruel irony: the communities most vulnerable to climate change - smallholder farmers - are effectively excluded from climate solutions. Their plots are too small for satellite monitoring and too scattered for rigorous yet efficient manual checks.
As early as 2020, our team was researching alternatives. We talked to remote sensing experts who explained why affordable satellite data can't reliably detect small trees. We spoke with carbon project developers who described the economics that favor large-scale plantations over diverse smallholder agroforestry. And the statistical sampling used to calculate carbon sequestration in credit projects introduced uncertainty that buyers penalized with paltry payments.
The DeepForest Moment
By mid 2022, we knew one thing: we had to automate tree detection.
Our first attempt used satellite data. We'd secured 3-meter-resolution PlanetScope imagery and spent weeks analyzing the images, searching for individual trees. The resolution simply wasn't there: even at 3 meters per pixel, the young agroforestry trees we worked with didn't show up as distinct objects. Higher-resolution satellite data exists, but it's prohibitively expensive - monitoring alone would have quadrupled our costs. And satellite imagery comes in standard tiles covering very large areas, while our farmer plots are small (1-2 acres), scattered, and irregularly shaped.
That's when our team read about drone-based photogrammetry and decided to experiment with it. We captured our first orthomosaic - a high-resolution aerial map stitched from overlapping drone images - where individual trees were clearly visible.
Finally, we could see what we needed to count. But seeing and automatically counting were different problems entirely.
When Pravin Mulay (who is today our VP of Research & Tech) joined us in early 2022, one of his first tasks was figuring out how to automate tree counting from these drone orthomosaics.
Instead of building detection algorithms from scratch, he discovered something called DeepForest - a pre-trained model from the University of Florida that could identify individual tree crowns in aerial photographs, achieving 70-75% accuracy across different forest types.
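DeepForest's prediction API returns a table of predicted crown bounding boxes, each with a confidence score. A minimal sketch of turning that kind of output into a plot-level tree count follows; the sample boxes and the 0.3 score threshold are our assumptions for illustration, not tuned values:

```python
import pandas as pd

# DeepForest-style output: one row per predicted tree crown, with pixel
# coordinates and a confidence score. The rows below are made-up examples.
predictions = pd.DataFrame({
    "xmin": [10, 120, 240, 300],
    "ymin": [15, 130, 250, 310],
    "xmax": [60, 180, 290, 330],
    "ymax": [70, 185, 300, 340],
    "label": ["Tree"] * 4,
    "score": [0.91, 0.77, 0.42, 0.18],
})

# Keep only confident detections; 0.3 is an illustrative cutoff.
confident = predictions[predictions["score"] >= 0.3]
print(len(confident))  # 3 crowns kept; the 0.18 detection is discarded
```

The appeal is obvious when set against QR codes: the "tree count" becomes a filter over a table rather than a quarter's worth of field visits.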
We got excited. Here was exactly what we needed: a computer vision system designed for the problem we were trying to solve. The team worked together to deploy and adapt the DeepForest code. What if we could give every tree a digital identity without physically tagging it? What if computer vision could replace QR codes? What if drone photogrammetry could scale our documentation approach to hundreds of thousands of farmers?
The more we experimented with DeepForest, the more excited we became. Computer vision could detect trees regardless of plot size or shape. Machine learning could potentially identify not just trees, but species, health, growth rates - all the data we'd been manually collecting, but extracted automatically from aerial images.
This wasn't just a better way to solve our inventory problem. This was a way to democratize forest monitoring itself.
Our QR Legacy
Our QR experiment had failed as a scalable solution, but it succeeded as a research project. Those 100,000 manually documented trees taught us what farmers actually needed from tree monitoring technology.
When our field teams scanned QR codes and logged each tree's condition into KoboToolbox, the real value emerged from the patterns we observed once we aggregated individual tree data across time and space.
Tree #47 thrived in particular soil conditions while tree #52 did not, but only when we tracked growth rates across seasons did the correlation with microclimate become clear.
We realized that comprehensively documenting what human eyes could already observe - survival, visible growth, obvious stress - was valuable in itself. But there was far more potential in what we couldn't see: early disease detection, nutrient deficiencies before they became visible, and stress patterns that would only show up in multispectral analysis. Computer vision could potentially detect problems weeks before farmers would notice them, identifying subtle changes in leaf color or growth patterns that predict future issues.
The Technical Mountain
By the end of 2022, we knew what we wanted to do next. Instead of manually tagging millions of trees, we needed to teach machines to see trees the way humans do.
But the challenges were daunting. Automating tree identification meant solving computer vision problems across dozens of tree species, varying light conditions, different growth stages, and diverse agricultural settings. We'd need to detect individual trees in dense canopies, distinguish between crops and trees in agroforestry systems, and track growth over time.
Then there was the "hidden measurement problem." Drones flying overhead could see tree crowns and estimate height, but they couldn't see trunk diameter - diameter at breast height (DBH), a crucial measurement for calculating carbon storage. The trees' most valuable data was literally hidden beneath their canopies.
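One common workaround for the hidden measurement problem is allometry: a regression that predicts DBH from quantities a drone *can* see, such as crown diameter and height. A minimal sketch of the idea, assuming a species-specific power-law form; the coefficients below are placeholders, not fitted values:

```python
# Sketch of an allometric workaround for the "hidden measurement" problem:
# predict trunk diameter (DBH, in cm) from drone-visible crown diameter and
# tree height via a power law. Coefficients a, b, c are placeholders here;
# in practice they would be fitted per species against field measurements.

def estimate_dbh(crown_diameter_m, height_m, a=2.1, b=0.85, c=0.40):
    return a * (crown_diameter_m ** b) * (height_m ** c)

# A 3 m crown on a 5 m tree, under these placeholder coefficients:
print(round(estimate_dbh(3.0, 5.0), 1))  # 10.2
```

The model form is standard in forestry; the hard part, covered later in this series, is the fitting and the error bars around each prediction.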

Finally, there was inherent biological variability. Two trees of the same species, same age, and same growing conditions could store completely different amounts of carbon. Any automated system would need to report not just measurements but uncertainty - how confident could we be in our estimates?
From QR codes to bounding boxes, we had a massive technical mountain in front of us. But we also had something valuable: nearly 1 million trees worth of ground truth data (that we ourselves had planted), and a clear understanding of what success would look like from the farmer's perspective.
As we scanned that #50,000ish QR code on that muddy day, we had already begun to imagine a different way forward. Instead of physically tagging trees, we'd digitally detect them. Instead of manual measurements, we'd have automated analysis. Instead of excluding smallholder farmers from carbon markets, we'd leverage and build upon inclusive open-source technologies. We would teach machines to count what matters.
In Part 2 of 5, next week, we'll share how we climbed the technical mountain - from computer vision breakthroughs to solving the "hidden measurement" problem with AI that quantifies its own uncertainty.