
The highest value Waydev provides is moving our engineering org past vanity activity metrics. We're actively rolling out AI-assisted coding tools across the stack, and Waydev gives me hard telemetry on adoption rates mapped directly against PR cycle times, deployment frequency, and overall throughput. I don't just see who has the plugin enabled; I see the empirical delta in velocity. As a founder, the fact that I can pull these dashboards up in a board meeting to justify our SaaS spend on AI tooling—without having to build custom ETL pipelines to aggregate the data—is a massive win.
The out-of-the-box reporting for the AI telemetry can be a bit rigid. I want more granular control over slicing the data.
Gabriel, this is a great breakdown, thank you.
You’re pointing to the exact shift we’re seeing across engineering orgs: moving from “who is using AI” to “what measurable delta AI is actually driving.” Mapping adoption directly to PR cycle time, deploy frequency, and throughput is where the real signal is, and I’m glad that’s proving useful in your board-level conversations.
On the reporting side, your feedback is spot on. Rigid, predefined slices stop working once teams start asking second- and third-order questions about the data. We’re actively addressing this with the new AI-native layer, where you can slice telemetry dynamically, down to the repo, team, language, or even contributor level, without needing to preconfigure dashboards.
If you’re open to it, I’d be interested to hear which specific dimensions you want more control over; that’s exactly where we’re investing right now.
Appreciate you taking the time to share this.





