When building software, we need to be able to experiment, iterate, succeed and fail…fast. At Tinybird, we found that as our project and team grew, so did the size and complexity of our CI pipeline. The quality of our tests began to decline, which made the pipeline slow and unreliable, which in turn discouraged the team from writing tests and eroded coverage.
Manually trawling through millions of logs isn't a scalable solution, so we invested the time to use the data generated by our tests, building analytics pipelines that gave us better insight into what was going on. With these insights, we could focus our efforts where they mattered most, fixing the most problematic tests first and working our way down.
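As a minimal sketch of what that prioritization might look like (the test names, input shape, and scoring formula here are all hypothetical, not Tinybird's actual pipeline), one could rank tests by failure rate weighted by average duration, so the slow, flaky tests surface first:

```python
from collections import defaultdict

# Hypothetical CI results; in practice these would be parsed from pipeline logs.
results = [
    {"test": "test_ingest", "passed": False, "duration_s": 42.0},
    {"test": "test_ingest", "passed": True,  "duration_s": 40.0},
    {"test": "test_query",  "passed": True,  "duration_s": 5.0},
    {"test": "test_query",  "passed": True,  "duration_s": 6.0},
    {"test": "test_auth",   "passed": False, "duration_s": 3.0},
    {"test": "test_auth",   "passed": False, "duration_s": 4.0},
]

def rank_tests(results):
    """Rank tests by failure rate * average duration, worst offenders first."""
    stats = defaultdict(lambda: {"runs": 0, "failures": 0, "total_s": 0.0})
    for r in results:
        s = stats[r["test"]]
        s["runs"] += 1
        s["failures"] += 0 if r["passed"] else 1
        s["total_s"] += r["duration_s"]
    scored = []
    for name, s in stats.items():
        fail_rate = s["failures"] / s["runs"]
        avg_s = s["total_s"] / s["runs"]
        scored.append((name, fail_rate * avg_s))
    return sorted(scored, key=lambda t: t[1], reverse=True)

for name, score in rank_tests(results):
    print(f"{name}: {score:.2f}")
```

Here `test_ingest` ranks first: it fails half the time and is by far the slowest, so fixing it pays off most in both reliability and pipeline runtime.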
In this session, Aitana Azcona will talk about how we used data to halve the total execution time of our CI pipeline, eliminate random failures, and give our team the confidence to improve overall test coverage. Not only has this let us move faster, it has also removed a source of friction within the team, so we can focus on arguing about things that matter.