Community
Thanking CockroachDB's open source contributors
CockroachDB was conceived as open source software, and we are proud that it remains open source to this day. Throughout our journey, our community has made valuable contributions to the product: over 1,590 commits from more than 320 external contributors across all of our open source repositories. Today, we want to thank all of our wonderful external contributors. In this blog post, we celebrate open source contributions across the CockroachDB repositories, with a glimpse of how we manage contributions to our own repo. We take a closer look at the impact of external contributions on the spatial offering in v20.2 and on CockroachDB ORMs, and at how we encouraged first-time open source contributors during Hacktoberfest 2020. Finally, we recognize all of our external contributors and celebrate their work.
Rafi Shamim
November 23, 2020
Applications
How MyMahi built a scalable, serverless backend using CockroachDB and AWS Lambda
Even before the stay-at-home orders spiked the demand for digital learning platforms, MyMahi was architecting for scale. MyMahi, a New Zealand-based digital education company, built their student platform, which helps 8,000+ monthly active students track their learning journeys, with CockroachDB Core embedded in a technology stack that includes serverless containers (e.g., AWS Fargate), serverless functions (e.g., AWS Lambda), and GraphQL (e.g., GraphQL.js, GraphQL Tools). Starting in 2018, MyMahi designed their new application to take advantage of technologies that allow them to scale out seamlessly in response to student activity. In this blog post, we'll highlight MyMahi's application architecture and discuss how different technologies like AWS Lambda interact with CockroachDB to provide a scalable, serverless backend for their application (a minimal connection sketch follows this entry).
Rafi Shamim
October 8, 2020
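To make the Lambda-plus-CockroachDB pattern concrete, here is a minimal sketch of a Go Lambda handler that queries CockroachDB over the standard PostgreSQL wire protocol. This is not MyMahi's actual code: the `DATABASE_URL` environment variable, the `activities` table, and the event shape are illustrative assumptions.

```go
// Minimal sketch (assumptions noted above) of an AWS Lambda handler in Go
// that talks to CockroachDB over the PostgreSQL wire protocol.
package main

import (
	"context"
	"database/sql"
	"os"

	"github.com/aws/aws-lambda-go/lambda"
	_ "github.com/lib/pq" // CockroachDB speaks the PostgreSQL wire protocol
)

// db is opened once per Lambda execution environment and reused across
// warm invocations, which keeps connection churn low.
var db *sql.DB

func init() {
	var err error
	// DATABASE_URL is a hypothetical env var, e.g.
	// "postgresql://user:pass@my-cluster:26257/defaultdb?sslmode=verify-full"
	db, err = sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		panic(err)
	}
}

type progressEvent struct {
	StudentID string `json:"studentId"`
}

type progressResponse struct {
	Completed int `json:"completed"`
}

// handler runs on each invocation: one round trip to CockroachDB.
func handler(ctx context.Context, ev progressEvent) (progressResponse, error) {
	var resp progressResponse
	err := db.QueryRowContext(ctx,
		`SELECT count(*) FROM activities WHERE student_id = $1 AND done`,
		ev.StudentID,
	).Scan(&resp.Completed)
	return resp, err
}

func main() {
	lambda.Start(handler)
}
```

Because each Lambda execution environment keeps its connection pool alive between warm invocations, the database sees far fewer connection setups than a naive connect-per-request design would produce.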
Product
How we built a vectorized execution engine
CockroachDB is an OLTP database, specialized for serving high-throughput queries that read or write a small number of rows. As usage grew, we found that customers weren't getting the performance they expected from analytic queries that read many rows, such as large scans, joins, or aggregations. In April 2018, we started to seriously investigate how to improve the performance of these types of queries in CockroachDB, and began working on a new SQL execution engine. In this blog post, we use example code to discuss how we built the new engine and why it delivers up to a 4x speed improvement on an industry-standard benchmark (a simplified illustration of the idea appears after this entry).
Rafi Shamim
October 31, 2019
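The following is a simplified illustration of the core idea behind vectorized execution, not CockroachDB's actual engine code: instead of pulling one interface-typed row at a time, an operator processes a typed column vector in a tight loop. The column names and batch sizes here are illustrative assumptions.

```go
// Illustrative sketch: row-at-a-time vs. vectorized (columnar) summation
// over an int64 column.
package main

import "fmt"

// Row-at-a-time: every value crosses an interface boundary and must be
// type-asserted, which defeats many compiler and CPU optimizations.
func sumRows(rows [][]interface{}, colIdx int) int64 {
	var sum int64
	for _, row := range rows {
		sum += row[colIdx].(int64)
	}
	return sum
}

// Vectorized: the operator receives a typed column vector and runs a
// type-specialized loop with good cache locality.
func sumBatch(col []int64) int64 {
	var sum int64
	for _, v := range col {
		sum += v
	}
	return sum
}

func main() {
	const n = 4096
	rows := make([][]interface{}, n)
	col := make([]int64, n)
	for i := 0; i < n; i++ {
		rows[i] = []interface{}{int64(i)}
		col[i] = int64(i)
	}
	fmt.Println(sumRows(rows, 0), sumBatch(col)) // both print the same total
}
```

The typed loop avoids per-row type assertions and interface indirection, which is the kind of saving that, applied across scans, joins, and aggregations, adds up to the large speedups the post describes.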