Unlocking Near Real Time Data Replication with CDC, Apache Spark Streaming, and Delta Lake
Databricks 2023
Ivan Peng and Phani Nalluri

"How many orders did DoorDash do yesterday?"

Get me data from databases
    select * from table_name

Get me data from databases, fast
    select * from table_name where updated_at > $LATEST_DATE
    ...then merge the results into the warehouse copy

Get me data from databases, fast, and as the schema changes
    select * from information.schemas where name = table_name;
    ...then merge, reconcile, get paged, discover the schemas are incompatible

Repeat that across roughly 1,000 tables. Somewhere in there is a migration from Redshift to Snowflake, and building a whole orchestration system around the tasks.
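Not from the slides, but to make that pre-CDC pattern concrete: a minimal PySpark sketch of the incremental pull-and-merge, assuming a JDBC source with an updated_at watermark and a Delta table standing in for the warehouse copy (table names, connection details, and the watermark value are made up).

    # Assumes a Databricks / Delta-enabled SparkSession named `spark`.
    from delta.tables import DeltaTable

    last_synced_at = "2020-01-01 00:00:00"  # hypothetical watermark, normally read from job state

    # Incremental pull: only rows touched since the last run. This is why the table
    # needs an updated_at column, an index on it, and an application that bumps it
    # on every write.
    changes = (
        spark.read.format("jdbc")
        .option("url", "jdbc:postgresql://prod-db:5432/doordash")   # made-up connection
        .option("dbtable", f"(SELECT * FROM orders WHERE updated_at > '{last_synced_at}') AS src")
        .option("user", "replicator")
        .option("password", "***")
        .load()
    )

    # Upsert the changed rows into the analytical copy, keyed on the primary key.
    target = DeltaTable.forName(spark, "bronze.orders")
    (
        target.alias("t")
        .merge(changes.alias("s"), "t.id = s.id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )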
History, AKA the State of Data at DoorDash, 2020
- 90% of 1,000 DB tables were dumped to Snowflake via a naive dump
- Incremental tables required:
  - the table to have an updated_at field
  - an index on that field
  - the application to update that field on every write operation
- CDC was present, but in its infancy at DoorDash

Project Pepto: alleviating indigestion of data processing

Requirements
- Have better data freshness than 24 hours
- Own our data on a modern Lakehouse platform
- Handle schema evolution and backfills
- Enable analytical workloads that would otherwise have been run on the production databases

Design Tenets
- Lean into CDC/Kafka across all database flavors
- Build a self-serve platform to democratize onboarding of tables
- Write once, read many
- Leverage streaming checkpointing to bypass late-arriving data (see the sketch below)
- Operational simplicity
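To illustrate the CDC/Kafka and checkpointing tenets, a sketch (not the talk's code) of a Structured Streaming job that reads change events from a Kafka topic and appends them to a bronze Delta table; the checkpoint tracks the Kafka offsets, so restarts and replayed or late-arriving data don't get processed twice. The topic name, event schema, and paths are assumptions.

    from pyspark.sql import functions as F
    from pyspark.sql.types import LongType, StringType, StructField, StructType, TimestampType

    # Assumed shape of a change event; real payloads depend on the CDC connector.
    event_schema = StructType([
        StructField("op", StringType()),            # c / u / d
        StructField("id", LongType()),
        StructField("updated_at", TimestampType()),
        StructField("payload", StringType()),       # row image as JSON
    ])

    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")   # made-up broker
        .option("subscribe", "cdc.public.orders")           # made-up topic
        .option("startingOffsets", "earliest")
        .load()
    )

    events = (
        raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
    )

    # Write once, read many: append raw change events to a bronze Delta table.
    # The checkpoint is what lets the stream resume exactly where it left off.
    (
        events.writeStream.format("delta")
        .option("checkpointLocation", "/pipelines/pepto/checkpoints/orders_bronze")  # made-up path
        .outputMode("append")
        .trigger(processingTime="1 minute")
        .toTable("bronze.orders_changelog")
    )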
Project Pepto: What we are not
- A coupled service with databases
- A real-time system that feeds into online services

Project Pepto: Highlighted Design Decisions
- Not-Kappa architecture
- Freezing schemas with a "schema registry"
- Delta Lake over other table formats

Steady State Mode
Rebuild Mode
Batch Merge Mode
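The slides don't spell out the three modes, but a plausible shape for the batch-merge path is a foreachBatch upsert, sketched here under assumed table names and keys rather than the talk's implementation: each micro-batch of change events is reduced to the newest event per primary key and then MERGEd into the silver table. Because the MERGE is keyed, reprocessing a batch after a failure lands in the same place, which is one way to "make everything idempotent"; Delta also exposes txnAppId/txnVersion writer options for idempotent batch writes.

    from delta.tables import DeltaTable
    from pyspark.sql import functions as F
    from pyspark.sql.window import Window

    def merge_batch(batch_df, batch_id):
        # Keep only the newest change per primary key within this micro-batch.
        latest = (
            batch_df.withColumn(
                "rn",
                F.row_number().over(
                    Window.partitionBy("id").orderBy(F.col("updated_at").desc())
                ),
            )
            .filter("rn = 1")
            .drop("rn")
        )

        target = DeltaTable.forName(spark, "silver.orders")   # made-up target table
        (
            target.alias("t")
            .merge(latest.alias("s"), "t.id = s.id")
            .whenMatchedDelete(condition="s.op = 'd'")        # source deletes remove the row
            .whenMatchedUpdateAll(condition="s.op != 'd'")
            .whenNotMatchedInsertAll(condition="s.op != 'd'")
            .execute()
        )

    (
        spark.readStream.table("bronze.orders_changelog")
        .writeStream.foreachBatch(merge_batch)
        .option("checkpointLocation", "/pipelines/pepto/checkpoints/orders_silver")  # made-up path
        .trigger(processingTime="5 minutes")
        .start()
    )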
Project Pepto: Results
- Table onboarding down to 1 hour, and self-serve
- 450 streams, over 1,000 EC2 nodes running 24/7
- 800 GB/day as input, 80 TB rewritten/day
- Data freshness of 7-30 minutes

Challenges and Learnings
- Checkpointing solves a lot of problems
- Type conversions are hard! For every adapter there are 2 serializers
- Large tables are operationally challenging
- State management is tough: make everything idempotent. Databricks's API with idempotency guarantees simplifies a lot
- Reputation is hard to gain, easy to lose

Future Work
- Ad hoc queries to migrate from online DBs to Delta Lake workloads
- Streaming PII obfuscation in the medallion architecture
- Schema changes to the source

Questions?
Thank you