-- pg_cron job: once a minute, atomically move every staged row
-- into the canonical table. The DELETE ... RETURNING feeds the
-- INSERT in a single statement, so rows are never lost or seen twice.
select cron.schedule('flush-queue', '* * * * *', $$
  with new_rows as (
    delete from measurements_staging returning *
  )
  insert into measurements select * from new_rows;
$$);
The "continuous ETL" process the GP is talking about would be exactly this kind of thing, and just as trivial. (In fact it would be this exact same code, just with your mental model flipped around from "promoting data from a staging table into a canonical iceberg table" to "evicting data from a canonical table into a historical-archive table".)
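For concreteness, the flipped ("eviction") direction could look like the sketch below. This is not code from pg_lake's docs; the `recorded_at` column, the 30-day cutoff, and the `measurements_archive` table are all made-up assumptions, with the archive table presumed to share the canonical table's schema:

```sql
-- Hypothetical tiered-storage job: once a day at 03:00, evict rows
-- older than 30 days from the hot canonical table into an archive.
-- Assumes measurements has a recorded_at timestamp column and that
-- measurements_archive has the same column layout (both names are
-- illustrative, not from pg_lake).
select cron.schedule('evict-cold-rows', '0 3 * * *', $$
  with cold_rows as (
    delete from measurements
    where recorded_at < now() - interval '30 days'
    returning *
  )
  insert into measurements_archive select * from cold_rows;
$$);
```

Same delete-returning-into-insert shape as the staging flush; only the direction of the move and the predicate change.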
Think "tiered storage."
See the example under https://github.com/Snowflake-Labs/pg_lake/blob/main/docs/ice... (the snippet at the top of this comment).