Hacker News | yingjunwu's comments

It's production-ready and has already been adopted by:

* RisingWave (https://github.com/risingwavelabs/risingwave): SQL stream processing, analytics, and management.

* Chroma (https://github.com/chroma-core/chroma): Embedding database for LLM apps.

* SlateDB (https://github.com/slatedb/slatedb): A cloud-native embedded storage engine built on object storage.


Database services are moving downmarket toward the developer segment. Stream processing should be democratized: next-generation systems should be free and extremely easy to install, deploy, use, and maintain.


We introduced Await-Tree as a powerful tool for observability in Async Rust. Await-Tree is a backtrace tool designed natively for Async Rust, which allows developers to observe the execution status of each async task in real time and analyze the blocking dependencies between different futures and tasks.


Serverless is very appealing, but it's not a panacea for your data infra problems.


A paradox in the data infra world: SQL's expressiveness is limited compared to Java's, yet SQL-centric data systems manage to generate more revenue.


We chose to build our own SQLSmith in Rust. It wasn't a mere whim. We tested various alternatives, analyzing their capabilities and adaptability to our needs. While some solutions showed promise, building our own SQLSmith in Rust consistently emerged as the right choice.
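For readers unfamiliar with SQLSmith-style fuzzing: the core idea is to recursively expand a small grammar into random but syntactically valid SQL, then feed each statement to the database and watch for crashes or internal errors. A minimal, language-agnostic sketch of that idea in Python (the real implementations, including RisingWave's, are far richer and are written in C++/Rust; all names below are illustrative):

```python
import random

# Tiny grammar-based SQL generator in the spirit of SQLSmith:
# expand expressions recursively up to a depth limit, then wrap
# them in a SELECT so every output is well-formed.

COLUMNS = ["v1", "v2", "ts"]
TABLES = ["t1", "t2"]

def gen_expr(rng, depth=0):
    """Generate a random scalar expression."""
    if depth >= 3 or rng.random() < 0.4:
        # Leaf: a column reference or an integer literal.
        return rng.choice(COLUMNS + [str(rng.randint(0, 100))])
    op = rng.choice(["+", "-", "*", "AND", "OR", "=", "<"])
    # Each internal node adds one matched pair of parentheses.
    return f"({gen_expr(rng, depth + 1)} {op} {gen_expr(rng, depth + 1)})"

def gen_query(seed):
    """Generate one random, well-formed SELECT statement."""
    rng = random.Random(seed)  # seeded, so failures are reproducible
    cols = ", ".join(gen_expr(rng) for _ in range(rng.randint(1, 3)))
    query = f"SELECT {cols} FROM {rng.choice(TABLES)}"
    if rng.random() < 0.5:
        query += f" WHERE {gen_expr(rng)}"
    return query

if __name__ == "__main__":
    for seed in range(3):
        print(gen_query(seed))
```

The seed is the important design point: when a generated query triggers a bug, the same seed regenerates the exact failing statement, which turns a random crash into a deterministic repro.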


I am the founder of RisingWave (http://risingwave.com/), an open-source SQL streaming database. I am happy to see the launch of WarpStream! I just reviewed the project and here's my personal opinion:

* Apache Kafka is undoubtedly the leading product in the streaming platform space. It offers a simple yet effective API that has become the gold standard. All streaming/messaging vendors need to adhere to the Kafka protocol.

* The original Kafka only used local storage to store data, which can be extremely expensive when data volumes are large. That's why many people are advocating for the development of Kafka Tiered Storage (KIP-405: https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A...). To the best of my knowledge, there are at least five vendors selling Kafka or Kafka-compatible products with tiered storage support:

-- Confluent, which builds Kora, the 10X Kafka engine: https://www.confluent.io/10x-apache-kafka/;

-- Aiven, the open-source tiered storage Kafka (source code: https://github.com/Aiven-Open/tiered-storage-for-apache-kafk...

-- Redpanda Data, which cuts your TCO by 6X (https://redpanda.com/platform-tco);

-- DataStax, which commercializes Apache Pulsar (https://pulsar.apache.org/);

-- StreamNative, which commercializes Apache Pulsar (https://pulsar.apache.org/).

* WarpStream claims to be "built directly on top of S3," which I believe is a very aggressive approach that has the potential to drastically reduce costs, even compared to tiered storage. The potential tradeoff is system performance, especially in terms of latency. As a new technology, WarpStream brings novelty, but it also needs to convince users that the service is robust and reliable.

* BYOC (Bring Your Own Cloud) is becoming the default option. Most of the vendors listed above offer BYOC, where data is stored in customers' cloud accounts, addressing concerns about data privacy and security.

I believe WarpStream brings new technology to this market, and I would encourage the team to publish some detailed numbers to confirm its performance and efficiency!


If you're still within the edit window for your comment: the ");" endings are attached to the URLs, making them problematic to click.

ed

-- Confluent, which builds Kora, the 10X Kafka engine: https://www.confluent.io/10x-apache-kafka/

-- Aiven, the open-source tiered storage Kafka (source code: https://github.com/Aiven-Open/tiered-storage-for-apache-kafk...

-- Redpanda Data, which cuts your TCO by 6X https://redpanda.com/platform-tco

-- DataStax, which commercializes Apache Pulsar https://pulsar.apache.org/

-- StreamNative, which commercializes Apache Pulsar https://pulsar.apache.org/


It doesn't make sense to limit the read rate purely because of "data scraping and system manipulation." The real issue is that the underlying data infra is not scalable or cost-efficient.

Elon, adopt open-source technologies to reduce your GCP bill!


Tweets are not served from GCP.


The modern real-time data stack has gained significant popularity in recent times. Systems such as Apache Pinot, Apache Druid, ClickHouse, RisingWave, and Apache Flink have been widely adopted in companies' data stacks. However, there are still some areas of confusion when it comes to this emerging space. During my conversations with customers, I frequently encounter the following questions:

* What is the difference between stream processing and real-time OLAP?

* Why do I need stream processing if I'm already using an OLAP store?

* Are RisingWave/Flink competitors to Pinot/Druid/ClickHouse?

These are excellent questions, and I enjoy personally explaining my perspective on the modern real-time data stack to each individual. However, in order to reach a wider audience interested in data engineering, I have written a blog post on this topic.

In summary, stream processing and real-time OLAP databases differ significantly in their design, implementation, and use cases. Stream processing is better suited for monitoring, alerting, and automation scenarios, while real-time OLAP is more suitable for interactive and exploratory analytics. Incorporating both types of systems in your data stack may be beneficial for your overall data processing needs!
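The distinction above can be sketched in a few lines of code: a stream processor maintains the answer incrementally as events arrive (write-time work, answers always fresh, good for monitoring and alerting), while an OLAP store keeps raw rows and computes the answer when you ask (read-time work, good for ad-hoc exploratory questions). A toy illustration in Python, showing only the access pattern, not how either class of system is actually implemented:

```python
# Toy contrast: incremental view maintenance (stream processing)
# vs. scan-at-query-time (real-time OLAP).

class StreamingCount:
    """Maintains a running count per key; each event does O(1) work.
    The 'materialized view' is always up to date."""
    def __init__(self):
        self.view = {}

    def on_event(self, key):
        self.view[key] = self.view.get(key, 0) + 1

class OlapStore:
    """Stores raw events; each query scans them, so any question
    can be asked after the fact, at read-time cost."""
    def __init__(self):
        self.rows = []

    def ingest(self, key):
        self.rows.append(key)

    def query_count(self, key):
        return sum(1 for k in self.rows if k == key)

events = ["click", "view", "click", "click"]
sp, olap = StreamingCount(), OlapStore()
for e in events:
    sp.on_event(e)    # work happens on write
    olap.ingest(e)    # write is cheap; work happens on read

print(sp.view["click"])           # precomputed answer: 3
print(olap.query_count("click"))  # computed at query time: 3
```

Both arrive at the same number, but the streaming side pays per event while the OLAP side pays per query, which is exactly why the two are complementary rather than competing.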


Proud to see my name (https://twitter.com/YingjunWu) mentioned in Andy's blog. I was a visiting PhD student under Andy at CMU and was the top contributor to Peloton (https://github.com/cmu-db/peloton).

Today, building a database from scratch is extremely difficult, for several reasons: 1. it inevitably takes a long time; 2. there are already many successful (open-source) databases; 3. hiring top engineers is very expensive; 4. you won't get enough attention unless your system is drastically better than existing ones.

An interesting observation is that very few databases have been built from scratch since 2020 - almost all newly built databases were developed on top of existing ones (PostgreSQL, ClickHouse, etc.).

I started building RisingWave (https://github.com/risingwavelabs/risingwave) in early 2021. The only reason we built the system from scratch was that none of the existing systems could address the problem we are solving - distributed SQL stream processing at cloud scale. We tried Flink but gave up, as it's too heavy and its architecture was not designed for the cloud environment.

If you want to build a database from scratch, or are simply interested in databases, let's talk.


You're obviously the expert here, but I was surprised that you found it notable that very few databases have been built in the last three years. That seems like a very short timeframe. Per Wikipedia, ClickHouse started as an experimental project in 2009 and was first released in 2016.

