Hacker News

This has been a concern for me too. But the agent is just a statsd receiver with some extra magic, so it seems like this could be solved by having the collector send traffic to the agent rather than to the HTTP APIs?

I looked at the OTel DD stuff and didn't see any support for this, FWIW; maybe it doesn't work because the agent expects more context from the pod (e.g., app and labels)?



Yeah, the DD agent and the otel-collector DD exporter actually use mostly the same code paths. The relevant difference tends to be in metrics, where the official path has the DD agent doing collection directly: for example, collecting Redis metrics by giving the agent your Redis database hostname and credentials. It can then pack those into the specific shape DD knows about, and they get sent with the right names, values, etc., so that DD treats them as regular metrics.
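For reference, the agent-native path looks roughly like this. A hedged sketch of a Datadog agent Redis integration config (the `redisdb.d/conf.yaml` check config); the hostname and password are placeholders, not real values:

```yaml
# Sketch of a Datadog agent Redis check config (redisdb.d/conf.yaml).
# Hostname and credential below are placeholders.
init_config:

instances:
  - host: redis.internal.example   # placeholder hostname
    port: 6379
    password: "<redis-password>"   # placeholder credential
```

With this in place the agent scrapes Redis itself and emits the metrics under Datadog's own names (e.g. `redis.mem.used`), which is what keeps them out of the custom-metric bucket.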

If you instead go the more flexible route of using the de-facto standard Prometheus exporters, like the one for Redis, or built-in Prometheus metrics from something like Istio, and forward those to your agent (or configure the agent to poll them), it won't do any reshaping (which I can see the arguments for, kinda, knowing a bit about their backend), and they just end up in the DD backend as custom metrics, charged at $0.10/mo per 100 time series. If you've used Prometheus for any realistic deployment with enrichment etc., you can probably see how this gets expensive ridiculously fast.
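To put that rate in perspective, a quick back-of-the-envelope calculation (the series counts here are illustrative assumptions, not measured numbers):

```python
# Back-of-the-envelope custom-metric cost at $0.10/month per 100 time series.
# The series counts below are illustrative assumptions, not real measurements.

RATE_PER_100_SERIES = 0.10  # USD per month


def monthly_cost(num_series: int) -> float:
    """Cost in USD/month for a given number of custom time series."""
    return num_series / 100 * RATE_PER_100_SERIES


# e.g. 50 pods x 200 Istio series each, multiplied by a few label dimensions
series = 50 * 200 * 5  # 50,000 time series
print(f"${monthly_cost(series):,.2f}/month")  # -> $50.00/month
```

And that's before any cardinality from per-pod or per-endpoint labels, which is exactly the "enrichment" that makes realistic Prometheus setups explode in series count.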

What I wish they'd do instead is provide some form of adapter from those de-facto standards, so I could still collect metrics 99% my own way, in a portable fashion, and then add DD as my backend without everything ending up as custom metrics and costing significantly more.
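Such an adapter could be as simple as a renaming layer in front of the agent: map the de-facto standard Prometheus exporter names onto the names the native Datadog integrations emit. A hypothetical sketch (the mapping table and function are mine, not anything Datadog ships):

```python
# Hypothetical adapter: remap de-facto-standard Prometheus metric names to the
# names Datadog's native Redis integration uses, so the backend would treat
# them as regular metrics. This mapping is illustrative, not an official schema.

PROM_TO_DD = {
    "redis_connected_clients": "redis.net.clients",
    "redis_memory_used_bytes": "redis.mem.used",
    "redis_keyspace_hits_total": "redis.stats.keyspace_hits",
}


def remap(prom_name: str) -> str:
    """Return the Datadog-style name if known; otherwise pass through
    unchanged (which would still land as a custom metric)."""
    return PROM_TO_DD.get(prom_name, prom_name)


print(remap("redis_memory_used_bytes"))  # -> redis.mem.used
print(remap("myapp_requests_total"))     # -> myapp_requests_total (unmapped)
```

The hard part in practice would be reconciling label/tag conventions and metric types (counters vs. gauges), not the renaming itself, but even a partial mapping for the popular exporters would cut the custom-metric bill substantially.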



