Now I’m trying to connect a new feature, but I ran into a problem. With Prometheus 3.0.1 (latest) everything works fine, but with version 2.55.1 the graph is not displayed in the UI.
To reproduce, I ran docker-compose from the example with image: prom/prometheus:v2.55.1 and got the same effect. Can I fix this myself somehow, or is updating to Prometheus 3.0.1 the only option?
We recently merged a fix that should work with older versions of Prometheus. We haven’t yet released it as a new version of Flipt, but we’ll create a new release this coming week.
In the meantime, can you try the nightly version of Flipt? I updated the docker-compose file to this:
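(The exact snippet from that comment isn’t reproduced here; purely as an illustration, pinning the nightly image in the example compose file might look like the sketch below. The `flipt/flipt:nightly` tag and port are assumptions — adjust to your setup.)

```yaml
services:
  flipt:
    image: flipt/flipt:nightly   # assumed nightly tag; pulls the latest unreleased build
    ports:
      - "8080:8080"              # Flipt's default HTTP port
```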
There is no problem with Prometheus v2.55.1 itself; it’s a UI bug. As a workaround, toggle the "Timezone UTC" setting on and off in Settings. The chart will render correctly after that.
Thanks for the update.
For me it still doesn’t work, and as of this version the graph in Grafana v11.3.1 has stopped displaying as well. In Prometheus itself the numeric metrics are shown correctly.
Thanks for the update! It works on docker compose (examples/analytics/prometheus/docker-compose.yml, edited prometheus to v2.55.1 and flipt to v1.53.1).
But for some reason it’s not yet working on our k8s cluster. I can see data in Prometheus, but not in Flipt or Grafana (with this dashboard).
If I understand this correctly, Flipt retrieves zero data because the query is wrong: the `namespace` label refers to the Kubernetes namespace the application runs in, not the Flipt namespace.
Whereas on the local Compose setup, the data point looks like this:
# honor_labels controls how Prometheus handles conflicts between labels that are
# already present in scraped data and labels that Prometheus would attach
# server-side ("job" and "instance" labels, manually configured target
# labels, and labels generated by service discovery implementations).
#
# If honor_labels is set to "true", label conflicts are resolved by keeping label
# values from the scraped data and ignoring the conflicting server-side labels.
#
# If honor_labels is set to "false", label conflicts are resolved by renaming
# conflicting labels in the scraped data to "exported_<original-label>" (for
# example "exported_instance", "exported_job") and then attaching server-side
# labels.
#
# Setting honor_labels to "true" is useful for use cases such as federation and
# scraping the Pushgateway, where all labels specified in the target should be
# preserved.
#
# Note that any globally configured "external_labels" are unaffected by this
# setting. In communication with external systems, they are always applied only
# when a time series does not have a given label yet and are ignored otherwise.
[ honor_labels: <boolean> | default = false ]
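Based on the documentation quoted above, a minimal scrape config that keeps Flipt’s own `namespace` label instead of overwriting it with the server-side one might look like this (the job name and target address are assumptions for illustration):

```yaml
scrape_configs:
  - job_name: flipt            # assumed job name
    honor_labels: true         # keep labels from the scraped data (e.g. Flipt's `namespace`)
    static_configs:
      - targets: ["flipt:8080"]  # assumed target address
```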
I consulted with a colleague, and yes, that could work. But we’ll need to create our own ServiceMonitor manifest instead of using the template from Flipt’s Helm chart, because the setting isn’t exposed there. I’m going to file a PR to expose it in the template → PR link.
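For anyone hitting the same issue before the chart exposes this: a hand-written ServiceMonitor using the Prometheus Operator’s `honorLabels` field might look like the sketch below. The metadata name, selector labels, and port name are assumptions — match them to how your chart labels the Flipt Service.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: flipt                        # assumed name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: flipt  # assumed Service labels
  endpoints:
    - port: http                     # assumed port name on the Service
      path: /metrics
      honorLabels: true              # preserve Flipt's own `namespace` label on scrape
```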