- Add overcommit data processing in /cluster/status endpoint
- Extract CPU/Memory capacity and requests from Prometheus
- Calculate overcommit percentages and resource quota coverage
- Update frontend to use new overcommit data structure
- Fix the issue where the Cluster Overcommit Summary was showing all zeros
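A minimal sketch of the overcommit calculation described above; `query_prometheus` and the response field names are assumptions, not the endpoint's actual code:

```python
def build_overcommit_summary(query_prometheus) -> dict:
    """Compute cluster CPU/Memory overcommit from Prometheus instant queries.

    `query_prometheus` is assumed to return a single scalar for an instant
    query; the metric names follow kube-state-metrics conventions.
    """
    cpu_capacity = query_prometheus('sum(kube_node_status_allocatable{resource="cpu"})')
    cpu_requests = query_prometheus('sum(kube_pod_container_resource_requests{resource="cpu"})')
    mem_capacity = query_prometheus('sum(kube_node_status_allocatable{resource="memory"})')
    mem_requests = query_prometheus('sum(kube_pod_container_resource_requests{resource="memory"})')

    def pct(requests: float, capacity: float) -> float:
        # Guard against empty query results so the summary never shows NaN
        # (the all-zeros symptom came from missing overcommit data, not this math).
        return round(requests / capacity * 100, 1) if capacity else 0.0

    return {
        "cpu": {"capacity": cpu_capacity, "requests": cpu_requests,
                "overcommit_pct": pct(cpu_requests, cpu_capacity)},
        "memory": {"capacity": mem_capacity, "requests": mem_requests,
                   "overcommit_pct": pct(mem_requests, mem_capacity)},
    }
```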
- Use regex pattern pod=~"{workload}.*" in workload metrics API
- This matches the fix applied to historical analysis
- Should resolve the issue where resource-governance workload data was not being retrieved
- Both historical analysis and workload metrics now use consistent pod name matching
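Roughly how the workload-level query changes with the regex matcher; the function name and exact query shape below are illustrative:

```python
def workload_cpu_usage_query(namespace: str, workload: str) -> str:
    # Match every pod spawned by the workload (ReplicaSet/pod hash suffixes)
    # instead of requiring an exact pod name, mirroring the historical-analysis fix.
    return (
        'sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate'
        f'{{namespace="{namespace}", pod=~"{workload}.*"}})'
    )
```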
- Use regex pattern pod=~"{pod.name}.*" instead of exact match
- This allows matching pods with suffixes like resource-governance-78b77cc868-gchx7
- Apply fix to both CPU and Memory queries for usage, requests, and limits
- Should resolve the issue where resource-governance pod data was not being retrieved
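A sketch of applying the same `pod=~"<name>.*"` matcher across the six CPU/Memory queries; the query templates are assumptions rather than the service's exact strings:

```python
POD_QUERY_TEMPLATES = {
    # Usage, requests, and limits all switch from pod="<name>" to pod=~"<name>.*"
    # so suffixed pods such as resource-governance-78b77cc868-gchx7 are matched.
    "cpu_usage": 'sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate{{namespace="{ns}", pod=~"{pod}.*"}})',
    "cpu_requests": 'sum(kube_pod_container_resource_requests{{namespace="{ns}", pod=~"{pod}.*", resource="cpu"}})',
    "cpu_limits": 'sum(kube_pod_container_resource_limits{{namespace="{ns}", pod=~"{pod}.*", resource="cpu"}})',
    "memory_usage": 'sum(container_memory_working_set_bytes{{namespace="{ns}", pod=~"{pod}.*", container!=""}})',
    "memory_requests": 'sum(kube_pod_container_resource_requests{{namespace="{ns}", pod=~"{pod}.*", resource="memory"}})',
    "memory_limits": 'sum(kube_pod_container_resource_limits{{namespace="{ns}", pod=~"{pod}.*", resource="memory"}})',
}

def build_pod_queries(ns: str, pod_name: str) -> dict:
    """Fill the namespace and pod-name prefix into each query template."""
    return {key: tpl.format(ns=ns, pod=pod_name) for key, tpl in POD_QUERY_TEMPLATES.items()}
```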
- Revert step calculation to 60s for better data retrieval
- Reduce threshold to 3 data points for insufficient data detection
- Add detailed logging for Prometheus query debugging
- Ensure historical data is properly retrieved from Prometheus
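A sketch of the range query with the fixed 60s step, the 3-point insufficient-data threshold, and the added debug logging; it uses the plain Prometheus HTTP API, and the helper name and surrounding structure are assumptions:

```python
import logging
import requests

log = logging.getLogger(__name__)
MIN_DATA_POINTS = 3  # below this, the series is treated as "insufficient data"

def query_range(prom_url: str, query: str, start: float, end: float) -> list:
    """Run a Prometheus range query with a fixed 60s step (illustrative helper)."""
    resp = requests.get(
        f"{prom_url}/api/v1/query_range",
        params={"query": query, "start": start, "end": end, "step": "60s"},
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    points = results[0]["values"] if results else []  # first series only, for brevity
    # Debug logging to trace why a series comes back empty or too short.
    log.debug("query=%s step=60s points=%d", query, len(points))
    if len(points) < MIN_DATA_POINTS:
        log.warning("insufficient data: %d point(s) for query %s", len(points), query)
    return points
```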
- Adjust Prometheus query step based on time range (5min for 24h)
- Reduce threshold from 10 to 5 data points for insufficient data detection
- Add debug logging to understand data point counts
- Improve step calculation: 30s for 1h, 5min for 24h, 30min for 7d
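The range-dependent step selection from these bullets, sketched with an assumed helper name:

```python
def step_for_range(range_seconds: int) -> str:
    """Pick a Prometheus step so each time range returns a manageable number of points."""
    if range_seconds <= 3600:           # up to 1h
        return "30s"
    if range_seconds <= 24 * 3600:      # up to 24h
        return "5m"
    return "30m"                        # 7d and beyond
```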
- Unify Prometheus queries between namespace analysis and historical analysis
- Fix efficiency calculations to prevent division by zero
- Remove duplicate validations in validation service
- Improve frontend data display with clear numerical values
- Add proper error handling for missing data
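A sketch of the guarded efficiency calculation (usage as a share of requests); returning `None` instead of dividing by zero lets the frontend show a clear "no data" value. Field names are assumptions:

```python
def efficiency_pct(usage: float, requests: float) -> float | None:
    """Usage as a percentage of requests; None when requests are missing or zero."""
    if not requests:
        # Avoid division by zero when a pod has no requests set or data is missing.
        return None
    return round(usage / requests * 100, 1)
```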
- Use node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate for CPU usage
- Use container_memory_working_set_bytes with kubelet job for memory usage
- Use kube_pod_container_resource_requests/limits with kube-state-metrics job
- Add workload-specific filtering to match OpenShift dashboard behavior
- This should resolve the 'insufficient data' issue by using the same metrics as OpenShift
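How the metrics listed above could be combined into workload-filtered queries; the exact label sets are assumptions based on these bullets, not the dashboards' verbatim expressions:

```python
def namespace_queries(namespace: str, workload: str) -> dict:
    """Queries aligned with the OpenShift dashboard metrics (sketch)."""
    selector = f'namespace="{namespace}", pod=~"{workload}.*"'
    return {
        # CPU usage from the pre-aggregated recording rule used by the console.
        "cpu_usage": f'sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate{{{selector}}})',
        # Working-set memory as reported by the kubelet/cAdvisor job.
        "memory_usage": f'sum(container_memory_working_set_bytes{{job="kubelet", {selector}, container!=""}})',
        # Requests and limits from kube-state-metrics.
        "cpu_requests": f'sum(kube_pod_container_resource_requests{{job="kube-state-metrics", {selector}, resource="cpu"}})',
        "memory_requests": f'sum(kube_pod_container_resource_requests{{job="kube-state-metrics", {selector}, resource="memory"}})',
        "cpu_limits": f'sum(kube_pod_container_resource_limits{{job="kube-state-metrics", {selector}, resource="cpu"}})',
        "memory_limits": f'sum(kube_pod_container_resource_limits{{job="kube-state-metrics", {selector}, resource="memory"}})',
    }
```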