- Include PromQL queries in API response for workload metrics
- Display queries in historical analysis modal with copy functionality
- Add professional styling for query display sections
- Enable users to copy and validate queries in OpenShift Console
- Organize queries by category: cluster totals, usage, requests, limits (response shape sketched below)
- Add copy-to-clipboard functionality with visual feedback
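A minimal sketch of how the per-category query block in the API response could be shaped; the key names and example queries are illustrative, not the exact contract:

```python
# Illustrative shape of the PromQL block returned with workload metrics.
# Category keys mirror the bullet above; field names and queries are assumptions.
workload_queries = {
    "cluster_totals": {
        "cpu_capacity": 'sum(kube_node_status_capacity{resource="cpu"})',
    },
    "usage": {
        "cpu": 'sum(rate(container_cpu_usage_seconds_total{namespace="demo", pod=~"resource-governance.*"}[5m]))',
    },
    "requests": {
        "cpu": 'sum(kube_pod_container_resource_requests{namespace="demo", pod=~"resource-governance.*", resource="cpu"})',
    },
    "limits": {
        "cpu": 'sum(kube_pod_container_resource_limits{namespace="demo", pod=~"resource-governance.*", resource="cpu"})',
    },
}
```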
- Check both CPU and Memory data availability before historical analysis
- If either CPU or Memory has insufficient data, add a warning and skip the analysis
- Prevent conflicting insufficient_historical_data and historical_analysis results
- Ensure consistent data availability requirements for workload analysis
- Only proceed with P95/P99 calculations when both resources have sufficient data
- Fix insufficient_historical_data vs historical_analysis contradiction
- Add an early return when data is insufficient to prevent the P99 calculation (guard sketched below)
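A sketch of the combined guard, assuming a rule_name-style validation entry and the 3-point threshold mentioned later in this log; the function and key names are illustrative:

```python
import statistics

MIN_DATA_POINTS = 3  # threshold used for "insufficient data" elsewhere in this log

def analyze_if_sufficient(cpu_points, memory_points, validations):
    """Run P95/P99 analysis only when both CPU and Memory series have enough samples."""
    if len(cpu_points) < MIN_DATA_POINTS or len(memory_points) < MIN_DATA_POINTS:
        validations.append({
            "rule_name": "insufficient_historical_data",
            "message": "Not enough CPU/Memory samples for historical analysis",
        })
        return None  # early return: no conflicting historical_analysis entry is produced

    def p(values, pct):
        # statistics.quantiles with n=100 yields the 1st..99th percentile cut points
        return statistics.quantiles(values, n=100)[pct - 1]

    return {
        "cpu_p95": p(cpu_points, 95), "cpu_p99": p(cpu_points, 99),
        "memory_p95": p(memory_points, 95), "memory_p99": p(memory_points, 99),
    }
```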
- Implement workload-based historical analysis instead of pod-based
- Add _extract_workload_name() to identify the workload from pod names (see the sketch below)
- Add analyze_workload_historical_usage() for workload-level analysis
- Add _analyze_workload_metrics() with Prometheus workload queries
- Add validate_workload_resources_with_historical_analysis() method
- Update /cluster/status endpoint to use workload analysis by namespace
- Improve reliability by analyzing workloads instead of individual pods
- Maintain fallback to pod-level analysis if workload analysis fails
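One way _extract_workload_name() might derive the workload from a pod name; the regexes are an assumption about typical Deployment/StatefulSet naming, not the exact implementation:

```python
import re

def extract_workload_name(pod_name: str) -> str:
    """e.g. 'resource-governance-78b77cc868-gchx7' -> 'resource-governance'."""
    # Deployment pods: <workload>-<replicaset-hash>-<pod-suffix>
    m = re.match(r"^(.+)-[a-z0-9]{8,10}-[a-z0-9]{5}$", pod_name)
    if m:
        return m.group(1)
    # StatefulSet pods: <workload>-<ordinal>
    m = re.match(r"^(.+)-\d+$", pod_name)
    if m:
        return m.group(1)
    return pod_name  # fall back to the pod name itself

print(extract_workload_name("resource-governance-78b77cc868-gchx7"))
```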
- Remove duplicate static validations from /cluster/status endpoint
- Use only historical analysis which includes static validations
- Add fallback to static validations only if historical analysis fails
- Eliminate duplicate invalid_ratio and container_metrics validations
- Improve validation efficiency and reduce redundancy
- Change validation.validation_type to validation.rule_name
- API returns rule_name field, not validation_type
- Fix 'undefined' being displayed in the namespace analysis modal
- Ensure proper validation type display in detailed view
- Remove duplicate createNamespaceDetails() function
- Fix validation.rule_name to validation.validation_type
- Keep only showNamespaceDetailsSimple() function
- Eliminate redundant code in namespace analysis modal
- Improve code maintainability and reduce duplication
- Add info icon next to Resource Utilization metric
- Create showResourceUtilizationDetails() function
- Explain placeholder implementation status
- Show formula and purpose of Resource Utilization
- Indicate Phase 2 implementation plan
- Provide clear next steps for development
- Update README.md with Cluster Overcommit Analysis features
- Add Podman preference over Docker in requirements
- Update DOCUMENTATION.md with Phase 1 completion status
- Update AIAgents-Support.md with Phase 1.3 completion
- Add Cluster Overcommit Analysis to completed features
- Update version to 1.2.0 and dates to 2025-10-01
- Reflect current implementation status across all docs
- Add proper closeModal() function
- Fix close button (X) click handler
- Fix click outside modal to close
- Remove modal from DOM instead of just hiding
- Improve modal user experience
- Replace tooltips with info icons (ℹ️) next to CPU/Memory Overcommit
- Add modal dialogs showing detailed overcommit calculations
- Change Resource Quota Coverage to Resource Utilization
- Add CSS styling for overcommit details modals
- Improve UX with clickable info icons instead of hover tooltips
- Show capacity, requests, overcommit percentage, and available resources
- Add tooltips showing capacity, requests, and calculation details
- Include CPU and Memory capacity/requests in API response
- Add CSS styling for tooltip hover effects
- Show detailed breakdown: Capacity Total, Requests Total, and calculation formula
- Improve user experience with transparent overcommit information
- Add overcommit data processing in /cluster/status endpoint
- Extract CPU/Memory capacity and requests from Prometheus
- Calculate overcommit percentages and resource quota coverage (formula sketched below)
- Update frontend to use new overcommit data structure
- Fix issue where Cluster Overcommit Summary was showing all zeros
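A sketch of the summary math assumed here: overcommit as requests divided by capacity, and available resources as the remainder; field names are illustrative:

```python
def overcommit_summary(requests_total: float, capacity_total: float) -> dict:
    """Assumed formula: overcommit % = requests / capacity * 100, guarding zero capacity."""
    pct = 0.0 if capacity_total <= 0 else requests_total / capacity_total * 100
    return {
        "capacity_total": capacity_total,
        "requests_total": requests_total,
        "overcommit_percent": round(pct, 1),
        "available": max(capacity_total - requests_total, 0),
    }

# e.g. 96 CPU cores of capacity with 120 cores requested -> 125.0% overcommit
print(overcommit_summary(120, 96))
```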
- Use regex pattern pod=~"{workload}.*" in workload metrics API
- This matches the fix applied to historical analysis
- Should resolve issue where resource-governance workload data was not being retrieved
- Both historical analysis and workload metrics now use consistent pod name matching
- Use regex pattern pod=~"{pod.name}.*" instead of exact match
- This allows matching pods with suffixes like resource-governance-78b77cc868-gchx7
- Apply fix to both CPU and Memory queries for usage, requests, and limits
- Should resolve issue where resource-governance pod data was not being retrieved
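A sketch of the exact-match vs regex-match queries, using the kube-state-metrics request metric as an example; the label set and metric choice are illustrative:

```python
namespace = "resource-governance"
workload = "resource-governance"

# Exact match misses suffixed pod names such as resource-governance-78b77cc868-gchx7
exact_match = (
    f'kube_pod_container_resource_requests'
    f'{{namespace="{namespace}", pod="{workload}", resource="cpu"}}'
)

# Regex match (pod=~"<workload>.*") picks up every pod belonging to the workload
regex_match = (
    f'kube_pod_container_resource_requests'
    f'{{namespace="{namespace}", pod=~"{workload}.*", resource="cpu"}}'
)
```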
- Revert step calculation to 60s for better data retrieval
- Reduce threshold to 3 data points for insufficient data detection
- Add detailed logging for Prometheus query debugging
- Ensure historical data is properly retrieved from Prometheus
- Adjust Prometheus query step based on time range (5min for 24h)
- Reduce threshold from 10 to 5 data points for insufficient data detection
- Add debug logging to understand data point counts
- Improve step calculation: 30s for 1h, 5min for 24h, 30min for 7d (sketched below)
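A sketch of the tiered step selection described in this entry (later reverted to a fixed 60s step, per the entry above); the thresholds mirror the bullets, the function name is an assumption:

```python
def prometheus_step(duration_hours: float) -> str:
    """Pick a query_range step: 30s for <=1h, 5m for <=24h, 30m beyond that."""
    if duration_hours <= 1:
        return "30s"
    if duration_hours <= 24:
        return "5m"
    return "30m"

# A 24h range at a 5m step returns 288 samples, well above the 5-point threshold
print(prometheus_step(24))
```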
- Unify Prometheus queries between namespace analysis and historical analysis
- Fix efficiency calculations to prevent division by zero (guard sketched below)
- Remove duplicate validations in validation service
- Improve frontend data display with clear numerical values
- Add proper error handling for missing data
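A minimal guard for the efficiency calculation, assuming efficiency is usage relative to requests; the actual formula may differ:

```python
from typing import Optional

def efficiency_percent(usage: float, requests: float) -> Optional[float]:
    """Usage as a percentage of requests; None when requests are missing or zero."""
    if not requests:
        return None  # avoids division by zero; the frontend can render "N/A"
    return round(usage / requests * 100, 1)

print(efficiency_percent(0.35, 0.5))  # 70.0
print(efficiency_percent(0.35, 0))    # None
```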
- Use node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate for CPU usage
- Use container_memory_working_set_bytes with kubelet job for memory usage
- Use kube_pod_container_resource_requests/limits with kube-state-metrics job
- Add workload-specific filtering to match OpenShift dashboard behavior (full queries sketched below)
- This should resolve the 'insufficient data' issue by using the same metrics as OpenShift
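The metric names listed above assembled into full queries; the namespace/workload label filters are illustrative:

```python
namespace, workload = "demo", "resource-governance"

queries = {
    # Recording rule the OpenShift console uses for CPU usage
    "cpu_usage": (
        f'sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate'
        f'{{namespace="{namespace}", pod=~"{workload}.*"}})'
    ),
    # Working-set memory from the kubelet/cAdvisor endpoint
    "memory_usage": (
        f'sum(container_memory_working_set_bytes'
        f'{{job="kubelet", namespace="{namespace}", pod=~"{workload}.*", container!=""}})'
    ),
    # Requests and limits from kube-state-metrics
    "cpu_requests": (
        f'sum(kube_pod_container_resource_requests'
        f'{{job="kube-state-metrics", namespace="{namespace}", pod=~"{workload}.*", resource="cpu"}})'
    ),
    "memory_limits": (
        f'sum(kube_pod_container_resource_limits'
        f'{{job="kube-state-metrics", namespace="{namespace}", pod=~"{workload}.*", resource="memory"}})'
    ),
}
```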