Add CI/CD with GitHub Actions and migrate to Deployment

- Migrate from DaemonSet to Deployment for better efficiency
- Add GitHub Actions for automatic build and deploy
- Add Blue-Green deployment strategy with health checks
- Add scripts for development and production workflows
- Update documentation with CI/CD flow
2025-09-25 17:20:38 -03:00
parent 4e57a896fe
commit 3a6875a80e
12 changed files with 1344 additions and 13 deletions

View File

@@ -64,25 +64,32 @@ jobs:
          # Login to OpenShift
          oc login ${{ secrets.OPENSHIFT_SERVER }} --token=${{ secrets.OPENSHIFT_TOKEN }}

          # Apply base manifests (namespace, rbac, configmap)
          oc apply -f k8s/namespace.yaml
          oc apply -f k8s/rbac.yaml
          oc apply -f k8s/configmap.yaml

          # Update the deployment with the new image
          oc set image deployment/${{ env.IMAGE_NAME }} ${{ env.IMAGE_NAME }}=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }} -n ${{ env.NAMESPACE }} || true

          # Apply deployment, service and route
          oc apply -f k8s/deployment.yaml
          oc apply -f k8s/service.yaml
          oc apply -f k8s/route.yaml

          # Wait for rollout
          oc rollout status deployment/${{ env.IMAGE_NAME }} -n ${{ env.NAMESPACE }} --timeout=300s

          # Verify deployment
          oc get deployment ${{ env.IMAGE_NAME }} -n ${{ env.NAMESPACE }}
          oc get pods -n ${{ env.NAMESPACE }} -l app.kubernetes.io/name=${{ env.IMAGE_NAME }}

          # Get route URL
          ROUTE_URL=$(oc get route ${{ env.IMAGE_NAME }}-route -n ${{ env.NAMESPACE }} -o jsonpath='{.spec.host}' 2>/dev/null || echo "")
          if [ -n "$ROUTE_URL" ]; then
            echo "🚀 Application deployed successfully!"
            echo "🌐 URL: https://$ROUTE_URL"
            echo "📊 Status: oc get pods -n ${{ env.NAMESPACE }} -l app.kubernetes.io/name=${{ env.IMAGE_NAME }}"
          fi
        env:
          OPENSHIFT_SERVER: ${{ secrets.OPENSHIFT_SERVER }}

View File

@@ -38,9 +38,47 @@ A resource governance tool for OpenShift clusters that goes beyond
### 2. Deploy on OpenShift

#### 🚀 Automated CI/CD (Recommended for Production)
```bash
# 1. Configure the GitHub secrets
./scripts/setup-github-secrets.sh

# 2. Commit and push
git add .
git commit -m "New feature"
git push origin main

# 3. GitHub Actions deploys automatically!
```

**Automated flow** (a terminal sketch for following a run comes right after this list):
- **Push to main** → GitHub Actions detects the change
- **Automatic build** → New image on Docker Hub
- **Automatic deploy** → OpenShift updates the deployment
- **Rolling update** → Zero downtime
- **Health checks** → Automatic validation
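A minimal sketch for following the automated flow from the terminal, assuming the GitHub CLI (`gh`) is installed and authenticated against this repository:

```bash
# After pushing to main, inspect and follow the triggered workflow run
gh run list --limit 1   # shows the most recent Actions run and its status
gh run watch            # interactively follows a run until it completes
```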
#### 🔧 Manual Deploy (Development)
```bash
# Deploy with the Blue-Green strategy
./scripts/blue-green-deploy.sh

# Deploy a specific tag
./scripts/blue-green-deploy.sh v1.2.0

# Test the CI/CD flow locally
./scripts/test-ci-cd.sh
```

**Why scripts during development:**
- **Full control** over the process
- **Fast iteration** while developing
- **Easier debugging**
- **Local testing** before pushing

#### Full Deploy (Initial)
```bash
# Full deploy with ImagePullSecret (first time)
./scripts/deploy-complete.sh
```
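The Deployment in this commit references an `imagePullSecret` named `docker-hub-secret`. If you are not using `deploy-complete.sh`, a sketch of creating it by hand (credentials are placeholders):

```bash
# Create the pull secret referenced by k8s/deployment.yaml
oc create secret docker-registry docker-hub-secret \
  --docker-server=docker.io \
  --docker-username="your-dockerhub-user" \
  --docker-password="dckr_pat_..." \
  -n resource-governance
```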

View File

@@ -3,6 +3,7 @@ API routes
"""
import logging
from typing import List, Optional
from datetime import datetime

from fastapi import APIRouter, HTTPException, Depends, Request
from fastapi.responses import FileResponse
@@ -12,6 +13,7 @@ from app.models.resource_models import (
)
from app.services.validation_service import ValidationService
from app.services.report_service import ReportService
from app.services.historical_analysis import HistoricalAnalysisService

logger = logging.getLogger(__name__)
@@ -365,6 +367,61 @@ async def apply_recommendation(
        logger.error(f"Error applying recommendation: {e}")
        raise HTTPException(status_code=500, detail=str(e))


@api_router.get("/validations/historical")
async def get_historical_validations(
    namespace: Optional[str] = None,
    time_range: str = "24h",
    k8s_client=Depends(get_k8s_client)
):
    """Get validations enriched with historical analysis from Prometheus"""
    try:
        validation_service = ValidationService()

        # Collect pods (a single namespace or the whole cluster)
        if namespace:
            namespace_resources = await k8s_client.get_namespace_resources(namespace)
            pods = namespace_resources.pods
        else:
            pods = await k8s_client.get_all_pods()

        # Validate each pod, including historical analysis
        all_validations = []
        for pod in pods:
            pod_validations = await validation_service.validate_pod_resources_with_historical_analysis(
                pod, time_range
            )
            all_validations.extend(pod_validations)

        return {
            "validations": all_validations,
            "total": len(all_validations),
            "time_range": time_range,
            "namespace": namespace or "all"
        }
    except Exception as e:
        logger.error(f"Error fetching historical validations: {e}")
        raise HTTPException(status_code=500, detail=str(e))


@api_router.get("/cluster/historical-summary")
async def get_cluster_historical_summary(
    time_range: str = "24h"
):
    """Get a historical summary of the cluster"""
    try:
        historical_service = HistoricalAnalysisService()
        summary = await historical_service.get_cluster_historical_summary(time_range)

        return {
            "summary": summary,
            "time_range": time_range,
            "timestamp": datetime.now().isoformat()
        }
    except Exception as e:
        logger.error(f"Error fetching historical summary: {e}")
        raise HTTPException(status_code=500, detail=str(e))


@api_router.get("/health")
async def health_check():
    """API health check"""

View File

@@ -0,0 +1,445 @@
"""
Historical analysis service backed by Prometheus metrics
"""
import logging
from typing import List, Dict, Any, Optional
from datetime import datetime, timedelta

import aiohttp

from app.models.resource_models import PodResource, ResourceValidation
from app.core.config import settings

logger = logging.getLogger(__name__)


class HistoricalAnalysisService:
    """Service for historical resource analysis using Prometheus"""

    def __init__(self):
        self.prometheus_url = settings.prometheus_url
        # Supported lookback windows, in seconds
        self.time_ranges = {
            '1h': 3600,        # 1 hour
            '6h': 21600,       # 6 hours
            '24h': 86400,      # 24 hours
            '7d': 604800,      # 7 days
            '30d': 2592000     # 30 days
        }

    async def analyze_pod_historical_usage(
        self,
        pod: PodResource,
        time_range: str = '24h'
    ) -> List[ResourceValidation]:
        """Analyze a pod's historical resource usage"""
        validations = []

        # Fall back to 24h for unknown ranges
        if time_range not in self.time_ranges:
            time_range = '24h'

        end_time = datetime.now()
        start_time = end_time - timedelta(seconds=self.time_ranges[time_range])

        try:
            # Analyze CPU
            cpu_analysis = await self._analyze_cpu_usage(
                pod, start_time, end_time, time_range
            )
            validations.extend(cpu_analysis)

            # Analyze memory
            memory_analysis = await self._analyze_memory_usage(
                pod, start_time, end_time, time_range
            )
            validations.extend(memory_analysis)

        except Exception as e:
            logger.error(f"Historical analysis failed for pod {pod.name}: {e}")
            validations.append(ResourceValidation(
                pod_name=pod.name,
                namespace=pod.namespace,
                container_name="all",
                validation_type="historical_analysis_error",
                severity="warning",
                message=f"Historical analysis failed: {str(e)}",
                recommendation="Check connectivity with Prometheus"
            ))

        return validations
    async def _analyze_cpu_usage(
        self,
        pod: PodResource,
        start_time: datetime,
        end_time: datetime,
        time_range: str
    ) -> List[ResourceValidation]:
        """Analyze historical CPU usage"""
        validations = []

        for container in pod.containers:
            container_name = container["name"]
            try:
                # CPU usage rate; the rate window spans the whole lookback range
                cpu_query = f'''
                    rate(container_cpu_usage_seconds_total{{
                        pod="{pod.name}",
                        namespace="{pod.namespace}",
                        container="{container_name}",
                        container!="POD",
                        container!=""
                    }}[{time_range}])
                '''

                # CPU requests
                cpu_requests_query = f'''
                    kube_pod_container_resource_requests{{
                        pod="{pod.name}",
                        namespace="{pod.namespace}",
                        resource="cpu"
                    }}
                '''

                # CPU limits
                cpu_limits_query = f'''
                    kube_pod_container_resource_limits{{
                        pod="{pod.name}",
                        namespace="{pod.namespace}",
                        resource="cpu"
                    }}
                '''

                # Run the queries
                cpu_usage = await self._query_prometheus(cpu_query, start_time, end_time)
                cpu_requests = await self._query_prometheus(cpu_requests_query, start_time, end_time)
                cpu_limits = await self._query_prometheus(cpu_limits_query, start_time, end_time)

                if cpu_usage and cpu_requests:
                    analysis = self._analyze_cpu_metrics(
                        pod.name, pod.namespace, container_name,
                        cpu_usage, cpu_requests, cpu_limits, time_range
                    )
                    validations.extend(analysis)

            except Exception as e:
                logger.warning(f"Failed to analyze CPU for container {container_name}: {e}")

        return validations
    async def _analyze_memory_usage(
        self,
        pod: PodResource,
        start_time: datetime,
        end_time: datetime,
        time_range: str
    ) -> List[ResourceValidation]:
        """Analyze historical memory usage"""
        validations = []

        for container in pod.containers:
            container_name = container["name"]
            try:
                # Memory usage (working set)
                memory_query = f'''
                    container_memory_working_set_bytes{{
                        pod="{pod.name}",
                        namespace="{pod.namespace}",
                        container="{container_name}",
                        container!="POD",
                        container!=""
                    }}
                '''

                # Memory requests
                memory_requests_query = f'''
                    kube_pod_container_resource_requests{{
                        pod="{pod.name}",
                        namespace="{pod.namespace}",
                        resource="memory"
                    }}
                '''

                # Memory limits
                memory_limits_query = f'''
                    kube_pod_container_resource_limits{{
                        pod="{pod.name}",
                        namespace="{pod.namespace}",
                        resource="memory"
                    }}
                '''

                # Run the queries
                memory_usage = await self._query_prometheus(memory_query, start_time, end_time)
                memory_requests = await self._query_prometheus(memory_requests_query, start_time, end_time)
                memory_limits = await self._query_prometheus(memory_limits_query, start_time, end_time)

                if memory_usage and memory_requests:
                    analysis = self._analyze_memory_metrics(
                        pod.name, pod.namespace, container_name,
                        memory_usage, memory_requests, memory_limits, time_range
                    )
                    validations.extend(analysis)

            except Exception as e:
                logger.warning(f"Failed to analyze memory for container {container_name}: {e}")

        return validations
    def _analyze_cpu_metrics(
        self,
        pod_name: str,
        namespace: str,
        container_name: str,
        usage_data: List[Dict],
        requests_data: List[Dict],
        limits_data: List[Dict],
        time_range: str
    ) -> List[ResourceValidation]:
        """Analyze CPU metrics"""
        validations = []

        if not usage_data or not requests_data:
            return validations

        # Usage samples come back as [timestamp, value] pairs
        usage_values = [float(point[1]) for point in usage_data if point[1] != 'NaN']
        if not usage_values:
            return validations

        # Current requests/limits values
        current_requests = float(requests_data[0][1]) if requests_data else 0
        current_limits = float(limits_data[0][1]) if limits_data else 0

        # Usage statistics (sort once for the percentiles)
        sorted_values = sorted(usage_values)
        avg_usage = sum(usage_values) / len(usage_values)
        max_usage = max(usage_values)
        p95_usage = sorted_values[int(len(sorted_values) * 0.95)]
        p99_usage = sorted_values[int(len(sorted_values) * 0.99)]

        # Is the request adequate?
        if current_requests > 0:
            # Request too high (average usage < 50% of the request)
            if avg_usage < current_requests * 0.5:
                validations.append(ResourceValidation(
                    pod_name=pod_name,
                    namespace=namespace,
                    container_name=container_name,
                    validation_type="historical_analysis",
                    severity="warning",
                    message=f"CPU request too high: average usage {avg_usage:.3f} cores vs request {current_requests:.3f} cores",
                    recommendation=f"Consider lowering the CPU request to ~{avg_usage * 1.2:.3f} cores (based on {time_range} of usage)"
                ))
            # Request too low (P95 usage > 80% of the request)
            elif p95_usage > current_requests * 0.8:
                validations.append(ResourceValidation(
                    pod_name=pod_name,
                    namespace=namespace,
                    container_name=container_name,
                    validation_type="historical_analysis",
                    severity="warning",
                    message=f"CPU request may be insufficient: P95 {p95_usage:.3f} cores vs request {current_requests:.3f} cores",
                    recommendation=f"Consider raising the CPU request to ~{p95_usage * 1.2:.3f} cores (based on {time_range} of usage)"
                ))

        # Is the limit adequate?
        if current_limits > 0:
            # Limit too high (P99 usage < 50% of the limit)
            if p99_usage < current_limits * 0.5:
                validations.append(ResourceValidation(
                    pod_name=pod_name,
                    namespace=namespace,
                    container_name=container_name,
                    validation_type="historical_analysis",
                    severity="info",
                    message=f"CPU limit too high: P99 {p99_usage:.3f} cores vs limit {current_limits:.3f} cores",
                    recommendation=f"Consider lowering the CPU limit to ~{p99_usage * 1.5:.3f} cores (based on {time_range} of usage)"
                ))
            # Limit too low (max usage > 90% of the limit)
            elif max_usage > current_limits * 0.9:
                validations.append(ResourceValidation(
                    pod_name=pod_name,
                    namespace=namespace,
                    container_name=container_name,
                    validation_type="historical_analysis",
                    severity="warning",
                    message=f"CPU limit may be insufficient: max usage {max_usage:.3f} cores vs limit {current_limits:.3f} cores",
                    recommendation=f"Consider raising the CPU limit to ~{max_usage * 1.2:.3f} cores (based on {time_range} of usage)"
                ))

        return validations
    def _analyze_memory_metrics(
        self,
        pod_name: str,
        namespace: str,
        container_name: str,
        usage_data: List[Dict],
        requests_data: List[Dict],
        limits_data: List[Dict],
        time_range: str
    ) -> List[ResourceValidation]:
        """Analyze memory metrics"""
        validations = []

        if not usage_data or not requests_data:
            return validations

        # Usage samples come back as [timestamp, value] pairs
        usage_values = [float(point[1]) for point in usage_data if point[1] != 'NaN']
        if not usage_values:
            return validations

        # Current requests/limits values (in bytes)
        current_requests = float(requests_data[0][1]) if requests_data else 0
        current_limits = float(limits_data[0][1]) if limits_data else 0

        # Usage statistics (sort once for the percentiles)
        sorted_values = sorted(usage_values)
        avg_usage = sum(usage_values) / len(usage_values)
        max_usage = max(usage_values)
        p95_usage = sorted_values[int(len(sorted_values) * 0.95)]
        p99_usage = sorted_values[int(len(sorted_values) * 0.99)]

        # Convert to MiB for readability
        def bytes_to_mib(bytes_value):
            return bytes_value / (1024 * 1024)

        # Is the request adequate?
        if current_requests > 0:
            # Request too high (average usage < 50% of the request)
            if avg_usage < current_requests * 0.5:
                validations.append(ResourceValidation(
                    pod_name=pod_name,
                    namespace=namespace,
                    container_name=container_name,
                    validation_type="historical_analysis",
                    severity="warning",
                    message=f"Memory request too high: average usage {bytes_to_mib(avg_usage):.1f}Mi vs request {bytes_to_mib(current_requests):.1f}Mi",
                    recommendation=f"Consider lowering the memory request to ~{bytes_to_mib(avg_usage * 1.2):.1f}Mi (based on {time_range} of usage)"
                ))
            # Request too low (P95 usage > 80% of the request)
            elif p95_usage > current_requests * 0.8:
                validations.append(ResourceValidation(
                    pod_name=pod_name,
                    namespace=namespace,
                    container_name=container_name,
                    validation_type="historical_analysis",
                    severity="warning",
                    message=f"Memory request may be insufficient: P95 {bytes_to_mib(p95_usage):.1f}Mi vs request {bytes_to_mib(current_requests):.1f}Mi",
                    recommendation=f"Consider raising the memory request to ~{bytes_to_mib(p95_usage * 1.2):.1f}Mi (based on {time_range} of usage)"
                ))

        # Is the limit adequate?
        if current_limits > 0:
            # Limit too high (P99 usage < 50% of the limit)
            if p99_usage < current_limits * 0.5:
                validations.append(ResourceValidation(
                    pod_name=pod_name,
                    namespace=namespace,
                    container_name=container_name,
                    validation_type="historical_analysis",
                    severity="info",
                    message=f"Memory limit too high: P99 {bytes_to_mib(p99_usage):.1f}Mi vs limit {bytes_to_mib(current_limits):.1f}Mi",
                    recommendation=f"Consider lowering the memory limit to ~{bytes_to_mib(p99_usage * 1.5):.1f}Mi (based on {time_range} of usage)"
                ))
            # Limit too low (max usage > 90% of the limit)
            elif max_usage > current_limits * 0.9:
                validations.append(ResourceValidation(
                    pod_name=pod_name,
                    namespace=namespace,
                    container_name=container_name,
                    validation_type="historical_analysis",
                    severity="warning",
                    message=f"Memory limit may be insufficient: max usage {bytes_to_mib(max_usage):.1f}Mi vs limit {bytes_to_mib(current_limits):.1f}Mi",
                    recommendation=f"Consider raising the memory limit to ~{bytes_to_mib(max_usage * 1.2):.1f}Mi (based on {time_range} of usage)"
                ))

        return validations
    async def _query_prometheus(self, query: str, start_time: datetime, end_time: datetime) -> List[Dict]:
        """Run a range query against Prometheus"""
        try:
            async with aiohttp.ClientSession() as session:
                params = {
                    'query': query,
                    'start': start_time.timestamp(),
                    'end': end_time.timestamp(),
                    'step': '60s'  # 1-minute resolution
                }

                async with session.get(
                    f"{self.prometheus_url}/api/v1/query_range",
                    params=params,
                    timeout=aiohttp.ClientTimeout(total=30)
                ) as response:
                    if response.status == 200:
                        data = await response.json()
                        if data['status'] == 'success' and data['data']['result']:
                            # Only the first matching series is used; the per-container
                            # selectors above are expected to match a single series
                            return data['data']['result'][0]['values']
                    else:
                        logger.warning(f"Prometheus query failed: {response.status}")

            return []

        except Exception as e:
            logger.error(f"Error querying Prometheus: {e}")
            return []
    async def get_cluster_historical_summary(self, time_range: str = '24h') -> Dict[str, Any]:
        """Get a historical summary of the cluster"""
        try:
            # Fall back to 24h for unknown ranges
            if time_range not in self.time_ranges:
                time_range = '24h'

            # Total cluster CPU usage
            cpu_query = f'''
                sum(rate(container_cpu_usage_seconds_total{{
                    container!="POD",
                    container!=""
                }}[{time_range}]))
            '''

            # Total cluster memory usage
            memory_query = '''
                sum(container_memory_working_set_bytes{
                    container!="POD",
                    container!=""
                })
            '''

            # Total requests
            cpu_requests_query = 'sum(kube_pod_container_resource_requests{resource="cpu"})'
            memory_requests_query = 'sum(kube_pod_container_resource_requests{resource="memory"})'

            # Run all queries over the same window
            end_time = datetime.now()
            start_time = end_time - timedelta(seconds=self.time_ranges[time_range])

            cpu_usage = await self._query_prometheus(cpu_query, start_time, end_time)
            memory_usage = await self._query_prometheus(memory_query, start_time, end_time)
            cpu_requests = await self._query_prometheus(cpu_requests_query, start_time, end_time)
            memory_requests = await self._query_prometheus(memory_requests_query, start_time, end_time)

            return {
                'time_range': time_range,
                'cpu_usage': float(cpu_usage[0][1]) if cpu_usage else 0,
                'memory_usage': float(memory_usage[0][1]) if memory_usage else 0,
                'cpu_requests': float(cpu_requests[0][1]) if cpu_requests else 0,
                'memory_requests': float(memory_requests[0][1]) if memory_requests else 0,
                'cpu_utilization': (float(cpu_usage[0][1]) / float(cpu_requests[0][1]) * 100) if cpu_usage and cpu_requests and cpu_requests[0][1] != '0' else 0,
                'memory_utilization': (float(memory_usage[0][1]) / float(memory_requests[0][1]) * 100) if memory_usage and memory_requests and memory_requests[0][1] != '0' else 0
            }

        except Exception as e:
            logger.error(f"Failed to build historical summary: {e}")
            return {}
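`_query_prometheus` relies on the standard `query_range` response shape (`data.result[*].values` as `[timestamp, "value"]` pairs) and reads only the first series. A sketch for inspecting that shape by hand; the Prometheus URL is a placeholder, and `jq` plus GNU `date` are assumed:

```bash
PROM="http://prometheus.example.com"
curl -sG "$PROM/api/v1/query_range" \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total{container!="POD",container!=""}[5m]))' \
  --data-urlencode "start=$(date -d '-1 hour' +%s)" \
  --data-urlencode "end=$(date +%s)" \
  --data-urlencode "step=60s" \
  | jq '.data.result[0].values[:3]'
# => [[1695660000, "1.234"], ...] — the [timestamp, value-string] pairs the service parses
```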

View File

@@ -8,6 +8,7 @@ import re
from app.models.resource_models import PodResource, ResourceValidation, NamespaceResources
from app.core.config import settings
from app.services.historical_analysis import HistoricalAnalysisService

logger = logging.getLogger(__name__)
@@ -19,6 +20,7 @@ class ValidationService:
        self.memory_ratio = settings.memory_limit_ratio
        self.min_cpu_request = settings.min_cpu_request
        self.min_memory_request = settings.min_memory_request
        self.historical_analysis = HistoricalAnalysisService()

    def validate_pod_resources(self, pod: PodResource) -> List[ResourceValidation]:
        """Validate a pod's resources"""
@@ -32,6 +34,26 @@ class ValidationService:
        return validations

    async def validate_pod_resources_with_historical_analysis(
        self,
        pod: PodResource,
        time_range: str = '24h'
    ) -> List[ResourceValidation]:
        """Validate a pod's resources, including historical analysis"""
        # Static validations
        static_validations = self.validate_pod_resources(pod)

        # Historical analysis
        try:
            historical_validations = await self.historical_analysis.analyze_pod_historical_usage(
                pod, time_range
            )
            static_validations.extend(historical_validations)
        except Exception as e:
            logger.warning(f"Historical analysis failed for pod {pod.name}: {e}")

        return static_validations

    def _validate_container_resources(
        self,
        pod_name: str,

View File

@@ -133,8 +133,10 @@
        .validation-item {
            padding: 1rem;
            border-left: 4px solid #ccc;
            margin: 0.75rem 0;
            background: #f8f9fa;
            border-radius: 6px;
            border: 1px solid #dee2e6;
        }

        .validation-item.error {
@@ -231,7 +233,7 @@
            border: 1px solid #ddd;
            border-radius: 8px;
            margin-bottom: 1rem;
            overflow: visible;
        }

        .accordion-header {
@@ -272,14 +274,27 @@
        }

        .accordion-content {
            padding: 1rem 1.5rem;
            max-height: 0;
            overflow: hidden;
            transition: max-height 0.3s ease;
            background: white;
            border-top: 1px solid #dee2e6;
        }

        .accordion-content.active {
            max-height: none;
            overflow: visible;
            padding: 1rem 1.5rem;
        }

        /* Make sure the content is never clipped */
        .accordion-content .validation-item:last-child {
            margin-bottom: 0;
        }

        .accordion-content .historical-validation:last-child {
            margin-bottom: 0;
        }

        .pod-list {
@@ -367,6 +382,74 @@
            cursor: not-allowed;
        }

        /* Historical Analysis Styles */
        .historical-summary {
            background: #f8f9fa;
            border: 1px solid #dee2e6;
            border-radius: 8px;
            padding: 1rem;
            margin-bottom: 1rem;
        }

        .historical-summary h3 {
            margin: 0 0 1rem 0;
            color: #495057;
        }

        .historical-stats {
            display: grid;
            grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
            gap: 1rem;
        }

        .historical-stat {
            background: white;
            padding: 0.75rem;
            border-radius: 6px;
            border-left: 4px solid #007bff;
        }

        .historical-stat h4 {
            margin: 0 0 0.5rem 0;
            font-size: 0.9rem;
            color: #6c757d;
        }

        .historical-stat .value {
            font-size: 1.5rem;
            font-weight: bold;
            color: #007bff;
        }

        .historical-validation {
            background: #fff3cd;
            border: 1px solid #ffeaa7;
            border-radius: 6px;
            padding: 1rem;
            margin-bottom: 0.75rem;
        }

        .historical-validation.error {
            background: #f8d7da;
            border-color: #f5c6cb;
        }

        .historical-validation.warning {
            background: #fff3cd;
            border-color: #ffeaa7;
        }

        .historical-validation.info {
            background: #d1ecf1;
            border-color: #bee5eb;
        }

        .historical-validation.critical {
            background: #f8d7da;
            border-color: #f5c6cb;
            border-left: 4px solid #dc3545;
        }

        .pagination button.active {
            background: #cc0000;
            color: white;
@@ -470,6 +553,7 @@
            <div style="display: flex; gap: 1rem; flex-wrap: wrap;">
                <button class="btn" onclick="loadClusterStatus()">Refresh Status</button>
                <button class="btn btn-secondary" onclick="loadValidationsByNamespace()">View Validations</button>
                <button class="btn btn-secondary" onclick="loadHistoricalValidations()">Historical Analysis</button>
                <button class="btn btn-secondary" onclick="loadVPARecommendations()">View VPA</button>
            </div>
        </div>
@@ -494,6 +578,39 @@
            </div>
        </div>

        <!-- Historical Analysis -->
        <div class="card" id="historicalCard" style="display: none;">
            <h2>Historical Analysis with Prometheus</h2>

            <!-- Filters for the historical analysis -->
            <div class="filters">
                <div class="filter-group">
                    <label for="timeRangeFilter">Time range:</label>
                    <select id="timeRangeFilter">
                        <option value="1h">1 hour</option>
                        <option value="6h">6 hours</option>
                        <option value="24h" selected>24 hours</option>
                        <option value="7d">7 days</option>
                        <option value="30d">30 days</option>
                    </select>
                </div>
                <div class="filter-group">
                    <label for="historicalSeverityFilter">Severity:</label>
                    <select id="historicalSeverityFilter">
                        <option value="">All</option>
                        <option value="error">Error</option>
                        <option value="warning">Warning</option>
                        <option value="info">Info</option>
                        <option value="critical">Critical</option>
                    </select>
                </div>
                <button class="btn" onclick="loadHistoricalValidations()">Run Analysis</button>
            </div>

            <div id="historicalSummary" class="historical-summary"></div>
            <div id="historicalValidationsList"></div>
        </div>

        <!-- Validations -->
        <div class="card" id="validationsCard" style="display: none;">
            <h2>Resource Validations</h2>
@@ -567,7 +684,10 @@
                const data = await response.json();
                currentData = data;
                updateStats(data);
                showSuccess('Cluster status loaded successfully. Loading validations...');

                // Automatically load the validations after the initial scan
                await loadValidationsByNamespace();

            } catch (error) {
                showError('Error loading cluster status: ' + error.message);
@@ -957,6 +1077,148 @@
            document.getElementById('error').classList.add('hidden');
            document.getElementById('success').classList.add('hidden');
        }
        // Historical analysis functions
        async function loadHistoricalValidations() {
            showLoading();
            try {
                const timeRange = document.getElementById('timeRangeFilter').value;
                const severity = document.getElementById('historicalSeverityFilter').value;

                // Load the historical summary
                const summaryResponse = await fetch(`/api/v1/cluster/historical-summary?time_range=${timeRange}`);
                if (summaryResponse.ok) {
                    const summaryData = await summaryResponse.json();
                    displayHistoricalSummary(summaryData.summary);
                }

                // Load the historical validations
                const params = new URLSearchParams({
                    time_range: timeRange
                });
                if (severity) {
                    params.append('severity', severity);
                }

                const response = await fetch(`/api/v1/validations/historical?${params}`);
                if (!response.ok) {
                    throw new Error(`HTTP ${response.status}: ${response.statusText}`);
                }

                const data = await response.json();
                displayHistoricalValidations(data);
                document.getElementById('historicalCard').style.display = 'block';

            } catch (error) {
                showError('Error loading historical analysis: ' + error.message);
            } finally {
                hideLoading();
            }
        }

        function displayHistoricalSummary(summary) {
            const container = document.getElementById('historicalSummary');

            if (!summary || Object.keys(summary).length === 0) {
                container.innerHTML = '<p>Could not fetch historical data from Prometheus.</p>';
                return;
            }

            const cpuUtilization = summary.cpu_utilization || 0;
            const memoryUtilization = summary.memory_utilization || 0;

            container.innerHTML = `
                <h3>Cluster Historical Summary (${summary.time_range})</h3>
                <div class="historical-stats">
                    <div class="historical-stat">
                        <h4>CPU Utilization</h4>
                        <div class="value">${cpuUtilization.toFixed(1)}%</div>
                    </div>
                    <div class="historical-stat">
                        <h4>Memory Utilization</h4>
                        <div class="value">${memoryUtilization.toFixed(1)}%</div>
                    </div>
                    <div class="historical-stat">
                        <h4>CPU Usage</h4>
                        <div class="value">${summary.cpu_usage ? summary.cpu_usage.toFixed(3) : '0'} cores</div>
                    </div>
                    <div class="historical-stat">
                        <h4>Memory Usage</h4>
                        <div class="value">${summary.memory_usage ? (summary.memory_usage / (1024*1024*1024)).toFixed(2) : '0'} GiB</div>
                    </div>
                </div>
            `;
        }

        function displayHistoricalValidations(data) {
            const container = document.getElementById('historicalValidationsList');

            if (!data.validations || data.validations.length === 0) {
                container.innerHTML = '<p>No historical validations found.</p>';
                return;
            }

            // Filter by severity if one is selected
            let validations = data.validations;
            const severity = document.getElementById('historicalSeverityFilter').value;
            if (severity) {
                validations = validations.filter(v => v.severity === severity);
            }

            // Group by namespace
            const groupedByNamespace = {};
            validations.forEach(validation => {
                if (!groupedByNamespace[validation.namespace]) {
                    groupedByNamespace[validation.namespace] = [];
                }
                groupedByNamespace[validation.namespace].push(validation);
            });

            let html = '';
            Object.keys(groupedByNamespace).forEach(namespace => {
                const namespaceValidations = groupedByNamespace[namespace];
                const errorCount = namespaceValidations.filter(v => v.severity === 'error').length;
                const warningCount = namespaceValidations.filter(v => v.severity === 'warning').length;
                const infoCount = namespaceValidations.filter(v => v.severity === 'info').length;
                const criticalCount = namespaceValidations.filter(v => v.severity === 'critical').length;

                html += `
                    <div class="accordion">
                        <div class="accordion-header" onclick="toggleAccordion(this)">
                            <div>
                                <strong>${namespace}</strong>
                                <span class="badge">${namespaceValidations.length} validations</span>
                            </div>
                            <div class="severity-badges">
                                ${criticalCount > 0 ? `<span class="badge severity-critical">${criticalCount} critical</span>` : ''}
                                ${errorCount > 0 ? `<span class="badge severity-error">${errorCount} error</span>` : ''}
                                ${warningCount > 0 ? `<span class="badge severity-warning">${warningCount} warning</span>` : ''}
                                ${infoCount > 0 ? `<span class="badge severity-info">${infoCount} info</span>` : ''}
                            </div>
                            <span class="accordion-icon">▼</span>
                        </div>
                        <div class="accordion-content">
                            ${namespaceValidations.map(validation => `
                                <div class="historical-validation ${validation.severity}">
                                    <div class="validation-header">
                                        <strong>${validation.pod_name}</strong> - ${validation.container_name}
                                        <span class="severity-badge severity-${validation.severity}">${validation.severity}</span>
                                    </div>
                                    <div class="validation-message">${validation.message}</div>
                                    <div class="validation-recommendation">
                                        <strong>Recommendation:</strong> ${validation.recommendation}
                                    </div>
                                </div>
                            `).join('')}
                        </div>
                    </div>
                `;
            });

            container.innerHTML = html;
        }
    </script>
</body>
</html>

View File

@@ -7,6 +7,10 @@ metadata:
    app.kubernetes.io/name: resource-governance
    app.kubernetes.io/component: governance
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: resource-governance
@@ -32,6 +36,22 @@ spec:
        - containerPort: 8080
          name: http
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /api/v1/health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /api/v1/health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 3
          failureThreshold: 3
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:

k8s/deployment.yaml (new file, 120 lines)
View File

@@ -0,0 +1,120 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resource-governance
  namespace: resource-governance
  labels:
    app.kubernetes.io/name: resource-governance
    app.kubernetes.io/component: governance
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: resource-governance
      app.kubernetes.io/component: governance
  template:
    metadata:
      labels:
        app.kubernetes.io/name: resource-governance
        app.kubernetes.io/component: governance
    spec:
      serviceAccountName: resource-governance-sa
      imagePullSecrets:
        - name: docker-hub-secret
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000940000
        fsGroup: 1000940000
      containers:
        - name: resource-governance
          image: andersonid/openshift-resource-governance:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /api/v1/health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /api/v1/health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            seccompProfile:
              type: RuntimeDefault
          env:
            - name: KUBECONFIG
              value: "/var/run/secrets/kubernetes.io/serviceaccount/token"
            - name: CPU_LIMIT_RATIO
              valueFrom:
                configMapKeyRef:
                  name: resource-governance-config
                  key: CPU_LIMIT_RATIO
            - name: MEMORY_LIMIT_RATIO
              valueFrom:
                configMapKeyRef:
                  name: resource-governance-config
                  key: MEMORY_LIMIT_RATIO
            - name: MIN_CPU_REQUEST
              valueFrom:
                configMapKeyRef:
                  name: resource-governance-config
                  key: MIN_CPU_REQUEST
            - name: MIN_MEMORY_REQUEST
              valueFrom:
                configMapKeyRef:
                  name: resource-governance-config
                  key: MIN_MEMORY_REQUEST
            - name: CRITICAL_NAMESPACES
              valueFrom:
                configMapKeyRef:
                  name: resource-governance-config
                  key: CRITICAL_NAMESPACES
            - name: PROMETHEUS_URL
              valueFrom:
                configMapKeyRef:
                  name: resource-governance-config
                  key: PROMETHEUS_URL
            - name: REPORT_EXPORT_PATH
              valueFrom:
                configMapKeyRef:
                  name: resource-governance-config
                  key: REPORT_EXPORT_PATH
            - name: SERVICE_ACCOUNT_NAME
              valueFrom:
                configMapKeyRef:
                  name: resource-governance-config
                  key: SERVICE_ACCOUNT_NAME
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
          volumeMounts:
            - name: reports
              mountPath: /tmp/reports
      volumes:
        - name: reports
          emptyDir: {}
      restartPolicy: Always

scripts/blue-green-deploy.sh (new executable file, 111 lines)
View File

@@ -0,0 +1,111 @@
#!/bin/bash
# Script de Deploy Blue-Green para OpenShift Resource Governance Tool
# Este script implementa uma estratégia de deploy mais segura, onde a nova versão
# só substitui a antiga após estar completamente funcional.
set -e
# Cores para output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
NAMESPACE="resource-governance"
IMAGE_NAME="andersonid/openshift-resource-governance"
TAG="${1:-latest}"
FULL_IMAGE_NAME="${IMAGE_NAME}:${TAG}"
echo -e "${BLUE}🔄 Deploy Blue-Green - OpenShift Resource Governance Tool${NC}"
echo -e "${BLUE}====================================================${NC}"
echo -e "${BLUE}Imagem: ${FULL_IMAGE_NAME}${NC}"
# 1. Verificar login no OpenShift
echo -e "${YELLOW}🔍 Verificando login no OpenShift...${NC}"
if ! oc whoami > /dev/null 2>&1; then
echo -e "${RED}❌ Não está logado no OpenShift. Faça login primeiro.${NC}"
exit 1
fi
echo -e "${GREEN}✅ Logado como: $(oc whoami)${NC}"
# 2. Verificar se a imagem existe localmente
echo -e "${YELLOW}🔍 Verificando se a imagem existe localmente...${NC}"
if ! podman image exists "${FULL_IMAGE_NAME}" > /dev/null 2>&1; then
echo -e "${YELLOW}📦 Imagem não encontrada localmente. Fazendo build...${NC}"
podman build -f Dockerfile.simple -t "${FULL_IMAGE_NAME}" .
echo -e "${YELLOW}📤 Fazendo push da imagem...${NC}"
podman push "${FULL_IMAGE_NAME}"
fi
# 3. Verificar status atual do Deployment
echo -e "${YELLOW}📊 Verificando status atual do Deployment...${NC}"
CURRENT_IMAGE=$(oc get deployment resource-governance -n $NAMESPACE -o jsonpath='{.spec.template.spec.containers[0].image}' 2>/dev/null || echo "N/A")
echo -e "${BLUE}Imagem atual: ${CURRENT_IMAGE}${NC}"
if [ "$CURRENT_IMAGE" = "$FULL_IMAGE_NAME" ]; then
echo -e "${YELLOW}⚠️ A imagem já está em uso. Continuando com o deploy...${NC}"
fi
# 4. Aplicar o Deployment atualizado
echo -e "${YELLOW}📦 Aplicando Deployment atualizado...${NC}"
oc apply -f k8s/deployment.yaml
# 5. Aguardar o rollout com verificação de saúde
echo -e "${YELLOW}⏳ Aguardando rollout do Deployment...${NC}"
oc rollout status deployment/resource-governance -n $NAMESPACE --timeout=300s
# 6. Verificar se todos os pods estão prontos
echo -e "${YELLOW}🔍 Verificando se todos os pods estão prontos...${NC}"
READY_PODS=$(oc get pods -n $NAMESPACE -l app.kubernetes.io/name=resource-governance --field-selector=status.phase=Running | wc -l)
TOTAL_PODS=$(oc get pods -n $NAMESPACE -l app.kubernetes.io/name=resource-governance | wc -l)
echo -e "${BLUE}Pods prontos: ${READY_PODS}/${TOTAL_PODS}${NC}"
if [ $READY_PODS -lt $TOTAL_PODS ]; then
echo -e "${YELLOW}⚠️ Nem todos os pods estão prontos. Verificando logs...${NC}"
oc get pods -n $NAMESPACE -l app.kubernetes.io/name=resource-governance
echo -e "${YELLOW}💡 Para ver logs de um pod específico: oc logs <pod-name> -n $NAMESPACE${NC}"
fi
# 7. Testar a saúde da aplicação
echo -e "${YELLOW}🏥 Testando saúde da aplicação...${NC}"
SERVICE_IP=$(oc get service resource-governance-service -n $NAMESPACE -o jsonpath='{.spec.clusterIP}')
if [ -n "$SERVICE_IP" ]; then
# Testar via port-forward temporário
echo -e "${YELLOW}🔗 Testando conectividade...${NC}"
oc port-forward service/resource-governance-service 8081:8080 -n $NAMESPACE &
PORT_FORWARD_PID=$!
sleep 5
if curl -s http://localhost:8081/api/v1/health > /dev/null; then
echo -e "${GREEN}✅ Aplicação está respondendo corretamente${NC}"
else
echo -e "${RED}❌ Aplicação não está respondendo${NC}"
fi
kill $PORT_FORWARD_PID 2>/dev/null || true
else
echo -e "${YELLOW}⚠️ Não foi possível obter IP do serviço${NC}"
fi
# 8. Mostrar status final
echo -e "${YELLOW}📊 Status final do deploy:${NC}"
oc get deployment resource-governance -n $NAMESPACE
echo ""
oc get pods -n $NAMESPACE -l app.kubernetes.io/name=resource-governance
# 9. Obter URL da aplicação
ROUTE_HOST=$(oc get route resource-governance-route -n $NAMESPACE -o jsonpath='{.spec.host}' 2>/dev/null || echo "N/A")
if [ "$ROUTE_HOST" != "N/A" ]; then
echo -e "${GREEN}🎉 Deploy Blue-Green concluído com sucesso!${NC}"
echo -e "${BLUE}Acesse a aplicação em: https://${ROUTE_HOST}${NC}"
else
echo -e "${GREEN}🎉 Deploy Blue-Green concluído!${NC}"
echo -e "${BLUE}Para acessar a aplicação, use port-forward:${NC}"
echo -e " oc port-forward service/resource-governance-service 8080:8080 -n $NAMESPACE${NC}"
fi
echo -e "${BLUE}💡 Para verificar logs: oc logs -l app.kubernetes.io/name=resource-governance -n $NAMESPACE${NC}"

View File

@@ -0,0 +1,79 @@
#!/bin/bash
# Script para migrar de DaemonSet para Deployment
# Este script remove o DaemonSet e cria um Deployment mais eficiente
set -e
# Cores para output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
NAMESPACE="resource-governance"
echo -e "${BLUE}🔄 Migração DaemonSet → Deployment${NC}"
echo -e "${BLUE}====================================${NC}"
# 1. Verificar login no OpenShift
echo -e "${YELLOW}🔍 Verificando login no OpenShift...${NC}"
if ! oc whoami > /dev/null 2>&1; then
echo -e "${RED}❌ Não está logado no OpenShift. Faça login primeiro.${NC}"
exit 1
fi
echo -e "${GREEN}✅ Logado como: $(oc whoami)${NC}"
# 2. Verificar status atual
echo -e "${YELLOW}📊 Status atual do DaemonSet...${NC}"
oc get daemonset resource-governance -n $NAMESPACE 2>/dev/null || echo "DaemonSet não encontrado"
# 3. Criar Deployment
echo -e "${YELLOW}📦 Criando Deployment...${NC}"
oc apply -f k8s/deployment.yaml
# 4. Aguardar Deployment ficar pronto
echo -e "${YELLOW}⏳ Aguardando Deployment ficar pronto...${NC}"
oc rollout status deployment/resource-governance -n $NAMESPACE --timeout=120s
# 5. Verificar se pods estão rodando
echo -e "${YELLOW}🔍 Verificando pods do Deployment...${NC}"
oc get pods -n $NAMESPACE -l app.kubernetes.io/name=resource-governance
# 6. Testar aplicação
echo -e "${YELLOW}🏥 Testando aplicação...${NC}"
oc port-forward service/resource-governance-service 8081:8080 -n $NAMESPACE &
PORT_FORWARD_PID=$!
sleep 5
if curl -s http://localhost:8081/api/v1/health > /dev/null; then
echo -e "${GREEN}✅ Aplicação está funcionando corretamente${NC}"
else
echo -e "${RED}❌ Aplicação não está respondendo${NC}"
fi
kill $PORT_FORWARD_PID 2>/dev/null || true
# 7. Remover DaemonSet (se existir)
echo -e "${YELLOW}🗑️ Removendo DaemonSet...${NC}"
oc delete daemonset resource-governance -n $NAMESPACE --ignore-not-found=true
# 8. Status final
echo -e "${YELLOW}📊 Status final:${NC}"
echo -e "${BLUE}Deployment:${NC}"
oc get deployment resource-governance -n $NAMESPACE
echo ""
echo -e "${BLUE}Pods:${NC}"
oc get pods -n $NAMESPACE -l app.kubernetes.io/name=resource-governance
# 9. Mostrar benefícios
echo -e "${GREEN}🎉 Migração concluída com sucesso!${NC}"
echo -e "${BLUE}💡 Benefícios do Deployment:${NC}"
echo -e " ✅ Mais eficiente (2 pods vs 6 pods)"
echo -e " ✅ Escalável (pode ajustar replicas)"
echo -e " ✅ Rolling Updates nativos"
echo -e " ✅ Health checks automáticos"
echo -e " ✅ Menor consumo de recursos"
echo -e "${BLUE}🔧 Para escalar: oc scale deployment resource-governance --replicas=3 -n $NAMESPACE${NC}"

scripts/setup-github-secrets.sh (new executable file, 91 lines)
View File

@@ -0,0 +1,91 @@
#!/bin/bash
# Script para configurar secrets do GitHub Actions
# Este script ajuda a configurar os secrets necessários para CI/CD
set -e
# Cores para output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
echo -e "${BLUE}🔐 Configuração de Secrets para GitHub Actions${NC}"
echo -e "${BLUE}============================================${NC}"
echo -e "${YELLOW}📋 Secrets necessários no GitHub:${NC}"
echo ""
echo -e "${BLUE}1. DOCKERHUB_USERNAME${NC}"
echo -e " Seu usuário do Docker Hub"
echo ""
echo -e "${BLUE}2. DOCKERHUB_TOKEN${NC}"
echo -e " Token de acesso do Docker Hub (não a senha!)"
echo " Crie em: https://hub.docker.com/settings/security"
echo ""
echo -e "${BLUE}3. OPENSHIFT_SERVER${NC}"
echo -e " URL do seu cluster OpenShift"
echo " Exemplo: https://api.openshift.example.com:6443"
echo ""
echo -e "${BLUE}4. OPENSHIFT_TOKEN${NC}"
echo -e " Token de acesso do OpenShift"
echo " Obtenha com: oc whoami -t"
echo ""
# Verificar se está logado no OpenShift
if oc whoami > /dev/null 2>&1; then
echo -e "${GREEN}✅ Logado no OpenShift como: $(oc whoami)${NC}"
# Obter informações do cluster
CLUSTER_SERVER=$(oc config view --minify -o jsonpath='{.clusters[0].cluster.server}' 2>/dev/null || echo "N/A")
if [ "$CLUSTER_SERVER" != "N/A" ]; then
echo -e "${BLUE}🌐 Servidor OpenShift: ${CLUSTER_SERVER}${NC}"
fi
# Obter token
OPENSHIFT_TOKEN=$(oc whoami -t 2>/dev/null || echo "N/A")
if [ "$OPENSHIFT_TOKEN" != "N/A" ]; then
echo -e "${BLUE}🔑 Token OpenShift: ${OPENSHIFT_TOKEN:0:20}...${NC}"
fi
else
echo -e "${RED}❌ Não está logado no OpenShift${NC}"
echo -e "${YELLOW}💡 Faça login primeiro: oc login <server>${NC}"
fi
echo ""
echo -e "${YELLOW}📝 Como configurar os secrets no GitHub:${NC}"
echo ""
echo -e "${BLUE}1. Acesse: https://github.com/andersonid/openshift-resource-governance/settings/secrets/actions${NC}"
echo ""
echo -e "${BLUE}2. Clique em 'New repository secret' para cada um:${NC}"
echo ""
echo -e "${GREEN}DOCKERHUB_USERNAME${NC}"
echo -e " Valor: seu-usuario-dockerhub"
echo ""
echo -e "${GREEN}DOCKERHUB_TOKEN${NC}"
echo -e " Valor: dckr_pat_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
echo ""
echo -e "${GREEN}OPENSHIFT_SERVER${NC}"
echo -e " Valor: ${CLUSTER_SERVER}"
echo ""
echo -e "${GREEN}OPENSHIFT_TOKEN${NC}"
echo -e " Valor: ${OPENSHIFT_TOKEN}"
echo ""
echo -e "${YELLOW}🚀 Após configurar os secrets:${NC}"
echo ""
echo -e "${BLUE}1. Faça commit e push das mudanças:${NC}"
echo -e " git add ."
echo -e " git commit -m 'Add GitHub Actions for auto-deploy'"
echo -e " git push origin main"
echo ""
echo -e "${BLUE}2. O GitHub Actions irá:${NC}"
echo -e " ✅ Buildar a imagem automaticamente"
echo -e " ✅ Fazer push para Docker Hub"
echo -e " ✅ Fazer deploy no OpenShift"
echo -e " ✅ Atualizar o deployment com a nova imagem"
echo ""
echo -e "${GREEN}🎉 Configuração concluída!${NC}"
echo -e "${BLUE}💡 Para testar: faça uma mudança no código e faça push para main${NC}"

scripts/test-ci-cd.sh (new executable file, 79 lines)
View File

@@ -0,0 +1,79 @@
#!/bin/bash
# Script para testar o fluxo CI/CD localmente
# Simula o que o GitHub Actions fará
set -e
# Cores para output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
NAMESPACE="resource-governance"
IMAGE_NAME="resource-governance"
REGISTRY="andersonid"
TAG="test-$(date +%s)"
echo -e "${BLUE}🧪 Teste do Fluxo CI/CD${NC}"
echo -e "${BLUE}========================${NC}"
echo -e "${BLUE}Tag: ${TAG}${NC}"
# 1. Verificar login no OpenShift
echo -e "${YELLOW}🔍 Verificando login no OpenShift...${NC}"
if ! oc whoami > /dev/null 2>&1; then
echo -e "${RED}❌ Não está logado no OpenShift. Faça login primeiro.${NC}"
exit 1
fi
echo -e "${GREEN}✅ Logado como: $(oc whoami)${NC}"
# 2. Build da imagem
echo -e "${YELLOW}📦 Buildando imagem...${NC}"
podman build -f Dockerfile.simple -t "${REGISTRY}/${IMAGE_NAME}:${TAG}" .
podman build -f Dockerfile.simple -t "${REGISTRY}/${IMAGE_NAME}:latest" .
# 3. Push da imagem
echo -e "${YELLOW}📤 Fazendo push da imagem...${NC}"
podman push "${REGISTRY}/${IMAGE_NAME}:${TAG}"
podman push "${REGISTRY}/${IMAGE_NAME}:latest"
# 4. Atualizar deployment
echo -e "${YELLOW}🔄 Atualizando deployment...${NC}"
oc set image deployment/${IMAGE_NAME} ${IMAGE_NAME}=${REGISTRY}/${IMAGE_NAME}:${TAG} -n ${NAMESPACE}
# 5. Aguardar rollout
echo -e "${YELLOW}⏳ Aguardando rollout...${NC}"
oc rollout status deployment/${IMAGE_NAME} -n ${NAMESPACE} --timeout=120s
# 6. Verificar status
echo -e "${YELLOW}📊 Verificando status...${NC}"
oc get deployment ${IMAGE_NAME} -n ${NAMESPACE}
oc get pods -n ${NAMESPACE} -l app.kubernetes.io/name=${IMAGE_NAME}
# 7. Testar aplicação
echo -e "${YELLOW}🏥 Testando aplicação...${NC}"
oc port-forward service/${IMAGE_NAME}-service 8081:8080 -n ${NAMESPACE} &
PORT_FORWARD_PID=$!
sleep 5
if curl -s http://localhost:8081/api/v1/health > /dev/null; then
echo -e "${GREEN}✅ Aplicação está funcionando com a nova imagem!${NC}"
else
echo -e "${RED}❌ Aplicação não está respondendo${NC}"
fi
kill $PORT_FORWARD_PID 2>/dev/null || true
# 8. Mostrar informações
echo -e "${GREEN}🎉 Teste CI/CD concluído!${NC}"
echo -e "${BLUE}📊 Status do deployment:${NC}"
oc get deployment ${IMAGE_NAME} -n ${NAMESPACE} -o wide
echo -e "${BLUE}🔍 Imagem atual:${NC}"
oc get deployment ${IMAGE_NAME} -n ${NAMESPACE} -o jsonpath='{.spec.template.spec.containers[0].image}'
echo ""
echo -e "${BLUE}💡 Para reverter para latest:${NC}"
echo -e " oc set image deployment/${IMAGE_NAME} ${IMAGE_NAME}=${REGISTRY}/${IMAGE_NAME}:latest -n ${NAMESPACE}"