Add complete Uptime Monitoring API implementation
- Created FastAPI application with SQLite database
- Implemented monitor management endpoints (CRUD operations)
- Added uptime checking functionality with response time tracking
- Included statistics endpoints for uptime percentage and metrics
- Set up database models and Alembic migrations
- Added comprehensive API documentation
- Configured CORS and health check endpoints
parent 5fa1223cdc
commit 79eb3ef108
README.md (changed, 114 lines)
@@ -1,3 +1,113 @@

-# FastAPI Application
+# Uptime Monitoring API

-This is a FastAPI application bootstrapped by BackendIM, the AI-powered backend generation platform.
+A FastAPI-based uptime monitoring service that allows you to monitor website/endpoint availability and performance.

## Features

- **Monitor Management**: Create, update, delete, and list website monitors
- **Uptime Checking**: Automated and manual uptime checks with response time tracking
- **Statistics**: Get uptime percentage, average response times, and check history
- **RESTful API**: Full REST API with OpenAPI documentation
- **SQLite Database**: Lightweight database for storing monitors and check results

## API Endpoints

### Base Endpoints
- `GET /` - API information and navigation
- `GET /health` - Health check endpoint
- `GET /docs` - Interactive API documentation (Swagger UI)
- `GET /redoc` - Alternative API documentation

### Monitor Endpoints
- `POST /api/v1/monitors` - Create a new monitor
- `GET /api/v1/monitors` - List all monitors
- `GET /api/v1/monitors/{monitor_id}` - Get a specific monitor
- `PUT /api/v1/monitors/{monitor_id}` - Update a monitor
- `DELETE /api/v1/monitors/{monitor_id}` - Delete a monitor
- `GET /api/v1/monitors/{monitor_id}/checks` - Get monitor check history
- `GET /api/v1/monitors/{monitor_id}/stats` - Get monitor statistics

### Check Endpoints
- `POST /api/v1/checks/run/{monitor_id}` - Run a check for a specific monitor
- `POST /api/v1/checks/run-all` - Run checks for all active monitors

## Installation & Setup

1. Install dependencies:
```bash
pip install -r requirements.txt
```

2. Run database migrations (optional - tables are created automatically):
```bash
alembic upgrade head
```

3. Start the application:
```bash
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```

The API will be available at `http://localhost:8000`.

## Usage Examples

### Create a Monitor
```bash
curl -X POST "http://localhost:8000/api/v1/monitors" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "My Website",
    "url": "https://example.com",
    "method": "GET",
    "timeout": 30,
    "interval": 300,
    "is_active": true
  }'
```

### Run a Check
```bash
curl -X POST "http://localhost:8000/api/v1/checks/run/1"
```

### Get Monitor Statistics
```bash
curl "http://localhost:8000/api/v1/monitors/1/stats"
```
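### Run Checks for All Active Monitors

The `/api/v1/checks/run-all` endpoint triggers a check for every active monitor and returns a summary of the results. A minimal sketch using the `requests` library (already listed in `requirements.txt`), assuming the API is running locally on port 8000:

```python
import requests

# Trigger one check per active monitor and print the outcome of each.
response = requests.post("http://localhost:8000/api/v1/checks/run-all", timeout=60)
response.raise_for_status()

summary = response.json()
print(f"Checks run: {summary['checks_run']}")
for result in summary["results"]:
    status = "UP" if result["is_up"] else "DOWN"
    print(f"monitor {result['monitor_id']}: {status} ({result['response_time']} ms)")
```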
## Monitor Configuration

When creating a monitor, you can configure:

- **name**: Human-readable name for the monitor
- **url**: The URL to monitor
- **method**: HTTP method (GET, POST, etc.) - defaults to GET
- **timeout**: Request timeout in seconds - defaults to 30
- **interval**: Check interval in seconds - defaults to 300 (5 minutes)
- **is_active**: Whether the monitor is active - defaults to true
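Only `name` and `url` are required; the other fields fall back to the defaults above. A minimal sketch with the `requests` library, assuming the API is running locally on port 8000:

```python
import requests

# Only name and url are supplied; method, timeout, interval and is_active
# come back populated with their defaults in the response.
payload = {"name": "Example Site", "url": "https://example.com"}
response = requests.post(
    "http://localhost:8000/api/v1/monitors", json=payload, timeout=10
)
response.raise_for_status()

monitor = response.json()
print(monitor["id"], monitor["method"], monitor["timeout"], monitor["interval"])
```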
## Database

The application uses a SQLite database located at `/app/storage/db/db.sqlite`. The database contains two tables:

- **monitors**: Store monitor configurations
- **uptime_checks**: Store check results and history
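For ad-hoc inspection, the database file can be opened directly with Python's built-in `sqlite3` module. A quick sketch, assuming the default path above and at least one recorded check:

```python
import sqlite3

# Open the service's database read-only and summarise what has been recorded.
conn = sqlite3.connect("file:/app/storage/db/db.sqlite?mode=ro", uri=True)
try:
    monitors = conn.execute("SELECT COUNT(*) FROM monitors").fetchone()[0]
    total, successful = conn.execute(
        "SELECT COUNT(*), COALESCE(SUM(is_up), 0) FROM uptime_checks"
    ).fetchone()
    print(f"{monitors} monitors, {total} checks recorded, {successful} successful")
finally:
    conn.close()
```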
## Environment Variables

No environment variables are required for basic operation. The application uses SQLite with default settings.

## Development

### Linting
```bash
ruff check .
ruff format .
```

### Database Migrations
```bash
alembic revision --autogenerate -m "Description"
alembic upgrade head
```
alembic.ini (new file, 41 lines)
@@ -0,0 +1,41 @@

[alembic]
script_location = migrations
prepend_sys_path = .
version_path_separator = os
sqlalchemy.url = sqlite:////app/storage/db/db.sqlite

[post_write_hooks]

[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
app/__init__.py (new file, empty)

app/db/__init__.py (new file, empty)
app/db/base.py (new file, 3 lines)
@@ -0,0 +1,3 @@

from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
app/db/session.py (new file, 22 lines)
@@ -0,0 +1,22 @@

from pathlib import Path
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

DB_DIR = Path("/app") / "storage" / "db"
DB_DIR.mkdir(parents=True, exist_ok=True)

SQLALCHEMY_DATABASE_URL = f"sqlite:///{DB_DIR}/db.sqlite"

engine = create_engine(
    SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
)

SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)


def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
app/models/__init__.py (new file, empty)
app/models/monitor.py (new file, 29 lines)
@@ -0,0 +1,29 @@

from sqlalchemy import Column, Integer, String, DateTime, Boolean, Float, Text
from sqlalchemy.sql import func
from app.db.base import Base


class Monitor(Base):
    __tablename__ = "monitors"

    id = Column(Integer, primary_key=True, index=True)
    name = Column(String, nullable=False, index=True)
    url = Column(String, nullable=False)
    method = Column(String, default="GET")
    timeout = Column(Integer, default=30)
    interval = Column(Integer, default=300)  # Check interval in seconds
    is_active = Column(Boolean, default=True)
    created_at = Column(DateTime(timezone=True), server_default=func.now())
    updated_at = Column(DateTime(timezone=True), onupdate=func.now())


class UptimeCheck(Base):
    __tablename__ = "uptime_checks"

    id = Column(Integer, primary_key=True, index=True)
    monitor_id = Column(Integer, nullable=False, index=True)
    status_code = Column(Integer)
    response_time = Column(Float)  # Response time in milliseconds
    is_up = Column(Boolean, nullable=False)
    error_message = Column(Text, nullable=True)
    checked_at = Column(DateTime(timezone=True), server_default=func.now())
app/models/schemas.py (new file, 56 lines)
@@ -0,0 +1,56 @@

from datetime import datetime
from typing import Optional
from pydantic import BaseModel, HttpUrl


class MonitorBase(BaseModel):
    name: str
    url: HttpUrl
    method: str = "GET"
    timeout: int = 30
    interval: int = 300
    is_active: bool = True


class MonitorCreate(MonitorBase):
    pass


class MonitorUpdate(BaseModel):
    name: Optional[str] = None
    url: Optional[HttpUrl] = None
    method: Optional[str] = None
    timeout: Optional[int] = None
    interval: Optional[int] = None
    is_active: Optional[bool] = None


class MonitorResponse(MonitorBase):
    id: int
    created_at: datetime
    updated_at: Optional[datetime] = None

    class Config:
        from_attributes = True


class UptimeCheckResponse(BaseModel):
    id: int
    monitor_id: int
    status_code: Optional[int]
    response_time: Optional[float]
    is_up: bool
    error_message: Optional[str]
    checked_at: datetime

    class Config:
        from_attributes = True


class MonitorStats(BaseModel):
    monitor_id: int
    uptime_percentage: float
    total_checks: int
    successful_checks: int
    average_response_time: Optional[float]
    last_check: Optional[datetime]
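Because `url` is typed as `HttpUrl`, malformed URLs are rejected during request validation before a monitor ever reaches the database. A quick illustration, run from the project root so the `app` package is importable:

```python
from pydantic import ValidationError

from app.models.schemas import MonitorCreate

# Valid input: fields that are not supplied pick up their declared defaults.
monitor = MonitorCreate(name="My Website", url="https://example.com")
print(monitor.method, monitor.timeout, monitor.interval)  # GET 30 300

# Invalid URL: pydantic raises before the router or the database is involved.
try:
    MonitorCreate(name="broken", url="not-a-url")
except ValidationError as exc:
    print(exc.errors()[0]["msg"])
```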
app/routers/__init__.py (new file, empty)
app/routers/checks.py (new file, 25 lines)
@@ -0,0 +1,25 @@

from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from app.db.session import get_db
from app.models.monitor import Monitor
from app.services.uptime_checker import UptimeChecker

router = APIRouter(prefix="/checks", tags=["checks"])


@router.post("/run/{monitor_id}")
def run_check(monitor_id: int, db: Session = Depends(get_db)):
    monitor = db.query(Monitor).filter(Monitor.id == monitor_id).first()
    if not monitor:
        raise HTTPException(status_code=404, detail="Monitor not found")

    checker = UptimeChecker(db)
    result = checker.check_monitor(monitor)
    return result


@router.post("/run-all")
def run_all_checks(db: Session = Depends(get_db)):
    checker = UptimeChecker(db)
    results = checker.check_all_active_monitors()
    return {"checks_run": len(results), "results": results}
app/routers/monitors.py (new file, 133 lines)
@@ -0,0 +1,133 @@

from typing import List
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from sqlalchemy import desc, func
from app.db.session import get_db
from app.models.monitor import Monitor, UptimeCheck
from app.models.schemas import (
    MonitorCreate,
    MonitorUpdate,
    MonitorResponse,
    UptimeCheckResponse,
    MonitorStats,
)

router = APIRouter(prefix="/monitors", tags=["monitors"])


@router.post("/", response_model=MonitorResponse)
def create_monitor(monitor: MonitorCreate, db: Session = Depends(get_db)):
    # Dump in JSON mode so the HttpUrl field is stored as a plain string.
    db_monitor = Monitor(**monitor.model_dump(mode="json"))
    db.add(db_monitor)
    db.commit()
    db.refresh(db_monitor)
    return db_monitor


@router.get("/", response_model=List[MonitorResponse])
def get_monitors(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
    monitors = db.query(Monitor).offset(skip).limit(limit).all()
    return monitors


@router.get("/{monitor_id}", response_model=MonitorResponse)
def get_monitor(monitor_id: int, db: Session = Depends(get_db)):
    monitor = db.query(Monitor).filter(Monitor.id == monitor_id).first()
    if not monitor:
        raise HTTPException(status_code=404, detail="Monitor not found")
    return monitor


@router.put("/{monitor_id}", response_model=MonitorResponse)
def update_monitor(
    monitor_id: int, monitor: MonitorUpdate, db: Session = Depends(get_db)
):
    db_monitor = db.query(Monitor).filter(Monitor.id == monitor_id).first()
    if not db_monitor:
        raise HTTPException(status_code=404, detail="Monitor not found")

    # Apply only the fields the client sent; JSON mode keeps url a plain string.
    update_data = monitor.model_dump(mode="json", exclude_unset=True)
    for field, value in update_data.items():
        setattr(db_monitor, field, value)

    db.commit()
    db.refresh(db_monitor)
    return db_monitor


@router.delete("/{monitor_id}")
def delete_monitor(monitor_id: int, db: Session = Depends(get_db)):
    monitor = db.query(Monitor).filter(Monitor.id == monitor_id).first()
    if not monitor:
        raise HTTPException(status_code=404, detail="Monitor not found")

    db.delete(monitor)
    db.commit()
    return {"message": "Monitor deleted successfully"}


@router.get("/{monitor_id}/checks", response_model=List[UptimeCheckResponse])
def get_monitor_checks(
    monitor_id: int, skip: int = 0, limit: int = 100, db: Session = Depends(get_db)
):
    monitor = db.query(Monitor).filter(Monitor.id == monitor_id).first()
    if not monitor:
        raise HTTPException(status_code=404, detail="Monitor not found")

    checks = (
        db.query(UptimeCheck)
        .filter(UptimeCheck.monitor_id == monitor_id)
        .order_by(desc(UptimeCheck.checked_at))
        .offset(skip)
        .limit(limit)
        .all()
    )
    return checks


@router.get("/{monitor_id}/stats", response_model=MonitorStats)
def get_monitor_stats(monitor_id: int, db: Session = Depends(get_db)):
    monitor = db.query(Monitor).filter(Monitor.id == monitor_id).first()
    if not monitor:
        raise HTTPException(status_code=404, detail="Monitor not found")

    total_checks = (
        db.query(UptimeCheck).filter(UptimeCheck.monitor_id == monitor_id).count()
    )
    successful_checks = (
        db.query(UptimeCheck)
        .filter(UptimeCheck.monitor_id == monitor_id, UptimeCheck.is_up)
        .count()
    )

    uptime_percentage = (
        (successful_checks / total_checks * 100) if total_checks > 0 else 0
    )

    avg_response_time = (
        db.query(func.avg(UptimeCheck.response_time))
        .filter(
            UptimeCheck.monitor_id == monitor_id,
            UptimeCheck.is_up,
            UptimeCheck.response_time.isnot(None),
        )
        .scalar()
    )

    last_check = (
        db.query(UptimeCheck)
        .filter(UptimeCheck.monitor_id == monitor_id)
        .order_by(desc(UptimeCheck.checked_at))
        .first()
    )

    return MonitorStats(
        monitor_id=monitor_id,
        uptime_percentage=round(uptime_percentage, 2),
        total_checks=total_checks,
        successful_checks=successful_checks,
        average_response_time=round(avg_response_time, 2)
        if avg_response_time
        else None,
        last_check=last_check.checked_at if last_check else None,
    )
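Because `MonitorUpdate` makes every field optional and the handler applies only the fields the client actually sent (`exclude_unset=True`), a single attribute can be patched without resending the whole monitor. A minimal sketch, assuming the API is running locally and a monitor with id 1 exists:

```python
import requests

# Pause monitor 1; only is_active is sent, so name, url, method, timeout and
# interval are left untouched by the exclude_unset update.
response = requests.put(
    "http://localhost:8000/api/v1/monitors/1",
    json={"is_active": False},
    timeout=10,
)
response.raise_for_status()
print(response.json()["is_active"])  # False
```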
app/services/__init__.py (new file, empty)
app/services/uptime_checker.py (new file, 77 lines)
@@ -0,0 +1,77 @@

import time
import requests
from datetime import datetime
from typing import Dict, Any
from sqlalchemy.orm import Session
from app.models.monitor import Monitor, UptimeCheck


class UptimeChecker:
    def __init__(self, db: Session):
        self.db = db

    def check_monitor(self, monitor: Monitor) -> Dict[str, Any]:
        start_time = time.time()
        is_up = False
        status_code = None
        error_message = None
        response_time = None

        try:
            response = requests.request(
                method=monitor.method,
                url=str(monitor.url),
                timeout=monitor.timeout,
                allow_redirects=True,
            )

            end_time = time.time()
            response_time = (end_time - start_time) * 1000  # Convert to milliseconds
            status_code = response.status_code

            # Consider 2xx and 3xx status codes as "up"
            is_up = 200 <= status_code < 400

            if not is_up:
                error_message = f"HTTP {status_code}: {response.reason}"

        except requests.exceptions.Timeout:
            error_message = f"Request timed out after {monitor.timeout} seconds"
        except requests.exceptions.ConnectionError:
            error_message = "Connection error - unable to reach the endpoint"
        except requests.exceptions.RequestException as e:
            error_message = f"Request error: {str(e)}"
        except Exception as e:
            error_message = f"Unexpected error: {str(e)}"

        # Save the check result to database
        uptime_check = UptimeCheck(
            monitor_id=monitor.id,
            status_code=status_code,
            response_time=response_time,
            is_up=is_up,
            error_message=error_message,
            checked_at=datetime.utcnow(),
        )

        self.db.add(uptime_check)
        self.db.commit()

        return {
            "monitor_id": monitor.id,
            "is_up": is_up,
            "status_code": status_code,
            "response_time": response_time,
            "error_message": error_message,
            "checked_at": uptime_check.checked_at,
        }

    def check_all_active_monitors(self) -> list:
        active_monitors = self.db.query(Monitor).filter(Monitor.is_active).all()
        results = []

        for monitor in active_monitors:
            result = self.check_monitor(monitor)
            results.append(result)

        return results
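`UptimeChecker` is a plain service class, so the same checks can be driven outside the HTTP API, for example from a cron job or a background worker. A minimal sketch, assuming the session factory and models from this commit:

```python
from app.db.session import SessionLocal
from app.services.uptime_checker import UptimeChecker

# Run one round of checks for every active monitor and report the failures.
db = SessionLocal()
try:
    results = UptimeChecker(db).check_all_active_monitors()
    for result in results:
        if not result["is_up"]:
            print(f"monitor {result['monitor_id']} is down: {result['error_message']}")
finally:
    db.close()
```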
main.py (new file, 47 lines)
@@ -0,0 +1,47 @@

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from app.db.session import engine
from app.db.base import Base
from app.routers import monitors, checks

# Create database tables
Base.metadata.create_all(bind=engine)

app = FastAPI(
    title="Uptime Monitoring API",
    description="API for monitoring website/endpoint uptime and performance",
    version="1.0.0",
    openapi_url="/openapi.json",
)

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Include routers
app.include_router(monitors.router, prefix="/api/v1")
app.include_router(checks.router, prefix="/api/v1")


@app.get("/")
async def root():
    return {
        "title": "Uptime Monitoring API",
        "documentation": "/docs",
        "health_check": "/health",
    }


@app.get("/health")
async def health_check():
    return {"status": "healthy", "service": "uptime-monitoring-api"}


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
migrations/env.py (new file, 50 lines)
@@ -0,0 +1,50 @@

import sys
from pathlib import Path
from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context

sys.path.append(str(Path(__file__).resolve().parent.parent))

from app.db.base import Base

config = context.config

if config.config_file_name is not None:
    fileConfig(config.config_file_name)

target_metadata = Base.metadata


def run_migrations_offline() -> None:
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online() -> None:
    connectable = engine_from_config(
        config.get_section(config.config_ini_section, {}),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(connection=connection, target_metadata=target_metadata)

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
migrations/script.py.mako (new file, 24 lines)
@@ -0,0 +1,24 @@

"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}

"""
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}


def upgrade() -> None:
    ${upgrades if upgrades else "pass"}


def downgrade() -> None:
    ${downgrades if downgrades else "pass"}
migrations/versions/001_initial_migration.py (new file, 74 lines)
@@ -0,0 +1,74 @@

"""Initial migration

Revision ID: 001
Revises:
Create Date: 2024-12-17 12:00:00.000000

"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = "001"
down_revision = None
branch_labels = None
depends_on = None


def upgrade() -> None:
    # Create monitors table
    op.create_table(
        "monitors",
        sa.Column("id", sa.Integer(), nullable=False),
        sa.Column("name", sa.String(), nullable=False),
        sa.Column("url", sa.String(), nullable=False),
        sa.Column("method", sa.String(), nullable=True),
        sa.Column("timeout", sa.Integer(), nullable=True),
        sa.Column("interval", sa.Integer(), nullable=True),
        sa.Column("is_active", sa.Boolean(), nullable=True),
        sa.Column(
            "created_at",
            sa.DateTime(timezone=True),
            server_default=sa.text("(CURRENT_TIMESTAMP)"),
            nullable=True,
        ),
        sa.Column("updated_at", sa.DateTime(timezone=True), nullable=True),
        sa.PrimaryKeyConstraint("id"),
    )
    op.create_index(op.f("ix_monitors_id"), "monitors", ["id"], unique=False)
    op.create_index(op.f("ix_monitors_name"), "monitors", ["name"], unique=False)

    # Create uptime_checks table
    op.create_table(
        "uptime_checks",
        sa.Column("id", sa.Integer(), nullable=False),
        sa.Column("monitor_id", sa.Integer(), nullable=False),
        sa.Column("status_code", sa.Integer(), nullable=True),
        sa.Column("response_time", sa.Float(), nullable=True),
        sa.Column("is_up", sa.Boolean(), nullable=False),
        sa.Column("error_message", sa.Text(), nullable=True),
        sa.Column(
            "checked_at",
            sa.DateTime(timezone=True),
            server_default=sa.text("(CURRENT_TIMESTAMP)"),
            nullable=True,
        ),
        sa.PrimaryKeyConstraint("id"),
    )
    op.create_index(op.f("ix_uptime_checks_id"), "uptime_checks", ["id"], unique=False)
    op.create_index(
        op.f("ix_uptime_checks_monitor_id"),
        "uptime_checks",
        ["monitor_id"],
        unique=False,
    )


def downgrade() -> None:
    op.drop_index(op.f("ix_uptime_checks_monitor_id"), table_name="uptime_checks")
    op.drop_index(op.f("ix_uptime_checks_id"), table_name="uptime_checks")
    op.drop_table("uptime_checks")
    op.drop_index(op.f("ix_monitors_name"), table_name="monitors")
    op.drop_index(op.f("ix_monitors_id"), table_name="monitors")
    op.drop_table("monitors")
requirements.txt (new file, 8 lines)
@@ -0,0 +1,8 @@

fastapi==0.104.1
uvicorn[standard]==0.24.0
sqlalchemy==2.0.23
alembic==1.12.1
pydantic==2.5.0
requests==2.31.0
python-multipart==0.0.6
ruff==0.1.6