mirror of https://github.com/ChrisSewell/LeosShoes.git (synced 2025-07-01 10:07:27 -04:00)
Initial commit: Complete Paw Burn Risk Assessment Tool with WeatherAPI integration, comprehensive risk scoring, SQLite storage, rich visualizations, configurable parameters, and full documentation with examples
20
.gitignore
vendored
Normal file
@ -0,0 +1,20 @@
# Environment variables
.env

# Generated plots directory
plots/

# Database files
*.db
paw_risk.db

# Python cache
__pycache__/
*.py[cod]

# macOS system files
.DS_Store

# Allow documentation images
*.png
!project_images/*.png
312
README.md
Normal file
@ -0,0 +1,312 @@
# 🐾 Paw Burn Risk Assessment Tool

A Python application that assesses paw burn risk for dogs based on weather conditions, including temperature, UV index, and other environmental factors. The tool provides hourly risk scores, recommendations for protective footwear, and visualizations to help keep your furry friend safe.

## Features

- **Weather Data Integration**: Fetches historical, current, and forecast weather data from WeatherAPI.com
- **Smart Risk Scoring**: Calculates risk scores based on multiple factors:
  - Air temperature (80°F, 90°F, 100°F+ thresholds)
  - UV index (6, 8, 10+ thresholds)
  - Weather conditions (sunny/clear conditions)
  - Accumulated heat (rolling 2-hour average)
  - Surface recovery time (cooling after peak temperatures)
  - Rapid temperature swing detection
- **Intelligent Recommendations**: Provides actionable advice on when to use protective dog shoes
- **Data Persistence**: Stores weather and risk data in a SQLite database
- **Rich Visualizations**: Creates comprehensive plots and dashboards
- **Configurable Parameters**: Customize thresholds and assessment criteria
- **Terminal Output**: Clean, formatted output for development and monitoring

## Installation

1. **Clone or download the project files**

2. **Install dependencies**:
   ```bash
   pip install -r requirements.txt
   ```

3. **Set up environment variables**:
   - Copy `env_template.txt` to `.env`
   - Sign up for a free API key at [WeatherAPI.com](https://www.weatherapi.com/)
   - Add your API key to the `.env` file:
     ```
     WEATHER_API_KEY=your_actual_api_key_here
     DEFAULT_LOCATION=Your City
     ```

4. **Test your setup**:
   ```bash
   python test_setup.py
   ```

## 🚀 Quick Start

1. **Get your API key**: Sign up at [WeatherAPI.com](https://www.weatherapi.com/) (free)
2. **Set up environment**: `cp env_template.txt .env` and add your API key
3. **Install dependencies**: `pip install -r requirements.txt`
4. **Test setup**: `python test_setup.py` (should show all ✅)
5. **Run assessment**: `python main.py --location "90210"`

## Usage

### Basic Usage

Run a basic risk assessment for your default location:
```bash
python main.py
```

### Command Line Options

```bash
python main.py [OPTIONS]

Options:
  -l, --location TEXT     Specify location (city, zip code, coordinates)
  -d, --detailed          Show detailed hourly breakdown
  -p, --plot              Display interactive plots
  -s, --save-plots        Save plots to files
  --no-recommendations    Skip recommendations output
  --config-check          Check configuration and exit
```

### Examples

**Basic assessment for a specific location:**
```bash
python main.py --location "Phoenix, AZ"
```

**Using zip codes:**
```bash
python main.py --location "85001"   # Phoenix, AZ zip code
python main.py --location "90210"   # Beverly Hills, CA zip code
```

**Detailed hourly breakdown with plots:**
```bash
python main.py --detailed --plot
```

**Save plots for reporting:**
```bash
python main.py --save-plots --location "Las Vegas, NV"
# Plots saved to plots/ directory (auto-created and cleared each run)
```

**Combine detailed analysis with plots:**
```bash
python main.py --location "90210" --detailed --save-plots
```

## Risk Scoring System

The tool uses a comprehensive scoring system (0-10 scale) with the following components:

### Temperature Score
- +1 if ≥ 80°F
- +2 if ≥ 90°F
- +3 if ≥ 100°F

### UV Index Score
- +1 if ≥ 6
- +2 if ≥ 8
- +3 if ≥ 10

### Weather Condition Score
- +1 if sunny or clear conditions

### Accumulated Heat Score
- +1 if the 2-hour rolling average temperature > 85°F OR the average UV ≥ 6

### Surface Recovery Score
- -1 if more than 2 hours since the last 90°F+ reading (cooling credit)

### Additional Factors
- +0.5 bonus for rapid temperature swings (15°F+ change in 1 hour)

**Shoe Recommendation Threshold**: ≥ 6.0 (configurable)

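As a worked example of how these components combine (default thresholds assumed): an hour at 98°F with UV index 9 under sunny skies, following a hot two-hour stretch, lands exactly on the shoe threshold.

```python
# Worked example using the scoring rules above (default thresholds assumed).
temp_score = 2.0          # 98°F is >= 90°F but < 100°F
uv_score = 2.0            # UV 9 is >= 8 but < 10
condition_score = 1.0     # "Sunny"
accumulated_heat = 1.0    # 2-hour rolling average temperature above 85°F
surface_recovery = 0.0    # still within 2 hours of the last 90°F+ reading
rapid_swing_bonus = 0.0   # no 15°F+ change in the last hour

total = (temp_score + uv_score + condition_score
         + accumulated_heat + surface_recovery + rapid_swing_bonus)
print(total)          # 6.0
print(total >= 6.0)   # True -> protective shoes recommended
```
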
## Configuration

All parameters can be customized via environment variables in your `.env` file:

```bash
# Temperature thresholds (°F)
TEMP_THRESHOLD_LOW=80
TEMP_THRESHOLD_MED=90
TEMP_THRESHOLD_HIGH=100

# UV Index thresholds
UV_THRESHOLD_LOW=6
UV_THRESHOLD_MED=8
UV_THRESHOLD_HIGH=10

# Risk assessment
RISK_THRESHOLD_SHOES=6
ROLLING_WINDOW_HOURS=2
SURFACE_RECOVERY_HOURS=2
```

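These variables are read once at startup by `RiskConfig.from_env()` and `AppConfig.from_env()` in `config.py`. A quick sanity check of the values the tool actually resolved might look like this (a sketch; run it from the project root, and note that `WEATHER_API_KEY` must be set because `AppConfig.from_env()` validates it):

```python
# Sketch: print the thresholds resolved from .env via config.get_config().
from config import get_config

cfg = get_config()
print("Temp thresholds:", cfg.risk_config.temp_threshold_low,
      cfg.risk_config.temp_threshold_med, cfg.risk_config.temp_threshold_high)
print("Shoe threshold:", cfg.risk_config.risk_threshold_shoes)
```
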
## Output Examples

### Summary Output
```
🐾 Paw Burn Risk Assessment Tool
========================================
🌤️ Fetching weather data for Phoenix, AZ...
📊 Retrieved 24 hours of weather data
🧮 Calculating paw burn risk scores...

🐕 PAW BURN RISK ASSESSMENT - PHOENIX, AZ
============================================================

📈 DAILY SUMMARY:
   • Total Hours Analyzed: 24
   • High Risk Hours: 8
   • Maximum Risk Score: 8.5/10
   • Average Risk Score: 4.2/10
   • Peak Risk Time: 15:00
   • Continuous Risk Periods: 2

⏰ HIGH RISK TIME PERIODS:
   • 11:00 - 13:00 (2.0 hours)
   • 14:00 - 18:00 (4.0 hours)

💡 RECOMMENDATIONS:
   ⚠️ Protective dog shoes recommended for 8 hours today.
   🕐 Avoid walks during the identified high-risk time periods, or ensure your dog wears protective booties.
   🌡️ HIGH RISK: Hot surfaces likely. Test pavement with your hand - if too hot for 5 seconds, it's too hot for paws.
```

### Plot Output (when using --save-plots)
```
📁 Plots directory ready: plots/
📊 Generating and saving visualizations...
📊 Plot saved: plots/risk_timeline_Phoenix_AZ_20250607_152345.png
📊 Plot saved: plots/risk_components_Phoenix_AZ_20250607_152345.png
📊 Plot saved: plots/risk_dashboard_Phoenix_AZ_20250607_152345.png
```

### Detailed Hourly Output
```
🕐 HOURLY BREAKDOWN:
--------------------------------------------------------------------------------
  Time   Temp    UV    Condition   Risk   Shoes
--------------------------------------------------------------------------------
 06:00   75°F   1.0        Clear    1.0   ✅ no
 07:00   78°F   2.0        Clear    1.0   ✅ no
 08:00   82°F   4.0        Clear    2.0   ✅ no
...
 14:00   98°F   9.0        Sunny    7.0   ⚠️ YES
 15:00  102°F  10.0        Sunny    8.5   ⚠️ YES
 16:00   99°F   8.0        Sunny    7.0   ⚠️ YES
```

## Data Storage & Output

### Database
The application stores data in a SQLite database (`paw_risk.db` by default) with tables for:
- **weather_data**: Historical weather observations
- **risk_scores**: Calculated risk assessments

This enables:
- Historical trend analysis
- Performance tracking over time
- Offline analysis of past data

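For example, stored scores can be read back with Python's built-in `sqlite3` module (a sketch; the table and column names follow the schema created in `models.py`):

```python
# Sketch: inspect the risk_scores table directly in paw_risk.db.
import sqlite3

conn = sqlite3.connect("paw_risk.db")
rows = conn.execute(
    "SELECT datetime, total_score, recommend_shoes "
    "FROM risk_scores ORDER BY datetime"
).fetchall()
conn.close()

for dt, score, shoes in rows:
    print(dt, score, "shoes recommended" if shoes else "ok")
```
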
### Plot Files
When using `--save-plots`, visualizations are saved to:
- **plots/** directory (auto-created)
- **Cleared on each run** to contain only the latest analysis
- **Files include location and timestamp** for identification
- **Format**: High-resolution PNG files (300 DPI)

## Visualizations

The tool generates several types of high-quality plots:

### 1. Risk Timeline
Main risk score over time with threshold indicators, temperature, and UV data:

![Risk Timeline](project_images/risk_timeline_example.png)

### 2. Component Breakdown
Stacked visualization showing how each risk factor contributes over time:

![Component Breakdown](project_images/risk_components_example.png)

### 3. Summary Dashboard
Comprehensive overview combining risk timeline, statistics, and weather conditions:

![Summary Dashboard](project_images/risk_dashboard_example.png)

**Plot Features:**
- High-resolution PNG output (300 DPI)
- Clear time-based x-axis with hourly markers
- Color-coded risk thresholds and warnings
- Detailed legends and annotations
- Professional styling suitable for reports

## Advanced Features

### Data Gap Handling
- Interpolates missing UV index values using nearby hours
- Falls back to temperature-based UV estimation when necessary
- Robust error handling for API inconsistencies

### Rapid Heat Swing Detection
- Monitors for sudden temperature changes (15°F+ per hour)
- Applies additional risk scoring for volatile conditions
- Logs warnings for extreme temperature variations

### Flexible Location Support
- **City names**: "New York", "Los Angeles"
- **City, State**: "Phoenix, AZ", "Miami, FL"
- **City, Country**: "London, UK", "Tokyo, Japan"
- **Zip codes**: "10001", "90210", "33101"
- **Coordinates**: "40.7128,-74.0060"
- Automatic location validation
- Configurable default location

## Project Structure

### Core Files
- **`main.py`**: CLI application and coordination
- **`config.py`**: Configuration management
- **`models.py`**: Data models and database operations
- **`weather_api.py`**: WeatherAPI.com integration
- **`risk_calculator.py`**: Core risk assessment logic
- **`plotting.py`**: Visualization and charting

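These modules compose in the same order `main.py` uses them. A trimmed-down sketch of that pipeline (persistence and error handling omitted, and a valid `WEATHER_API_KEY` assumed in `.env`):

```python
# Sketch mirroring PawRiskApp.fetch_and_analyze_today() in main.py.
from weather_api import create_weather_client
from risk_calculator import create_risk_calculator
from plotting import create_plotter

client = create_weather_client()
calculator = create_risk_calculator()
plotter = create_plotter()

weather_hours = client.get_full_day_weather("Phoenix, AZ")
risk_scores = calculator.calculate_risk_scores(weather_hours)
recommendations = calculator.generate_recommendations(risk_scores)

plotter.create_summary_dashboard(risk_scores, weather_hours, recommendations,
                                 save_path="risk_dashboard.png", show=False)
```
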
### Setup & Testing
- **`requirements.txt`**: Python dependencies
- **`env_template.txt`**: Environment variables template
- **`test_setup.py`**: Setup verification script
- **`README.md`**: This documentation

### Generated Files (Ignored by Git)
- **`plots/`**: Generated visualization files
- **`.env`**: Your personal API keys and settings
- **`paw_risk.db`**: SQLite database with weather and risk data
- **`*.png`**: Individual plot files (when saved outside the plots/ directory)

## Contributing

The codebase is modular and extensible; feel free to enhance any component!

## Requirements

- Python 3.7+
- Internet connection for weather data
- WeatherAPI.com account (free tier available)

## License

This project is designed for pet safety and educational purposes. Please ensure you follow WeatherAPI.com's terms of service for API usage.

## Safety Note

This tool provides guidance based on environmental conditions, but always use your best judgment when it comes to your pet's safety. Factors like breed, age, paw pad condition, and individual sensitivity can vary significantly between dogs.
75
config.py
Normal file
@ -0,0 +1,75 @@
"""Configuration management for paw burn risk assessment."""

import os
from dotenv import load_dotenv
from dataclasses import dataclass
from typing import Optional

# Load environment variables
load_dotenv()


@dataclass
class RiskConfig:
    """Configuration for risk assessment parameters."""
    temp_threshold_low: float = 80.0
    temp_threshold_med: float = 90.0
    temp_threshold_high: float = 100.0

    uv_threshold_low: float = 6.0
    uv_threshold_med: float = 8.0
    uv_threshold_high: float = 10.0

    risk_threshold_shoes: float = 6.0
    rolling_window_hours: int = 2
    surface_recovery_hours: int = 2

    @classmethod
    def from_env(cls) -> 'RiskConfig':
        """Create configuration from environment variables."""
        return cls(
            temp_threshold_low=float(os.getenv('TEMP_THRESHOLD_LOW', 80)),
            temp_threshold_med=float(os.getenv('TEMP_THRESHOLD_MED', 90)),
            temp_threshold_high=float(os.getenv('TEMP_THRESHOLD_HIGH', 100)),
            uv_threshold_low=float(os.getenv('UV_THRESHOLD_LOW', 6)),
            uv_threshold_med=float(os.getenv('UV_THRESHOLD_MED', 8)),
            uv_threshold_high=float(os.getenv('UV_THRESHOLD_HIGH', 10)),
            risk_threshold_shoes=float(os.getenv('RISK_THRESHOLD_SHOES', 6)),
            rolling_window_hours=int(os.getenv('ROLLING_WINDOW_HOURS', 2)),
            surface_recovery_hours=int(os.getenv('SURFACE_RECOVERY_HOURS', 2))
        )


@dataclass
class AppConfig:
    """Main application configuration."""
    weather_api_key: str
    default_location: str = "New York"
    database_path: str = "paw_risk.db"
    risk_config: Optional[RiskConfig] = None

    def __post_init__(self):
        if self.risk_config is None:
            self.risk_config = RiskConfig.from_env()

    @classmethod
    def from_env(cls) -> 'AppConfig':
        """Create configuration from environment variables."""
        api_key = os.getenv('WEATHER_API_KEY')
        if not api_key:
            raise ValueError("WEATHER_API_KEY environment variable is required")

        return cls(
            weather_api_key=api_key,
            default_location=os.getenv('DEFAULT_LOCATION', 'New York'),
            database_path=os.getenv('DATABASE_PATH', 'paw_risk.db'),
            risk_config=RiskConfig.from_env()
        )


# Global configuration instance
config = None


def get_config() -> AppConfig:
    """Get the global configuration instance."""
    global config
    if config is None:
        config = AppConfig.from_env()
    return config
28
env_template.txt
Normal file
@ -0,0 +1,28 @@
# Paw Burn Risk Assessment - Environment Variables Template
# Copy this file to .env and fill in your values

# WeatherAPI.com API Key (Required)
# Sign up at https://www.weatherapi.com/ to get a free API key
WEATHER_API_KEY=your_weatherapi_key_here

# Default location for weather data (Optional)
# Can be city name, zip code, or coordinates
DEFAULT_LOCATION=New York

# Database file path (Optional)
DATABASE_PATH=paw_risk.db

# Temperature thresholds in Fahrenheit (Optional)
TEMP_THRESHOLD_LOW=80
TEMP_THRESHOLD_MED=90
TEMP_THRESHOLD_HIGH=100

# UV Index thresholds (Optional)
UV_THRESHOLD_LOW=6
UV_THRESHOLD_MED=8
UV_THRESHOLD_HIGH=10

# Risk assessment parameters (Optional)
RISK_THRESHOLD_SHOES=6
ROLLING_WINDOW_HOURS=2
SURFACE_RECOVERY_HOURS=2
238
main.py
Normal file
@ -0,0 +1,238 @@
|
||||
"""Main application for paw burn risk assessment."""
|
||||
|
||||
import logging
|
||||
import argparse
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Optional
|
||||
import json
|
||||
|
||||
from config import get_config, AppConfig
|
||||
from models import DatabaseManager
|
||||
from weather_api import create_weather_client
|
||||
from risk_calculator import create_risk_calculator
|
||||
from plotting import create_plotter
|
||||
|
||||
# Set up logging
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||
)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class PawRiskApp:
|
||||
"""Main application class for paw burn risk assessment."""
|
||||
|
||||
def __init__(self, config: Optional[AppConfig] = None):
|
||||
self.config = config or get_config()
|
||||
self.db_manager = DatabaseManager(self.config.database_path)
|
||||
self.weather_client = create_weather_client()
|
||||
self.risk_calculator = create_risk_calculator()
|
||||
self.plotter = create_plotter()
|
||||
|
||||
def fetch_and_analyze_today(self, location: Optional[str] = None) -> dict:
|
||||
"""Fetch weather data and analyze risk for today."""
|
||||
location = location or self.config.default_location
|
||||
|
||||
print(f"🌤️ Fetching weather data for {location}...")
|
||||
|
||||
try:
|
||||
# Fetch complete day weather data
|
||||
weather_hours = self.weather_client.get_full_day_weather(location)
|
||||
|
||||
if not weather_hours:
|
||||
return {"error": "No weather data available"}
|
||||
|
||||
print(f"📊 Retrieved {len(weather_hours)} hours of weather data")
|
||||
|
||||
# Save weather data to database
|
||||
self.db_manager.save_weather_data(weather_hours)
|
||||
|
||||
# Calculate risk scores
|
||||
print("🧮 Calculating paw burn risk scores...")
|
||||
risk_scores = self.risk_calculator.calculate_risk_scores(weather_hours)
|
||||
|
||||
if not risk_scores:
|
||||
return {"error": "Unable to calculate risk scores"}
|
||||
|
||||
# Save risk scores to database
|
||||
self.db_manager.save_risk_scores(risk_scores)
|
||||
|
||||
# Generate recommendations
|
||||
recommendations = self.risk_calculator.generate_recommendations(risk_scores)
|
||||
|
||||
return {
|
||||
"weather_hours": weather_hours,
|
||||
"risk_scores": risk_scores,
|
||||
"recommendations": recommendations,
|
||||
"location": location
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error during analysis: {e}")
|
||||
return {"error": str(e)}
|
||||
|
||||
def print_summary(self, analysis_result: dict):
|
||||
"""Print a formatted summary of the analysis."""
|
||||
if "error" in analysis_result:
|
||||
print(f"❌ Error: {analysis_result['error']}")
|
||||
return
|
||||
|
||||
recommendations = analysis_result["recommendations"]
|
||||
location = analysis_result["location"]
|
||||
|
||||
print("\n" + "="*60)
|
||||
print(f"🐕 PAW BURN RISK ASSESSMENT - {location.upper()}")
|
||||
print("="*60)
|
||||
|
||||
# Summary statistics
|
||||
summary = recommendations["summary"]
|
||||
print(f"\n📈 DAILY SUMMARY:")
|
||||
print(f" • Total Hours Analyzed: {summary['total_hours_analyzed']}")
|
||||
print(f" • High Risk Hours: {summary['high_risk_hours']}")
|
||||
print(f" • Maximum Risk Score: {summary['max_risk_score']}/10")
|
||||
print(f" • Average Risk Score: {summary['average_risk_score']}/10")
|
||||
print(f" • Peak Risk Time: {summary['peak_risk_time']}")
|
||||
print(f" • Continuous Risk Periods: {summary['continuous_risk_blocks']}")
|
||||
|
||||
# Risk periods
|
||||
if recommendations["risk_periods"]:
|
||||
print(f"\n⏰ HIGH RISK TIME PERIODS:")
|
||||
for period in recommendations["risk_periods"]:
|
||||
print(f" • {period['start']} - {period['end']} ({period['duration_hours']} hours)")
|
||||
|
||||
# Recommendations
|
||||
print(f"\n💡 RECOMMENDATIONS:")
|
||||
for rec in recommendations["recommendations"]:
|
||||
print(f" {rec}")
|
||||
|
||||
print("\n" + "="*60)
|
||||
|
||||
def print_detailed_hourly(self, analysis_result: dict):
|
||||
"""Print detailed hourly breakdown."""
|
||||
if "error" in analysis_result:
|
||||
return
|
||||
|
||||
weather_hours = analysis_result["weather_hours"]
|
||||
risk_scores = analysis_result["risk_scores"]
|
||||
|
||||
print("\n🕐 HOURLY BREAKDOWN:")
|
||||
print("-" * 80)
|
||||
print(f"{'Time':>6} {'Temp':>6} {'UV':>4} {'Condition':>12} {'Risk':>6} {'Shoes':>7}")
|
||||
print("-" * 80)
|
||||
|
||||
for weather, risk in zip(weather_hours, risk_scores):
|
||||
time_str = weather.datetime.strftime("%H:%M")
|
||||
temp_str = f"{weather.temperature_f:.0f}°F"
|
||||
uv_str = f"{weather.uv_index:.1f}" if weather.uv_index else "N/A"
|
||||
condition_short = weather.condition[:12]
|
||||
risk_str = f"{risk.total_score:.1f}"
|
||||
shoes_str = "YES" if risk.recommend_shoes else "no"
|
||||
shoes_color = "⚠️ " if risk.recommend_shoes else "✅ "
|
||||
|
||||
print(f"{time_str:>6} {temp_str:>6} {uv_str:>4} {condition_short:>12} "
|
||||
f"{risk_str:>6} {shoes_color}{shoes_str:>5}")
|
||||
|
||||
def create_plots(self, analysis_result: dict, save_plots: bool = False):
|
||||
"""Create and display plots."""
|
||||
if "error" in analysis_result:
|
||||
return
|
||||
|
||||
weather_hours = analysis_result["weather_hours"]
|
||||
risk_scores = analysis_result["risk_scores"]
|
||||
recommendations = analysis_result["recommendations"]
|
||||
location = analysis_result["location"]
|
||||
|
||||
if save_plots:
|
||||
print("\n📊 Generating and saving visualizations...")
|
||||
else:
|
||||
print("\n📊 Generating visualizations...")
|
||||
|
||||
# Create timestamp for file names
|
||||
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
||||
location_safe = location.replace(" ", "_").replace(",", "")
|
||||
|
||||
try:
|
||||
# Main risk timeline
|
||||
save_path = f"risk_timeline_{location_safe}_{timestamp}.png" if save_plots else None
|
||||
self.plotter.plot_risk_timeline(
|
||||
risk_scores, weather_hours,
|
||||
threshold=self.config.risk_config.risk_threshold_shoes,
|
||||
save_path=save_path,
|
||||
show=not save_plots
|
||||
)
|
||||
|
||||
# Component breakdown
|
||||
save_path = f"risk_components_{location_safe}_{timestamp}.png" if save_plots else None
|
||||
self.plotter.plot_risk_components(
|
||||
risk_scores,
|
||||
save_path=save_path,
|
||||
show=not save_plots
|
||||
)
|
||||
|
||||
# Summary dashboard
|
||||
save_path = f"risk_dashboard_{location_safe}_{timestamp}.png" if save_plots else None
|
||||
self.plotter.create_summary_dashboard(
|
||||
risk_scores, weather_hours, recommendations,
|
||||
save_path=save_path,
|
||||
show=not save_plots
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error creating plots: {e}")
|
||||
print(f"⚠️ Error creating plots: {e}")
|
||||
|
||||
def main():
|
||||
"""Main entry point for the application."""
|
||||
parser = argparse.ArgumentParser(description="Paw Burn Risk Assessment Tool")
|
||||
parser.add_argument("--location", "-l", type=str, help="Location for weather data (city name, zip code, or coordinates)")
|
||||
parser.add_argument("--detailed", "-d", action="store_true", help="Show detailed hourly breakdown")
|
||||
parser.add_argument("--plot", "-p", action="store_true", help="Show plots")
|
||||
parser.add_argument("--save-plots", "-s", action="store_true", help="Save plots to files")
|
||||
parser.add_argument("--no-recommendations", action="store_true", help="Skip recommendations")
|
||||
parser.add_argument("--config-check", action="store_true", help="Check configuration and exit")
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
try:
|
||||
# Initialize application
|
||||
print("🐾 Paw Burn Risk Assessment Tool")
|
||||
print("=" * 40)
|
||||
|
||||
# Check configuration if requested
|
||||
if args.config_check:
|
||||
try:
|
||||
config = get_config()
|
||||
print("✅ Configuration loaded successfully")
|
||||
print(f"API Key: {'Set' if config.weather_api_key else 'NOT SET'}")
|
||||
print(f"Default Location: {config.default_location}")
|
||||
print(f"Database Path: {config.database_path}")
|
||||
print(f"Risk Threshold: {config.risk_config.risk_threshold_shoes}")
|
||||
return
|
||||
except Exception as e:
|
||||
print(f"❌ Configuration error: {e}")
|
||||
return
|
||||
|
||||
app = PawRiskApp()
|
||||
|
||||
# Run analysis
|
||||
analysis_result = app.fetch_and_analyze_today(args.location)
|
||||
|
||||
# Display results
|
||||
if args.detailed:
|
||||
app.print_detailed_hourly(analysis_result)
|
||||
|
||||
if args.plot or args.save_plots:
|
||||
app.create_plots(analysis_result, save_plots=args.save_plots)
|
||||
|
||||
# Show summary unless explicitly disabled
|
||||
if not args.no_recommendations:
|
||||
app.print_summary(analysis_result)
|
||||
|
||||
except KeyboardInterrupt:
|
||||
print("\n👋 Goodbye!")
|
||||
except Exception as e:
|
||||
logger.error(f"Application error: {e}")
|
||||
print(f"❌ Application error: {e}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
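The same flow is also available programmatically; a minimal sketch (assuming a valid `.env`):

```python
# Sketch: run one assessment from Python instead of the command line.
from main import PawRiskApp

app = PawRiskApp()
result = app.fetch_and_analyze_today("Phoenix, AZ")
if "error" not in result:
    app.print_summary(result)
```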
258
models.py
Normal file
@ -0,0 +1,258 @@
|
||||
"""Data models for weather and risk assessment."""
|
||||
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime
|
||||
from typing import Optional, List
|
||||
import sqlite3
|
||||
import json
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@dataclass
|
||||
class WeatherHour:
|
||||
"""Represents weather data for a single hour."""
|
||||
datetime: datetime
|
||||
temperature_f: float
|
||||
uv_index: Optional[float]
|
||||
condition: str
|
||||
is_forecast: bool = False
|
||||
|
||||
def to_dict(self) -> dict:
|
||||
"""Convert to dictionary for JSON serialization."""
|
||||
return {
|
||||
'datetime': self.datetime.isoformat(),
|
||||
'temperature_f': self.temperature_f,
|
||||
'uv_index': self.uv_index,
|
||||
'condition': self.condition,
|
||||
'is_forecast': self.is_forecast
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def from_dict(cls, data: dict) -> 'WeatherHour':
|
||||
"""Create from dictionary."""
|
||||
return cls(
|
||||
datetime=datetime.fromisoformat(data['datetime']),
|
||||
temperature_f=data['temperature_f'],
|
||||
uv_index=data.get('uv_index'),
|
||||
condition=data['condition'],
|
||||
is_forecast=data.get('is_forecast', False)
|
||||
)
|
||||
|
||||
@dataclass
|
||||
class RiskScore:
|
||||
"""Represents a risk score for a specific hour."""
|
||||
datetime: datetime
|
||||
temperature_score: float
|
||||
uv_score: float
|
||||
condition_score: float
|
||||
accumulated_heat_score: float
|
||||
surface_recovery_score: float
|
||||
total_score: float
|
||||
recommend_shoes: bool
|
||||
|
||||
def to_dict(self) -> dict:
|
||||
"""Convert to dictionary for JSON serialization."""
|
||||
return {
|
||||
'datetime': self.datetime.isoformat(),
|
||||
'temperature_score': self.temperature_score,
|
||||
'uv_score': self.uv_score,
|
||||
'condition_score': self.condition_score,
|
||||
'accumulated_heat_score': self.accumulated_heat_score,
|
||||
'surface_recovery_score': self.surface_recovery_score,
|
||||
'total_score': self.total_score,
|
||||
'recommend_shoes': self.recommend_shoes
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def from_dict(cls, data: dict) -> 'RiskScore':
|
||||
"""Create from dictionary."""
|
||||
return cls(
|
||||
datetime=datetime.fromisoformat(data['datetime']),
|
||||
temperature_score=data['temperature_score'],
|
||||
uv_score=data['uv_score'],
|
||||
condition_score=data['condition_score'],
|
||||
accumulated_heat_score=data['accumulated_heat_score'],
|
||||
surface_recovery_score=data['surface_recovery_score'],
|
||||
total_score=data['total_score'],
|
||||
recommend_shoes=data['recommend_shoes']
|
||||
)
|
||||
|
||||
class DatabaseManager:
|
||||
"""Manages database operations for weather and risk data."""
|
||||
|
||||
def __init__(self, db_path: str):
|
||||
self.db_path = db_path
|
||||
self.init_database()
|
||||
|
||||
def init_database(self):
|
||||
"""Initialize database tables."""
|
||||
conn = sqlite3.connect(self.db_path)
|
||||
try:
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Weather data table
|
||||
cursor.execute('''
|
||||
CREATE TABLE IF NOT EXISTS weather_data (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
datetime TEXT UNIQUE,
|
||||
temperature_f REAL,
|
||||
uv_index REAL,
|
||||
condition TEXT,
|
||||
is_forecast BOOLEAN,
|
||||
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
||||
)
|
||||
''')
|
||||
|
||||
# Risk scores table
|
||||
cursor.execute('''
|
||||
CREATE TABLE IF NOT EXISTS risk_scores (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
datetime TEXT UNIQUE,
|
||||
temperature_score REAL,
|
||||
uv_score REAL,
|
||||
condition_score REAL,
|
||||
accumulated_heat_score REAL,
|
||||
surface_recovery_score REAL,
|
||||
total_score REAL,
|
||||
recommend_shoes BOOLEAN,
|
||||
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
||||
)
|
||||
''')
|
||||
|
||||
# Create indexes for better performance
|
||||
cursor.execute('CREATE INDEX IF NOT EXISTS idx_weather_datetime ON weather_data(datetime)')
|
||||
cursor.execute('CREATE INDEX IF NOT EXISTS idx_risk_datetime ON risk_scores(datetime)')
|
||||
|
||||
conn.commit()
|
||||
logger.info("Database initialized successfully")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error initializing database: {e}")
|
||||
raise
|
||||
finally:
|
||||
conn.close()
|
||||
|
||||
def save_weather_data(self, weather_hours: List[WeatherHour]):
|
||||
"""Save weather data to database."""
|
||||
conn = sqlite3.connect(self.db_path)
|
||||
try:
|
||||
cursor = conn.cursor()
|
||||
|
||||
for hour in weather_hours:
|
||||
cursor.execute('''
|
||||
INSERT OR REPLACE INTO weather_data
|
||||
(datetime, temperature_f, uv_index, condition, is_forecast)
|
||||
VALUES (?, ?, ?, ?, ?)
|
||||
''', (
|
||||
hour.datetime.isoformat(),
|
||||
hour.temperature_f,
|
||||
hour.uv_index,
|
||||
hour.condition,
|
||||
hour.is_forecast
|
||||
))
|
||||
|
||||
conn.commit()
|
||||
logger.info(f"Saved {len(weather_hours)} weather records")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error saving weather data: {e}")
|
||||
raise
|
||||
finally:
|
||||
conn.close()
|
||||
|
||||
def save_risk_scores(self, risk_scores: List[RiskScore]):
|
||||
"""Save risk scores to database."""
|
||||
conn = sqlite3.connect(self.db_path)
|
||||
try:
|
||||
cursor = conn.cursor()
|
||||
|
||||
for score in risk_scores:
|
||||
cursor.execute('''
|
||||
INSERT OR REPLACE INTO risk_scores
|
||||
(datetime, temperature_score, uv_score, condition_score,
|
||||
accumulated_heat_score, surface_recovery_score, total_score, recommend_shoes)
|
||||
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
|
||||
''', (
|
||||
score.datetime.isoformat(),
|
||||
score.temperature_score,
|
||||
score.uv_score,
|
||||
score.condition_score,
|
||||
score.accumulated_heat_score,
|
||||
score.surface_recovery_score,
|
||||
score.total_score,
|
||||
score.recommend_shoes
|
||||
))
|
||||
|
||||
conn.commit()
|
||||
logger.info(f"Saved {len(risk_scores)} risk score records")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error saving risk scores: {e}")
|
||||
raise
|
||||
finally:
|
||||
conn.close()
|
||||
|
||||
def get_weather_data(self, start_date: datetime, end_date: datetime) -> List[WeatherHour]:
|
||||
"""Retrieve weather data for a date range."""
|
||||
conn = sqlite3.connect(self.db_path)
|
||||
try:
|
||||
cursor = conn.cursor()
|
||||
cursor.execute('''
|
||||
SELECT datetime, temperature_f, uv_index, condition, is_forecast
|
||||
FROM weather_data
|
||||
WHERE datetime BETWEEN ? AND ?
|
||||
ORDER BY datetime
|
||||
''', (start_date.isoformat(), end_date.isoformat()))
|
||||
|
||||
results = []
|
||||
for row in cursor.fetchall():
|
||||
results.append(WeatherHour(
|
||||
datetime=datetime.fromisoformat(row[0]),
|
||||
temperature_f=row[1],
|
||||
uv_index=row[2],
|
||||
condition=row[3],
|
||||
is_forecast=bool(row[4])
|
||||
))
|
||||
|
||||
return results
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error retrieving weather data: {e}")
|
||||
raise
|
||||
finally:
|
||||
conn.close()
|
||||
|
||||
def get_risk_scores(self, start_date: datetime, end_date: datetime) -> List[RiskScore]:
|
||||
"""Retrieve risk scores for a date range."""
|
||||
conn = sqlite3.connect(self.db_path)
|
||||
try:
|
||||
cursor = conn.cursor()
|
||||
cursor.execute('''
|
||||
SELECT datetime, temperature_score, uv_score, condition_score,
|
||||
accumulated_heat_score, surface_recovery_score, total_score, recommend_shoes
|
||||
FROM risk_scores
|
||||
WHERE datetime BETWEEN ? AND ?
|
||||
ORDER BY datetime
|
||||
''', (start_date.isoformat(), end_date.isoformat()))
|
||||
|
||||
results = []
|
||||
for row in cursor.fetchall():
|
||||
results.append(RiskScore(
|
||||
datetime=datetime.fromisoformat(row[0]),
|
||||
temperature_score=row[1],
|
||||
uv_score=row[2],
|
||||
condition_score=row[3],
|
||||
accumulated_heat_score=row[4],
|
||||
surface_recovery_score=row[5],
|
||||
total_score=row[6],
|
||||
recommend_shoes=bool(row[7])
|
||||
))
|
||||
|
||||
return results
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error retrieving risk scores: {e}")
|
||||
raise
|
||||
finally:
|
||||
conn.close()
|
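A short usage sketch for `DatabaseManager`, reading back whatever an earlier run stored for one day (the date is just an example):

```python
# Sketch: query stored risk history for a single day from paw_risk.db.
from datetime import datetime
from models import DatabaseManager

db = DatabaseManager("paw_risk.db")
start = datetime(2025, 6, 7, 0, 0)
end = datetime(2025, 6, 7, 23, 59)

for score in db.get_risk_scores(start, end):
    print(score.datetime, score.total_score, score.recommend_shoes)
```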
360
plotting.py
Normal file
@ -0,0 +1,360 @@
|
||||
"""Plotting and visualization for paw burn risk assessment."""
|
||||
|
||||
import matplotlib
|
||||
import matplotlib.pyplot as plt
|
||||
import matplotlib.dates as mdates
|
||||
from datetime import datetime
|
||||
from typing import List, Optional, Tuple
|
||||
import numpy as np
|
||||
import warnings
|
||||
import os
|
||||
import shutil
|
||||
from models import WeatherHour, RiskScore
|
||||
|
||||
# Suppress matplotlib UserWarnings and macOS GUI warnings
|
||||
warnings.filterwarnings('ignore', category=UserWarning, module='matplotlib')
|
||||
warnings.filterwarnings('ignore', message='.*NSSavePanel.*')
|
||||
|
||||
class RiskPlotter:
|
||||
"""Handles plotting and visualization of risk data."""
|
||||
|
||||
def __init__(self, figure_size: Tuple[int, int] = (12, 8)):
|
||||
self.figure_size = figure_size
|
||||
self.plots_dir = "plots"
|
||||
self._plots_dir_setup = False
|
||||
|
||||
# Set up matplotlib style and suppress warnings
|
||||
plt.style.use('default')
|
||||
plt.rcParams['figure.figsize'] = figure_size
|
||||
plt.rcParams['font.size'] = 10
|
||||
|
||||
def _setup_plots_directory(self):
|
||||
"""Create and clear plots directory."""
|
||||
if os.path.exists(self.plots_dir):
|
||||
# Clear existing plots
|
||||
shutil.rmtree(self.plots_dir)
|
||||
os.makedirs(self.plots_dir, exist_ok=True)
|
||||
print(f"📁 Plots directory ready: {self.plots_dir}/")
|
||||
|
||||
def _safe_save_plot(self, filename: str, dpi: int = 300):
|
||||
"""Safely save plot to plots directory."""
|
||||
if filename:
|
||||
# Setup plots directory on first save
|
||||
if not self._plots_dir_setup:
|
||||
self._setup_plots_directory()
|
||||
self._plots_dir_setup = True
|
||||
|
||||
# Create full path in plots directory
|
||||
save_path = os.path.join(self.plots_dir, filename)
|
||||
|
||||
# Save with current backend (don't switch backends as it causes blank files)
|
||||
with warnings.catch_warnings():
|
||||
warnings.simplefilter("ignore", UserWarning)
|
||||
plt.savefig(save_path, dpi=dpi, bbox_inches='tight', facecolor='white')
|
||||
|
||||
print(f"📊 Plot saved: {save_path}")
|
||||
|
||||
def plot_risk_timeline(self, risk_scores: List[RiskScore],
|
||||
weather_hours: List[WeatherHour],
|
||||
threshold: float = 6.0,
|
||||
save_path: Optional[str] = None,
|
||||
show: bool = True) -> None:
|
||||
"""Plot risk scores over time with recommendation threshold."""
|
||||
if not risk_scores:
|
||||
print("No risk data to plot")
|
||||
return
|
||||
|
||||
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=self.figure_size,
|
||||
sharex=True, gridspec_kw={'height_ratios': [2, 1, 1]})
|
||||
|
||||
# Extract data
|
||||
times = [score.datetime for score in risk_scores]
|
||||
total_scores = [score.total_score for score in risk_scores]
|
||||
temperatures = [hour.temperature_f for hour in weather_hours]
|
||||
uv_indices = [hour.uv_index or 0 for hour in weather_hours]
|
||||
|
||||
# Main risk score plot
|
||||
ax1.plot(times, total_scores, 'b-', linewidth=2, label='Risk Score')
|
||||
ax1.axhline(y=threshold, color='red', linestyle='--', alpha=0.7,
|
||||
label=f'Shoe Threshold ({threshold})')
|
||||
|
||||
# Highlight high-risk periods
|
||||
high_risk_times = []
|
||||
high_risk_scores = []
|
||||
for score in risk_scores:
|
||||
if score.recommend_shoes:
|
||||
high_risk_times.append(score.datetime)
|
||||
high_risk_scores.append(score.total_score)
|
||||
|
||||
if high_risk_times:
|
||||
ax1.scatter(high_risk_times, high_risk_scores, color='red', s=50,
|
||||
alpha=0.7, zorder=5, label='Shoes Recommended')
|
||||
|
||||
ax1.set_ylabel('Risk Score')
|
||||
ax1.set_title('Paw Burn Risk Assessment Timeline')
|
||||
ax1.legend()
|
||||
ax1.grid(True, alpha=0.3)
|
||||
ax1.set_ylim(0, 10)
|
||||
|
||||
# Temperature subplot
|
||||
ax2.plot(times, temperatures, 'orange', linewidth=2, label='Temperature (°F)')
|
||||
ax2.axhline(y=80, color='yellow', linestyle=':', alpha=0.7, label='80°F')
|
||||
ax2.axhline(y=90, color='orange', linestyle=':', alpha=0.7, label='90°F')
|
||||
ax2.axhline(y=100, color='red', linestyle=':', alpha=0.7, label='100°F')
|
||||
ax2.set_ylabel('Temperature (°F)')
|
||||
ax2.legend(loc='upper right')
|
||||
ax2.grid(True, alpha=0.3)
|
||||
|
||||
# UV Index subplot
|
||||
ax3.plot(times, uv_indices, 'purple', linewidth=2, label='UV Index')
|
||||
ax3.axhline(y=6, color='yellow', linestyle=':', alpha=0.7, label='UV 6')
|
||||
ax3.axhline(y=8, color='orange', linestyle=':', alpha=0.7, label='UV 8')
|
||||
ax3.axhline(y=10, color='red', linestyle=':', alpha=0.7, label='UV 10')
|
||||
ax3.set_ylabel('UV Index')
|
||||
ax3.set_xlabel('Time')
|
||||
ax3.legend(loc='upper right')
|
||||
ax3.grid(True, alpha=0.3)
|
||||
|
||||
# Format x-axis
|
||||
for ax in [ax1, ax2, ax3]:
|
||||
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M'))
|
||||
ax.xaxis.set_major_locator(mdates.HourLocator(interval=2))
|
||||
plt.setp(ax.xaxis.get_majorticklabels(), rotation=45)
|
||||
|
||||
with warnings.catch_warnings():
|
||||
warnings.simplefilter("ignore", UserWarning)
|
||||
try:
|
||||
plt.tight_layout()
|
||||
except Exception:
|
||||
# If tight_layout fails, adjust manually
|
||||
plt.subplots_adjust(hspace=0.4, bottom=0.15)
|
||||
|
||||
if save_path:
|
||||
self._safe_save_plot(save_path)
|
||||
|
||||
if show:
|
||||
plt.show()
|
||||
|
||||
def plot_risk_components(self, risk_scores: List[RiskScore],
|
||||
save_path: Optional[str] = None,
|
||||
show: bool = True) -> None:
|
||||
"""Plot breakdown of risk score components."""
|
||||
if not risk_scores:
|
||||
print("No risk data to plot")
|
||||
return
|
||||
|
||||
fig, ax = plt.subplots(figsize=self.figure_size)
|
||||
|
||||
# Extract data
|
||||
times = [score.datetime for score in risk_scores]
|
||||
temp_scores = [score.temperature_score for score in risk_scores]
|
||||
uv_scores = [score.uv_score for score in risk_scores]
|
||||
condition_scores = [score.condition_score for score in risk_scores]
|
||||
accumulated_scores = [score.accumulated_heat_score for score in risk_scores]
|
||||
recovery_scores = [score.surface_recovery_score for score in risk_scores]
|
||||
|
||||
# Create stacked area plot
|
||||
ax.stackplot(times, temp_scores, uv_scores, condition_scores,
|
||||
accumulated_scores, recovery_scores,
|
||||
labels=['Temperature', 'UV Index', 'Condition', 'Accumulated Heat', 'Surface Recovery'],
|
||||
alpha=0.7)
|
||||
|
||||
ax.set_ylabel('Risk Score Components')
|
||||
ax.set_xlabel('Time')
|
||||
ax.set_title('Risk Score Component Breakdown')
|
||||
ax.legend(loc='upper left', bbox_to_anchor=(1, 1))
|
||||
ax.grid(True, alpha=0.3)
|
||||
|
||||
# Format x-axis
|
||||
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M'))
|
||||
ax.xaxis.set_major_locator(mdates.HourLocator(interval=2))
|
||||
plt.setp(ax.xaxis.get_majorticklabels(), rotation=45)
|
||||
|
||||
with warnings.catch_warnings():
|
||||
warnings.simplefilter("ignore", UserWarning)
|
||||
try:
|
||||
plt.tight_layout()
|
||||
except Exception:
|
||||
# If tight_layout fails, adjust manually
|
||||
plt.subplots_adjust(right=0.75, bottom=0.15)
|
||||
|
||||
if save_path:
|
||||
self._safe_save_plot(save_path)
|
||||
print(f"Component plot saved")
|
||||
|
||||
if show:
|
||||
plt.show()
|
||||
|
||||
def plot_risk_heatmap(self, risk_scores: List[RiskScore],
|
||||
save_path: Optional[str] = None,
|
||||
show: bool = True) -> None:
|
||||
"""Create a heatmap visualization of risk throughout the day."""
|
||||
if not risk_scores:
|
||||
print("No risk data to plot")
|
||||
return
|
||||
|
||||
fig, ax = plt.subplots(figsize=(12, 3))
|
||||
|
||||
# Create time vs risk matrix
|
||||
hours = [score.datetime.hour for score in risk_scores]
|
||||
scores = [score.total_score for score in risk_scores]
|
||||
|
||||
# Create a matrix for the heatmap
|
||||
hour_range = list(range(24))
|
||||
risk_matrix = np.zeros((1, 24))
|
||||
|
||||
for hour, score in zip(hours, scores):
|
||||
risk_matrix[0, hour] = score
|
||||
|
||||
# Create heatmap
|
||||
im = ax.imshow(risk_matrix, cmap='RdYlBu_r', aspect='auto', vmin=0, vmax=10)
|
||||
|
||||
# Customize the plot
|
||||
ax.set_xticks(range(24))
|
||||
ax.set_xticklabels([f'{h:02d}:00' for h in range(24)])
|
||||
ax.set_yticks([])
|
||||
ax.set_xlabel('Hour of Day')
|
||||
ax.set_title('Paw Burn Risk Heatmap')
|
||||
|
||||
# Add colorbar
|
||||
cbar = plt.colorbar(im, ax=ax, orientation='horizontal', pad=0.1)
|
||||
cbar.set_label('Risk Score')
|
||||
|
||||
# Add threshold line
|
||||
threshold_line = 6.0 / 10.0 # Normalize to colormap scale
|
||||
|
||||
with warnings.catch_warnings():
|
||||
warnings.simplefilter("ignore", UserWarning)
|
||||
try:
|
||||
plt.tight_layout()
|
||||
except Exception:
|
||||
# If tight_layout fails, adjust manually
|
||||
plt.subplots_adjust(bottom=0.2)
|
||||
|
||||
if save_path:
|
||||
self._safe_save_plot(save_path)
|
||||
print(f"Heatmap saved")
|
||||
|
||||
if show:
|
||||
plt.show()
|
||||
|
||||
def create_summary_dashboard(self, risk_scores: List[RiskScore],
|
||||
weather_hours: List[WeatherHour],
|
||||
recommendations: dict,
|
||||
save_path: Optional[str] = None,
|
||||
show: bool = True) -> None:
|
||||
"""Create a comprehensive dashboard with all visualizations."""
|
||||
if not risk_scores or not weather_hours:
|
||||
print("Insufficient data for dashboard")
|
||||
return
|
||||
|
||||
fig = plt.figure(figsize=(16, 12))
|
||||
|
||||
# Create subplots
|
||||
gs = fig.add_gridspec(3, 2, height_ratios=[2, 1, 1], hspace=0.3, wspace=0.3)
|
||||
|
||||
# Main timeline plot
|
||||
ax1 = fig.add_subplot(gs[0, :])
|
||||
times = [score.datetime for score in risk_scores]
|
||||
total_scores = [score.total_score for score in risk_scores]
|
||||
|
||||
ax1.plot(times, total_scores, 'b-', linewidth=3, label='Risk Score')
|
||||
ax1.axhline(y=6, color='red', linestyle='--', alpha=0.7, label='Shoe Threshold')
|
||||
|
||||
# Highlight high-risk periods
|
||||
high_risk_periods = []
|
||||
current_start = None
|
||||
|
||||
for i, score in enumerate(risk_scores):
|
||||
if score.recommend_shoes and current_start is None:
|
||||
current_start = i
|
||||
elif not score.recommend_shoes and current_start is not None:
|
||||
high_risk_periods.append((current_start, i))
|
||||
current_start = None
|
||||
|
||||
if current_start is not None:
|
||||
high_risk_periods.append((current_start, len(risk_scores)))
|
||||
|
||||
for start_idx, end_idx in high_risk_periods:
|
||||
ax1.axvspan(times[start_idx], times[end_idx-1], alpha=0.3, color='red')
|
||||
|
||||
ax1.set_ylabel('Risk Score')
|
||||
ax1.set_title('Paw Burn Risk Assessment - Daily Overview')
|
||||
ax1.legend()
|
||||
ax1.grid(True, alpha=0.3)
|
||||
ax1.set_ylim(0, 10)
|
||||
|
||||
# Component breakdown
|
||||
ax2 = fig.add_subplot(gs[1, 0])
|
||||
component_names = ['Temp', 'UV', 'Condition', 'Heat Accum', 'Recovery']
|
||||
avg_components = [
|
||||
np.mean([s.temperature_score for s in risk_scores]),
|
||||
np.mean([s.uv_score for s in risk_scores]),
|
||||
np.mean([s.condition_score for s in risk_scores]),
|
||||
np.mean([s.accumulated_heat_score for s in risk_scores]),
|
||||
np.mean([s.surface_recovery_score for s in risk_scores])
|
||||
]
|
||||
|
||||
bars = ax2.bar(component_names, avg_components, color=['orange', 'purple', 'lightblue', 'yellow', 'green'])
|
||||
ax2.set_ylabel('Average Score')
|
||||
ax2.set_title('Average Risk Components')
|
||||
ax2.tick_params(axis='x', rotation=45)
|
||||
|
||||
# Statistics
|
||||
ax3 = fig.add_subplot(gs[1, 1])
|
||||
ax3.axis('off')
|
||||
|
||||
stats_text = f"""
|
||||
DAILY SUMMARY
|
||||
|
||||
Total Hours Analyzed: {recommendations['summary']['total_hours_analyzed']}
|
||||
High Risk Hours: {recommendations['summary']['high_risk_hours']}
|
||||
Max Risk Score: {recommendations['summary']['max_risk_score']}
|
||||
Average Risk Score: {recommendations['summary']['average_risk_score']}
|
||||
Peak Risk Time: {recommendations['summary']['peak_risk_time']}
|
||||
Risk Periods: {recommendations['summary']['continuous_risk_blocks']}
|
||||
"""
|
||||
|
||||
ax3.text(0.1, 0.9, stats_text, transform=ax3.transAxes, fontsize=10,
|
||||
verticalalignment='top', fontfamily='monospace')
|
||||
|
||||
# Weather conditions
|
||||
ax4 = fig.add_subplot(gs[2, :])
|
||||
temperatures = [hour.temperature_f for hour in weather_hours]
|
||||
uv_indices = [hour.uv_index or 0 for hour in weather_hours]
|
||||
|
||||
ax4_twin = ax4.twinx()
|
||||
|
||||
line1 = ax4.plot(times, temperatures, 'orange', linewidth=2, label='Temperature (°F)')
|
||||
line2 = ax4_twin.plot(times, uv_indices, 'purple', linewidth=2, label='UV Index')
|
||||
|
||||
ax4.set_ylabel('Temperature (°F)', color='orange')
|
||||
ax4_twin.set_ylabel('UV Index', color='purple')
|
||||
ax4.set_xlabel('Time')
|
||||
ax4.set_title('Weather Conditions')
|
||||
|
||||
# Format x-axis for all subplots
|
||||
for ax in [ax1, ax4]:
|
||||
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M'))
|
||||
ax.xaxis.set_major_locator(mdates.HourLocator(interval=2))
|
||||
plt.setp(ax.xaxis.get_majorticklabels(), rotation=45)
|
||||
|
||||
# Use constrained layout or manual adjustment instead of tight_layout
|
||||
with warnings.catch_warnings():
|
||||
warnings.simplefilter("ignore", UserWarning)
|
||||
try:
|
||||
plt.tight_layout()
|
||||
except Exception:
|
||||
# If tight_layout fails with complex layouts, adjust manually
|
||||
plt.subplots_adjust(hspace=0.4, wspace=0.4, bottom=0.1, top=0.95)
|
||||
|
||||
if save_path:
|
||||
self._safe_save_plot(save_path)
|
||||
print(f"Dashboard saved")
|
||||
|
||||
if show:
|
||||
plt.show()
|
||||
|
||||
def create_plotter(figure_size: Tuple[int, int] = (12, 8)) -> RiskPlotter:
|
||||
"""Create a risk plotter with specified figure size."""
|
||||
return RiskPlotter(figure_size)
|
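A usage sketch for `RiskPlotter` on hand-built data (no API key needed because an explicit `RiskConfig` is passed to the calculator; the file lands in the auto-managed `plots/` directory):

```python
# Sketch: render and save a timeline plot without opening a window.
from datetime import datetime, timedelta
from config import RiskConfig
from models import WeatherHour
from risk_calculator import create_risk_calculator
from plotting import create_plotter

base = datetime(2025, 6, 7, 14, 0)
weather_hours = [
    WeatherHour(datetime=base, temperature_f=98.0, uv_index=9.0, condition="Sunny"),
    WeatherHour(datetime=base + timedelta(hours=1), temperature_f=102.0,
                uv_index=10.0, condition="Sunny"),
]
risk_scores = create_risk_calculator(RiskConfig()).calculate_risk_scores(weather_hours)

create_plotter().plot_risk_timeline(
    risk_scores, weather_hours,
    threshold=6.0,
    save_path="risk_timeline.png",  # written into plots/
    show=False,
)
```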
BIN
project_images/risk_components_example.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 211 KiB |
BIN
project_images/risk_dashboard_example.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 428 KiB |
BIN
project_images/risk_timeline_example.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 255 KiB |
5
requirements.txt
Normal file
@ -0,0 +1,5 @@
requests>=2.31.0
python-dotenv>=1.0.0
matplotlib>=3.7.0
pandas>=2.0.0
numpy>=1.24.0
301
risk_calculator.py
Normal file
@ -0,0 +1,301 @@
|
||||
"""Risk calculation engine for paw burn assessment."""
|
||||
|
||||
import logging
|
||||
from datetime import datetime, timedelta
|
||||
from typing import List, Optional, Tuple
|
||||
from models import WeatherHour, RiskScore
|
||||
from config import RiskConfig, get_config
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class RiskCalculator:
|
||||
"""Calculates paw burn risk scores based on weather conditions."""
|
||||
|
||||
def __init__(self, config: Optional[RiskConfig] = None):
|
||||
self.config = config or get_config().risk_config
|
||||
|
||||
def calculate_temperature_score(self, temperature_f: float) -> float:
|
||||
"""Calculate risk score based on air temperature."""
|
||||
if temperature_f >= self.config.temp_threshold_high: # ≥100°F
|
||||
return 3.0
|
||||
elif temperature_f >= self.config.temp_threshold_med: # ≥90°F
|
||||
return 2.0
|
||||
elif temperature_f >= self.config.temp_threshold_low: # ≥80°F
|
||||
return 1.0
|
||||
else:
|
||||
return 0.0
|
||||
|
||||
def calculate_uv_score(self, uv_index: Optional[float]) -> float:
|
||||
"""Calculate risk score based on UV index."""
|
||||
if uv_index is None:
|
||||
return 0.0
|
||||
|
||||
if uv_index >= self.config.uv_threshold_high: # ≥10
|
||||
return 3.0
|
||||
elif uv_index >= self.config.uv_threshold_med: # ≥8
|
||||
return 2.0
|
||||
elif uv_index >= self.config.uv_threshold_low: # ≥6
|
||||
return 1.0
|
||||
else:
|
||||
return 0.0
|
||||
|
||||
def calculate_condition_score(self, condition: str) -> float:
|
||||
"""Calculate risk score based on weather condition."""
|
||||
condition_lower = condition.lower()
|
||||
if 'sunny' in condition_lower or 'clear' in condition_lower:
|
||||
return 1.0
|
||||
else:
|
||||
return 0.0
|
||||
|
||||
def calculate_accumulated_heat_score(self, weather_hours: List[WeatherHour],
|
||||
current_index: int) -> float:
|
||||
"""Calculate risk score based on accumulated heat (rolling average)."""
|
||||
if current_index < 1:
|
||||
return 0.0
|
||||
|
||||
# Get the last N hours (including current)
|
||||
window_size = min(self.config.rolling_window_hours, current_index + 1)
|
||||
start_index = current_index - window_size + 1
|
||||
|
||||
window_hours = weather_hours[start_index:current_index + 1]
|
||||
|
||||
# Calculate average temperature and UV
|
||||
temp_sum = sum(hour.temperature_f for hour in window_hours)
|
||||
avg_temp = temp_sum / len(window_hours)
|
||||
|
||||
# Calculate average UV (handling None values)
|
||||
uv_values = [hour.uv_index for hour in window_hours if hour.uv_index is not None]
|
||||
avg_uv = sum(uv_values) / len(uv_values) if uv_values else 0
|
||||
|
||||
# Score based on accumulated heat criteria
|
||||
score = 0.0
|
||||
if avg_temp > 85.0:
|
||||
score += 1.0
|
||||
if avg_uv >= 6.0:
|
||||
score += 1.0
|
||||
|
||||
return min(score, 1.0) # Cap at 1.0 as per specification
|
||||
|
||||
def calculate_surface_recovery_score(self, weather_hours: List[WeatherHour],
|
||||
current_index: int) -> float:
|
||||
"""Calculate surface recovery score (time since last peak temperature)."""
|
||||
if current_index < 2:
|
||||
return 0.0
|
||||
|
||||
# Look back to find the last time temperature was ≥90°F
|
||||
hours_since_peak = 0
|
||||
for i in range(current_index - 1, -1, -1):
|
||||
hours_since_peak += 1
|
||||
if weather_hours[i].temperature_f >= 90.0:
|
||||
break
|
||||
else:
|
||||
# No peak found in available data
|
||||
hours_since_peak = current_index + 1
|
||||
|
||||
# Give recovery credit if it's been >2 hours since last 90°F reading
|
||||
if hours_since_peak > self.config.surface_recovery_hours:
|
||||
return -1.0
|
||||
else:
|
||||
return 0.0
|
||||
|
||||
def interpolate_missing_uv(self, weather_hours: List[WeatherHour]) -> List[WeatherHour]:
|
||||
"""Interpolate missing UV values using nearby hours."""
|
||||
if not weather_hours:
|
||||
return weather_hours
|
||||
|
||||
# Create a copy to avoid modifying the original
|
||||
processed_hours = [WeatherHour(
|
||||
datetime=hour.datetime,
|
||||
temperature_f=hour.temperature_f,
|
||||
uv_index=hour.uv_index,
|
||||
condition=hour.condition,
|
||||
is_forecast=hour.is_forecast
|
||||
) for hour in weather_hours]
|
||||
|
||||
# Find UV values that need interpolation
|
||||
for i, hour in enumerate(processed_hours):
|
||||
if hour.uv_index is None:
|
||||
# Look for nearest non-None values
|
||||
left_uv = None
|
||||
right_uv = None
|
||||
|
||||
# Search left
|
||||
for j in range(i - 1, -1, -1):
|
||||
if processed_hours[j].uv_index is not None:
|
||||
left_uv = processed_hours[j].uv_index
|
||||
break
|
||||
|
||||
# Search right
|
||||
for j in range(i + 1, len(processed_hours)):
|
||||
if processed_hours[j].uv_index is not None:
|
||||
right_uv = processed_hours[j].uv_index
|
||||
break
|
||||
|
||||
# Interpolate or use fallback
|
||||
if left_uv is not None and right_uv is not None:
|
||||
processed_hours[i].uv_index = (left_uv + right_uv) / 2
|
||||
elif left_uv is not None:
|
||||
processed_hours[i].uv_index = left_uv
|
||||
elif right_uv is not None:
|
||||
processed_hours[i].uv_index = right_uv
|
||||
else:
|
||||
# Use temperature-based fallback (rough approximation)
|
||||
temp_f = hour.temperature_f
|
||||
if temp_f >= 95:
|
||||
processed_hours[i].uv_index = 8.0
|
||||
elif temp_f >= 85:
|
||||
processed_hours[i].uv_index = 6.0
|
||||
elif temp_f >= 75:
|
||||
processed_hours[i].uv_index = 4.0
|
||||
else:
|
||||
processed_hours[i].uv_index = 2.0
|
||||
|
||||
return processed_hours
|
||||
|
||||
def detect_rapid_heat_swings(self, weather_hours: List[WeatherHour]) -> List[int]:
|
||||
"""Detect hours with rapid temperature changes."""
|
||||
rapid_swing_indices = []
|
||||
|
||||
for i in range(1, len(weather_hours)):
|
||||
temp_diff = abs(weather_hours[i].temperature_f - weather_hours[i-1].temperature_f)
|
||||
if temp_diff >= 15.0: # 15°F+ change in one hour
|
||||
rapid_swing_indices.append(i)
|
||||
logger.warning(f"Rapid temperature swing detected at {weather_hours[i].datetime}: "
|
||||
f"{temp_diff:.1f}°F change")
|
||||
|
||||
return rapid_swing_indices
|
||||
|
||||
def calculate_risk_scores(self, weather_hours: List[WeatherHour]) -> List[RiskScore]:
|
||||
"""Calculate risk scores for all weather hours."""
|
||||
if not weather_hours:
|
||||
return []
|
||||
|
||||
# Preprocess data
|
||||
processed_hours = self.interpolate_missing_uv(weather_hours)
|
||||
rapid_swings = self.detect_rapid_heat_swings(processed_hours)
|
||||
|
||||
risk_scores = []
|
||||
|
||||
for i, hour in enumerate(processed_hours):
|
||||
# Calculate individual component scores
|
||||
temp_score = self.calculate_temperature_score(hour.temperature_f)
|
||||
uv_score = self.calculate_uv_score(hour.uv_index)
|
||||
condition_score = self.calculate_condition_score(hour.condition)
|
||||
accumulated_score = self.calculate_accumulated_heat_score(processed_hours, i)
|
||||
recovery_score = self.calculate_surface_recovery_score(processed_hours, i)
|
||||
|
||||
# Apply rapid swing bonus
|
||||
rapid_swing_bonus = 0.5 if i in rapid_swings else 0.0
|
||||
|
||||
# Calculate total score
|
||||
total_score = (temp_score + uv_score + condition_score +
|
||||
accumulated_score + recovery_score + rapid_swing_bonus)
|
||||
|
||||
# Ensure score is within bounds
|
||||
total_score = max(0.0, min(10.0, total_score))
|
||||
|
||||
# Determine if shoes are recommended
|
||||
recommend_shoes = total_score >= self.config.risk_threshold_shoes
|
||||
|
||||
risk_scores.append(RiskScore(
|
||||
datetime=hour.datetime,
|
||||
temperature_score=temp_score,
|
||||
uv_score=uv_score,
|
||||
condition_score=condition_score,
|
||||
accumulated_heat_score=accumulated_score,
|
||||
surface_recovery_score=recovery_score,
|
||||
total_score=total_score,
|
||||
recommend_shoes=recommend_shoes
|
||||
))
|
||||
|
||||
return risk_scores
|
||||
|
||||
def identify_continuous_risk_blocks(self, risk_scores: List[RiskScore]) -> List[Tuple[datetime, datetime]]:
|
||||
"""Identify continuous time blocks where shoes are recommended."""
|
||||
if not risk_scores:
|
||||
return []
|
||||
|
||||
blocks = []
|
||||
current_block_start = None
|
||||
|
||||
for score in risk_scores:
|
||||
if score.recommend_shoes:
|
||||
if current_block_start is None:
|
||||
current_block_start = score.datetime
|
||||
else:
|
||||
if current_block_start is not None:
|
||||
blocks.append((current_block_start, score.datetime))
|
||||
current_block_start = None
|
||||
|
||||
# Handle case where risk period extends to the end
|
||||
if current_block_start is not None:
|
||||
blocks.append((current_block_start, risk_scores[-1].datetime))
|
||||
|
||||
return blocks
|
||||
|
||||
    def generate_recommendations(self, risk_scores: List[RiskScore]) -> dict:
        """Generate comprehensive recommendations based on risk scores."""
        if not risk_scores:
            return {"error": "No risk data available"}

        # Calculate statistics
        total_hours = len(risk_scores)
        high_risk_hours = sum(1 for score in risk_scores if score.recommend_shoes)
        max_score = max(score.total_score for score in risk_scores)
        avg_score = sum(score.total_score for score in risk_scores) / total_hours

        # Find peak risk time
        peak_score = max(risk_scores, key=lambda x: x.total_score)

        # Identify continuous risk blocks
        risk_blocks = self.identify_continuous_risk_blocks(risk_scores)

        recommendations = {
            "summary": {
                "total_hours_analyzed": total_hours,
                "high_risk_hours": high_risk_hours,
                "max_risk_score": round(max_score, 1),
                "average_risk_score": round(avg_score, 1),
                "peak_risk_time": peak_score.datetime.strftime("%H:%M"),
                "continuous_risk_blocks": len(risk_blocks)
            },
            "risk_periods": [
                {
                    "start": start.strftime("%H:%M"),
                    "end": end.strftime("%H:%M"),
                    "duration_hours": round((end - start).total_seconds() / 3600, 1)
                }
                for start, end in risk_blocks
            ],
            "recommendations": []
        }

        # Generate specific recommendations
        if high_risk_hours == 0:
            recommendations["recommendations"].append(
                "🐾 Great news! No protective footwear needed today - paws should be safe on all surfaces."
            )
        else:
            recommendations["recommendations"].append(
                f"⚠️ Protective dog shoes recommended for {high_risk_hours} hours today."
            )

        if risk_blocks:
            recommendations["recommendations"].append(
                "🕐 Avoid walks during the identified high-risk time periods, or ensure your dog wears protective booties."
            )

        if max_score >= 8:
            recommendations["recommendations"].append(
                "🔥 EXTREME RISK: Surface temperatures may cause immediate paw burns. Keep walks very short and on grass/shaded areas only."
            )
        elif max_score >= 7:
            recommendations["recommendations"].append(
                "🌡️ HIGH RISK: Hot surfaces likely. Test pavement with your hand - if too hot for 5 seconds, it's too hot for paws."
            )

        return recommendations


def create_risk_calculator(config: Optional[RiskConfig] = None) -> RiskCalculator:
    """Create a risk calculator with optional custom configuration."""
    return RiskCalculator(config)

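For context, a minimal usage sketch of the calculator above. It assumes the `WeatherHour` fields used in `weather_api.py` below and that `create_risk_calculator()` falls back to a default `RiskConfig` when no config is supplied; the sample hours are invented for illustration and are not part of the module.

```python
# Illustrative sketch only - sample data is made up for demonstration.
from datetime import datetime

from models import WeatherHour
from risk_calculator import create_risk_calculator

sample_hours = [
    WeatherHour(datetime=datetime(2024, 7, 1, 12), temperature_f=98.0,
                uv_index=9.0, condition="Sunny", is_forecast=False),
    WeatherHour(datetime=datetime(2024, 7, 1, 13), temperature_f=102.0,
                uv_index=10.0, condition="Sunny", is_forecast=True),
]

calculator = create_risk_calculator()
scores = calculator.calculate_risk_scores(sample_hours)
report = calculator.generate_recommendations(scores)

print(report["summary"])
for advice in report["recommendations"]:
    print(advice)
```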
210
test_setup.py
Normal file
@ -0,0 +1,210 @@
"""Test script to verify setup and configuration."""

import sys
import os
from datetime import datetime


def test_imports():
    """Test that all required modules can be imported."""
    print("🔍 Testing imports...")

    try:
        import requests
        print("✅ requests")
    except ImportError:
        print("❌ requests - run: pip install requests")
        return False

    try:
        import matplotlib
        print("✅ matplotlib")
    except ImportError:
        print("❌ matplotlib - run: pip install matplotlib")
        return False

    try:
        import pandas
        print("✅ pandas")
    except ImportError:
        print("❌ pandas - run: pip install pandas")
        return False

    try:
        import numpy
        print("✅ numpy")
    except ImportError:
        print("❌ numpy - run: pip install numpy")
        return False

    try:
        from dotenv import load_dotenv
        print("✅ python-dotenv")
    except ImportError:
        print("❌ python-dotenv - run: pip install python-dotenv")
        return False

    return True


def test_local_modules():
    """Test that local modules can be imported."""
    print("\n🔍 Testing local modules...")

    try:
        from config import get_config
        print("✅ config.py")
    except ImportError as e:
        print(f"❌ config.py - {e}")
        return False

    try:
        from models import WeatherHour, RiskScore, DatabaseManager
        print("✅ models.py")
    except ImportError as e:
        print(f"❌ models.py - {e}")
        return False

    try:
        from weather_api import WeatherAPIClient
        print("✅ weather_api.py")
    except ImportError as e:
        print(f"❌ weather_api.py - {e}")
        return False

    try:
        from risk_calculator import RiskCalculator
        print("✅ risk_calculator.py")
    except ImportError as e:
        print(f"❌ risk_calculator.py - {e}")
        return False

    try:
        from plotting import RiskPlotter
        print("✅ plotting.py")
    except ImportError as e:
        print(f"❌ plotting.py - {e}")
        return False

    return True


def test_configuration():
    """Test configuration loading."""
    print("\n🔍 Testing configuration...")

    # Check for .env file
    if not os.path.exists('.env'):
        print("⚠️ No .env file found")
        print(" Create one from env_template.txt:")
        print(" cp env_template.txt .env")
        print(" Then edit .env with your WeatherAPI key")
        return False

    try:
        from config import get_config
        config = get_config()

        if not config.weather_api_key or config.weather_api_key == "your_weatherapi_key_here":
            print("❌ WeatherAPI key not set")
            print(" Edit your .env file and add your API key:")
            print(" WEATHER_API_KEY=your_actual_key_here")
            return False
        else:
            print("✅ WeatherAPI key configured")

        print(f"✅ Default location: {config.default_location}")
        print(f"✅ Database path: {config.database_path}")
        print(f"✅ Risk threshold: {config.risk_config.risk_threshold_shoes}")

        return True

    except Exception as e:
        print(f"❌ Configuration error: {e}")
        return False


def test_database():
    """Test database initialization."""
    print("\n🔍 Testing database...")

    try:
        from models import DatabaseManager
        db = DatabaseManager("test_db.db")
        print("✅ Database initialization successful")

        # Clean up test database
        os.remove("test_db.db")
        print("✅ Database cleanup successful")

        return True

    except Exception as e:
        print(f"❌ Database error: {e}")
        return False


def test_weather_api():
    """Test weather API connection."""
    print("\n🔍 Testing WeatherAPI connection...")

    try:
        from weather_api import create_weather_client
        client = create_weather_client()

        # Test with multiple location formats
        test_locations = ["London", "10001"]  # City and zip code

        for location in test_locations:
            if client.validate_location(location):
                print(f"✅ WeatherAPI connection successful (tested with {location})")
                return True

        print("❌ WeatherAPI validation failed for all test locations")
        return False

    except Exception as e:
        print(f"❌ WeatherAPI error: {e}")
        print(" Check your API key and internet connection")
        return False


def main():
    """Run all tests."""
    print("🐾 Paw Burn Risk Assessment - Setup Test")
    print("=" * 50)

    tests = [
        ("Import Dependencies", test_imports),
        ("Local Modules", test_local_modules),
        ("Configuration", test_configuration),
        ("Database", test_database),
        ("WeatherAPI", test_weather_api),
    ]

    passed = 0
    total = len(tests)

    for test_name, test_func in tests:
        print(f"\n📋 {test_name}")
        print("-" * 30)

        if test_func():
            passed += 1
            print(f"✅ {test_name} PASSED")
        else:
            print(f"❌ {test_name} FAILED")

    print("\n" + "=" * 50)
    print(f"📊 RESULTS: {passed}/{total} tests passed")

    if passed == total:
        print("🎉 All tests passed! Your setup is ready.")
        print("\nTry running:")
        print(" python main.py --config-check")
        print(" python main.py --location 'Your City'")
    else:
        print("⚠️ Some tests failed. Please fix the issues above.")
        print("\nFor help, check:")
        print(" - README.md for setup instructions")
        print(" - env_template.txt for configuration")

    return passed == total


if __name__ == "__main__":
    success = main()
    sys.exit(0 if success else 1)

172
weather_api.py
Normal file
@ -0,0 +1,172 @@
"""Weather API integration for fetching weather data."""

import requests
import logging
from datetime import datetime, timedelta
from typing import List, Optional
from models import WeatherHour
from config import get_config

logger = logging.getLogger(__name__)


class WeatherAPIClient:
    """Client for interacting with WeatherAPI.com"""

    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "http://api.weatherapi.com/v1"
        self.session = requests.Session()

    def _make_request(self, endpoint: str, params: dict) -> dict:
        """Make a request to the WeatherAPI."""
        params['key'] = self.api_key
        url = f"{self.base_url}/{endpoint}"

        try:
            response = self.session.get(url, params=params)
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            logger.error(f"API request failed: {e}")
            raise
        except Exception as e:
            logger.error(f"Error processing API response: {e}")
            raise

    def get_current_weather(self, location: str) -> Optional[WeatherHour]:
        """Get current weather data."""
        try:
            data = self._make_request('current.json', {'q': location, 'aqi': 'no'})

            current = data['current']
            current_time = datetime.fromtimestamp(current['last_updated_epoch'])

            return WeatherHour(
                datetime=current_time,
                temperature_f=current['temp_f'],
                uv_index=current.get('uv'),
                condition=current['condition']['text'],
                is_forecast=False
            )
        except Exception as e:
            logger.error(f"Error fetching current weather: {e}")
            return None

    def get_historical_weather(self, location: str, date: datetime) -> List[WeatherHour]:
        """Get historical weather data for a specific date."""
        try:
            date_str = date.strftime('%Y-%m-%d')
            data = self._make_request('history.json', {
                'q': location,
                'dt': date_str
            })

            weather_hours = []
            for hour_data in data['forecast']['forecastday'][0]['hour']:
                hour_time = datetime.fromtimestamp(hour_data['time_epoch'])

                weather_hours.append(WeatherHour(
                    datetime=hour_time,
                    temperature_f=hour_data['temp_f'],
                    uv_index=hour_data.get('uv'),
                    condition=hour_data['condition']['text'],
                    is_forecast=False
                ))

            return weather_hours
        except Exception as e:
            logger.error(f"Error fetching historical weather: {e}")
            return []

    def get_forecast_weather(self, location: str, days: int = 1) -> List[WeatherHour]:
        """Get forecast weather data."""
        try:
            data = self._make_request('forecast.json', {
                'q': location,
                'days': days,
                'aqi': 'no',
                'alerts': 'no'
            })

            weather_hours = []
            for day_data in data['forecast']['forecastday']:
                for hour_data in day_data['hour']:
                    hour_time = datetime.fromtimestamp(hour_data['time_epoch'])

                    # Only include future hours
                    if hour_time > datetime.now():
                        weather_hours.append(WeatherHour(
                            datetime=hour_time,
                            temperature_f=hour_data['temp_f'],
                            uv_index=hour_data.get('uv'),
                            condition=hour_data['condition']['text'],
                            is_forecast=True
                        ))

            return weather_hours
        except Exception as e:
            logger.error(f"Error fetching forecast weather: {e}")
            return []

    def get_full_day_weather(self, location: str, target_date: Optional[datetime] = None) -> List[WeatherHour]:
        """Get complete weather data for a day (historical + current + forecast)."""
        if target_date is None:
            target_date = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)

        all_weather = []
        now = datetime.now()

        # Get historical data for the day (if target_date is today and some hours have passed)
        if target_date.date() == now.date() and now.hour > 0:
            historical = self.get_historical_weather(location, target_date)
            # Filter to only include hours that have already passed
            for hour in historical:
                if hour.datetime < now:
                    all_weather.append(hour)

        # Get current weather if target_date is today
        if target_date.date() == now.date():
            current = self.get_current_weather(location)
            if current:
                all_weather.append(current)

        # Get forecast data
        if target_date.date() >= now.date():
            forecast = self.get_forecast_weather(location, days=1)
            # Filter forecast to only include hours for the target date
            for hour in forecast:
                if hour.datetime.date() == target_date.date():
                    all_weather.append(hour)

        # Sort by datetime and remove duplicates
        all_weather.sort(key=lambda x: x.datetime)

        # Remove duplicates (keep the most recent data for each hour)
        unique_weather = {}
        for hour in all_weather:
            hour_key = hour.datetime.replace(minute=0, second=0, microsecond=0)
            if hour_key not in unique_weather or not hour.is_forecast:
                unique_weather[hour_key] = hour

        return list(unique_weather.values())

    def validate_location(self, location: str) -> bool:
        """Validate if a location is valid for the API.

        Supports multiple location formats:
        - City names: "New York"
        - City, State: "Phoenix, AZ"
        - City, Country: "London, UK"
        - Zip codes: "10001" or "90210"
        - Coordinates: "40.7128,-74.0060"
        """
        try:
            self._make_request('current.json', {'q': location, 'aqi': 'no'})
            return True
        except Exception:
            # Any failed lookup (bad location, network error, etc.) means the location is unusable.
            return False


def create_weather_client() -> WeatherAPIClient:
    """Create a weather client using the configured API key."""
    config = get_config()
    return WeatherAPIClient(config.weather_api_key)
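As a rough end-to-end sketch, the weather client and risk calculator above can be combined like this. It assumes a valid `WEATHER_API_KEY` is configured in `.env`; the location string is only an example.

```python
# Illustrative sketch only - ties the weather client to the risk calculator.
from weather_api import create_weather_client
from risk_calculator import create_risk_calculator

client = create_weather_client()
location = "Phoenix, AZ"  # any format accepted by validate_location()

if client.validate_location(location):
    hours = client.get_full_day_weather(location)
    calculator = create_risk_calculator()
    for score in calculator.calculate_risk_scores(hours):
        flag = "SHOES RECOMMENDED" if score.recommend_shoes else "ok"
        print(f"{score.datetime:%H:%M}  risk {score.total_score:4.1f}  {flag}")
else:
    print("Location not recognized by WeatherAPI")
```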