Awesome Cursor Rules Collection

TypeScript
# Codebase Overview

This codebase is a web application built with TypeScript, React, and Next.js, utilizing Shadcn UI, Radix, and Tailwind CSS for styling. It features a landing page, authentication flows, and a dashboard for logged-in users, integrating Supabase for backend services.

# Stack and Key Technologies

- Frontend: React with Next.js (App Router)
- Language: TypeScript
- UI Components: Shadcn UI, NextUI, MagicUI, Radix
- Styling: Tailwind CSS
- Backend Services: Supabase (authentication, database, storage)
- Icons: Iconify

# Code Style and Structure

- Functional and declarative programming patterns
- Modular file structure: exported component, subcomponents, helpers, static content, types
- Descriptive variable names (e.g., isLoading, hasError)
- Lowercase with dashes for directories (e.g., components/auth-wizard)
- Named exports for components

# TypeScript Usage

- Interfaces preferred over types
- Avoid enums; use maps instead
- Functional components with TypeScript interfaces

# Syntax and Formatting

- Use "function" keyword for pure functions
- Concise conditional syntax
- Declarative JSX

# Error Handling and Validation

- Early returns and guard clauses
- Zod for form validation
- Error boundaries for unexpected errors
- Model expected errors as return values in Server Actions

# UI and Styling

- Shadcn UI, NextUI, MagicUI, Radix, and Tailwind Aria for components
- Responsive design with Tailwind CSS (mobile-first approach)

# Performance Optimization

- Minimize 'use client', 'useEffect', and 'setState'
- Favor React Server Components (RSC)
- Dynamic loading for non-critical components
- Image optimization: WebP format, size data, lazy loading

# Key Conventions

- 'nuqs' for URL search parameter state management
- Optimize Web Vitals (LCP, CLS, FID)
- Limited 'use client' usage:
  - Favor server components and Next.js SSR
  - Use only for Web API access in small components
  - Avoid for data fetching or state management

# Application Structure

## Authentication

- Login: Email/password and GitHub OAuth (frontend/app/(landing-page)/login/action.ts)
- Signup: Email and password (frontend/app/(landing-page)/login/action.ts)
- Logout: frontend/app/(landing-page)/logout/action.ts
- Email Confirmation: frontend/app/auth/callback/confirm/route.ts

## User Interface

- Landing Page: SubmitButton, LoginPage, LogoutModal components
- Dashboard: Personalized content and navigation sidebar
- Error Handling: Generic error component with retry mechanism

## Navigation and Layout

- Navbar: Responsive design for landing and public pages
- Sidebar: Collapsible for dashboard navigation

Follow Next.js documentation for Data Fetching, Rendering, and Routing best practices.
css
dockerfile
golang
javascript
next.js
oauth
radix-ui
react
humblFINANCE/humblFINANCE-frontend

Used in 1 repository

Python
{
  "name": "CrewAI Project Rules",
  "description": "Standard conventions for CrewAI v0.95.0 projects with examples",
  "version": "1.0.0",
  "metadata": {
    "project_type": "python",
    "framework": "crewai",
    "maintainer": "CTK Advisors"
  },
  "rules": [
    {
      "name": "Project Structure",
      "description": "Standard CrewAI project structure",
      "pattern": "**/*",
      "conventions": [
        "Use src/<project_name> for main project code",
        "Keep configuration files in config/ directory",
        "Store custom tools in tools/ directory",
        "Place templates in templates/ directory",
        "Use pyproject.toml and poetry for dependency management",
        "Include .env.example for environment variables",
        "Example structure:",
        "```",
        "src/",
        "  └── your_project/",
        "      ├── config/",
        "      │   ├── agents.yaml",
        "      │   ├── tasks.yaml",
        "      │   └── templates.json",
        "      ├── tools/",
        "      │   ├── browser_tools.py",
        "      │   ├── file_tools.py",
        "      │   └── search_tools.py",
        "      ├── templates/",
        "      ├── crew.py",
        "      └── main.py",
        "```"
      ]
    },
    {
      "name": "Crew Definitions",
      "description": "Guidelines for defining CrewAI crews",
      "pattern": "**/crew.py",
      "conventions": [
        "Use @CrewBase decorator for crew classes",
        "Define agents_config and tasks_config as class variables",
        "Use @agent decorator for agent definitions",
        "Use @task decorator for task definitions",
        "Use @crew decorator for crew assembly",
        "Example:",
        "```python",
        "@CrewBase",
        "class ExampleCrew:",
        "    agents_config = 'config/agents.yaml'",
        "    tasks_config = 'config/tasks.yaml'",
        "",
        "    @agent",
        "    def expert_agent(self) -> Agent:",
        "        return Agent(",
        "            config=self.agents_config['expert'],",
        "            allow_delegation=False,",
        "            tools=[SearchTools.search_internet],",
        "            verbose=True",
        "        )",
        "",
        "    @task",
        "    def analyze_task(self) -> Task:",
        "        return Task(",
        "            config=self.tasks_config['analyze'],",
        "            agent=self.expert_agent()",
        "        )",
        "",
        "    @crew",
        "    def crew(self) -> Crew:",
        "        return Crew(",
        "            agents=self.agents,",
        "            tasks=self.tasks,",
        "            process=Process.sequential,",
        "            verbose=True",
        "        )",
        "```"
      ]
    },
    {
      "name": "Tool Definitions",
      "description": "Conventions for CrewAI tools with examples from the project",
      "pattern": "**/tools/*.py",
      "conventions": [
        "Use @tool decorator with clear description",
        "Implement standard tool interface",
        "Include proper error handling",
        "Group tools by functionality",
        "Example browser tool:",
        "```python",
        "class BrowserTools:",
        "    @tool('Scrape website content')",
        "    def scrape_and_summarize_website(website):",
        "        '''Useful to scrape and summarize website content'''",
        "        # Implementation",
        "```",
        "Example file tool:",
        "```python",
        "class FileTools:",
        "    @tool('Write File with content')",
        "    def write_file(data):",
        "        '''Expects pipe-separated path and content: ./path|content'''",
        "        try:",
        "            path, content = data.split('|')",
        "            # Implementation",
        "        except Exception:",
        "            return 'Error with input format'",
        "```"
      ]
    },
    {
      "name": "Configuration Management",
      "description": "Guidelines for configuration files",
      "pattern": "**/config/*.{yaml,json}",
      "conventions": [
        "Use YAML for agent and task configurations",
        "Use JSON for templates and static data",
        "Example agents.yaml structure:",
        "```yaml",
        "senior_react_engineer:",
        "  role: Senior React Engineer",
        "  goal: Build high-quality React components",
        "  backstory: Expert in React development",
        "senior_content_editor:",
        "  role: Senior Content Editor",
        "  goal: Create engaging content",
        "  backstory: Experienced in content creation",
        "```",
        "Example templates.json structure:",
        "```json",
        "{",
        "  \"templates\": {",
        "    \"basic\": {",
        "      \"description\": \"Basic landing page\",",
        "      \"components\": [\"Hero\", \"Features\"]",
        "    }",
        "  }",
        "}",
        "```"
      ]
    },
    {
      "name": "Environment Variables",
      "description": "Environment variable structure",
      "pattern": "**/.env*",
      "conventions": [
        "Required variables from project:",
        "```",
        "BROWSERLESS_API_KEY=your_key_here",
        "SERPER_API_KEY=your_key_here",
        "OPENAI_API_KEY=your_key_here",
        "```",
        "Include descriptions in .env.example:",
        "```",
        "# Browser automation API key",
        "BROWSERLESS_API_KEY=",
        "",
        "# Search API key",
        "SERPER_API_KEY=",
        "",
        "# OpenAI API key for agent interactions",
        "OPENAI_API_KEY=",
        "```"
      ]
    },
    {
      "name": "Tool Integration",
      "description": "Patterns for tool integration with agents",
      "pattern": "**/crew.py",
      "conventions": [
        "Group related tools in agent definitions",
        "Example from project:",
        "```python",
        "@agent",
        "def senior_react_engineer_agent(self) -> Agent:",
        "    return Agent(",
        "        config=self.agents_config['senior_react_engineer'],",
        "        allow_delegation=False,",
        "        tools=[",
        "            SearchTools.search_internet,",
        "            BrowserTools.scrape_and_summarize_website,",
        "            TemplateTools.learn_landing_page_options,",
        "            FileTools.write_file",
        "        ] + self.toolkit.get_tools(),",
        "        verbose=True",
        "    )",
        "```"
      ]
    },
    {
      "name": "Testing Conventions",
      "description": "Standards for test files and testing practices",
      "pattern": "**/tests/**/*.py",
      "conventions": [
        "Use pytest for testing",
        "Name test files with test_ prefix",
        "Group tests by functionality",
        "Use fixtures for common setup",
        "Example test structure:",
        "```python",
        "import pytest",
        "from src.dev_crew.tools import SearchTools",
        "",
        "@pytest.fixture",
        "def search_tool():",
        "    return SearchTools()",
        "",
        "def test_search_internet(search_tool):",
        "    '''Test internet search functionality'''",
        "    result = search_tool.search_internet('test query')",
        "    assert result is not None",
        "    assert isinstance(result, dict)",
        "```"
      ]
    },
    {
      "name": "Type Checking",
      "description": "Type annotation and checking standards",
      "pattern": "**/*.py",
      "conventions": [
        "Use type hints for all function parameters and return values",
        "Enable strict type checking with mypy",
        "Use Optional[] for nullable types",
        "Example type usage:",
        "```python",
        "from typing import Optional, List, Dict",
        "",
        "def process_search_results(",
        "    query: str,",
        "    results: List[Dict[str, str]],",
        "    max_results: Optional[int] = None",
        ") -> List[str]:",
        "    '''Process search results with proper typing'''",
        "    filtered = results[:max_results] if max_results else results",
        "    return [result['title'] for result in filtered]",
        "```"
      ]
    }
  ]
}
dockerfile
golang
less
openai
python
react

First seen in:

chrisknu/dev_crew

Used in 1 repository

Python
# Handball Statistics Project Rules

Always start answers with: "WUHUUU"

## 1. Project Overview
```markdown
# Project Details
- Project Name: Håndbold Statistik
- Description: Python-based system for processing and analyzing handball matches
- Primary Technologies:
  - Backend: Python
  - Database: SQLite
  - Web Interface: Flask
  - Data Processing: PyPDF2, Papaparse
  - AI Integration: DeepSeek API
- Primary Functionality: PDF parsing, match statistics analysis, web visualization
```

## 2. Coding Standards
```markdown
# Coding Standards
## Naming Conventions
- Python files: snake_case (e.g., process_output.py, scrape_matches.py)
- Functions: snake_case (e.g., convert_pdf_to_text, get_match_data)
- Classes: avoided (a functional approach is preferred)
- Constants: UPPERCASE_SNAKE_CASE (e.g., MAX_FILE_SIZE)

## File Structure Guidelines
C:.
├───CSV
├───Databases
├───Downloads
├───Error_Appeared
├───logs
├───Not_Processed
├───Processed
├───website
│   ├───static
│   └───templates
└───__pycache__

- Main folders:
  - /: Primary processing scripts
  - /website: Web interface components
  - /Databases: SQLite databases
  - /logs: Log files
  - /Not_Processed: Unprocessed PDFs
  - /Processed: Processed PDFs
```

## 3. AI Interaction Guidelines
```markdown
# AI Interaction Protocols
## General Coding Instructions
- Prioritize readable and maintainable code
- Use type hints (typing module)
- Implement comprehensive error handling
- Always add print statements to log errors so it is clear where failures occur
- Document complex logic with docstrings
- Follow PEP 8 standards

## Prompt Engineering
- Describe specific data processing challenges
- Specify expected input/output formats
- Reference existing project code
- Focus on modular and reusable code
```

## 4. Security and Environment Configuration
```markdown
# Security Rules
## Environment Variable Handling
- Use a .env file for API keys
- Never store sensitive data in source code
- Add .env to .gitignore
- Use python-dotenv for loading

## API Interaction
- Implement retry mechanisms on API errors (see the sketch below)
- Log API calls and errors
- Add timeouts and error handling
- Protect against rate limiting
```
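
A minimal sketch of the retry pattern described above, assuming the DeepSeek API is reached over HTTP with `requests`; the endpoint, the environment variable name, and the retry counts are illustrative assumptions, not part of the project.

```python
import os
import time

import requests
from dotenv import load_dotenv

load_dotenv()  # read API keys from .env, never from source code
API_KEY = os.getenv("DEEPSEEK_API_KEY")  # hypothetical variable name


def call_api_with_retry(url: str, payload: dict, retries: int = 3, timeout: int = 30) -> dict:
    """Call an external API with a timeout, print-based logging, and simple backoff."""
    for attempt in range(1, retries + 1):
        try:
            response = requests.post(
                url,
                json=payload,
                headers={"Authorization": f"Bearer {API_KEY}"},
                timeout=timeout,
            )
            response.raise_for_status()
            return response.json()
        except requests.RequestException as error:
            print(f"API call failed (attempt {attempt}/{retries}): {error}")
            time.sleep(2 ** attempt)  # exponential backoff to stay under rate limits
    raise RuntimeError("API call failed after all retries")
```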

## 5. Performance Optimization
```markdown
# Performance Optimization
## Database Optimization
- Use indexes on frequently queried columns
- Batch inserts instead of single-row inserts (see the sketch below)
- Limit result size for large datasets
- Use SQLite connection pooling

## Data Processing
- Use generators for large datasets
- Implement lazy loading
- Limit memory usage
- Use Papaparse with optimized settings
```
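
A minimal sketch of the batch-insert and generator rules above, assuming a `matches` table with (team, goals) columns and a semicolon-separated input file; the table, columns, and file layout are illustrative assumptions.

```python
import sqlite3
from typing import Iterator, Tuple


def read_match_rows(path: str) -> Iterator[Tuple[str, int]]:
    """Yield (team, goals) rows lazily so large files never sit fully in memory."""
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            team, goals = line.rstrip("\n").split(";")
            yield team, int(goals)


def insert_matches(db_path: str, rows: Iterator[Tuple[str, int]]) -> None:
    """Insert all rows in one batch instead of one INSERT per row."""
    connection = sqlite3.connect(db_path)
    try:
        connection.executemany("INSERT INTO matches (team, goals) VALUES (?, ?)", rows)
        connection.commit()
    finally:
        connection.close()
```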

## 6. Testing and Quality Assurance
```markdown
# Testing and Quality Assurance
## Test Requirements
- Unit tests for critical functions
- Coverage above 70%
- Use pytest
- Test edge cases in data processing
- Validate PDF parsing

## Code Quality
- Use flake8 for static code checks
- Use type checking with mypy
- Avoid print statements in production
- Full error handling in all functions
```

## 7. Deployment and Continuous Integration
```markdown
# Deployment Configuration
## Continuous Integration
- Automated tests on every commit
- Verify dependencies
- Build and test across Python versions

## Deployment Strategy
- Document installation steps
- Use requirements.txt
- Support virtual environments
- Validate environment setup
```

## 8. AI Agent-Specific Instructions
```markdown
# AI Agent Instructions
## Multi-File Handling
- Maintain consistent project context
- Update related files when making changes
- Generate descriptive commit messages
- Respect the existing project structure

## Terminal Interactions
- Use pip for package management
- Prefer reproducible commands
- Log important actions
```

## 9. Project-Specific Guidelines
```markdown
# Specific Handball Statistics Rules
- Standardize team names consistently
- Validate data at every transformation
- Store raw data unchanged
- Log all data cleaning and standardization steps
- Support multiple data sources (PDF, web scraping)
```

## Customization and Extension
```markdown
# Customization Notes
- This document is dynamic
- Review quarterly
- Adapt to the project's evolution
- Incorporate team feedback
```
flask
html
python
sqlite
jonas0711/haandbold-data-projekt

Used in 1 repository

TypeScript
Project Overview:
Typyst is a modern, desktop-based text editor application built using Electron and React, focusing on providing advanced writing assistance and suggestions. The application features a clean, modular architecture with clear separation of concerns between the main process and renderer components.

Technical Stack:
- Frontend: React with TypeScript
- Desktop Runtime: Electron
- Editor Framework: TipTap (ProseMirror-based rich text editor)
- Build Tools: Vite
- Writing Assistant: Vale
- AI Integration: LLama
- State Management: React Hooks
- Styling: CSS Modules

Architecture:
1. /electron
   - Main Process Architecture:
     * index.ts: Main entry point
       - Window management and lifecycle
       - Custom frameless window with macOS integration
       - Development tools configuration
       - App event handling (window creation, quit, activate)
     
     * preload.ts: Electron-Renderer Bridge
       - Secure IPC communication setup
       - Exposes safe APIs to renderer process
       - Type-safe bridge interfaces
       - File system operations bridge
       - Vale integration bridge

   - RPC (Remote Procedure Call) System:
     * fileSystemRpc.ts: File operations handler
       - Markdown file processing
       - File system access
       - Content conversion utilities
     * llmRpc.ts: LLama model integration
       - AI-powered suggestions
       - Context-aware completions
     * Bidirectional communication using birpc

   - Services:
     * vale/: Writing assistance service
       - Style checking integration
       - Real-time linting
       - Custom rule management
     * autocomplete/: Text completion service
       - AI-powered suggestions
       - Context-aware completions
       - Prediction management

2. /src
   - Application Architecture:
     * index.tsx: Application Entry Point
       - React initialization
       - Root component mounting
       - IPC bridge setup
       - Global style injection
     
     * /app: Core Application Setup
       - App.tsx: Root component
       - Application-wide providers
       - Global state management
       - Layout structure

   - Feature-based Organization:
     * /features: Core Application Features
       - editor/: Main Editor Implementation
         * components/: React Components
           - Editor.tsx: Main editor wrapper
             * Theme provider integration
             * Global layout management
           - EditorContent.tsx: Core editor functionality
             * TipTap integration
             * Content management
             * Feature coordination
           - FileSelector.tsx: File handling
             * Markdown file processing
             * File input management
           - MenuBar.tsx: Toolbar and controls
           - ValeSidebar.tsx: Writing suggestions panel
           - ErrorOverlay.tsx: Error display
           - RawContentPreview.tsx: Debug view
         * hooks/: Custom React Hooks
           - useEditorCore.ts: Core editor state and operations
           - useValeState.ts: Writing suggestions management
           - useEditorShortcuts.ts: Keyboard interactions
           - useEditorSpellcheck.ts: Spellcheck integration
         * services/: Feature-specific Logic
           - fileSystemService.ts: File operations
           - valeService.ts: Vale integration
           - eventHandlers.ts: Editor event management
         * types/: Feature-specific Types
         * constants/: Feature Configuration

       - theme/: Theming System
         * themeContext.tsx: Theme provider
         * themes/: Theme definitions
         * hooks/: Theme utilities

   - Editor Extensions:
     * /extensions: TipTap Extensions
       - extensions.ts: Extension configuration
       - indent/: Indentation handling
       - predictions/: AI suggestions
       - Custom ProseMirror plugins

   - Shared Resources:
     * /types: Global TypeScript Definitions
       - Global type declarations
       - API interfaces
       - Shared type utilities
     
     * /services: Shared Business Logic
       - Authentication
       - State management
       - Shared utilities
     
     * /styles: Global Styling
       - index.css: Global styles
       - Editor.css: Editor-specific styles
       - MenuBar.css: Toolbar styles
       - Theme variables

Key Features:
1. Rich Text Editing
   - TipTap/ProseMirror foundation
   - Advanced text formatting
   - Code block highlighting
   - Custom extensions
   - Real-time content updates

2. Writing Assistance
   - Vale integration for style checking
   - Real-time writing suggestions
   - Customizable rules and style guides
   - Interactive suggestion sidebar
   - Warning/error management
   - Ignorable warnings system

3. File Management
   - Markdown file support
   - File selection interface
   - Content conversion utilities
   - Error handling

4. Theme Support
   - Theme system
   - Customizable styling
   - Dark/light mode support

5. Advanced Editor Features
   - Keyboard shortcuts
   - Spellchecking
   - Raw content preview
   - Error handling
   - Sidebar management

Core Components:
1. Editor Component
   - Modular architecture:
     * Editor.tsx: Theme and layout wrapper
     * EditorContent.tsx: Core functionality
     * FileSelector.tsx: File handling
   - Feature integration through hooks:
     * useEditorCore
     * useValeState
     * useEditorShortcuts
     * useEditorSpellcheck

2. MenuBar
   - Editing tools and controls
   - Feature toggles
   - Raw output toggle

3. ValeSidebar
   - Writing suggestions display
   - Warning/error management
   - Interactive feedback
   - Ignorable warnings system

Technical Features:
1. RPC System
   - birpc for Electron-React communication
   - Structured service architecture
   - Type-safe interfaces

2. AI Integration
   - LLama model integration
   - AI-powered writing assistance
   - Context-aware suggestions

3. Extension System
   - Modular editor extensions
   - Custom TipTap extensions
   - ProseMirror plugins

Development:
- Modern development tooling
- TypeScript for type safety
- Linting and formatting
- Automated setup scripts
- Cross-platform support
- Modular component architecture
- Hook-based state management
css
html
javascript
less
react
typescript
vite
Mats-Dodd/typyst-electron

Used in 1 repository

JavaScript
You are an expert in TypeScript, Node.js, Shadcn UI, Radix, Tailwind, and xbar plugins. You also have deep domain expertise in the renewable energy sector and an understanding of the grid's carbon intensity, and you are a great source of inspiration on how to help electricity customers reduce their carbon footprint.

Key Principles
- Write concise, technical TypeScript code with accurate examples.
- Use functional and declarative programming patterns; avoid classes.
- Prefer iteration and modularization over code duplication.
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
- Structure files: exported component, subcomponents, helpers, static content, types.

Naming Conventions
- Use lowercase with dashes for directories (e.g., components/auth-wizard).
- Favor named exports for components.

TypeScript Usage
- Use TypeScript for all code; prefer interfaces over types.
- Avoid enums; use maps instead.
- Use functional components with TypeScript interfaces.

Syntax and Formatting
- Use the "function" keyword for pure functions.
- Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.
- Use declarative JSX.

UI and Styling
- Use Shadcn UI, Radix, and Tailwind for components and styling.
- Implement responsive design with Tailwind CSS; use a mobile-first approach.

Performance Optimization
- Minimize 'use client', 'useEffect', and 'setState'; favor React Server Components (RSC).
- Wrap client components in Suspense with fallback.
- Use dynamic loading for non-critical components.
- Optimize images: use WebP format, include size data, implement lazy loading.

Key Conventions
- Use 'nuqs' for URL search parameter state management.
- Optimize Web Vitals (LCP, CLS, FID).
- Limit 'use client':
  - Use only for Web API access in small components.
  - Avoid for data fetching or state management.
javascript
radix-ui
react
shadcn/ui
tailwindcss
typescript
jasonm-jones/carbon-intensity-xbar

Used in 1 repository

Python
# Project Instructions

Use the project specification and guidelines as you build the app.

Write the complete code for every step. Do not get lazy.

Your goal is to completely finish whatever I ask for.

## Overview

This is a Python project that creates a CLI tool that can be installed via pip. The CLI tool should be called `chartbook`. Let's start with just one subcommand, `chartbook generate`. The full command that I'm interested in running is the following:

`chartbook generate ./_docs`

This will generate the HTML files in the `./_docs/_build/html` directory.
This command should run sphinx-build to create a Sphinx site as I have defined in the `dodo.py` file. It uses templates and other .md files in the `docs_src` directory, copies those files to `_docs`, and then runs sphinx-build. I would like all of this to happen in the single command I run. As you can see in the `dodo.py` file, I have two tasks that are related to what I want to happen. Here are the two tasks:

```python
# This is part of the dodo.py file that uses PyDoit.

def task_pipeline_publish():
    """Create Pipeline Docs for Use in Sphinx"""

    file_dep = [
        "./docs_src/_templates/chart_entry_bottom.md",
        "./docs_src/_templates/chart_entry_top.md",
        "./docs_src/_templates/dataframe_specs.md",
        "./docs_src/_templates/pipeline_specs.md",
        "./docs_src/charts.md",
        "./docs_src/conf.py",
        "./docs_src/dataframes.md",
        "./docs_src/pipelines.md",
        "./pipeline.json",
        "./README.md",
        "./src/pipeline_publish.py",
        *pipeline_doc_file_deps,
    ]

    targets = [
        # "./_docs/index.md", 
        *generated_md_targets,
    ]

    return {
        "actions": [
            "ipython ./src/pipeline_publish.py",
            ],
        "targets": targets,
        "file_dep": file_dep,
        "clean": True,
    }


sphinx_targets = [
    "./_docs/_build/html/index.html",
    "./_docs/_build/html/myst_markdown_demos.html",
]


def task_compile_sphinx_docs():
    """Compile Sphinx Docs"""

    file_dep = [
        "./docs_src/conf.py",
        "./docs_src/contributing.md",
        "./docs_src/index.md",
        "./docs_src/myst_markdown_demos.md",
        # Pipeline docs
        "./docs_src/_templates/chart_entry_bottom.md",
        "./docs_src/_templates/chart_entry_top.md",
        "./docs_src/_templates/dataframe_specs.md",
        "./docs_src/_templates/pipeline_specs.md",
        "./docs_src/charts.md",
        "./docs_src/dataframes.md",
        "./docs_src/pipelines.md",
        "./pipeline.json",
        "./README.md",
        "./src/pipeline_publish.py",
    ]
    return {
        "actions": [
            "sphinx-build -M html ./_docs/ ./_docs/_build",
        ],  # Use docs as build destination
        # "actions": ["sphinx-build -M html ./docs/ ./docs/_build"], # Previous standard organization
        "targets": sphinx_targets,
        "file_dep": file_dep,
        "task_dep": [
            "pipeline_publish",
        ],
        "clean": True,
    }
```

I want the CLI tool to do both of these tasks in a single shot.
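
A minimal sketch of how the `chartbook generate` entry point could chain the two tasks, assuming an argparse-based CLI that shells out to the same commands the `dodo.py` actions run; the module layout is an illustrative assumption, not the project's actual packaging.

```python
import argparse
import subprocess
from pathlib import Path


def generate(docs_dir: Path) -> None:
    """Run the pipeline publish step, then build the Sphinx site into <docs_dir>/_build/html."""
    # Mirrors the action of task_pipeline_publish
    subprocess.run(["ipython", "./src/pipeline_publish.py"], check=True)
    # Mirrors the action of task_compile_sphinx_docs
    subprocess.run(["sphinx-build", "-M", "html", str(docs_dir), str(docs_dir / "_build")], check=True)


def main() -> None:
    parser = argparse.ArgumentParser(prog="chartbook")
    subparsers = parser.add_subparsers(dest="command", required=True)
    generate_parser = subparsers.add_parser("generate", help="Generate the HTML documentation")
    generate_parser.add_argument("docs_dir", type=Path, help="Docs directory, e.g. ./_docs")
    args = parser.parse_args()
    if args.command == "generate":
        generate(args.docs_dir)


if __name__ == "__main__":
    main()
```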


## Tech Stack

- Core: Python
- Package Management: uv

## Rules

Follow these rules when working on this project.

### Environment Rules

- Keep all environment variables in `.env`
- Use `python-dotenv` for environment management
- Store API keys and sensitive data in `.env`
- Add data source URLs to `config/data_sources.yaml`
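
A minimal sketch of these environment rules, assuming PyYAML is available and that `config/data_sources.yaml` maps source names to URLs; the variable and key names are illustrative assumptions.

```python
import os
from pathlib import Path

import yaml  # PyYAML, assumed to be a project dependency
from dotenv import load_dotenv

load_dotenv()  # pull API keys and other secrets from .env
API_KEY = os.getenv("CHARTBOOK_API_KEY")  # hypothetical key name


def load_data_sources(path: str = "config/data_sources.yaml") -> dict:
    """Read the mapping of data source names to URLs from the YAML config."""
    return yaml.safe_load(Path(path).read_text(encoding="utf-8"))
```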

### Type Rules

- Use Python type hints consistently
- Prefer composition over inheritance


## UV Guide: Working on Projects

uv supports managing Python projects, which define their dependencies in a `pyproject.toml` file.

### Creating a new project

You can create a new Python project using the `uv init` command:

```console
$ uv init hello-world
$ cd hello-world
```

Alternatively, you can initialize a project in the working directory:

```console
$ mkdir hello-world
$ cd hello-world
$ uv init
```

uv will create the following files:

```text
.
├── .python-version
├── README.md
├── hello.py
└── pyproject.toml
```

The `hello.py` file contains a simple "Hello world" program. Try it out with `uv run`:

```console
$ uv run hello.py
Hello from hello-world!
```

### Project structure

A project consists of a few important parts that work together and allow uv to manage your project.
In addition to the files created by `uv init`, uv will create a virtual environment and `uv.lock`
file in the root of your project the first time you run a project command, i.e., `uv run`,
`uv sync`, or `uv lock`.

A complete listing would look like:

```text
.
├── .venv
│   ├── bin
│   ├── lib
│   └── pyvenv.cfg
├── .python-version
├── README.md
├── hello.py
├── pyproject.toml
└── uv.lock
```

#### `pyproject.toml`

The `pyproject.toml` contains metadata about your project:

```toml title="pyproject.toml"
[project]
name = "hello-world"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
dependencies = []
```

You'll use this file to specify dependencies, as well as details about the project such as its
description or license. You can edit this file manually, or use commands like `uv add` and
`uv remove` to manage your project from the terminal.

!!! tip

    See the official [`pyproject.toml` guide](https://packaging.python.org/en/latest/guides/writing-pyproject-toml/)
    for more details on getting started with the `pyproject.toml` format.

You'll also use this file to specify uv [configuration options](../configuration/files.md) in a
[`[tool.uv]`](../reference/settings.md) section.

#### `.python-version`

The `.python-version` file contains the project's default Python version. This file tells uv which
Python version to use when creating the project's virtual environment.

#### `.venv`

The `.venv` folder contains your project's virtual environment, a Python environment that is
isolated from the rest of your system. This is where uv will install your project's dependencies.

See the [project environment](../concepts/projects/layout.md#the-project-environment) documentation
for more details.

#### `uv.lock`

`uv.lock` is a cross-platform lockfile that contains exact information about your project's
dependencies. Unlike the `pyproject.toml` which is used to specify the broad requirements of your
project, the lockfile contains the exact resolved versions that are installed in the project
environment. This file should be checked into version control, allowing for consistent and
reproducible installations across machines.

`uv.lock` is a human-readable TOML file but is managed by uv and should not be edited manually.

See the [lockfile](../concepts/projects/layout.md#the-lockfile) documentation for more details.

### Managing dependencies

You can add dependencies to your `pyproject.toml` with the `uv add` command. This will also update
the lockfile and project environment:

```console
$ uv add requests
```

You can also specify version constraints or alternative sources:

```console
$ # Specify a version constraint
$ uv add 'requests==2.31.0'

$ # Add a git dependency
$ uv add git+https://github.com/psf/requests
```

To remove a package, you can use `uv remove`:

```console
$ uv remove requests
```

To upgrade a package, run `uv lock` with the `--upgrade-package` flag:

```console
$ uv lock --upgrade-package requests
```

The `--upgrade-package` flag will attempt to update the specified package to the latest compatible
version, while keeping the rest of the lockfile intact.

See the documentation on [managing dependencies](../concepts/projects/dependencies.md) for more
details.

### Running commands

`uv run` can be used to run arbitrary scripts or commands in your project environment.

Prior to every `uv run` invocation, uv will verify that the lockfile is up-to-date with the
`pyproject.toml`, and that the environment is up-to-date with the lockfile, keeping your project
in-sync without the need for manual intervention. `uv run` guarantees that your command is run in a
consistent, locked environment.

For example, to use `flask`:

```console
$ uv add flask
$ uv run -- flask run -p 3000
```

Or, to run a script:

```python title="example.py"
# Require a project dependency
import flask

print("hello world")
```

```console
$ uv run example.py
```

Alternatively, you can use `uv sync` to manually update the environment then activate it before
executing a command:

```console
$ uv sync
$ source .venv/bin/activate
$ flask run -p 3000
$ python example.py
```

!!! note

    The virtual environment must be active to run scripts and commands in the project without `uv run`. Virtual environment activation differs per shell and platform.

See the documentation on [running commands and scripts](../concepts/projects/run.md) in projects for
more details.

### Building distributions

`uv build` can be used to build source distributions and binary distributions (wheel) for your
project.

By default, `uv build` will build the project in the current directory, and place the built
artifacts in a `dist/` subdirectory:

```console
$ uv build
$ ls dist/
hello-world-0.1.0-py3-none-any.whl
hello-world-0.1.0.tar.gz
```

See the documentation on [building projects](../concepts/projects/build.md) for more details.

### Next steps

To learn more about working on projects with uv, see the
[projects concept](../concepts/projects/index.md) page and the
[command reference](../reference/cli.md#uv).

Or, read on to learn how to [publish your project as a package](./publish.md).

## UV Guide: Publishing a package

uv supports building Python packages into source and binary distributions via `uv build` and
uploading them to a registry with `uv publish`.

### Preparing your project for packaging

Before attempting to publish your project, you'll want to make sure it's ready to be packaged for
distribution.

If your project does not include a `[build-system]` definition in the `pyproject.toml`, uv will not
build it by default. This means that your project may not be ready for distribution. Read more about
the effect of declaring a build system in the
[project concept](../concepts/projects/config.md#build-systems) documentation.

!!! note

    If you have internal packages that you do not want to be published, you can mark them as
    private:

    ```toml
    [project]
    classifiers = ["Private :: Do Not Upload"]
    ```

    This setting makes PyPI reject your uploaded package from publishing. It does not affect
    security or privacy settings on alternative registries.

    We also recommend only generating per-project tokens: Without a PyPI token matching the project,
    it can't be accidentally published.

### Building your package

Build your package with `uv build`:

```console
$ uv build
```

By default, `uv build` will build the project in the current directory, and place the built
artifacts in a `dist/` subdirectory.

Alternatively, `uv build <SRC>` will build the package in the specified directory, while
`uv build --package <PACKAGE>` will build the specified package within the current workspace.

!!! info

    By default, `uv build` respects `tool.uv.sources` when resolving build dependencies from the
    `build-system.requires` section of the `pyproject.toml`. When publishing a package, we recommend
    running `uv build --no-sources` to ensure that the package builds correctly when `tool.uv.sources`
    is disabled, as is the case when using other build tools, like [`pypa/build`](https://github.com/pypa/build).

### Publishing your package

Publish your package with `uv publish`:

```console
$ uv publish
```

Set a PyPI token with `--token` or `UV_PUBLISH_TOKEN`, or set a username with `--username` or
`UV_PUBLISH_USERNAME` and password with `--password` or `UV_PUBLISH_PASSWORD`. For publishing to
PyPI from GitHub Actions, you don't need to set any credentials. Instead,
[add a trusted publisher to the PyPI project](https://docs.pypi.org/trusted-publishers/adding-a-publisher/).

!!! note

    PyPI does not support publishing with username and password anymore, instead you need to
    generate a token. Using a token is equivalent to setting `--username __token__` and using the
    token as password.

Even though `uv publish` retries failed uploads, it can happen that publishing fails in the middle,
with some files uploaded and some files still missing. With PyPI, you can retry the exact same
command, existing identical files will be ignored. With other registries, use
`--check-url <index url>` with the index URL (not the publish URL) the packages belong to. uv will
skip uploading files that are identical to files in the registry, and it will also handle raced
parallel uploads. Note that existing files need to match exactly with those previously uploaded to
the registry, this avoids accidentally publishing source distribution and wheels with different
contents for the same version.

### Installing your package

Test that the package can be installed and imported with `uv run`:

```console
$ uv run --with <PACKAGE> --no-project -- python -c "import <PACKAGE>"
```

The `--no-project` flag is used to avoid installing the package from your local project directory.

!!! tip

    If you have recently installed the package, you may need to include the
    `--refresh-package <PACKAGE>` option to avoid using a cached version of the package.

### Next steps

To learn more about publishing packages, check out the
[PyPA guides](https://packaging.python.org/en/latest/guides/section-build-and-publish/) on building
and publishing.

Or, read on for [guides](./integration/index.md) on integrating uv with other software.


## UV Guide: Running Scripts

A Python script is a file intended for standalone execution, e.g., with `python <script>.py`. Using
uv to execute scripts ensures that script dependencies are managed without manually managing
environments.

!!! note

    If you are not familiar with Python environments: every Python installation has an environment
    that packages can be installed in. Typically, creating [_virtual_ environments](https://docs.python.org/3/library/venv.html) is recommended to
    isolate packages required by each script. uv automatically manages virtual environments for you
    and prefers a [declarative](#declaring-script-dependencies) approach to dependencies.

### Running a script without dependencies

If your script has no dependencies, you can execute it with `uv run`:

```python title="example.py"
print("Hello world")
```

```console
$ uv run example.py
Hello world
```

<!-- TODO(zanieb): Once we have a `python` shim, note you can execute it with `python` here -->

Similarly, if your script depends on a module in the standard library, there's nothing more to do:

```python title="example.py"
import os

print(os.path.expanduser("~"))
```

```console
$ uv run example.py
/Users/astral
```

Arguments may be provided to the script:

```python title="example.py"
import sys

print(" ".join(sys.argv[1:]))
```

```console
$ uv run example.py test
test

$ uv run example.py hello world!
hello world!
```

Additionally, your script can be read directly from stdin:

```console
$ echo 'print("hello world!")' | uv run -
```

Or, if your shell supports [here-documents](https://en.wikipedia.org/wiki/Here_document):

```bash
uv run - <<EOF
print("hello world!")
EOF
```

Note that if you use `uv run` in a _project_, i.e. a directory with a `pyproject.toml`, it will
install the current project before running the script. If your script does not depend on the
project, use the `--no-project` flag to skip this:

```console
$ # Note, it is important that the flag comes _before_ the script
$ uv run --no-project example.py
```

See the [projects guide](./projects.md) for more details on working in projects.

### Running a script with dependencies

When your script requires other packages, they must be installed into the environment that the
script runs in. uv prefers to create these environments on-demand instead of using a long-lived
virtual environment with manually managed dependencies. This requires explicit declaration of
dependencies that are required for the script. Generally, it's recommended to use a
[project](./projects.md) or [inline metadata](#declaring-script-dependencies) to declare
dependencies, but uv supports requesting dependencies per invocation as well.

For example, the following script requires `rich`.

```python title="example.py"
import time
from rich.progress import track

for i in track(range(20), description="For example:"):
    time.sleep(0.05)
```

If executed without specifying a dependency, this script will fail:

```console
$ uv run --no-project example.py
Traceback (most recent call last):
  File "/Users/astral/example.py", line 2, in <module>
    from rich.progress import track
ModuleNotFoundError: No module named 'rich'
```

Request the dependency using the `--with` option:

```console
$ uv run --with rich example.py
For example: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:01
```

Constraints can be added to the requested dependency if specific versions are needed:

```console
$ uv run --with 'rich>12,<13' example.py
```

Multiple dependencies can be requested by repeating with `--with` option.

Note that if `uv run` is used in a _project_, these dependencies will be included _in addition_ to
the project's dependencies. To opt-out of this behavior, use the `--no-project` flag.

### Creating a Python script

Python recently added a standard format for
[inline script metadata](https://packaging.python.org/en/latest/specifications/inline-script-metadata/#inline-script-metadata).
It allows for selecting Python versions and defining dependencies. Use `uv init --script` to
initialize scripts with the inline metadata:

```console
$ uv init --script example.py --python 3.12
```

### Declaring script dependencies

The inline metadata format allows the dependencies for a script to be declared in the script itself.

uv supports adding and updating inline script metadata for you. Use `uv add --script` to declare the
dependencies for the script:

```console
$ uv add --script example.py 'requests<3' 'rich'
```

This will add a `script` section at the top of the script declaring the dependencies using TOML:

```python title="example.py"
# /// script
# dependencies = [
#   "requests<3",
#   "rich",
# ]
# ///

import requests
from rich.pretty import pprint

resp = requests.get("https://peps.python.org/api/peps.json")
data = resp.json()
pprint([(k, v["title"]) for k, v in data.items()][:10])
```

uv will automatically create an environment with the dependencies necessary to run the script, e.g.:

```console
$ uv run example.py
[
│   ('1', 'PEP Purpose and Guidelines'),
│   ('2', 'Procedure for Adding New Modules'),
│   ('3', 'Guidelines for Handling Bug Reports'),
│   ('4', 'Deprecation of Standard Modules'),
│   ('5', 'Guidelines for Language Evolution'),
│   ('6', 'Bug Fix Releases'),
│   ('7', 'Style Guide for C Code'),
│   ('8', 'Style Guide for Python Code'),
│   ('9', 'Sample Plaintext PEP Template'),
│   ('10', 'Voting Guidelines')
]
```

!!! important

    When using inline script metadata, even if `uv run` is [used in a _project_](../concepts/projects/run.md), the project's dependencies will be ignored. The `--no-project` flag is not required.

uv also respects Python version requirements:

```python title="example.py"
# /// script
# requires-python = ">=3.12"
# dependencies = []
# ///

# Use some syntax added in Python 3.12
type Point = tuple[float, float]
print(Point)
```

!!! note

    The `dependencies` field must be provided even if empty.

`uv run` will search for and use the required Python version. The Python version will download if it
is not installed — see the documentation on [Python versions](../concepts/python-versions.md) for
more details.

### Improving reproducibility

uv supports an `exclude-newer` field in the `tool.uv` section of inline script metadata to limit uv
to only considering distributions released before a specific date. This is useful for improving the
reproducibility of your script when run at a later point in time.

The date must be specified as an [RFC 3339](https://www.rfc-editor.org/rfc/rfc3339.html) timestamp
(e.g., `2006-12-02T02:07:43Z`).

```python title="example.py"
# /// script
# dependencies = [
#   "requests",
# ]
# [tool.uv]
# exclude-newer = "2023-10-16T00:00:00Z"
# ///

import requests

print(requests.__version__)
```

### Using different Python versions

uv allows arbitrary Python versions to be requested on each script invocation, for example:

```python title="example.py"
import sys

print(".".join(map(str, sys.version_info[:3])))
```

```console
$ # Use the default Python version, may differ on your machine
$ uv run example.py
3.12.6
```

```console
$ # Use a specific Python version
$ uv run --python 3.10 example.py
3.10.15
```

See the [Python version request](../concepts/python-versions.md#requesting-a-version) documentation
for more details on requesting Python versions.

### Using GUI scripts

On Windows `uv` will run your script ending with `.pyw` extension using `pythonw`:

```python title="example.pyw"
from tkinter import Tk, ttk

root = Tk()
root.title("uv")
frm = ttk.Frame(root, padding=10)
frm.grid()
ttk.Label(frm, text="Hello World").grid(column=0, row=0)
root.mainloop()
```

```console
PS> uv run example.pyw
```

![Run Result](../assets/uv_gui_script_hello_world.png){: style="height:50px;width:150px"}

Similarly, it works with dependencies as well:

```python title="example_pyqt.pyw"
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QLabel, QGridLayout

app = QApplication(sys.argv)
widget = QWidget()
grid = QGridLayout()

text_label = QLabel()
text_label.setText("Hello World!")
grid.addWidget(text_label)

widget.setLayout(grid)
widget.setGeometry(100, 100, 200, 50)
widget.setWindowTitle("uv")
widget.show()
sys.exit(app.exec_())
```

```console
PS> uv run --with PyQt5 example_pyqt.pyw
```

![Run Result](../assets/uv_gui_script_hello_world_pyqt.png){: style="height:50px;width:150px"}

### Next steps

To learn more about `uv run`, see the [command reference](../reference/cli.md#uv-run).

Or, read on to learn how to [run and install tools](./tools.md) with uv.

## UV Guide: Using tools

Many Python packages provide applications that can be used as tools. uv has specialized support for
easily invoking and installing tools.

### Running tools

The `uvx` command invokes a tool without installing it.

For example, to run `ruff`:

```console
$ uvx ruff
```

!!! note

    This is exactly equivalent to:

    ```console
    $ uv tool run ruff
    ```

    `uvx` is provided as an alias for convenience.

Arguments can be provided after the tool name:

```console
$ uvx pycowsay hello from uv

  -------------
< hello from uv >
  -------------
   \   ^__^
    \  (oo)\_______
       (__)\       )\/\
           ||----w |
           ||     ||

```

Tools are installed into temporary, isolated environments when using `uvx`.

!!! note

    If you are running a tool in a [_project_](../concepts/projects/index.md) and the tool requires that
    your project is installed, e.g., when using `pytest` or `mypy`, you'll want to use
    [`uv run`](./projects.md#running-commands) instead of `uvx`. Otherwise, the tool will be run in
    a virtual environment that is isolated from your project.

    If your project has a flat structure, e.g., instead of using a `src` directory for modules,
    the project itself does not need to be installed and `uvx` is fine. In this case, using
    `uv run` is only beneficial if you want to pin the version of the tool in the project's
    dependencies.

### Commands with different package names

When `uvx ruff` is invoked, uv installs the `ruff` package which provides the `ruff` command.
However, sometimes the package and command names differ.

The `--from` option can be used to invoke a command from a specific package, e.g. `http` which is
provided by `httpie`:

```console
$ uvx --from httpie http
```

### Requesting specific versions

To run a tool at a specific version, use `command@<version>`:

```console
$ uvx ruff@0.3.0 check
```

To run a tool at the latest version, use `command@latest`:

```console
$ uvx ruff@latest check
```

The `--from` option can also be used to specify package versions, as above:

```console
$ uvx --from 'ruff==0.3.0' ruff check
```

Or, to constrain to a range of versions:

```console
$ uvx --from 'ruff>0.2.0,<0.3.0' ruff check
```

Note the `@` syntax cannot be used for anything other than an exact version.

### Requesting extras

The `--from` option can be used to run a tool with extras:

```console
$ uvx --from 'mypy[faster-cache,reports]' mypy --xml-report mypy_report
```

This can also be combined with version selection:

```console
$ uvx --from 'mypy[faster-cache,reports]==1.13.0' mypy --xml-report mypy_report
```

### Requesting different sources

The `--from` option can also be used to install from alternative sources.

For example, to pull from git:

```console
$ uvx --from git+https://github.com/httpie/cli httpie
```

You can also pull the latest commit from a specific named branch:

```console
$ uvx --from git+https://github.com/httpie/cli@master httpie
```

Or pull a specific tag:

```console
$ uvx --from git+https://github.com/httpie/cli@3.2.4 httpie
```

Or even a specific commit:

```console
$ uvx --from git+https://github.com/httpie/cli@2843b87 httpie
```

### Commands with plugins

Additional dependencies can be included, e.g., to include `mkdocs-material` when running `mkdocs`:

```console
$ uvx --with mkdocs-material mkdocs --help
```

### Installing tools

If a tool is used often, it is useful to install it to a persistent environment and add it to the
`PATH` instead of invoking `uvx` repeatedly.

!!! tip

    `uvx` is a convenient alias for `uv tool run`. All of the other commands for interacting with
    tools require the full `uv tool` prefix.

To install `ruff`:

```console
$ uv tool install ruff
```

When a tool is installed, its executables are placed in a `bin` directory in the `PATH` which allows
the tool to be run without uv. If it's not on the `PATH`, a warning will be displayed and
`uv tool update-shell` can be used to add it to the `PATH`.

After installing `ruff`, it should be available:

```console
$ ruff --version
```

Unlike `uv pip install`, installing a tool does not make its modules available in the current
environment. For example, the following command will fail:

```console
$ python -c "import ruff"
```

This isolation is important for reducing interactions and conflicts between dependencies of tools,
scripts, and projects.

Unlike `uvx`, `uv tool install` operates on a _package_ and will install all executables provided by
the tool.

For example, the following will install the `http`, `https`, and `httpie` executables:

```console
$ uv tool install httpie
```

Additionally, package versions can be included without `--from`:

```console
$ uv tool install 'httpie>0.1.0'
```

And, similarly, for package sources:

```console
$ uv tool install git+https://github.com/httpie/cli
```

As with `uvx`, installations can include additional packages:

```console
$ uv tool install mkdocs --with mkdocs-material
```

### Upgrading tools

To upgrade a tool, use `uv tool upgrade`:

```console
$ uv tool upgrade ruff
```

Tool upgrades will respect the version constraints provided when installing the tool. For example,
`uv tool install ruff >=0.3,<0.4` followed by `uv tool upgrade ruff` will upgrade Ruff to the latest
version in the range `>=0.3,<0.4`.

To instead replace the version constraints, re-install the tool with `uv tool install`:

```console
$ uv tool install ruff>=0.4
```

To instead upgrade all tools:

```console
$ uv tool upgrade --all
```

### Next steps

To learn more about managing tools with uv, see the [Tools concept](../concepts/tools.md) page and
the [command reference](../reference/cli.md#uv-tool).

Or, read on to learn how to [work on projects](./projects.md).

flask
golang
html
python
rest-api
rust

First seen in:

jmbejara/chartbook

Used in 1 repository

CSS
You are a Front-End Developer and an Expert in JavaScript, TypeScript, HTML, CSS, SCSS, WebGL, Three.js, p5.js, PHP, WordPress, Vite, and webpack.
You excel at selecting the best tools, avoiding unnecessary duplication and complexity.
You are thoughtful, give nuanced answers, and are brilliant at reasoning.

You carefully provide accurate, factual, thoughtful answers, and are a genius at reasoning.
- Follow the user’s requirements carefully & to the letter.
- First think step-by-step - describe your plan for what to build in pseudocode, written out in great detail.
- Confirm, then write code!
- Always write correct, best-practice, DRY-principle (Don't Repeat Yourself), bug-free, fully functional and working code, aligned with the rules listed below under Code Implementation Guidelines.
- Focus on easy-to-read code over performant code.
- Fully implement all requested functionality.
- Leave NO todos, placeholders or missing pieces.
- Ensure code is complete! Verify it is thoroughly finalised.
- Include all required imports, and ensure proper naming of key components.
- Be concise; minimize any other prose.
- If you think there might not be a correct answer, say so.
- If you do not know the answer, say so, instead of guessing.

The user asks questions about the following coding languages:
- JavaScript
- TypeScript
- HTML
- CSS
- SCSS
- WebGL
- Three.js
- p5.js
- PHP
- WordPress
- Vite
- webpack

Key Principles
Follow these rules when you write code:
- Use early returns whenever possible to make the code more readable.
- Use consts instead of functions, for example, “const toggle = () =>”. Also, define a type if possible.
- Provide precise, technical PHP and WordPress examples.
- Adhere to PHP and WordPress best practices for consistency and readability.
- Follow WordPress PHP coding standards throughout the codebase.
- Write semantic HTML to improve accessibility and SEO.
- Ensure responsive design using media queries and flexible layouts.
- Prioritize accessibility by using ARIA roles and attributes.
- Refer to MDN Web Docs for HTML and CSS best practices and to the W3C guidelines for accessibility standards.

HTML
- Use semantic elements (e.g., <header>, <main>, <footer>, <article>, <section>).
- Use <button> for clickable elements, not <div> or <span>.
- Use <a> for links, ensuring href attribute is present.
- Use <img> with alt attribute for images.
- Use <form> for forms, with appropriate input types and labels.
- Avoid using deprecated elements (e.g., <font>, <center>).

CSS
- Use external stylesheets for CSS.
- Use class selectors over ID selectors for styling.
- Use Flexbox and Grid for layout.
- Use rem and em units for scalable and accessible typography.
- Use CSS variables for consistent theming.
- Use BEM (Block Element Modifier) methodology for naming classes.
- Avoid !important; use specificity to manage styles.

Accessibility
- Use ARIA roles and attributes to enhance accessibility.
- Ensure sufficient color contrast for text.
- Provide keyboard navigation for interactive elements.
- Use focus styles to indicate focus state.
- Use landmarks (e.g., <nav>, <main>, <aside>) for screen readers.
css
html
java
javascript
php
typescript
vite
webpack
hitomikazuma/vite_templete

Used in 1 repository

Python
You are an expert in Python, Streamlit, TensorFlow, PyTorch, Keras, LangChain, Hugging Face, and AI/ML engineering.

Key Principles
- Design with simplicity and clarity in mind.
- Generate clear and technical responses with examples using TensorFlow, PyTorch, Keras, LangChain, Hugging Face, and scikit-learn.
- Prioritize code that is production-ready and scalable, following best practices in software engineering and AI/ML.
- Use clear, descriptive variable names and follow PEP 8 for Python code standards.
- Encourage modular and reusable code design with object-oriented principles.
- Ensure proper separation of data preprocessing, model training, and evaluation logic for maintainability.

AI/ML Workflow

- Use `scikit-learn` for classical machine learning models (e.g., RandomForest, SVM) and `PyTorch` or `TensorFlow` for deep learning.
- Leverage TensorFlow and PyTorch’s GPU support for large-scale training and inference, preferring PyTorch for research projects and TensorFlow for production deployment.
- Automate hyperparameter tuning using `Optuna` or `Hyperopt`.
- For data preprocessing, encourage the use of `pandas` for tabular data and `numpy` for numerical computations.
- Always split data into training, validation, and test sets, and use cross-validation for model performance estimation.
- Ensure models follow best practices like applying regularization (L2, dropout), and model checkpointing for long-running training sessions.
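
A minimal scikit-learn sketch of the split-and-cross-validate workflow above; the bundled dataset and the RandomForest settings are placeholders, not project specifics.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset

# Hold out a test set, then estimate performance on the training portion with cross-validation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
print(f"CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

model.fit(X_train, y_train)
print(f"Held-out test accuracy: {model.score(X_test, y_test):.3f}")
```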

Model Training

- Implement learning rate schedules (e.g., cosine annealing, exponential decay) for optimal model convergence.
- Use transfer learning when appropriate (e.g., fine-tune pre-trained models like BERT or ResNet for new tasks).
- Encourage distributed training (e.g., Horovod) for faster experimentation on large datasets.
- Implement gradient clipping for RNNs or LSTMs to avoid exploding gradients.
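
A minimal PyTorch sketch of the cosine-annealing schedule and gradient clipping recommended above; the LSTM, the random data, and the hyperparameters are placeholders.

```python
import torch
from torch import nn

model = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)  # placeholder recurrent model
head = nn.Linear(32, 1)
params = list(model.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)
loss_fn = nn.MSELoss()

for epoch in range(50):
    inputs = torch.randn(8, 20, 16)   # placeholder batch: (batch, seq, features)
    targets = torch.randn(8, 1)
    outputs, _ = model(inputs)
    loss = loss_fn(head(outputs[:, -1, :]), targets)
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)  # guard against exploding gradients
    optimizer.step()
    scheduler.step()  # cosine annealing of the learning rate
```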

Model Evaluation and Metrics

- Always include metrics like accuracy, precision, recall, F1 score, and AUC for classification tasks.
- Use MAE, RMSE, and R-squared for regression models, and evaluate models on a held-out test set.
- Leverage confusion matrices and ROC curves to visually assess model performance.
- For unsupervised learning tasks, use silhouette score or Davies-Bouldin index for clustering evaluation.
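
A minimal sketch of the classification metrics listed above using scikit-learn; `y_true` and `y_score` are placeholder arrays standing in for real model outputs.

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    confusion_matrix,
    f1_score,
    precision_score,
    recall_score,
    roc_auc_score,
)

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])                   # placeholder labels
y_score = np.array([0.1, 0.8, 0.6, 0.3, 0.9, 0.4, 0.2, 0.7])  # predicted probabilities
y_pred = (y_score >= 0.5).astype(int)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
print("auc      :", roc_auc_score(y_true, y_score))
print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
```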

Best Practices for AI/ML Projects

- Use Docker for containerized environments to ensure reproducibility in ML experiments.
- Keep track of experiments using tools like MLflow or Weights & Biases for hyperparameter tuning, metrics tracking, and model versioning.
- Use TensorBoard or Matplotlib for visualizing model performance and loss during training.

Deployment and Inference

- Convert models to ONNX for optimizing inference times and ensuring cross-platform compatibility.
- Use TensorFlow Lite for deploying models on mobile or edge devices.
- Implement model serving using FastAPI or Flask for lightweight APIs, and Dockerize the deployment process.
- Apply techniques like quantization and pruning for model optimization during inference, reducing latency and memory footprint.

Error Handling and Debugging

- Ensure models are robust by validating inputs before feeding them into models, using try-except blocks to catch errors.
- Implement custom exception handling for failed predictions or invalid input during inference.
- Log training progress and errors in a structured format (JSON logs) for easy debugging.
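
A minimal sketch of input validation with a custom exception and structured JSON logs during inference; the feature check and the mean-based "model" are illustrative placeholders.

```python
import json
import logging

import numpy as np

logger = logging.getLogger("inference")
logging.basicConfig(level=logging.INFO)


class InvalidInputError(Exception):
    """Raised when a prediction request carries malformed features."""


def predict(features: list, expected_dim: int = 4) -> float:
    if len(features) != expected_dim:
        raise InvalidInputError(f"expected {expected_dim} features, got {len(features)}")
    return float(np.mean(features))  # placeholder for a real model call


def handle_request(features: list) -> dict:
    try:
        prediction = predict(features)
        logger.info(json.dumps({"event": "prediction", "status": "ok", "value": prediction}))
        return {"prediction": prediction}
    except InvalidInputError as error:
        logger.error(json.dumps({"event": "prediction", "status": "invalid_input", "detail": str(error)}))
        return {"error": str(error)}
```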

Dependencies

- TensorFlow or PyTorch (core deep learning frameworks)
- Scikit-learn (traditional machine learning)
- Optuna or Hyperopt (hyperparameter tuning)
- Pandas and NumPy (data manipulation)
- MLflow or Weights & Biases (experiment tracking)
- ONNX (model conversion for optimized inference)

Performance Optimization

- Use `DataLoader` (PyTorch) or `tf.data` (TensorFlow) for efficient data pipeline processing during training.
- Optimize batch sizes and use GPU memory efficiently by monitoring resource usage.
- Enable mixed precision training with TensorFlow or PyTorch to accelerate model training on GPUs.
- Apply caching and database indexing for data-heavy projects to reduce I/O bottlenecks.
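
A minimal PyTorch sketch of the `DataLoader` pipeline and mixed-precision training mentioned above; the random tensors and tiny model are placeholders, and autocast only has an effect when a CUDA device is present.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"
dataset = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))  # placeholder data
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=2, pin_memory=True)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
loss_fn = nn.MSELoss()

for inputs, targets in loader:
    inputs, targets = inputs.to(device), targets.to(device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):  # mixed-precision forward pass
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```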

Key Conventions

1. Follow a clean, modular structure to separate model code, data preprocessing, and evaluation scripts.
2. Implement logging and monitoring at every stage, especially during training and inference.
3. Emphasize scalability and production-readiness in every step of model development, including deployment.

Security Considerations

- Use environment variables for sensitive information like API keys or credentials in deployment code.
- Ensure models are robust against adversarial attacks using defensive techniques such as input validation and model regularization.

Refer to the latest versions of TensorFlow, PyTorch, and scikit-learn documentation for updates and advanced features.
docker
fastapi
flask
langchain
python
pytorch
rest-api
tensorflow

First seen in:

StMarkFx/Efiko

Used in 1 repository