Running Test Scripts with uv: No Dependencies Management Required
Learn how to run Python test scripts instantly with uv without managing virtual environments or installing packages manually.
Table of Contents
- Why uv for Script Execution?
- Running Scripts Without Dependencies
- Running Scripts with External Dependencies
- Using Inline Script Metadata (PEP 723)
- Web Scraping and Markdown Conversion
- Creating Executable Scripts with Shebang
- Managing Python Versions in Scripts
- Best Practices for uv Scripts
- Performance Benefits
- Common Use Cases
- Conclusion
- Related Articles
Python testing and script execution has never been easier than with uv. Gone are the days of manually creating virtual environments, installing dependencies, and managing package conflicts for simple test scripts. With uv, you can execute Python scripts with their dependencies in seconds, without any setup overhead.
In this comprehensive guide, we’ll explore how to leverage uv’s powerful script execution capabilities to run test scripts, one-off utilities, and experimental code without the traditional friction of Python dependency management.
New to uv?
If you’re new to uv or want to learn how to set up full Python projects, start with our comprehensive guide Getting Started with uv: Setting Up Your Python Project in 2025 before diving into script execution.
Why uv for Script Execution?
Traditional Python script execution often involves multiple steps:
- Creating a virtual environment
- Activating the environment
- Installing dependencies
- Running the script
- Deactivating and cleaning up
With uv, this entire process is reduced to a single command (see the comparison below). The tool automatically:
- Creates isolated environments per script
- Resolves and installs dependencies lightning-fast
- Executes your script
- Cleans up automatically
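For example, a quick test that needs requests can be run in a single step. The commands below are illustrative; the traditional sequence is shown only for comparison:
# Traditional workflow
python -m venv .venv
source .venv/bin/activate
pip install requests
python test_api.py
deactivate
# The same run with uv
uv run --with requests test_api.py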
This makes uv perfect for:
- Testing API endpoints with libraries like requests
- Data analysis scripts using pandas or numpy
- Web scraping utilities with beautifulsoup4 and markdownify
- Content extraction and conversion from web pages to markdown
- One-off automation scripts with various dependencies
- Prototyping and experimentation without environment pollution
Running Scripts Without Dependencies
The simplest use case is running a script that only uses Python’s standard library. With uv, this is straightforward:
# test_basic.py
import os
import sys
import json
def system_info():
info = {
"python_version": sys.version,
"platform": sys.platform,
"user_home": os.path.expanduser("~"),
"current_directory": os.getcwd()
}
print(json.dumps(info, indent=2))
if __name__ == "__main__":
system_info()
Execute it with:
uv run test_basic.py
Output:
{
"python_version": "3.12.7 (main, Oct 1 2024, 11:15:50)",
"platform": "darwin",
"user_home": "/Users/username",
"current_directory": "/path/to/current/directory"
}
Running Scripts with External Dependencies
Here’s where uv really shines. Let’s create a script that needs external packages:
# test_api.py
import requests
import json
from datetime import datetime
def test_github_api():
"""Test script to fetch GitHub API data"""
try:
response = requests.get("https://api.github.com/users/astral-sh")
data = response.json()
print(f"✅ API Response Status: {response.status_code}")
print(f"🏢 Organization: {data.get('name', 'N/A')}")
print(f"📍 Location: {data.get('location', 'N/A')}")
print(f"👥 Public Repos: {data.get('public_repos', 0)}")
print(f"⏰ Tested at: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
except requests.exceptions.RequestException as e:
print(f"❌ API request failed: {e}")
except KeyError as e:
print(f"❌ Unexpected API response format: {e}")
if __name__ == "__main__":
test_github_api()
Run it with the --with flag to specify dependencies:
uv run --with requests test_api.py
Output:
✅ API Response Status: 200
🏢 Organization: Astral
📍 Location: United States of America
👥 Public Repos: 47
⏰ Tested at: 2025-07-08 08:31:53
You can specify multiple dependencies:
uv run --with requests --with pandas --with matplotlib data_analysis.py
Or use version constraints:
uv run --with 'requests>=2.31.0,<3.0.0' --with 'pandas>=2.0.0' test_script.py
Using Inline Script Metadata (PEP 723)
For scripts you’ll run multiple times, uv supports embedding dependency information directly in the script using the PEP 723 standard, which lets you declare dependencies and Python version requirements in the script file itself.
Understanding the PEP 723 Header Format
The inline script metadata uses a special comment block at the top of your Python file:
# /// script
# dependencies = [
# "package-name>=version",
# "another-package"
# ]
# requires-python = ">=3.10"
# [tool.uv]
# exclude-newer = "2024-01-01T00:00:00Z"
# ///
Let’s break down each field:
dependencies
- Purpose: Lists all Python packages required by your script
- Format: Array of package specifications using pip-style syntax
- Examples:
  - "requests>=2.31.0" - Minimum version constraint
  - "pandas>=2.0.0,<3.0.0" - Version range
  - "rich" - Latest available version
  - "django==4.2.7" - Exact version pin
requires-python
- Purpose: Specifies the minimum Python version required
- Format: Version specification using PEP 440 syntax
- Examples:
  - ">=3.10" - Python 3.10 or newer
  - ">=3.11,<3.13" - Python 3.11 or 3.12 only
  - "==3.12.*" - Any Python 3.12 version
[tool.uv] Section
Optional configuration specific to uv:
- exclude-newer: Only consider packages released before this date (improves reproducibility)
- index-url: Custom package index URL
- extra-index-url: Additional package indexes
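As a rough sketch, a metadata block combining these keys could look like the following. Treat it as illustrative: exact [tool.uv] key support in inline metadata depends on your uv version, so consult the uv documentation for your release.
# /// script
# dependencies = ["requests"]
# requires-python = ">=3.10"
# [tool.uv]
# exclude-newer = "2024-12-01T00:00:00Z"
# index-url = "https://pypi.org/simple"
# ///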
Complete Example with Detailed Metadata
Here’s a comprehensive example showing all available PEP 723 fields:
# /// script
# dependencies = [
# "requests>=2.31.0",
# "rich>=13.0.0",
# "click>=8.0.0"
# ]
# requires-python = ">=3.10"
# [tool.uv]
# exclude-newer = "2024-12-01T00:00:00Z"
# ///
import requests
import click
from rich.console import Console
from rich.table import Table
console = Console()
@click.command()
@click.option('--username', prompt='GitHub username', help='GitHub username to analyze')
def analyze_github_user(username):
"""Analyze a GitHub user's profile and repositories"""
with console.status(f"[bold green]Fetching data for {username}..."):
try:
# Fetch user data
user_response = requests.get(f"https://api.github.com/users/{username}")
user_response.raise_for_status()
user_data = user_response.json()
# Fetch repositories
repos_response = requests.get(f"https://api.github.com/users/{username}/repos")
repos_response.raise_for_status()
repos_data = repos_response.json()
except requests.exceptions.RequestException as e:
console.print(f"[bold red]Error fetching data: {e}")
return
# Display user information
console.print(f"\n[bold blue]GitHub User Analysis: {username}[/bold blue]")
console.print(f"Name: {user_data.get('name', 'N/A')}")
console.print(f"Bio: {user_data.get('bio', 'N/A')}")
console.print(f"Public Repos: {user_data.get('public_repos', 0)}")
console.print(f"Followers: {user_data.get('followers', 0)}")
console.print(f"Following: {user_data.get('following', 0)}")
# Create a table for top repositories
table = Table(title=f"Top Repositories for {username}")
table.add_column("Repository", style="cyan")
table.add_column("Stars", style="magenta")
table.add_column("Language", style="green")
table.add_column("Description", style="yellow")
# Sort by stars and take top 10
top_repos = sorted(repos_data, key=lambda x: x.get('stargazers_count', 0), reverse=True)[:10]
for repo in top_repos:
table.add_row(
repo['name'],
str(repo.get('stargazers_count', 0)),
repo.get('language', 'N/A'),
repo.get('description', 'N/A')[:50] + "..." if repo.get('description', '') else 'N/A'
)
console.print(table)
if __name__ == "__main__":
analyze_github_user()
The script will automatically install requests, rich, and click before execution.
More PEP 723 Examples for Different Use Cases
Simple Data Analysis Script
# /// script
# dependencies = [
# "pandas>=2.0.0",
# "matplotlib>=3.7.0"
# ]
# requires-python = ">=3.9"
# ///
import pandas as pd
import matplotlib.pyplot as plt
# Your data analysis code here
data = pd.read_csv('data.csv')
plt.plot(data['x'], data['y'])
plt.show()
Web API Testing Script
# /// script
# dependencies = [
# "httpx>=0.25.0",
# "pytest>=7.0.0"
# ]
# requires-python = ">=3.8"
# [tool.uv]
# exclude-newer = "2024-11-01T00:00:00Z"
# ///
import httpx
import pytest
def test_api_endpoint():
response = httpx.get("https://api.example.com/health")
assert response.status_code == 200
Machine Learning Experiment
# /// script
# dependencies = [
# "scikit-learn>=1.3.0",
# "numpy>=1.24.0",
# "joblib>=1.3.0"
# ]
# requires-python = ">=3.10"
# [tool.uv]
# exclude-newer = "2024-10-01T00:00:00Z"
# ///
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
# Create sample data and train model
X, y = make_classification(n_samples=1000, n_features=20)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier()
model.fit(X_train, y_train)
print(f"Model accuracy: {model.score(X_test, y_test):.2f}")
Benefits of Using PEP 723 Metadata
- Reproducibility: Exact dependency versions are preserved
- Portability: Scripts work the same way across different environments
- Self-documenting: Dependencies are clearly visible in the script
- Version control friendly: Everything is in one file
- No external files: No need for a separate requirements.txt or pyproject.toml
Save the GitHub analyzer example above as github_analyzer.py and run it simply with:
uv run github_analyzer.py
Web Scraping and Markdown Conversion
One of the most powerful use cases for uv script execution is web scraping and content conversion. Let’s create a script that accepts a URL, scrapes the content, and converts it to clean markdown format:
# /// script
# dependencies = [
# "requests>=2.31.0",
# "beautifulsoup4>=4.12.0",
# "markdownify>=0.11.6",
# "typer>=0.9.0",
# "rich>=13.0.0"
# ]
# requires-python = ">=3.10"
# ///
import requests
import typer
from bs4 import BeautifulSoup
from markdownify import markdownify as md
from rich.console import Console
from rich.markdown import Markdown
from rich.panel import Panel
from urllib.parse import urljoin, urlparse
import re
from typing import Optional
from datetime import datetime
console = Console()
app = typer.Typer()
def clean_markdown(markdown_text: str) -> str:
"""Clean and format markdown text"""
# Remove excessive whitespace
markdown_text = re.sub(r'\n\s*\n\s*\n', '\n\n', markdown_text)
# Remove leading/trailing whitespace
markdown_text = markdown_text.strip()
# Fix list formatting
markdown_text = re.sub(r'\n\s*\*\s*', '\n* ', markdown_text)
markdown_text = re.sub(r'\n\s*\d+\.\s*', '\n1. ', markdown_text)
return markdown_text
def extract_metadata(soup: BeautifulSoup) -> dict:
"""Extract page metadata"""
metadata = {}
# Title
title_tag = soup.find('title')
metadata['title'] = title_tag.get_text().strip() if title_tag else 'No Title'
# Meta description
description_tag = soup.find('meta', attrs={'name': 'description'})
if description_tag:
metadata['description'] = description_tag.get('content', '').strip()
# Open Graph data
og_title = soup.find('meta', property='og:title')
og_description = soup.find('meta', property='og:description')
if og_title:
metadata['og_title'] = og_title.get('content', '').strip()
if og_description:
metadata['og_description'] = og_description.get('content', '').strip()
return metadata
@app.command()
def scrape_to_markdown(
url: str = typer.Argument(..., help="URL to scrape and convert to markdown"),
output_file: Optional[str] = typer.Option(None, "--output", "-o", help="Output file path"),
include_links: bool = typer.Option(True, "--links/--no-links", help="Include links in output"),
include_images: bool = typer.Option(True, "--images/--no-images", help="Include images in output"),
show_preview: bool = typer.Option(True, "--preview/--no-preview", help="Show preview in terminal"),
selector: Optional[str] = typer.Option(None, "--selector", "-s", help="CSS selector for content extraction")
):
"""
Scrape a webpage and convert it to clean markdown format.
Examples:
uv run web_scraper.py https://example.com
uv run web_scraper.py https://blog.example.com --output article.md
uv run web_scraper.py https://docs.example.com --selector "article" --no-links
"""
# Validate URL
if not url.startswith(('http://', 'https://')):
url = f"https://{url}"
try:
# Fetch the webpage
with console.status(f"[bold blue]Fetching content from {url}..."):
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
}
response = requests.get(url, headers=headers, timeout=10)
response.raise_for_status()
# Parse HTML
soup = BeautifulSoup(response.content, 'html.parser')
# Extract metadata
metadata = extract_metadata(soup)
# Remove script and style elements
for script in soup(["script", "style", "nav", "footer", "header", "aside"]):
script.decompose()
# Find content based on selector or common content areas
if selector:
content = soup.select_one(selector)
if not content:
console.print(f"[bold red]No content found with selector: {selector}")
return
else:
# Try common content selectors
content_selectors = [
'article', 'main', '.content', '#content', '.post', '.article',
'.entry-content', '.post-content', '.article-content'
]
content = None
for sel in content_selectors:
content = soup.select_one(sel)
if content:
break
# If no specific content area found, use body
if not content:
content = soup.find('body')
if not content:
console.print("[bold red]No content found to extract")
return
# Convert to markdown
markdown_options = {
'heading_style': 'ATX',
'bullets': '*'
}
# Add elements to convert based on options
convert_tags = ['p', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'ul', 'ol', 'li', 'strong', 'em', 'code', 'pre', 'blockquote']
if include_links:
convert_tags.extend(['a'])
if include_images:
convert_tags.extend(['img'])
markdown_content = md(str(content), convert=convert_tags, **markdown_options)
markdown_content = clean_markdown(markdown_content)
# Create final markdown with metadata
final_markdown = f"""---
title: "{metadata['title']}"
source_url: "{url}"
scraped_at: "{datetime.now().isoformat()}"
---
# {metadata['title']}
**Source:** [{url}]({url})
"""
if metadata.get('description'):
final_markdown += f"**Description:** {metadata['description']}\n\n"
final_markdown += "---\n\n"
final_markdown += markdown_content
# Show preview if requested
if show_preview:
console.print("\n" + "="*60)
console.print("[bold green]Preview of scraped content:")
console.print("="*60)
# Show metadata
metadata_text = f"**Title:** {metadata['title']}\n**URL:** {url}\n**Content Length:** {len(markdown_content)} characters"
console.print(Panel(metadata_text, title="Metadata", border_style="blue"))
# Show markdown preview (truncated)
preview_content = markdown_content[:1000] + "..." if len(markdown_content) > 1000 else markdown_content
console.print(Panel(Markdown(preview_content), title="Content Preview", border_style="green"))
# Save to file if specified
if output_file:
with open(output_file, 'w', encoding='utf-8') as f:
f.write(final_markdown)
console.print(f"[bold green]✅ Content saved to: {output_file}")
else:
# Output to stdout
console.print("\n" + "="*60)
console.print("[bold green]Markdown Output:")
console.print("="*60)
console.print(final_markdown)
except requests.exceptions.RequestException as e:
console.print(f"[bold red]❌ Error fetching URL: {e}")
except Exception as e:
console.print(f"[bold red]❌ Error processing content: {e}")
if __name__ == "__main__":
app()
This powerful web scraping script can be used in multiple ways:
# Basic usage - scrape and preview
uv run web_scraper.py https://example.com
# Save to file
uv run web_scraper.py https://blog.example.com --output article.md
# Extract specific content using CSS selector
uv run web_scraper.py https://docs.example.com --selector "article"
# Scrape without links or images
uv run web_scraper.py https://news.example.com --no-links --no-images
# Just save without preview
uv run web_scraper.py https://example.com --output content.md --no-preview
The script features:
- Automatic content detection using common selectors
- Clean markdown conversion with proper formatting
- Metadata extraction including title, description, and Open Graph data
- Flexible output options (file or stdout)
- Rich terminal preview with syntax highlighting
- Error handling for network issues and parsing errors
- Customizable content selection via CSS selectors
Creating Executable Scripts with Shebang
For scripts you use frequently, you can make them directly executable using a shebang:
#!/usr/bin/env -S uv run --script
# /// script
# dependencies = [
# "httpx>=0.25.0",
# "typer>=0.9.0"
# ]
# requires-python = ">=3.10"
# ///
import httpx
import typer
from typing import Optional
app = typer.Typer()
@app.command()
def check_website(
url: str = typer.Argument(..., help="Website URL to check"),
timeout: Optional[int] = typer.Option(10, help="Request timeout in seconds")
):
"""Check if a website is up and running"""
if not url.startswith(('http://', 'https://')):
url = f"https://{url}"
try:
with httpx.Client(timeout=timeout) as client:
response = client.get(url)
if response.status_code == 200:
typer.echo(f"✅ {url} is UP (Status: {response.status_code})")
typer.echo(f"Response time: {response.elapsed.total_seconds():.2f}s")
else:
typer.echo(f"⚠️ {url} returned status {response.status_code}")
except httpx.RequestError as e:
typer.echo(f"❌ {url} is DOWN - {e}")
except httpx.TimeoutException:
typer.echo(f"❌ {url} timed out after {timeout} seconds")
if __name__ == "__main__":
app()
Make it executable and run it:
chmod +x website_checker.py
./website_checker.py github.com
Managing Python Versions in Scripts
You can specify Python versions for your scripts:
# Run with Python 3.11
uv run --python 3.11 test_script.py
# Run with Python 3.12
uv run --python 3.12 test_script.py
# Use specific Python version in inline metadata
# /// script
# requires-python = ">=3.11"
# dependencies = ["aiohttp"]
# ///
import asyncio
import aiohttp
async def fetch_url(session, url):
async with session.get(url) as response:
return await response.text()
async def main():
urls = [
"https://httpbin.org/delay/1",
"https://httpbin.org/delay/2",
"https://httpbin.org/delay/3"
]
async with aiohttp.ClientSession() as session:
tasks = [fetch_url(session, url) for url in urls]
results = await asyncio.gather(*tasks)
print(f"Fetched {len(results)} URLs successfully!")
if __name__ == "__main__":
asyncio.run(main())
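If the interpreter you request is not installed, uv can generally download a suitable build on demand; you can also install one explicitly ahead of time (assuming a reasonably recent uv release):
# Install a managed Python 3.12 for uv to use
uv python install 3.12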
Best Practices for uv Scripts
1. Use Inline Metadata for Reusable Scripts
For scripts you’ll run multiple times, embed dependencies directly in the file using PEP 723 syntax.
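You don’t have to write the metadata block by hand: recent uv releases can add or update it for you via uv add --script (verify that your installed uv version provides this subcommand):
# Add dependencies to the script's inline PEP 723 metadata
uv add --script github_analyzer.py 'requests>=2.31.0' 'rich'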
2. Specify Python Version Requirements
Always specify requires-python to ensure compatibility.
3. Use Version Constraints
Pin dependencies to avoid compatibility issues:
# /// script
# dependencies = [
# "requests>=2.31.0,<3.0.0",
# "pandas>=2.0.0,<3.0.0"
# ]
# ///
4. Handle Errors Gracefully
Always include error handling for network requests and file operations.
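A minimal sketch of that pattern for HTTP calls, using requests as in the earlier examples (the helper name and URL are illustrative):
import requests

def fetch_json(url):
    """Fetch JSON from a URL, returning None instead of raising on failure."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()  # turn HTTP error statuses into exceptions
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Request failed: {e}")
        return None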
5. Make Scripts Self-Documenting
Include docstrings and clear variable names.
Performance Benefits
The speed difference is remarkable:
- Traditional approach: 30-60 seconds for environment setup + dependency installation
- uv approach: 1-3 seconds for the same operation
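You can check the difference on your own machine with a rough timing comparison (illustrative; results vary with network speed and whether uv’s cache is already warm):
# First run resolves and installs into a cached environment; later runs reuse it
time uv run --with requests test_api.py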
This makes uv perfect for:
- CI/CD pipelines where speed matters
- Development workflows with frequent script execution
- Data science experimentation with different libraries
- System administration tasks requiring various tools
Common Use Cases
1. API Testing and Monitoring
# Quick API health check
uv run --with requests api_health_check.py
# Test API with different HTTP methods
uv run --with httpx --with typer api_tester.py --method POST --url https://api.example.com
2. Data Processing and Analysis
# Process CSV files
uv run --with pandas --with matplotlib data_processor.py input.csv
# Web scraping and markdown conversion
uv run --with beautifulsoup4 --with requests --with markdownify web_scraper.py https://example.com
# Batch web scraping with analysis
uv run --with beautifulsoup4 --with requests --with markdownify --with pandas batch_scraper.py
3. System Administration
# Server monitoring
uv run --with psutil --with click server_monitor.py
# Log analysis
uv run --with click --with rich log_analyzer.py /var/log/app.log
4. Prototyping and Experimentation
# Test new libraries quickly
uv run --with fastapi --with uvicorn prototype_api.py
# Machine learning experiments
uv run --with scikit-learn --with matplotlib ml_experiment.py
# Content extraction and conversion
uv run --with beautifulsoup4 --with markdownify --with typer content_converter.py
Conclusion
uv revolutionizes Python script execution by eliminating the friction of dependency management. Whether you’re testing APIs, processing data, or prototyping applications, uv lets you focus on your code rather than environment setup.
The combination of speed, simplicity, and standards compliance makes uv an essential tool for modern Python development. Start using it today for your test scripts and experience the difference!
Key takeaways:
- Zero setup required - just write and run
- Lightning fast - 10-100x faster than traditional tools
- Standards compliant - uses PEP 723 for inline metadata
- Flexible - supports ad-hoc dependencies and inline metadata
- Isolated - each script runs in its own environment
- Versatile - perfect for web scraping, data processing, and content conversion
Ready to supercharge your Python scripting workflow? Install uv and start running scripts at the speed of thought!
Related Articles
- Getting Started: New to uv? Learn the basics in our guide Getting Started with uv: Setting Up Your Python Project in 2025
- Text-to-Speech: Build a powerful audio generation script with Text-to-Speech with uv: Create Audio from Text in Python
- Deployment: Ready to deploy your uv projects? Check out Deploying a Python uv Project with Git and Railpack in Dokploy
Try the Web Scraping Example
Save the web scraping script as web_scraper.py and try it out:
# Scrape any webpage to markdown
uv run web_scraper.py https://docs.python.org/3/tutorial/
# Save to file
uv run web_scraper.py https://realpython.com/python-requests/ --output tutorial.md
# Extract specific content
uv run web_scraper.py https://github.com/astral-sh/uv --selector "article"
The script will automatically install all dependencies (requests, beautifulsoup4, markdownify, typer, rich) and provide you with clean, formatted markdown output ready for documentation, analysis, or further processing.