mirror of
https://github.com/vinta/awesome-python.git
synced 2026-04-11 02:11:42 +08:00
Merge branch 'master' into add-logfmter-logging-library
commit 393cf3d537
7 .gitignore (vendored)
@@ -3,6 +3,7 @@

# python
.venv/
__pycache__/
*.py[co]

# website
@@ -11,6 +12,10 @@ website/data/

# claude code
.claude/skills/
.superpowers/
.gstack/
.playwright-cli/
skills-lock.json

# codex
.agents/

41 CLAUDE.md
@@ -2,31 +2,36 @@

## Repository Overview

This is the awesome-python repository - a curated list of Python frameworks, libraries, software and resources. The repository serves as a comprehensive directory about Python ecosystem.
An opinionated list of Python frameworks, libraries, tools, and resources. Published at [awesome-python.com](https://awesome-python.com/).

## PR Review Guidelines

**For all PR review tasks, refer to [CONTRIBUTING.md](CONTRIBUTING.md)** which contains:
**Refer to [CONTRIBUTING.md](CONTRIBUTING.md)** for acceptance criteria, quality requirements, rejection rules, and entry format.

- Acceptance criteria (Industry Standard, Rising Star, Hidden Gem)
- Quality requirements
- Automatic rejection criteria
- Entry format reference
- PR description template
## Structure

## Architecture & Structure
- **README.md**: Source of truth. Hierarchical categories with alphabetically ordered entries.
- **CONTRIBUTING.md**: Submission guidelines and review criteria.
- **website/**: Static site generator that builds awesome-python.com from README.md.
  - `build.py`: Parses README.md and renders HTML via Jinja2 templates.
  - `fetch_github_stars.py`: Fetches star counts into `website/data/`.
  - `readme_parser.py`: Markdown-to-structured-data parser.
  - `templates/`, `static/`: Jinja2 templates and CSS/JS assets.
  - `tests/`: Pytest tests for the build pipeline.
- **Makefile**: `make install`, `make build`, `make preview`, `make test`, `make fetch_github_stars`.
- **pyproject.toml**: Uses `uv` for dependency management. Python >=3.13.

The repository follows a single-file architecture:
## Entry Format

- **README.md**: All content in hierarchical structure (categories, subcategories, entries)
- **CONTRIBUTING.md**: Submission guidelines and review criteria
- **sort.py**: Script to enforce alphabetical ordering

```markdown
- [project-name](https://github.com/owner/repo) - Description ending with period.
```

Entry format: `* [project-name](url) - Concise description ending with period.`
Use PyPI package name as display name. If not on PyPI, use the GitHub repo name. Use GitHub URLs when available.
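The format above lends itself to a mechanical check. A rough sketch of such a validator (the `check_entry` helper and its regex are hypothetical, not the repository's actual tooling):

```python
import re

# One README bullet: "- [name](url) - Description ending with period."
ENTRY_RE = re.compile(
    r"^[-*] \[(?P<name>[^\]]+)\]\((?P<url>https?://[^)\s]+)\) - (?P<desc>.+\.)$"
)

def check_entry(line: str) -> bool:
    """Return True if a README bullet line follows the entry format."""
    return ENTRY_RE.match(line) is not None

check_entry("- [click](https://github.com/pallets/click/) - A package for creating beautiful command line interfaces in a composable way.")  # True
check_entry("- [click](https://github.com/pallets/click/) - no trailing period")  # False
```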

## Key Considerations
## Key Rules

- This is a curated list, not a code project
- Quality over quantity - only "awesome" projects
- Alphabetical ordering within categories is mandatory
- README.md is the source of truth for all content
- Alphabetical ordering within categories is mandatory.
- Quality over quantity. Only "awesome" projects.
- One project per PR.
- README.md is the single source of content truth.
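The mandatory alphabetical ordering can be verified in a few lines. A minimal sketch (illustrative only, not the repository's actual enforcement script):

```python
def out_of_order(names: list[str]) -> list[tuple[str, str]]:
    """Return adjacent pairs that break case-insensitive alphabetical order."""
    keys = [n.lower() for n in names]
    return [
        (names[i], names[i + 1])
        for i in range(len(names) - 1)
        if keys[i] > keys[i + 1]
    ]

out_of_order(["cement", "click", "textual"])  # [] - already sorted
out_of_order(["click", "cement"])             # [('click', 'cement')]
```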

1 Makefile
@@ -14,7 +14,6 @@ build:
	uv run python website/build.py

preview: build
	@echo "Check the website on http://localhost:8000"
	uv run watchmedo shell-command \
		--patterns='*.md;*.html;*.css;*.js;*.py' \
		--recursive \

61 README.md
@@ -1,8 +1,10 @@
# Awesome Python

An opinionated list of awesome Python frameworks, libraries, tools, software and resources.
An opinionated list of Python frameworks, libraries, tools, and resources.

> The **#10 most-starred repo on GitHub**. Put your product where Python developers discover tools. [Become a sponsor](SPONSORSHIP.md).
# **Sponsors**

> The **#10 most-starred repo on GitHub**. Put your product in front of Python developers. [Become a sponsor](SPONSORSHIP.md).

# Categories

@@ -15,7 +17,7 @@ An opinionated list of awesome Python frameworks, libraries, tools, software and
- [Computer Vision](#computer-vision)
- [Recommender Systems](#recommender-systems)

**Web**
**Web Development**

- [Web Frameworks](#web-frameworks)
- [Web APIs](#web-apis)
@@ -125,17 +127,23 @@ An opinionated list of awesome Python frameworks, libraries, tools, software and

_Libraries for building AI applications, LLM integrations, and autonomous agents._

- Frameworks
- Agent Skills
  - [django-ai-plugins](https://github.com/vintasoftware/django-ai-plugins) - Django backend agent skills for Django, DRF, Celery, and Django-specific code review.
  - [sentry-skills](https://github.com/getsentry/skills) - Python-focused engineering skills for code review, debugging, and backend workflows.
  - [trailofbits-skills](https://github.com/trailofbits/skills) - Python-friendly security skills for auditing, testing, and safer backend development.
- Orchestration
  - [autogen](https://github.com/microsoft/autogen) - A programming framework for building agentic AI applications.
  - [crewai](https://github.com/crewAIInc/crewAI) - A framework for orchestrating role-playing autonomous AI agents for collaborative task solving.
  - [dspy](https://github.com/stanfordnlp/dspy) - A framework for programming, not prompting, language models.
  - [instructor](https://github.com/567-labs/instructor) - A library for extracting structured data from LLMs, powered by Pydantic.
  - [langchain](https://github.com/langchain-ai/langchain) - Building applications with LLMs through composability.
  - [llama_index](https://github.com/run-llama/llama_index) - A data framework for your LLM application.
  - [pydantic-ai](https://github.com/pydantic/pydantic-ai) - A Python agent framework for building generative AI applications with structured schemas.
- Pretrained Models and Inference
  - [diffusers](https://github.com/huggingface/diffusers) - A library that provides pretrained diffusion models for generating and editing images, audio, and video.
  - [transformers](https://github.com/huggingface/transformers) - A framework that lets you easily use pretrained transformer models for NLP, vision, and audio tasks.
- Data Layer
  - [instructor](https://github.com/567-labs/instructor) - A library for extracting structured data from LLMs, powered by Pydantic.
  - [llama-index](https://github.com/run-llama/llama_index) - A data framework for your LLM application.
  - [mem0](https://github.com/mem0ai/mem0) - An intelligent memory layer for AI agents enabling personalized interactions.
- Pre-trained Models and Inference
  - [diffusers](https://github.com/huggingface/diffusers) - A library that provides pre-trained diffusion models for generating and editing images, audio, and video.
  - [transformers](https://github.com/huggingface/transformers) - A framework that lets you easily use pre-trained transformer models for NLP, vision, and audio tasks.
  - [vllm](https://github.com/vllm-project/vllm) - A high-throughput and memory-efficient inference and serving engine for LLMs.

## Deep Learning
@@ -193,7 +201,7 @@ _Libraries for building recommender systems._
- [implicit](https://github.com/benfred/implicit) - A fast Python implementation of collaborative filtering for implicit datasets.
- [scikit-surprise](https://github.com/NicolasHug/Surprise) - A scikit for building and analyzing recommender systems.

**Web**
**Web Development**

## Web Frameworks

@@ -543,7 +551,6 @@ _Python implementation of data structures, algorithms and design patterns. Also
- [sortedcontainers](https://github.com/grantjenks/python-sortedcontainers) - Fast and pure-Python implementation of sorted collections.
- [thealgorithms](https://github.com/TheAlgorithms/Python) - All Algorithms implemented in Python.
- Design Patterns
  - [python-cqrs](https://github.com/pypatterns/python-cqrs) - Event-Driven Architecture Framework with CQRS/CQS, Transaction Outbox, Saga orchestration.
  - [python-patterns](https://github.com/faif/python-patterns) - A collection of design patterns in Python.
  - [transitions](https://github.com/pytransitions/transitions) - A lightweight, object-oriented finite state machine implementation.

@@ -573,14 +580,15 @@ _Tools of static analysis, linters and code quality checkers. Also see [awesome-
- Code Formatters
  - [black](https://github.com/psf/black) - The uncompromising Python code formatter.
  - [isort](https://github.com/PyCQA/isort) - A Python utility / library to sort imports.
- Static Type Checkers, also see [awesome-python-typing](https://github.com/typeddjango/awesome-python-typing)
  - [ruff](https://github.com/astral-sh/ruff) - An extremely fast Python linter and code formatter.
- Refactoring
  - [rope](https://github.com/python-rope/rope) - Rope is a python refactoring library.
- Type Checkers - [awesome-python-typing](https://github.com/typeddjango/awesome-python-typing)
  - [mypy](https://github.com/python/mypy) - Check variable types during compile time.
  - [pyre-check](https://github.com/facebook/pyre-check) - Performant type checking.
  - [ty](https://github.com/astral-sh/ty) - An extremely fast Python type checker and language server.
  - [typeshed](https://github.com/python/typeshed) - Collection of library stubs for Python, with static types.
- Refactoring
  - [rope](https://github.com/python-rope/rope) - Rope is a python refactoring library.
- Static Type Annotations Generators
- Type Annotations Generators
  - [monkeytype](https://github.com/Instagram/MonkeyType) - A system for Python that generates static type annotations by collecting runtime types.
  - [pytype](https://github.com/google/pytype) - Pytype checks and infers types for Python code - without requiring type annotations.

@@ -588,7 +596,7 @@ _Tools of static analysis, linters and code quality checkers. Also see [awesome-

_Libraries for testing codebases and generating test data._

- Testing Frameworks
- Frameworks
  - [hypothesis](https://github.com/HypothesisWorks/hypothesis) - Hypothesis is an advanced Quickcheck style property based testing library.
  - [pytest](https://github.com/pytest-dev/pytest) - A mature full-featured Python testing tool.
  - [robotframework](https://github.com/robotframework/robotframework) - A generic test automation framework.
@@ -599,7 +607,7 @@ _Libraries for testing codebases and generating test data._
  - [tox](https://github.com/tox-dev/tox) - Auto builds and tests distributions in multiple Python versions.
- GUI / Web Testing
  - [locust](https://github.com/locustio/locust) - Scalable user load testing tool written in Python.
  - [playwright](https://github.com/microsoft/playwright-python) - Python version of the Playwright testing and automation library.
  - [playwright-python](https://github.com/microsoft/playwright-python) - Python version of the Playwright testing and automation library.
  - [pyautogui](https://github.com/asweigart/pyautogui) - PyAutoGUI is a cross-platform GUI automation Python module for human beings.
  - [schemathesis](https://github.com/schemathesis/schemathesis) - A tool for automatic property-based testing of web applications built with Open API / Swagger specifications.
  - [selenium](https://github.com/SeleniumHQ/selenium) - Python bindings for [Selenium](https://selenium.dev/) [WebDriver](https://selenium.dev/documentation/webdriver/).
@@ -737,11 +745,11 @@ _Tools and libraries for Virtual Networking and SDN (Software Defined Networking

**CLI & GUI**

## Command-line Interface Development
## CLI Development

_Libraries for building command-line applications._

- Command-line Application Development
- CLI Development
  - [argparse](https://docs.python.org/3/library/argparse.html) - (Python standard library) Command-line option and argument parsing.
  - [cement](https://github.com/datafolklabs/cement) - CLI Application Framework for Python.
  - [click](https://github.com/pallets/click/) - A package for creating beautiful command line interfaces in a composable way.
@@ -756,7 +764,7 @@ _Libraries for building command-line applications._
  - [textual](https://github.com/Textualize/textual) - A framework for building interactive user interfaces that run in the terminal and the browser.
  - [tqdm](https://github.com/tqdm/tqdm) - Fast, extensible progress bar for loops and CLI.

## Command-line Tools
## CLI Tools

_Useful CLI-based tools for productivity._

@@ -834,11 +842,11 @@ _Libraries for parsing and manipulating plain texts._
_Libraries for working with HTML and XML._

- [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) - Providing Pythonic idioms for iterating, searching, and modifying HTML or XML.
- [cssutils](https://github.com/jaraco/cssutils) - A CSS library for Python.
- [justhtml](https://github.com/EmilStenstrom/justhtml/) - A pure Python HTML5 parser that just works.
- [lxml](https://github.com/lxml/lxml) - A very fast, easy-to-use and versatile library for handling HTML and XML.
- [markupsafe](https://github.com/pallets/markupsafe) - Implements a XML/HTML/XHTML Markup safe string for Python.
- [pyquery](https://github.com/gawel/pyquery) - A jQuery-like library for parsing HTML.
- [tinycss2](https://github.com/Kozea/tinycss2) - A low-level CSS parser and generator written in Python.
- [xmltodict](https://github.com/martinblech/xmltodict) - Makes working with XML feel like you are working with JSON.

## File Format Processing
@@ -850,14 +858,14 @@ _Libraries for parsing and manipulating specific text formats._
- [kreuzberg](https://github.com/kreuzberg-dev/kreuzberg) - High-performance document extraction library with a Rust core, supporting 62+ formats including PDF, Office, images with OCR, HTML, email, and archives.
- [pyelftools](https://github.com/eliben/pyelftools) - Parsing and analyzing ELF files and DWARF debugging information.
- [tablib](https://github.com/jazzband/tablib) - A module for Tabular Datasets in XLS, CSV, JSON, YAML.
- Office
- MS Office
  - [docxtpl](https://github.com/elapouya/python-docx-template) - Editing a docx document by jinja2 template.
  - [openpyxl](https://openpyxl.readthedocs.io/en/stable/) - A library for reading and writing Excel 2010 xlsx/xlsm/xltx/xltm files.
  - [pyexcel](https://github.com/pyexcel/pyexcel) - Providing one API for reading, manipulating and writing csv, ods, xls, xlsx and xlsm files.
  - [python-docx](https://github.com/python-openxml/python-docx) - Reads, queries and modifies Microsoft Word 2007/2008 docx files.
  - [python-pptx](https://github.com/scanny/python-pptx) - Python library for creating and updating PowerPoint (.pptx) files.
  - [xlsxwriter](https://github.com/jmcnamara/XlsxWriter) - A Python module for creating Excel .xlsx files.
  - [xlwings](https://github.com/ZoomerAnalytics/xlwings) - A BSD-licensed library that makes it easy to call Python from Excel and vice versa.
  - [xlwings](https://github.com/xlwings/xlwings) - A BSD-licensed library that makes it easy to call Python from Excel and vice versa.
- PDF
  - [pdf_oxide](https://github.com/yfedoseev/pdf_oxide) - A fast PDF library for text extraction, image extraction, and markdown conversion, powered by Rust.
  - [pdfminer.six](https://github.com/pdfminer/pdfminer.six) - Pdfminer.six is a community maintained fork of the original PDFMiner.
@@ -891,14 +899,14 @@ _Libraries for file manipulation._

_Libraries for manipulating images._

- [pillow](https://github.com/python-pillow/Pillow) - Pillow is the friendly [PIL](http://www.pythonware.com/products/pil/) fork.
- [pillow](https://github.com/python-pillow/Pillow) - Pillow is the friendly [PIL](https://www.pythonware.com/products/pil/) fork.
- [pymatting](https://github.com/pymatting/pymatting) - A library for alpha matting.
- [python-barcode](https://github.com/WhyNotHugo/python-barcode) - Create barcodes in Python with no extra dependencies.
- [python-qrcode](https://github.com/lincolnloop/python-qrcode) - A pure Python QR Code generator.
- [pyvips](https://github.com/libvips/pyvips) - A fast image processing library with low memory needs.
- [scikit-image](https://github.com/scikit-image/scikit-image) - A Python library for (scientific) image processing.
- [thumbor](https://github.com/thumbor/thumbor) - A smart imaging service. It enables on-demand crop, re-sizing and flipping of images.
- [wand](https://github.com/emcconville/wand) - Python bindings for [MagickWand](http://www.imagemagick.org/script/magick-wand.php), C API for ImageMagick.
- [wand](https://github.com/emcconville/wand) - Python bindings for [MagickWand](https://www.imagemagick.org/script/magick-wand.php), C API for ImageMagick.

## Audio & Video Processing

@@ -1090,7 +1098,6 @@ Where to discover learning resources or new Python libraries.
- [Django Chat](https://djangochat.com/)
- [PyPodcats](https://pypodcats.live)
- [Python Bytes](https://pythonbytes.fm)
- [Python Test](https://podcast.pythontest.com/)
- [Talk Python To Me](https://talkpython.fm/)
- [The Real Python Podcast](https://realpython.com/podcasts/rpp/)

@@ -1100,4 +1107,4 @@ Your contributions are always welcome! Please take a look at the [contribution g

---

If you have any question about this opinionated list, do not hesitate to contact [@VintaChen](https://twitter.com/VintaChen) on Twitter.
If you have any question about this opinionated list, do not hesitate to contact [@vinta](https://x.com/vinta) on X (Twitter).
@@ -2,11 +2,11 @@

**The #10 most-starred repository on all of GitHub.**

awesome-python is where Python developers go to discover tools. When someone searches Google for "best Python libraries," they land here. When ChatGPT recommends Python tools, it references this list. When developers evaluate frameworks, this is the list they check.
awesome-python is where Python developers go to discover tools. It ranks on the first page of Google for "best Python libraries," is referenced by ChatGPT and other LLMs when recommending Python tools, and is the list developers check when evaluating frameworks.

Your sponsorship puts your product in front of developers at the exact moment they're choosing what to use.

## By the Numbers
## Audience

| Metric | Value |
| ------------ | ------ |
@@ -15,22 +15,35 @@ Your sponsorship puts your product in front of developers at the exact moment th
| Watchers |  |
| Contributors |  |

Top referrers: GitHub, Google Search, YouTube, Reddit, ChatGPT — developers actively searching for and evaluating Python tools.
**Who visits:** Professional Python developers evaluating libraries and tools for production use. Not beginners browsing tutorials. People making adoption decisions.

**Top referrers:** Google Search, GitHub, ChatGPT/LLMs, YouTube, Reddit, Hacker News.

## Sponsorship Tiers

### Logo Sponsor — $500/month (2 slots)
### Logo Sponsor - $500/month

Your logo and a one-line description at the top of the README, seen by every visitor.
Your logo and a one-line description pinned to the top of the README, above all project entries. Every visitor to the repo or awesome-python.com sees it first.

### Link Sponsor — $150/month (5 slots)
**What you get:**
- Logo + one-line description in the README header
- Logo on awesome-python.com sponsor section
- Permanent placement for the duration of your sponsorship

A text link with your product name at the top of the README, right below logo sponsors.
### Link Sponsor - $150/month

A text link with your product name at the top of the README, directly below logo sponsors.

**What you get:**
- Text link in the README header
- Link on awesome-python.com sponsor section

## Past Sponsors

- [Warp](https://www.warp.dev/) - https://github.com/vinta/awesome-python/pull/2766
- [Warp](https://www.warp.dev/) - The terminal for modern developers.

## Get Started

Email [vinta.chen@gmail.com](mailto:vinta.chen@gmail.com?subject=awesome-python%20Sponsorship) with your company name and preferred tier. Most sponsors are set up within 24 hours.
Email [vinta.chen@gmail.com](mailto:vinta.chen@gmail.com?subject=awesome-python%20Sponsorship) with your company name and preferred tier.

Setup takes less than 24 hours. Month-to-month billing, cancel anytime.

@@ -4,30 +4,12 @@
import json
import re
import shutil
from datetime import datetime, timezone
from pathlib import Path
from typing import TypedDict

from jinja2 import Environment, FileSystemLoader
from readme_parser import parse_readme, slugify


def group_categories(
    parsed_groups: list[dict],
    resources: list[dict],
) -> list[dict]:
    """Combine parsed groups with resources for template rendering."""
    groups = list(parsed_groups)

    if resources:
        groups.append(
            {
                "name": "Resources",
                "slug": slugify("Resources"),
                "categories": list(resources),
            }
        )

    return groups
from readme_parser import parse_readme


class StarData(TypedDict):
@@ -120,6 +102,11 @@ def extract_entries(
                existing["categories"].append(cat["name"])
            if group_name not in existing["groups"]:
                existing["groups"].append(group_name)
            subcat = entry["subcategory"]
            if subcat:
                scoped = f"{cat['name']} > {subcat}"
                if not any(s["value"] == scoped for s in existing["subcategories"]):
                    existing["subcategories"].append({"name": subcat, "value": scoped})
        else:
            merged = {
                "name": entry["name"],
@@ -127,6 +114,7 @@ def extract_entries(
                "description": entry["description"],
                "categories": [cat["name"]],
                "groups": [group_name],
                "subcategories": [{"name": entry["subcategory"], "value": f"{cat['name']} > {entry['subcategory']}"}] if entry["subcategory"] else [],
                "stars": None,
                "owner": None,
                "last_commit_at": None,
@@ -138,6 +126,13 @@ def extract_entries(
    return entries


def format_stars_short(stars: int) -> str:
    """Format star count as compact string like '230k'."""
    if stars >= 1000:
        return f"{stars // 1000}k"
    return str(stars)
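The helper truncates with floor division rather than rounding, so 1,999 stars renders as "1k". A quick check, restating the function above so the snippet is self-contained:

```python
def format_stars_short(stars: int) -> str:
    """Format star count as a compact string like '230k'."""
    if stars >= 1000:
        return f"{stars // 1000}k"  # floor division, no rounding
    return str(stars)

format_stars_short(230432)  # '230k'
format_stars_short(1999)    # '1k'
format_stars_short(999)     # '999'
```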


def build(repo_root: str) -> None:
    """Main build: parse README, render single-page HTML via Jinja2 templates."""
    repo = Path(repo_root)
@@ -151,14 +146,17 @@ def build(repo_root: str) -> None:
                subtitle = stripped
                break

    parsed_groups, resources = parse_readme(readme_text)
    parsed_groups = parse_readme(readme_text)

    categories = [cat for g in parsed_groups for cat in g["categories"]]
    total_entries = sum(c["entry_count"] for c in categories)
    groups = group_categories(parsed_groups, resources)
    entries = extract_entries(categories, groups)
    entries = extract_entries(categories, parsed_groups)

    stars_data = load_stars(website / "data" / "github_stars.json")

    repo_self = stars_data.get("vinta/awesome-python", {})
    repo_stars = format_stars_short(repo_self["stars"]) if "stars" in repo_self else None

    for entry in entries:
        repo_key = extract_github_repo(entry["url"])
        if not repo_key and entry.get("source_type") == "Built-in":
@@ -185,12 +183,12 @@ def build(repo_root: str) -> None:
    (site_dir / "index.html").write_text(
        tpl_index.render(
            categories=categories,
            resources=resources,
            groups=groups,
            subtitle=subtitle,
            entries=entries,
            total_entries=total_entries,
            total_categories=len(categories),
            repo_stars=repo_stars,
            build_date=datetime.now(timezone.utc).strftime("%B %d, %Y"),
        ),
        encoding="utf-8",
    )
@@ -202,7 +200,7 @@ def build(repo_root: str) -> None:

    (site_dir / "llms.txt").write_text(readme_text, encoding="utf-8")

    print(f"Built single page with {len(parsed_groups)} groups, {len(categories)} categories + {len(resources)} resources")
    print(f"Built single page with {len(parsed_groups)} groups, {len(categories)} categories")
    print(f"Total entries: {total_entries}")
    print(f"Output: {site_dir}")

@@ -103,6 +103,7 @@ def main() -> None:

    readme_text = README_PATH.read_text(encoding="utf-8")
    current_repos = extract_github_repos(readme_text)
    current_repos.add("vinta/awesome-python")
    print(f"Found {len(current_repos)} GitHub repos in README.md")

    cache = load_stars(CACHE_FILE)

@@ -20,6 +20,7 @@ class ParsedEntry(TypedDict):
    url: str
    description: str  # inline HTML, properly escaped
    also_see: list[AlsoSee]
    subcategory: str  # sub-category label, empty if none


class ParsedSection(TypedDict):
@@ -28,8 +29,6 @@ class ParsedSection(TypedDict):
    description: str  # plain text, links resolved to text
    entries: list[ParsedEntry]
    entry_count: int
    preview: str
    content_html: str  # rendered HTML, properly escaped


class ParsedGroup(TypedDict):
@@ -131,6 +130,7 @@ def _extract_description(nodes: list[SyntaxTreeNode]) -> str:
# --- Entry extraction --------------------------------------------------------

_DESC_SEP_RE = re.compile(r"^\s*[-\u2013\u2014]\s*")
_SUBCAT_TRAILING_RE = re.compile(r"[\s,\-\u2013\u2014]+(also\s+see\s*)?$", re.IGNORECASE)


def _find_child(node: SyntaxTreeNode, child_type: str) -> SyntaxTreeNode | None:
@@ -178,7 +178,11 @@ def _extract_description_html(inline: SyntaxTreeNode, first_link: SyntaxTreeNode
    return _DESC_SEP_RE.sub("", html)


def _parse_list_entries(bullet_list: SyntaxTreeNode) -> list[ParsedEntry]:
def _parse_list_entries(
    bullet_list: SyntaxTreeNode,
    *,
    subcategory: str = "",
) -> list[ParsedEntry]:
    """Extract entries from a bullet_list AST node.

    Handles three patterns:
@@ -199,10 +203,16 @@
        first_link = _find_first_link(inline)

        if first_link is None or not _is_leading_link(inline, first_link):
            # Subcategory label (plain text or text-before-link) — recurse into nested list
            # Subcategory label: take text before the first link, strip trailing separators
            pre_link = []
            for child in inline.children:
                if child.type == "link":
                    break
                pre_link.append(child)
            label = _SUBCAT_TRAILING_RE.sub("", render_inline_text(pre_link)) if pre_link else render_inline_text(inline.children)
            nested = _find_child(list_item, "bullet_list")
            if nested:
                entries.extend(_parse_list_entries(nested))
                entries.extend(_parse_list_entries(nested, subcategory=label))
            continue

        # Entry with a link
@@ -231,6 +241,7 @@
                url=url,
                description=desc_html,
                also_see=also_see,
                subcategory=subcategory,
            ))

    return entries
@@ -245,69 +256,6 @@ def _parse_section_entries(content_nodes: list[SyntaxTreeNode]) -> list[ParsedEntry]:
    return entries


# --- Content HTML rendering --------------------------------------------------


def _render_bullet_list_html(
    bullet_list: SyntaxTreeNode,
    *,
    is_sub: bool = False,
) -> str:
    """Render a bullet_list node to HTML with entry/entry-sub/subcat classes."""
    out: list[str] = []

    for list_item in bullet_list.children:
        if list_item.type != "list_item":
            continue

        inline = _find_inline(list_item)
        if inline is None:
            continue

        first_link = _find_first_link(inline)

        if first_link is None or not _is_leading_link(inline, first_link):
            # Subcategory label (plain text or text-before-link)
            label = str(escape(render_inline_text(inline.children)))
            out.append(f'<div class="subcat">{label}</div>')
            nested = _find_child(list_item, "bullet_list")
            if nested:
                out.append(_render_bullet_list_html(nested, is_sub=False))
            continue

        # Entry with a link
        name = str(escape(render_inline_text(first_link.children)))
        url = str(escape(first_link.attrGet("href") or ""))

        if is_sub:
            out.append(f'<div class="entry-sub"><a href="{url}">{name}</a></div>')
        else:
            desc = _extract_description_html(inline, first_link)
            if desc:
                out.append(
                    f'<div class="entry"><a href="{url}">{name}</a>'
                    f'<span class="sep">—</span>{desc}</div>'
                )
            else:
                out.append(f'<div class="entry"><a href="{url}">{name}</a></div>')

        # Nested items under an entry with a link are sub-entries
        nested = _find_child(list_item, "bullet_list")
        if nested:
            out.append(_render_bullet_list_html(nested, is_sub=True))

    return "\n".join(out)

def _render_section_html(content_nodes: list[SyntaxTreeNode]) -> str:
    """Render a section's content nodes to HTML."""
    parts: list[str] = []
    for node in content_nodes:
        if node.type == "bullet_list":
            parts.append(_render_bullet_list_html(node))
    return "\n".join(parts)


# --- Section splitting -------------------------------------------------------


@@ -317,45 +265,15 @@ def _build_section(name: str, body: list[SyntaxTreeNode]) -> ParsedSection:
    content_nodes = body[1:] if desc else body
    entries = _parse_section_entries(content_nodes)
    entry_count = len(entries) + sum(len(e["also_see"]) for e in entries)
    preview = ", ".join(e["name"] for e in entries[:4])
    content_html = _render_section_html(content_nodes)
    return ParsedSection(
        name=name,
        slug=slugify(name),
        description=desc,
        entries=entries,
        entry_count=entry_count,
        preview=preview,
        content_html=content_html,
    )


def _group_by_h2(
    nodes: list[SyntaxTreeNode],
) -> list[ParsedSection]:
    """Group AST nodes into sections by h2 headings."""
    sections: list[ParsedSection] = []
    current_name: str | None = None
    current_body: list[SyntaxTreeNode] = []

    def flush() -> None:
        nonlocal current_name
        if current_name is None:
            return
        sections.append(_build_section(current_name, current_body))
        current_name = None

    for node in nodes:
        if node.type == "heading" and node.tag == "h2":
            flush()
            current_name = _heading_text(node)
            current_body = []
        elif current_name is not None:
            current_body.append(node)

    flush()
    return sections

def _is_bold_marker(node: SyntaxTreeNode) -> str | None:
|
||||
"""Detect a bold-only paragraph used as a group marker.
|
||||
@ -432,43 +350,30 @@ def _parse_grouped_sections(
|
||||
return groups
|
||||
|
||||
|
||||
def parse_readme(text: str) -> tuple[list[ParsedGroup], list[ParsedSection]]:
|
||||
"""Parse README.md text into grouped categories and resources.
|
||||
def parse_readme(text: str) -> list[ParsedGroup]:
|
||||
"""Parse README.md text into grouped categories.
|
||||
|
||||
Returns (groups, resources) where groups is a list of ParsedGroup dicts
|
||||
containing nested categories, and resources is a flat list of ParsedSection.
|
||||
Returns a list of ParsedGroup dicts containing nested categories.
|
||||
Content between the thematic break (---) and # Resources or # Contributing
|
||||
is parsed as categories grouped by bold markers (**Group Name**).
|
||||
"""
|
||||
md = MarkdownIt("commonmark")
|
||||
tokens = md.parse(text)
|
||||
root = SyntaxTreeNode(tokens)
|
||||
children = root.children
|
||||
|
||||
# Find thematic break (---), # Resources, and # Contributing in one pass
|
||||
# Find thematic break (---) and section boundaries in one pass
|
||||
hr_idx = None
|
||||
resources_idx = None
|
||||
contributing_idx = None
|
||||
cat_end_idx = None
|
||||
for i, node in enumerate(children):
|
||||
if hr_idx is None and node.type == "hr":
|
||||
hr_idx = i
|
||||
elif node.type == "heading" and node.tag == "h1":
|
||||
text_content = _heading_text(node)
|
||||
if text_content == "Resources":
|
||||
resources_idx = i
|
||||
elif text_content == "Contributing":
|
||||
contributing_idx = i
|
||||
if cat_end_idx is None and text_content in ("Resources", "Contributing"):
|
||||
cat_end_idx = i
|
||||
if hr_idx is None:
|
||||
return [], []
|
||||
return []
|
||||
|
||||
# Slice into category and resource ranges
|
||||
cat_end = resources_idx or contributing_idx or len(children)
|
||||
cat_nodes = children[hr_idx + 1 : cat_end]
|
||||
|
||||
res_nodes: list[SyntaxTreeNode] = []
|
||||
if resources_idx is not None:
|
||||
res_end = contributing_idx or len(children)
|
||||
res_nodes = children[resources_idx + 1 : res_end]
|
||||
|
||||
groups = _parse_grouped_sections(cat_nodes)
|
||||
resources = _group_by_h2(res_nodes)
|
||||
|
||||
return groups, resources
|
||||
cat_nodes = children[hr_idx + 1 : cat_end_idx or len(children)]
|
||||
return _parse_grouped_sections(cat_nodes)
|
||||
|
||||
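The `_group_by_h2` / `parse_readme` changes above both hinge on one pattern: walk a flat node stream and flush the accumulated section each time a heading boundary appears. A stdlib-only sketch of that pattern, with plain tuples standing in for `SyntaxTreeNode` objects (the function name and node shapes here are illustrative, not the build.py API):

```python
# Minimal sketch of grouping a flat node stream into sections at h2 boundaries.
# Nodes are (kind, text) tuples standing in for markdown-it SyntaxTreeNode objects.
def group_by_h2(nodes):
    sections = []           # list of (heading, body-nodes) pairs
    name, body = None, []

    def flush():
        nonlocal name, body
        if name is not None:
            sections.append((name, body))
        name, body = None, []

    for kind, text in nodes:
        if kind == "h2":
            flush()                        # close the previous section
            name, body = text, []
        elif name is not None:
            body.append((kind, text))      # nodes before the first h2 are dropped

    flush()                                # close the trailing section
    return sections

nodes = [
    ("p", "intro"),                        # preamble before any h2, ignored
    ("h2", "Admin Panels"),
    ("list", "django-grappelli"),
    ("h2", "Asset Management"),
    ("list", "django-pipeline"),
]
print(group_by_h2(nodes))
```

The trailing `flush()` is what guarantees the last section is emitted even though no further heading follows it, mirroring the call at the end of `_group_by_h2`.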
@@ -1,75 +1,137 @@
// State
var activeFilter = null; // { type: "cat"|"group", value: "..." }
var activeSort = { col: 'stars', order: 'desc' };
var searchInput = document.querySelector('.search');
var filterBar = document.querySelector('.filter-bar');
var filterValue = document.querySelector('.filter-value');
var filterClear = document.querySelector('.filter-clear');
var noResults = document.querySelector('.no-results');
var rows = document.querySelectorAll('.table tbody tr.row');
var tags = document.querySelectorAll('.tag');
var tbody = document.querySelector('.table tbody');
const reducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)");

// Relative time formatting
function relativeTime(isoStr) {
var date = new Date(isoStr);
var now = new Date();
var diffMs = now - date;
var diffHours = Math.floor(diffMs / 3600000);
var diffDays = Math.floor(diffMs / 86400000);
if (diffHours < 1) return 'just now';
if (diffHours < 24) return diffHours === 1 ? '1 hour ago' : diffHours + ' hours ago';
if (diffDays === 1) return 'yesterday';
if (diffDays < 30) return diffDays + ' days ago';
var diffMonths = Math.floor(diffDays / 30);
if (diffMonths < 12) return diffMonths === 1 ? '1 month ago' : diffMonths + ' months ago';
var diffYears = Math.floor(diffDays / 365);
return diffYears === 1 ? '1 year ago' : diffYears + ' years ago';
function getScrollBehavior() {
return reducedMotion.matches ? "auto" : "smooth";
}

// Format all commit date cells
document.querySelectorAll('.col-commit[data-commit]').forEach(function (td) {
var time = td.querySelector('time');
let activeFilter = null;
let activeSort = { col: "stars", order: "desc" };
const searchInput = document.querySelector(".search");
const filterBar = document.querySelector(".filter-bar");
const filterValue = document.querySelector(".filter-value");
const filterClear = document.querySelector(".filter-clear");
const noResults = document.querySelector(".no-results");
const rows = document.querySelectorAll(".table tbody tr.row");
const tags = document.querySelectorAll(".tag");
const tbody = document.querySelector(".table tbody");

function initRevealSections() {
const sections = document.querySelectorAll("[data-reveal]");
if (!sections.length) return;

if (!("IntersectionObserver" in window)) {
sections.forEach(function (section) {
section.classList.add("is-visible");
});
return;
}

const observer = new IntersectionObserver(
function (entries) {
entries.forEach(function (entry) {
if (!entry.isIntersecting) return;
entry.target.classList.add("is-visible");
observer.unobserve(entry.target);
});
},
{
threshold: 0.12,
rootMargin: "0px 0px -8% 0px",
},
);

sections.forEach(function (section, index) {
section.classList.add("will-reveal");
section.style.transitionDelay = Math.min(index * 70, 180) + "ms";
observer.observe(section);
});
}

initRevealSections();

// Smooth scroll without hash in URL
document.querySelectorAll("[data-scroll-to]").forEach(function (link) {
link.addEventListener("click", function (e) {
const el = document.getElementById(link.dataset.scrollTo);
if (!el) return;
e.preventDefault();
el.scrollIntoView({ behavior: getScrollBehavior() });
});
});

// Pause hero animations when scrolled out of view
(function () {
const hero = document.querySelector(".hero");
if (!hero || !("IntersectionObserver" in window)) return;
const observer = new IntersectionObserver(function (entries) {
hero.classList.toggle("offscreen", !entries[0].isIntersecting);
});
observer.observe(hero);
})();

function relativeTime(isoStr) {
const date = new Date(isoStr);
const now = new Date();
const diffMs = now - date;
const diffHours = Math.floor(diffMs / 3600000);
const diffDays = Math.floor(diffMs / 86400000);
if (diffHours < 1) return "just now";
if (diffHours < 24)
return diffHours === 1 ? "1 hour ago" : diffHours + " hours ago";
if (diffDays === 1) return "yesterday";
if (diffDays < 30) return diffDays + " days ago";
const diffMonths = Math.floor(diffDays / 30);
if (diffMonths < 12)
return diffMonths === 1 ? "1 month ago" : diffMonths + " months ago";
const diffYears = Math.floor(diffDays / 365);
return diffYears === 1 ? "1 year ago" : diffYears + " years ago";
}

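Both the old and new versions of `relativeTime` above share the same bucket thresholds: hours, then days, 30-day months, and 365-day years. A Python sketch of the identical logic, handy for checking boundary cases like exactly 24 hours or 30 days (the function name is hypothetical, not part of main.js):

```python
def relative_time(diff_seconds):
    """Mirror of the JS relativeTime buckets: hours, days, months (30 d), years (365 d)."""
    hours = diff_seconds // 3600
    days = diff_seconds // 86400
    if hours < 1:
        return "just now"
    if hours < 24:
        return "1 hour ago" if hours == 1 else f"{hours} hours ago"
    if days == 1:
        return "yesterday"          # exactly 24-48 h falls through to here
    if days < 30:
        return f"{days} days ago"
    months = days // 30
    if months < 12:
        return "1 month ago" if months == 1 else f"{months} months ago"
    years = days // 365
    return "1 year ago" if years == 1 else f"{years} years ago"

print(relative_time(90))             # just now
print(relative_time(3 * 86400))      # 3 days ago
print(relative_time(400 * 86400))    # 1 year ago
```

Note the same quirk as the JS: 330-364 days report "11 months ago", while 365+ days switch to the year bucket, so "12 months ago" never appears.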
document.querySelectorAll(".col-commit[data-commit]").forEach(function (td) {
const time = td.querySelector("time");
if (time) time.textContent = relativeTime(td.dataset.commit);
});

// Store original row order for sort reset
document
.querySelectorAll(".expand-commit time[datetime]")
.forEach(function (time) {
time.textContent = relativeTime(time.getAttribute("datetime"));
});

rows.forEach(function (row, i) {
row._origIndex = i;
row._expandRow = row.nextElementSibling;
});

function collapseAll() {
var openRows = document.querySelectorAll('.table tbody tr.row.open');
if (!tbody) return;
const openRows = tbody.querySelectorAll("tr.row.open");
openRows.forEach(function (row) {
row.classList.remove('open');
row.setAttribute('aria-expanded', 'false');
row.classList.remove("open");
row.setAttribute("aria-expanded", "false");
});
}

function applyFilters() {
var query = searchInput ? searchInput.value.toLowerCase().trim() : '';
var visibleCount = 0;
const query = searchInput ? searchInput.value.toLowerCase().trim() : "";
let visibleCount = 0;

// Collapse all expanded rows on filter/search change
collapseAll();

rows.forEach(function (row) {
var show = true;
let show = true;

// Category/group filter
if (activeFilter) {
var attr = activeFilter.type === 'cat' ? row.dataset.cats : row.dataset.groups;
show = attr ? attr.split('||').indexOf(activeFilter.value) !== -1 : false;
const rowTags = row.dataset.tags;
show = rowTags ? rowTags.split("||").includes(activeFilter) : false;
}

// Text search
if (show && query) {
if (!row._searchText) {
var text = row.textContent.toLowerCase();
var next = row.nextElementSibling;
if (next && next.classList.contains('expand-row')) {
text += ' ' + next.textContent.toLowerCase();
let text = row.textContent.toLowerCase();
const next = row.nextElementSibling;
if (next && next.classList.contains("expand-row")) {
text += " " + next.textContent.toLowerCase();
}
row._searchText = text;
}
@@ -80,7 +142,7 @@ function applyFilters() {

if (show) {
visibleCount++;
var numCell = row.cells[0];
const numCell = row.cells[0];
if (numCell.textContent !== String(visibleCount)) {
numCell.textContent = String(visibleCount);
}
@@ -89,21 +151,16 @@ function applyFilters() {

if (noResults) noResults.hidden = visibleCount > 0;

// Update tag highlights
tags.forEach(function (tag) {
var isActive = activeFilter
&& tag.dataset.type === activeFilter.type
&& tag.dataset.value === activeFilter.value;
tag.classList.toggle('active', isActive);
tag.classList.toggle("active", activeFilter === tag.dataset.value);
});

// Filter bar
if (filterBar) {
if (activeFilter) {
filterBar.classList.add('visible');
if (filterValue) filterValue.textContent = activeFilter.value;
filterBar.classList.add("visible");
if (filterValue) filterValue.textContent = activeFilter;
} else {
filterBar.classList.remove('visible');
filterBar.classList.remove("visible");
}
}

@@ -111,40 +168,43 @@ function applyFilters() {
}

function updateURL() {
var params = new URLSearchParams();
var query = searchInput ? searchInput.value.trim() : '';
if (query) params.set('q', query);
const params = new URLSearchParams();
const query = searchInput ? searchInput.value.trim() : "";
if (query) params.set("q", query);
if (activeFilter) {
params.set(activeFilter.type === 'cat' ? 'category' : 'group', activeFilter.value);
params.set("filter", activeFilter);
}
if (activeSort.col !== 'stars' || activeSort.order !== 'desc') {
params.set('sort', activeSort.col);
params.set('order', activeSort.order);
if (activeSort.col !== "stars" || activeSort.order !== "desc") {
params.set("sort", activeSort.col);
params.set("order", activeSort.order);
}
var qs = params.toString();
history.replaceState(null, '', qs ? '?' + qs : location.pathname);
const qs = params.toString();
history.replaceState(null, "", qs ? "?" + qs : location.pathname);
}

function getSortValue(row, col) {
if (col === 'name') {
return row.querySelector('.col-name a').textContent.trim().toLowerCase();
if (col === "name") {
return row.querySelector(".col-name a").textContent.trim().toLowerCase();
}
if (col === 'stars') {
var text = row.querySelector('.col-stars').textContent.trim().replace(/,/g, '');
var num = parseInt(text, 10);
if (col === "stars") {
const text = row
.querySelector(".col-stars")
.textContent.trim()
.replace(/,/g, "");
const num = parseInt(text, 10);
return isNaN(num) ? -1 : num;
}
if (col === 'commit-time') {
var attr = row.querySelector('.col-commit').getAttribute('data-commit');
if (col === "commit-time") {
const attr = row.querySelector(".col-commit").getAttribute("data-commit");
return attr ? new Date(attr).getTime() : 0;
}
return 0;
}

function sortRows() {
var arr = Array.prototype.slice.call(rows);
var col = activeSort.col;
var order = activeSort.order;
const arr = Array.prototype.slice.call(rows);
const col = activeSort.col;
const order = activeSort.order;

// Cache sort values once to avoid DOM queries per comparison
arr.forEach(function (row) {
@@ -152,22 +212,22 @@ function sortRows() {
});

arr.sort(function (a, b) {
var aVal = a._sortVal;
var bVal = b._sortVal;
if (col === 'name') {
var cmp = aVal < bVal ? -1 : aVal > bVal ? 1 : 0;
const aVal = a._sortVal;
const bVal = b._sortVal;
if (col === "name") {
const cmp = aVal < bVal ? -1 : aVal > bVal ? 1 : 0;
if (cmp === 0) return a._origIndex - b._origIndex;
return order === 'desc' ? -cmp : cmp;
return order === "desc" ? -cmp : cmp;
}
if (aVal <= 0 && bVal <= 0) return a._origIndex - b._origIndex;
if (aVal <= 0) return 1;
if (bVal <= 0) return -1;
var cmp = aVal - bVal;
const cmp = aVal - bVal;
if (cmp === 0) return a._origIndex - b._origIndex;
return order === 'desc' ? -cmp : cmp;
return order === "desc" ? -cmp : cmp;
});

var frag = document.createDocumentFragment();
const frag = document.createDocumentFragment();
arr.forEach(function (row) {
frag.appendChild(row);
frag.appendChild(row._expandRow);
@@ -176,78 +236,86 @@ function sortRows() {
applyFilters();
}

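The comparator in `sortRows` above encodes two ordering rules worth noting: rows with no usable value (`aVal <= 0`) always sink to the bottom regardless of sort direction, and ties fall back to `_origIndex`, making the sort stable. The same ordering can be sketched in Python as a sort key (names here are illustrative, not the main.js code):

```python
def sort_rows(rows, order="desc"):
    """rows: list of (orig_index, value) pairs. Values <= 0 mean "missing" and
    always sort last; ties keep original order, matching the JS comparator."""
    sign = -1 if order == "desc" else 1
    return sorted(
        rows,
        key=lambda r: (
            r[1] <= 0,                       # missing values group last
            sign * r[1] if r[1] > 0 else 0,  # direction applies only to real values
            r[0],                            # stable tiebreak on original index
        ),
    )

rows = [(0, 500), (1, 0), (2, 1200), (3, 500)]
print(sort_rows(rows))  # [(2, 1200), (0, 500), (3, 500), (1, 0)]
```

Expressing the rules as a key tuple avoids a hand-written comparator entirely; the JS version needs explicit branches because `Array.prototype.sort` takes a comparator, not a key function.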
const sortHeaders = document.querySelectorAll("th[data-sort]");

function updateSortIndicators() {
document.querySelectorAll('th[data-sort]').forEach(function (th) {
th.classList.remove('sort-asc', 'sort-desc');
if (activeSort && th.dataset.sort === activeSort.col) {
th.classList.add('sort-' + activeSort.order);
sortHeaders.forEach(function (th) {
th.classList.remove("sort-asc", "sort-desc");
if (th.dataset.sort === activeSort.col) {
th.classList.add("sort-" + activeSort.order);
th.setAttribute(
"aria-sort",
activeSort.order === "asc" ? "ascending" : "descending",
);
} else {
th.removeAttribute("aria-sort");
}
});
}

// Expand/collapse: event delegation on tbody
if (tbody) {
tbody.addEventListener('click', function (e) {
tbody.addEventListener("click", function (e) {
// Don't toggle if clicking a link or tag button
if (e.target.closest('a') || e.target.closest('.tag')) return;
if (e.target.closest("a") || e.target.closest(".tag")) return;

var row = e.target.closest('tr.row');
const row = e.target.closest("tr.row");
if (!row) return;

var isOpen = row.classList.contains('open');
const isOpen = row.classList.contains("open");
if (isOpen) {
row.classList.remove('open');
row.setAttribute('aria-expanded', 'false');
row.classList.remove("open");
row.setAttribute("aria-expanded", "false");
} else {
row.classList.add('open');
row.setAttribute('aria-expanded', 'true');
row.classList.add("open");
row.setAttribute("aria-expanded", "true");
}
});

// Keyboard: Enter or Space on focused .row toggles expand
tbody.addEventListener('keydown', function (e) {
if (e.key !== 'Enter' && e.key !== ' ') return;
var row = e.target.closest('tr.row');
tbody.addEventListener("keydown", function (e) {
if (e.key !== "Enter" && e.key !== " ") return;
const row = e.target.closest("tr.row");
if (!row) return;
e.preventDefault();
row.click();
});
}

// Tag click: filter by category or group
tags.forEach(function (tag) {
tag.addEventListener('click', function (e) {
tag.addEventListener("click", function (e) {
e.preventDefault();
var type = tag.dataset.type;
var value = tag.dataset.value;

// Toggle: click same filter again to clear
if (activeFilter && activeFilter.type === type && activeFilter.value === value) {
activeFilter = null;
} else {
activeFilter = { type: type, value: value };
}
const value = tag.dataset.value;
activeFilter = activeFilter === value ? null : value;
applyFilters();
});
});

// Clear filter
if (filterClear) {
filterClear.addEventListener('click', function () {
filterClear.addEventListener("click", function () {
activeFilter = null;
applyFilters();
});
}

// Column sorting
document.querySelectorAll('th[data-sort]').forEach(function (th) {
th.addEventListener('click', function () {
var col = th.dataset.sort;
var defaultOrder = col === 'name' ? 'asc' : 'desc';
var altOrder = defaultOrder === 'asc' ? 'desc' : 'asc';
if (activeSort && activeSort.col === col) {
if (activeSort.order === defaultOrder) activeSort = { col: col, order: altOrder };
else activeSort = { col: 'stars', order: 'desc' };
const noResultsClear = document.querySelector(".no-results-clear");
if (noResultsClear) {
noResultsClear.addEventListener("click", function () {
if (searchInput) searchInput.value = "";
activeFilter = null;
applyFilters();
});
}

sortHeaders.forEach(function (th) {
th.addEventListener("click", function () {
const col = th.dataset.sort;
const defaultOrder = col === "name" ? "asc" : "desc";
const altOrder = defaultOrder === "asc" ? "desc" : "asc";
if (activeSort.col === col) {
if (activeSort.order === defaultOrder)
activeSort = { col: col, order: altOrder };
else activeSort = { col: "stars", order: "desc" };
} else {
activeSort = { col: col, order: defaultOrder };
}
@@ -256,22 +324,27 @@ document.querySelectorAll('th[data-sort]').forEach(function (th) {
});
});

// Search input
if (searchInput) {
var searchTimer;
searchInput.addEventListener('input', function () {
let searchTimer;
searchInput.addEventListener("input", function () {
clearTimeout(searchTimer);
searchTimer = setTimeout(applyFilters, 150);
});

// Keyboard shortcuts
document.addEventListener('keydown', function (e) {
if (e.key === '/' && !['INPUT', 'TEXTAREA', 'SELECT'].includes(document.activeElement.tagName) && !e.ctrlKey && !e.metaKey) {
document.addEventListener("keydown", function (e) {
if (
e.key === "/" &&
!["INPUT", "TEXTAREA", "SELECT"].includes(
document.activeElement.tagName,
) &&
!e.ctrlKey &&
!e.metaKey
) {
e.preventDefault();
searchInput.focus();
}
if (e.key === 'Escape' && document.activeElement === searchInput) {
searchInput.value = '';
if (e.key === "Escape" && document.activeElement === searchInput) {
searchInput.value = "";
activeFilter = null;
applyFilters();
searchInput.blur();
@@ -279,39 +352,60 @@ if (searchInput) {
});
}

// Back to top
var backToTop = document.querySelector('.back-to-top');
const backToTop = document.querySelector(".back-to-top");
const resultsSection = document.querySelector("#library-index");
const tableWrap = document.querySelector(".table-wrap");
const stickyHeaderCell = backToTop ? backToTop.closest("th") : null;

function updateBackToTopVisibility() {
if (!backToTop || !tableWrap || !stickyHeaderCell) return;

const tableRect = tableWrap.getBoundingClientRect();
const headRect = stickyHeaderCell.getBoundingClientRect();
const hasPassedHeader = tableRect.top <= 0 && headRect.bottom > 0;

backToTop.classList.toggle("visible", hasPassedHeader);
}

if (backToTop) {
var scrollTicking = false;
window.addEventListener('scroll', function () {
let scrollTicking = false;
window.addEventListener("scroll", function () {
if (!scrollTicking) {
requestAnimationFrame(function () {
backToTop.classList.toggle('visible', window.scrollY > 600);
updateBackToTopVisibility();
scrollTicking = false;
});
scrollTicking = true;
}
});
backToTop.addEventListener('click', function () {
window.scrollTo({ top: 0 });

window.addEventListener("resize", updateBackToTopVisibility);

backToTop.addEventListener("click", function () {
const target = searchInput || resultsSection;
if (!target) return;
target.scrollIntoView({ behavior: getScrollBehavior(), block: "center" });
if (searchInput) searchInput.focus();
});

updateBackToTopVisibility();
}

// Restore state from URL
(function () {
var params = new URLSearchParams(location.search);
var q = params.get('q');
var cat = params.get('category');
var group = params.get('group');
var sort = params.get('sort');
var order = params.get('order');
const params = new URLSearchParams(location.search);
const q = params.get("q");
const filter = params.get("filter");
const sort = params.get("sort");
const order = params.get("order");
if (q && searchInput) searchInput.value = q;
if (cat) activeFilter = { type: 'cat', value: cat };
else if (group) activeFilter = { type: 'group', value: group };
if ((sort === 'name' || sort === 'stars' || sort === 'commit-time') && (order === 'desc' || order === 'asc')) {
if (filter) activeFilter = filter;
if (
(sort === "name" || sort === "stars" || sort === "commit-time") &&
(order === "desc" || order === "asc")
) {
activeSort = { col: sort, order: order };
}
if (q || cat || group || sort) {
if (q || filter || sort) {
sortRows();
}
updateSortIndicators();

BIN
website/static/og-image.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 146 KiB
54
website/static/og-image.svg
Normal file
@@ -0,0 +1,54 @@
<svg xmlns="http://www.w3.org/2000/svg" width="1200" height="630" viewBox="0 0 1200 630">
<defs>
<linearGradient id="bg" x1="0" y1="0" x2="1200" y2="630" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#1a1310"/>
<stop offset="52%" stop-color="#221a14"/>
<stop offset="100%" stop-color="#352a1e"/>
</linearGradient>
<radialGradient id="w1" cx="0.18" cy="0.18" r="0.45">
<stop offset="0%" stop-color="#8b5a2b" stop-opacity="0.30"/>
<stop offset="100%" stop-color="#8b5a2b" stop-opacity="0"/>
</radialGradient>
<radialGradient id="w2" cx="0.78" cy="0.35" r="0.50">
<stop offset="0%" stop-color="#9a7a3a" stop-opacity="0.14"/>
<stop offset="100%" stop-color="#9a7a3a" stop-opacity="0"/>
</radialGradient>
<linearGradient id="gold" x1="0" y1="0" x2="1" y2="0">
<stop offset="0%" stop-color="#d4b882"/>
<stop offset="100%" stop-color="#c08a40"/>
</linearGradient>
<pattern id="grid" width="112" height="112" patternUnits="userSpaceOnUse">
<path d="M0 112V0h112" fill="none" stroke="#ffffff" stroke-opacity="0.035" stroke-width="1"/>
</pattern>
</defs>

<!-- Background -->
<rect width="1200" height="630" fill="url(#bg)"/>
<rect width="1200" height="630" fill="url(#w1)"/>
<rect width="1200" height="630" fill="url(#w2)"/>
<rect width="1200" height="630" fill="url(#grid)"/>

<!-- Top accent bar -->
<rect width="1200" height="4" fill="url(#gold)"/>

<!-- Python logo watermark -->
<g transform="translate(896, 190) scale(7)" opacity="0.10">
<path d="M8 2h16a6 6 0 0 1 6 6v8H2V8a6 6 0 0 1 6-6z" fill="#f0c73e"/>
<path d="M2 16h28v8a6 6 0 0 1-6 6H8a6 6 0 0 1-6-6z" fill="#6b93c4"/>
<circle cx="11.5" cy="9.5" r="2.2" fill="#6b93c4"/>
<circle cx="20.5" cy="22.5" r="2.2" fill="#f0c73e"/>
</g>

<!-- Kicker -->
<text x="80" y="200" font-family="'Helvetica Neue', Helvetica, Arial, sans-serif" font-size="16" font-weight="700" fill="#c9ba9e" letter-spacing="2">THE FIELD GUIDE TO THE PYTHON ECOSYSTEM</text>

<!-- Title -->
<text x="80" y="326" font-family="Georgia, 'Times New Roman', 'Palatino Linotype', serif" font-size="96" fill="#f5f0e8" letter-spacing="-2">Awesome Python</text>

<!-- Subtitle -->
<text x="80" y="394" font-family="'Helvetica Neue', Helvetica, Arial, sans-serif" font-size="26" fill="#b5a890">An opinionated list of awesome Python frameworks,</text>
<text x="80" y="430" font-family="'Helvetica Neue', Helvetica, Arial, sans-serif" font-size="26" fill="#b5a890">libraries, tools, and resources.</text>

<!-- URL -->
<text x="80" y="556" font-family="'Helvetica Neue', Helvetica, Arial, sans-serif" font-size="20" font-weight="700" fill="#c08a40" letter-spacing="0.5">awesome-python.com</text>
</svg>

After Width: | Height: | Size: 2.7 KiB
File diff suppressed because it is too large
@@ -6,18 +6,36 @@
<title>{% block title %}Awesome Python{% endblock %}</title>
<meta
name="description"
content="{% block description %}An opinionated list of awesome Python frameworks, libraries, software and resources. {{ total_entries }} libraries across {{ categories | length }} categories.{% endblock %}"
content="{% block description %}An opinionated list of Python frameworks, libraries, tools, and resources. {{ total_entries }} projects across {{ categories | length }} categories.{% endblock %}"
/>
<link rel="canonical" href="https://awesome-python.com/" />
<meta property="og:type" content="website" />
<meta property="og:title" content="Awesome Python" />
<meta
property="og:description"
content="An opinionated list of awesome Python frameworks, libraries, software and resources."
content="An opinionated list of Python frameworks, libraries, tools, and resources."
/>
<meta
property="og:image"
content="https://awesome-python.com/static/og-image.png"
/>
<meta property="og:url" content="https://awesome-python.com/" />
<meta name="twitter:card" content="summary" />
<meta name="theme-color" content="#1c1410" />
<link rel="icon" href="/static/favicon.svg" type="image/svg+xml" />
<link rel="preconnect" href="https://fonts.googleapis.com" />
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
<link
href="https://fonts.googleapis.com/css2?family=Cormorant+Garamond:wght@600&family=Manrope:wght@400;600;700;800&display=swap"
rel="stylesheet"
media="print"
onload="this.media = 'all'"
/>
<noscript
><link
href="https://fonts.googleapis.com/css2?family=Cormorant+Garamond:wght@600&family=Manrope:wght@400;600;700;800&display=swap"
rel="stylesheet"
/></noscript>
<link rel="stylesheet" href="/static/style.css" />
<script
async
@@ -35,9 +53,21 @@

<body>
<a href="#content" class="skip-link">Skip to content</a>

<main id="content">{% block content %}{% endblock %}</main>
{% block header %}{% endblock %}
<main id="content">
<noscript
><p class="noscript-msg">
Search and filtering require JavaScript.
</p></noscript
>
{% block content %}{% endblock %}
</main>

<footer class="footer">
<div class="footer-left">
<span class="footer-brand">Awesome Python</span>
</div>
<div class="footer-links">
<span
>Made by
<a href="https://vinta.ws/" target="_blank" rel="noopener"
@@ -49,16 +79,12 @@
>GitHub</a
>
<span class="footer-sep">/</span>
<a href="https://twitter.com/vinta" target="_blank" rel="noopener"
>Twitter</a
<a href="https://x.com/vinta" target="_blank" rel="noopener"
>X (Twitter)</a
>
</div>
</footer>

<noscript
><p class="noscript-msg">
JavaScript is needed for search and filtering.
</p></noscript
>
<script src="/static/main.js"></script>
</body>
</html>

@@ -1,7 +1,26 @@
{% extends "base.html" %} {% block content %}
{% extends "base.html" %}
{% block header %}
<header class="hero">
  <div class="hero-main">
    <div>
  <div class="hero-sheen" aria-hidden="true"></div>
  <div class="hero-noise" aria-hidden="true"></div>

  <div class="hero-shell">
    <nav class="hero-topbar" aria-label="Site">
      <span class="hero-brand-mini">Awesome Python</span>
      <div class="hero-topbar-actions">
        <a
          href="https://github.com/vinta/awesome-python/blob/master/CONTRIBUTING.md"
          class="hero-topbar-link hero-topbar-link-strong"
          target="_blank"
          rel="noopener"
          >Submit a project</a
        >
      </div>
    </nav>

    <div class="hero-grid">
      <div class="hero-copy">
        <p class="hero-kicker">The field guide to the Python ecosystem</p>
        <h1>Awesome Python</h1>
        <p class="hero-sub">
          {{ subtitle }}<br />Maintained by
@@ -16,26 +35,49 @@
          >@JinyangWang27</a
          >.
        </p>

        <div class="hero-actions">
          <a
            href="#library-index"
            class="hero-action hero-action-primary"
            data-scroll-to="library-index"
            >Browse the List</a
          >
          <a
            href="https://github.com/vinta/awesome-python"
            class="hero-gh"
            class="hero-action hero-action-secondary"
            target="_blank"
            rel="noopener"
            >awesome-python on GitHub →</a
            >View on GitHub</a
          >
        </div>
        <a
          href="https://github.com/vinta/awesome-python/blob/master/CONTRIBUTING.md"
          class="hero-submit"
          target="_blank"
          rel="noopener"
          >Submit a Project</a
        >

        {% if repo_stars or build_date %}
        <p class="hero-proof">
          {% if repo_stars %}{{ repo_stars }}+ stars on GitHub{% endif %} {% if
          repo_stars and build_date %}/{% endif %} {% if build_date %}Updated {{
          build_date }}{% endif %}
        </p>
        {% endif %}
      </div>
    </div>
  </div>
</header>
{% endblock %}
{% block content %}
<section class="results-section" id="library-index">
  <div class="results-intro section-shell" data-reveal>
    <div>
      <h2>Search every project in one place</h2>
    </div>
    <p class="results-note">
      Press <kbd>/</kbd> to search. Tap a tag to filter. Click any row for
      details.
    </p>
  </div>

  <div class="controls section-shell" data-reveal>
  <h2 class="sr-only">Search and filter</h2>
  <div class="controls">
    <div class="search-wrap">
      <svg
        class="search-icon"
@@ -54,30 +96,43 @@
      <input
        type="search"
        class="search"
        placeholder="Search {{ entries | length }} libraries across {{ total_categories }} categories..."
        aria-label="Search libraries"
        placeholder="Search {{ entries | length }} projects across {{ total_categories }} categories..."
        aria-label="Search projects"
      />
    </div>
    <div class="filter-bar">
      <span>Showing <strong class="filter-value"></strong></span>
    <div class="filter-bar" aria-live="polite">
      <span>Filtering for <strong class="filter-value"></strong></span>
      <button class="filter-clear" aria-label="Clear filter">
        × Clear
        Clear filter
      </button>
    </div>
  </div>

  <h2 class="sr-only">Results</h2>
  <div class="table-wrap" tabindex="0" role="region" aria-label="Libraries table">
  <div
    class="table-wrap"
    tabindex="0"
    role="region"
    aria-label="Libraries table"
  >
    <table class="table">
      <thead>
        <tr>
          <th class="col-num"><span class="sr-only">#</span></th>
          <th class="col-name" data-sort="name">Project Name</th>
          <th class="col-stars" data-sort="stars">GitHub Stars</th>
          <th class="col-commit" data-sort="commit-time">Last Commit</th>
          <th class="col-cat">Category</th>
          <th class="col-num"><span class="sr-only">Row number</span></th>
          <th class="col-name" data-sort="name">
            <button type="button" class="sort-btn">Project Name</button>
          </th>
          <th class="col-stars" data-sort="stars">
            <button type="button" class="sort-btn">GitHub Stars</button>
          </th>
          <th class="col-commit" data-sort="commit-time">
            <button type="button" class="sort-btn">Last Commit</button>
          </th>
          <th class="col-cat">Tags</th>
          <th class="col-arrow">
            <button class="back-to-top" aria-label="Back to top">↑</button>
            <button class="back-to-top" aria-label="Back to top">
              Top ↑
            </button>
          </th>
        </tr>
      </thead>
@@ -85,9 +140,7 @@
        {% for entry in entries %}
        <tr
          class="row"
          role="button"
          data-cats="{{ entry.categories | join('||') }}{% if entry.source_type == 'Built-in' %}||Built-in{% endif %}"
          data-groups="{{ entry.groups | join('||') }}"
          data-tags="{{ entry.categories | join('||') }}{% if entry.subcategories %}||{{ entry.subcategories | map(attribute='value') | join('||') }}{% endif %}||{{ entry.groups | join('||') }}{% if entry.source_type == 'Built-in' %}||Built-in{% endif %}"
          tabindex="0"
          aria-expanded="false"
          aria-controls="expand-{{ loop.index }}"
@@ -97,11 +150,16 @@
            <a href="{{ entry.url }}" target="_blank" rel="noopener"
              >{{ entry.name }}</a
            >
            <span class="mobile-cat"
              >{% if entry.subcategories %}{{ entry.subcategories[0].name }}{%
              else %}{{ entry.categories[0] }}{% endif %}</span
            >
          </td>
          <td class="col-stars">
            {% if entry.stars is not none %}{{ "{:,}".format(entry.stars) }}{%
            elif entry.source_type %}<span class="source-badge">{{ entry.source_type }}</span>{%
            else %}—{% endif %}
            elif entry.source_type %}<span class="source-badge"
              >{{ entry.source_type }}</span
            >{% else %}—{% endif %}
          </td>
          <td
            class="col-commit"
@@ -119,19 +177,21 @@
            >{% else %}—{% endif %}
          </td>
          <td class="col-cat">
            {% for cat in entry.categories %}
            <button class="tag" data-type="cat" data-value="{{ cat }}">
              {{ cat }}
            {% for subcat in entry.subcategories %}
            <button class="tag" data-value="{{ subcat.value }}">
              {{ subcat.name }}
            </button>
            {% endfor %} {% for cat in entry.categories %}
            <button class="tag" data-value="{{ cat }}">{{ cat }}</button>
            {% endfor %}
            <button class="tag tag-group" data-value="{{ entry.groups[0] }}">
              {{ entry.groups[0] }}
            </button>
            {% if entry.source_type == 'Built-in' %}
            <button class="tag tag-source" data-type="cat" data-value="Built-in">
            <button class="tag tag-source" data-value="Built-in">
              Built-in
            </button>
            {% endif %}
            <button class="tag tag-group" data-type="group" data-value="{{ entry.groups[0] }}">
              {{ entry.groups[0] }}
            </button>
          </td>
          <td class="col-arrow"><span class="arrow">→</span></td>
        </tr>
@@ -163,6 +223,12 @@
              rel="noopener"
              >{{ entry.url | replace("https://", "") }}</a
            >
            {% if entry.last_commit_at %}<span class="expand-commit"
              ><span class="expand-sep">/</span
              ><time datetime="{{ entry.last_commit_at }}"
                >{{ entry.last_commit_at[:10] }}</time
              ></span
            >{% endif %}
          </div>
        </div>
      </td>
@@ -173,5 +239,36 @@
    </table>
  </div>

  <div class="no-results" hidden>No libraries match your search.</div>
  <div class="no-results" hidden>
    <p>No projects match your search or filter.</p>
    <p class="no-results-hint">
      Try a broader term, or
      <button class="no-results-clear">browse all projects</button>.
    </p>
  </div>
</section>

<section class="final-cta" data-reveal>
  <div class="section-shell">
    <p class="section-label">Contribute</p>
    <h2>Know a project that belongs here?</h2>
    <p>Tell us what it does and why it stands out.</p>
    <div class="final-cta-actions">
      <a
        href="https://github.com/vinta/awesome-python/blob/master/CONTRIBUTING.md"
        class="hero-action hero-action-primary"
        target="_blank"
        rel="noopener"
        >Submit a project</a
      >
      <a
        href="https://github.com/vinta/awesome-python"
        class="hero-action hero-action-secondary"
        target="_blank"
        rel="noopener"
        >Star the repository</a
      >
    </div>
  </div>
</section>
{% endblock %}

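The `data-tags` attribute in the row markup above concatenates categories, subcategory values, groups, and an optional `Built-in` marker with `||`, which is what the tag buttons filter against. A toy Python sketch of that joining logic, for reference only (`build_data_tags` is a hypothetical helper, not a project function):

```python
# Toy reconstruction of the data-tags value the template builds per row.
# build_data_tags is a hypothetical helper, not part of the project.
def build_data_tags(entry):
    parts = list(entry["categories"])
    if entry.get("subcategories"):
        parts += [s["value"] for s in entry["subcategories"]]
    parts += entry["groups"]
    if entry.get("source_type") == "Built-in":
        parts.append("Built-in")
    return "||".join(parts)

tags = build_data_tags({
    "categories": ["HTTP Clients"],
    "subcategories": [{"value": "sync"}],
    "groups": ["Web"],
    "source_type": None,
})
# tags == "HTTP Clients||sync||Web"
```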
@@ -7,12 +7,14 @@ from pathlib import Path

from build import (
    build,
    detect_source_type,
    extract_entries,
    extract_github_repo,
    group_categories,
    format_stars_short,
    load_stars,
    sort_entries,
)
from readme_parser import slugify
from readme_parser import parse_readme, slugify

# ---------------------------------------------------------------------------
# slugify
@@ -42,40 +44,6 @@ class TestSlugify:
        assert slugify(" Date and Time ") == "date-and-time"


# ---------------------------------------------------------------------------
# group_categories
# ---------------------------------------------------------------------------


class TestGroupCategories:
    def test_appends_resources(self):
        parsed_groups = [
            {"name": "G1", "slug": "g1", "categories": [{"name": "Cat1"}]},
        ]
        resources = [{"name": "Newsletters", "slug": "newsletters"}]
        groups = group_categories(parsed_groups, resources)
        group_names = [g["name"] for g in groups]
        assert "G1" in group_names
        assert "Resources" in group_names

    def test_no_resources_no_extra_group(self):
        parsed_groups = [
            {"name": "G1", "slug": "g1", "categories": [{"name": "Cat1"}]},
        ]
        groups = group_categories(parsed_groups, [])
        assert len(groups) == 1
        assert groups[0]["name"] == "G1"

    def test_preserves_group_order(self):
        parsed_groups = [
            {"name": "Second", "slug": "second", "categories": [{"name": "C2"}]},
            {"name": "First", "slug": "first", "categories": [{"name": "C1"}]},
        ]
        groups = group_categories(parsed_groups, [])
        assert groups[0]["name"] == "Second"
        assert groups[1]["name"] == "First"


# ---------------------------------------------------------------------------
# build (integration)
# ---------------------------------------------------------------------------
@@ -94,19 +62,13 @@ class TestBuild:
        )
        (tpl_dir / "index.html").write_text(
            '{% extends "base.html" %}{% block content %}'
            "{% for group in groups %}"
            '<section class="group">'
            "<h2>{{ group.name }}</h2>"
            "{% for cat in group.categories %}"
            '<div class="row" id="{{ cat.slug }}">'
            "<span>{{ cat.name }}</span>"
            "<span>{{ cat.preview }}</span>"
            "<span>{{ cat.entry_count }}</span>"
            '<div class="row-content" hidden>{{ cat.content_html | safe }}</div>'
            "{% for entry in entries %}"
            '<div class="row">'
            "<span>{{ entry.name }}</span>"
            "<span>{{ entry.categories | join(', ') }}</span>"
            "<span>{{ entry.groups | join(', ') }}</span>"
            "</div>"
            "{% endfor %}"
            "</section>"
            "{% endfor %}"
            "{% endblock %}",
            encoding="utf-8",
        )
@@ -365,3 +327,133 @@ class TestSortEntries:
        ]
        result = sort_entries(entries)
        assert [e["name"] for e in result] == ["apple", "zebra"]

    def test_builtin_between_starred_and_unstarred(self):
        entries = [
            {"name": "builtin", "stars": None, "source_type": "Built-in"},
            {"name": "starred", "stars": 100, "source_type": None},
            {"name": "unstarred", "stars": None, "source_type": None},
        ]
        result = sort_entries(entries)
        assert [e["name"] for e in result] == ["starred", "builtin", "unstarred"]


# ---------------------------------------------------------------------------
# detect_source_type
# ---------------------------------------------------------------------------


class TestDetectSourceType:
    def test_github_repo_returns_none(self):
        assert detect_source_type("https://github.com/psf/requests") is None

    def test_stdlib_url(self):
        assert detect_source_type("https://docs.python.org/3/library/asyncio.html") == "Built-in"

    def test_gitlab_url(self):
        assert detect_source_type("https://gitlab.com/org/repo") == "GitLab"

    def test_bitbucket_url(self):
        assert detect_source_type("https://bitbucket.org/org/repo") == "Bitbucket"

    def test_non_github_external(self):
        assert detect_source_type("https://example.com/tool") == "External"

    def test_github_non_repo_returns_none(self):
        assert detect_source_type("https://github.com/org/repo/wiki") is None


# ---------------------------------------------------------------------------
# format_stars_short
# ---------------------------------------------------------------------------


class TestFormatStarsShort:
    def test_under_1000(self):
        assert format_stars_short(500) == "500"

    def test_exactly_1000(self):
        assert format_stars_short(1000) == "1k"

    def test_large_number(self):
        assert format_stars_short(52000) == "52k"

    def test_zero(self):
        assert format_stars_short(0) == "0"


# ---------------------------------------------------------------------------
# extract_entries
# ---------------------------------------------------------------------------


class TestExtractEntries:
    def test_basic_extraction(self):
        readme = textwrap.dedent("""\
            # T

            ---

            **Tools**

            ## Widgets

            - [widget](https://example.com) - A widget.

            # Contributing

            Done.
        """)
        groups = parse_readme(readme)
        categories = [c for g in groups for c in g["categories"]]
        entries = extract_entries(categories, groups)
        assert len(entries) == 1
        assert entries[0]["name"] == "widget"
        assert entries[0]["categories"] == ["Widgets"]
        assert entries[0]["groups"] == ["Tools"]

    def test_duplicate_entry_merged(self):
        readme = textwrap.dedent("""\
            # T

            ---

            **Tools**

            ## Alpha

            - [shared](https://example.com/shared) - Shared lib.

            ## Beta

            - [shared](https://example.com/shared) - Shared lib.

            # Contributing

            Done.
        """)
        groups = parse_readme(readme)
        categories = [c for g in groups for c in g["categories"]]
        entries = extract_entries(categories, groups)
        shared = [e for e in entries if e["name"] == "shared"]
        assert len(shared) == 1
        assert sorted(shared[0]["categories"]) == ["Alpha", "Beta"]

    def test_source_type_detected(self):
        readme = textwrap.dedent("""\
            # T

            ---

            ## Stdlib

            - [asyncio](https://docs.python.org/3/library/asyncio.html) - Async I/O.

            # Contributing

            Done.
        """)
        groups = parse_readme(readme)
        categories = [c for g in groups for c in g["categories"]]
        entries = extract_entries(categories, groups)
        assert entries[0]["source_type"] == "Built-in"

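The `test_duplicate_entry_merged` case above pins down the merge behavior: an entry listed under two categories collapses into one record that accumulates both. A toy sketch of that merge keyed by URL (illustrative only; `merge_by_url` is a hypothetical helper, not the real `extract_entries`):

```python
# Toy sketch of the duplicate-merging behavior asserted above: entries
# sharing a URL collapse into one record whose categories accumulate.
# merge_by_url is a hypothetical helper, not the real extract_entries.
def merge_by_url(raw_entries):
    merged = {}
    for name, url, category in raw_entries:
        if url in merged:
            merged[url]["categories"].append(category)
        else:
            merged[url] = {"name": name, "url": url, "categories": [category]}
    return list(merged.values())

entries = merge_by_url([
    ("shared", "https://example.com/shared", "Alpha"),
    ("shared", "https://example.com/shared", "Beta"),
])
# entries[0]["categories"] == ["Alpha", "Beta"]
```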
@@ -1,10 +1,7 @@
"""Tests for fetch_github_stars module."""

import json
import os
import sys

sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))
from fetch_github_stars import (
    build_graphql_query,
    extract_github_repos,
@@ -138,6 +135,24 @@ class TestParseGraphqlResponse:
        assert result["a/x"]["stars"] == 100
        assert result["b/y"]["stars"] == 200

    def test_extracts_last_commit_at(self):
        data = {
            "repo_0": {
                "stargazerCount": 100,
                "owner": {"login": "org"},
                "defaultBranchRef": {"target": {"committedDate": "2025-06-01T00:00:00Z"}},
            }
        }
        repos = ["org/repo"]
        result = parse_graphql_response(data, repos)
        assert result["org/repo"]["last_commit_at"] == "2025-06-01T00:00:00Z"

    def test_missing_default_branch_ref(self):
        data = {"repo_0": {"stargazerCount": 50, "owner": {"login": "org"}}}
        repos = ["org/repo"]
        result = parse_graphql_response(data, repos)
        assert result["org/repo"]["last_commit_at"] == ""


class TestMainSkipsFreshCache:
    """Verify that main() skips fetching when all cache entries are fresh."""
@@ -163,7 +178,13 @@
                "owner": "psf",
                "last_commit_at": "2025-01-01T00:00:00+00:00",
                "fetched_at": (now - timedelta(hours=1)).isoformat(),
            }
            },
            "vinta/awesome-python": {
                "stars": 230000,
                "owner": "vinta",
                "last_commit_at": "2025-01-01T00:00:00+00:00",
                "fetched_at": (now - timedelta(hours=1)).isoformat(),
            },
        }
        cache_file.write_text(json.dumps(fresh_cache), encoding="utf-8")
        monkeypatch.setattr("fetch_github_stars.CACHE_FILE", cache_file)
@@ -198,7 +219,13 @@
                "owner": "psf",
                "last_commit_at": "2025-01-01T00:00:00+00:00",
                "fetched_at": (now - timedelta(hours=24)).isoformat(),
            }
            },
            "vinta/awesome-python": {
                "stars": 230000,
                "owner": "vinta",
                "last_commit_at": "2025-01-01T00:00:00+00:00",
                "fetched_at": (now - timedelta(hours=24)).isoformat(),
            },
        }
        cache_file.write_text(json.dumps(stale_cache), encoding="utf-8")
        monkeypatch.setattr("fetch_github_stars.CACHE_FILE", cache_file)
@@ -213,7 +240,12 @@
                "stargazerCount": 53000,
                "owner": {"login": "psf"},
                "defaultBranchRef": {"target": {"committedDate": "2025-06-01T00:00:00Z"}},
            }
            },
            "repo_1": {
                "stargazerCount": 231000,
                "owner": {"login": "vinta"},
                "defaultBranchRef": {"target": {"committedDate": "2025-06-01T00:00:00Z"}},
            },
        }
        }
        mock_response.raise_for_status = MagicMock()
@@ -226,6 +258,6 @@
            main()

        output = capsys.readouterr().out
        assert "1 repos to fetch" in output
        assert "Done. Fetched 1 repos" in output
        assert "2 repos to fetch" in output
        assert "Done. Fetched 2 repos" in output
        mock_client.post.assert_called_once()

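The new assertions above fix the contract for a missing `defaultBranchRef`: `last_commit_at` falls back to an empty string. A minimal sketch consistent with those tests (`parse_response` is a stand-in name, not the project's `parse_graphql_response`):

```python
# Minimal stand-in for the behavior the tests above assert: stars come
# from stargazerCount, and last_commit_at is "" when defaultBranchRef
# is absent. parse_response is hypothetical, not the project function.
def parse_response(data, repos):
    result = {}
    for i, repo in enumerate(repos):
        node = data.get(f"repo_{i}") or {}
        branch = node.get("defaultBranchRef") or {}
        target = branch.get("target") or {}
        result[repo] = {
            "stars": node.get("stargazerCount"),
            "last_commit_at": target.get("committedDate", ""),
        }
    return result

out = parse_response({"repo_0": {"stargazerCount": 50, "owner": {"login": "org"}}}, ["org/repo"])
# out["org/repo"] == {"stars": 50, "last_commit_at": ""}
```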
@@ -7,7 +7,6 @@ import pytest

from readme_parser import (
    _parse_section_entries,
    _render_section_html,
    parse_readme,
    render_inline_html,
    render_inline_text,
@@ -159,50 +158,39 @@ GROUPED_README = textwrap.dedent("""\

class TestParseReadmeSections:
    def test_ungrouped_categories_go_to_other(self):
        groups, resources = parse_readme(MINIMAL_README)
        groups = parse_readme(MINIMAL_README)
        assert len(groups) == 1
        assert groups[0]["name"] == "Other"
        assert len(groups[0]["categories"]) == 2

    def test_ungrouped_category_names(self):
        groups, _ = parse_readme(MINIMAL_README)
        groups = parse_readme(MINIMAL_README)
        cats = groups[0]["categories"]
        assert cats[0]["name"] == "Alpha"
        assert cats[1]["name"] == "Beta"

    def test_resource_count(self):
        _, resources = parse_readme(MINIMAL_README)
        assert len(resources) == 2

    def test_category_slugs(self):
        groups, _ = parse_readme(MINIMAL_README)
        groups = parse_readme(MINIMAL_README)
        cats = groups[0]["categories"]
        assert cats[0]["slug"] == "alpha"
        assert cats[1]["slug"] == "beta"

    def test_category_description(self):
        groups, _ = parse_readme(MINIMAL_README)
        groups = parse_readme(MINIMAL_README)
        cats = groups[0]["categories"]
        assert cats[0]["description"] == "Libraries for alpha stuff."
        assert cats[1]["description"] == "Tools for beta."

    def test_resource_names(self):
        _, resources = parse_readme(MINIMAL_README)
        assert resources[0]["name"] == "Newsletters"
        assert resources[1]["name"] == "Podcasts"

    def test_contributing_skipped(self):
        groups, resources = parse_readme(MINIMAL_README)
        groups = parse_readme(MINIMAL_README)
        all_names = []
        for g in groups:
            all_names.extend(c["name"] for c in g["categories"])
        all_names.extend(r["name"] for r in resources)
        assert "Contributing" not in all_names

    def test_no_separator(self):
        groups, resources = parse_readme("# Just a heading\n\nSome text.\n")
        groups = parse_readme("# Just a heading\n\nSome text.\n")
        assert groups == []
        assert resources == []

    def test_no_description(self):
        readme = textwrap.dedent("""\
@@ -224,7 +212,7 @@ class TestParseReadmeSections:

            Done.
        """)
        groups, resources = parse_readme(readme)
        groups = parse_readme(readme)
        cats = groups[0]["categories"]
        assert cats[0]["description"] == ""
        assert cats[0]["entries"][0]["name"] == "item"
@@ -245,42 +233,37 @@ class TestParseReadmeSections:

            Done.
        """)
        groups, _ = parse_readme(readme)
        groups = parse_readme(readme)
        cats = groups[0]["categories"]
        assert cats[0]["description"] == "Algorithms. Also see awesome-algos."


class TestParseGroupedReadme:
    def test_group_count(self):
        groups, _ = parse_readme(GROUPED_README)
        groups = parse_readme(GROUPED_README)
        assert len(groups) == 2

    def test_group_names(self):
        groups, _ = parse_readme(GROUPED_README)
        groups = parse_readme(GROUPED_README)
        assert groups[0]["name"] == "Group One"
        assert groups[1]["name"] == "Group Two"

    def test_group_slugs(self):
        groups, _ = parse_readme(GROUPED_README)
        groups = parse_readme(GROUPED_README)
        assert groups[0]["slug"] == "group-one"
        assert groups[1]["slug"] == "group-two"

    def test_group_one_has_one_category(self):
        groups, _ = parse_readme(GROUPED_README)
        groups = parse_readme(GROUPED_README)
        assert len(groups[0]["categories"]) == 1
        assert groups[0]["categories"][0]["name"] == "Alpha"

    def test_group_two_has_two_categories(self):
        groups, _ = parse_readme(GROUPED_README)
        groups = parse_readme(GROUPED_README)
        assert len(groups[1]["categories"]) == 2
        assert groups[1]["categories"][0]["name"] == "Beta"
        assert groups[1]["categories"][1]["name"] == "Gamma"

    def test_resources_still_parsed(self):
        _, resources = parse_readme(GROUPED_README)
        assert len(resources) == 1
        assert resources[0]["name"] == "Newsletters"

    def test_empty_group_skipped(self):
        readme = textwrap.dedent("""\
            # T
@@ -299,7 +282,7 @@ class TestParseGroupedReadme:

            Done.
        """)
        groups, _ = parse_readme(readme)
        groups = parse_readme(readme)
        assert len(groups) == 1
        assert groups[0]["name"] == "HasCats"

@@ -319,7 +302,7 @@ class TestParseGroupedReadme:

            Done.
        """)
        groups, _ = parse_readme(readme)
        groups = parse_readme(readme)
        # "Note:" has text after the strong node, so it's not a group marker
        # Category goes into "Other"
        assert len(groups) == 1
@@ -345,7 +328,7 @@ class TestParseGroupedReadme:

            Done.
        """)
        groups, _ = parse_readme(readme)
        groups = parse_readme(readme)
        assert len(groups) == 2
        assert groups[0]["name"] == "Other"
        assert groups[0]["categories"][0]["name"] == "Orphan"
@@ -438,33 +421,11 @@ class TestParseSectionEntries:

            Done.
        """)
        groups, _ = parse_readme(readme)
        groups = parse_readme(readme)
        cats = groups[0]["categories"]
        # 2 main entries + 1 also_see = 3
        assert cats[0]["entry_count"] == 3

    def test_preview_first_four_names(self):
        readme = textwrap.dedent("""\
            # T

            ---

            ## Libs

            - [alpha](https://x.com) - A.
            - [beta](https://x.com) - B.
            - [gamma](https://x.com) - C.
            - [delta](https://x.com) - D.
            - [epsilon](https://x.com) - E.

            # Contributing

            Done.
        """)
        groups, _ = parse_readme(readme)
        cats = groups[0]["categories"]
        assert cats[0]["preview"] == "alpha, beta, gamma, delta"

    def test_description_html_escapes_xss(self):
        nodes = _content_nodes('- [lib](https://x.com) - A <script>alert(1)</script> lib.\n')
        entries = _parse_section_entries(nodes)
@@ -472,58 +433,13 @@ class TestParseSectionEntries:
        assert "<script>" in entries[0]["description"]


class TestRenderSectionHtml:
    def test_basic_entry(self):
        nodes = _content_nodes("- [django](https://example.com) - A web framework.\n")
        html = _render_section_html(nodes)
        assert 'class="entry"' in html
        assert 'href="https://example.com"' in html
        assert "django" in html
        assert "A web framework." in html

    def test_subcategory_label(self):
        nodes = _content_nodes(
            "- Synchronous\n  - [django](https://x.com) - Framework.\n"
        )
        html = _render_section_html(nodes)
        assert 'class="subcat"' in html
        assert "Synchronous" in html
        assert 'class="entry"' in html

    def test_sub_entry(self):
        nodes = _content_nodes(
            "- [django](https://x.com) - Framework.\n"
            "  - [awesome-django](https://y.com)\n"
        )
        html = _render_section_html(nodes)
        assert 'class="entry-sub"' in html
        assert "awesome-django" in html

    def test_link_only_entry(self):
        nodes = _content_nodes("- [tool](https://x.com)\n")
        html = _render_section_html(nodes)
        assert 'class="entry"' in html
        assert 'href="https://x.com"' in html
        assert "tool" in html

    def test_xss_escaped_in_name(self):
        nodes = _content_nodes('- [<img onerror=alert(1)>](https://x.com) - Bad.\n')
        html = _render_section_html(nodes)
        assert "onerror" not in html or "&" in html

    def test_xss_escaped_in_subcat(self):
        nodes = _content_nodes("- <script>alert(1)</script>\n")
        html = _render_section_html(nodes)
        assert "<script>" not in html


class TestParseRealReadme:
    @pytest.fixture(autouse=True)
    def load_readme(self):
        readme_path = os.path.join(os.path.dirname(__file__), "..", "..", "README.md")
        with open(readme_path, encoding="utf-8") as f:
            self.readme_text = f.read()
        self.groups, self.resources = parse_readme(self.readme_text)
        self.groups = parse_readme(self.readme_text)
        self.cats = [c for g in self.groups for c in g["categories"]]

    def test_at_least_11_groups(self):
@@ -535,13 +451,8 @@ class TestParseRealReadme:
    def test_at_least_69_categories(self):
        assert len(self.cats) >= 69

    def test_resources_has_newsletters_and_podcasts(self):
        names = [r["name"] for r in self.resources]
        assert "Newsletters" in names
        assert "Podcasts" in names

    def test_contributing_not_in_results(self):
        all_names = [c["name"] for c in self.cats] + [r["name"] for r in self.resources]
        all_names = [c["name"] for c in self.cats]
        assert "Contributing" not in all_names

    def test_first_category_is_ai_and_agents(self):
@@ -560,18 +471,6 @@ class TestParseRealReadme:
        for cat in self.cats:
            assert cat["entry_count"] > 0, f"{cat['name']} has 0 entries"

    def test_previews_nonempty(self):
        for cat in self.cats:
            assert cat["preview"], f"{cat['name']} has empty preview"

    def test_content_html_nonempty(self):
        for cat in self.cats:
            assert cat["content_html"], f"{cat['name']} has empty content_html"

    def test_algorithms_has_subcategories(self):
        algos = next(c for c in self.cats if c["name"] == "Algorithms and Design Patterns")
        assert 'class="subcat"' in algos["content_html"]

    def test_async_has_also_see(self):
        async_cat = next(c for c in self.cats if c["name"] == "Asynchronous Programming")
        asyncio_entry = next(e for e in async_cat["entries"] if e["name"] == "asyncio")