
Planet Python

Last update: November 18, 2025 07:43 AM UTC

November 17, 2025


Rodrigo Girão Serrão

Floodfill algorithm in Python

Learn how to implement and use the floodfill algorithm in Python.

What is the floodfill algorithm?

Click the image below to randomly colour the region you click.

Go ahead, try it!

IMG_WIDTH = 160
IMG_HEIGHT = 160
PIXEL_SIZE = 2

import asyncio
import collections
import random

from pyscript import display
from pyodide.ffi import create_proxy
import js
from js import fetch

canvas = js.document.getElementById("bitmap")
ctx = canvas.getContext("2d")

URL = "/blog/floodfill-algorithm-in-python/_python.txt"

async def load_bitmap(url: str) -> list[list[int]]:
    # Fetch the text file from the URL
    response = await fetch(url)
    text = await response.text()
    bitmap: list[list[int]] = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        row = [int(ch) for ch in line if ch in "01"]
        if row:
            bitmap.append(row)
    return bitmap

def draw_bitmap(bitmap):
    rows = len(bitmap)
    cols = len(bitmap[0]) if rows > 0 else 0
    if rows == 0 or cols == 0:
        return
    for y, row in enumerate(bitmap):
        for x, value in enumerate(row):
            if value == 1:
                ctx.fillStyle = "black"
            else:
                ctx.fillStyle = "white"
            ctx.fillRect(x * PIXEL_SIZE, y * PIXEL_SIZE, PIXEL_SIZE, PIXEL_SIZE)

_neighbours = [(1, 0), (-1, 0), (0, 1), (0, -1)]

async def fill_bitmap(bitmap, x, y):
    if bitmap[y][x] == 1:
        return
    ctx = canvas.getContext("2d")
    r, g, b = (random.randint(0, 255) for _ in range(3))
    ctx.fillStyle = f"rgb({r}, {g}, {b})"

    def draw_pixel(x, y):
        ctx.fillRect(x * PIXEL_SIZE, y * PIXEL_SIZE, PIXEL_SIZE, PIXEL_SIZE)

    pixels = collections.deque([(x, y)])
    seen = {(x, y)}  # a set of coordinate tuples; set((x, y)) would wrongly create {x, y}
    while pixels:
        nx, ny = pixels.pop()
        draw_pixel(nx, ny)
        for dx, dy in _neighbours:
            x_, y_ = nx + dx, ny + dy
            if x_ < 0 or x_ >= IMG_WIDTH or y_ < 0 or y_ >= IMG_HEIGHT or (x_, y_) in seen:
                continue
            if bitmap[y_][x_] == 0:
                seen.add((x_, y_))
                pixels.appendleft((x_, y_))
        await asyncio.sleep(0.0001)

is_running = False

def get_event_coords(event):
    """Return (clientX, clientY) for mouse/pointer/touch events."""
    # PointerEvent / MouseEvent: clientX/clientY directly available
    if hasattr(event, "clientX") and hasattr(event, "clientY") and event.clientX is not None:
        return event.clientX, event.clientY
    # TouchEvent: use the first touch point
    if hasattr(event, "touches") and event.touches.length > 0:
        touch = event.touches.item(0)
        return touch.clientX, touch.clientY
    # Fallback: try changedTouches
    if hasattr(event, "changedTouches") and event.changedTouches.length > 0:
        touch = event.changedTouches.item(0)
        return touch.clientX, touch.clientY
    return None, None

async def on_canvas_press(event):
    global is_running
    if is_running:
        return
    is_running = True
    try:
        # Avoid scrolling / zooming taking over on touch
        if hasattr(event, "preventDefault"):
            event.preventDefault()
        clientX, clientY = get_event_coords(event)
        if clientX is None:
            # Could not read coordinates; bail out gracefully
            return
        rect = canvas.getBoundingClientRect()
        # Account for CSS scaling: map from displayed size to canvas units
        scale_x = canvas.width / rect.width
        scale_y = canvas.height / rect.height
        x_canvas = (clientX - rect.left) * scale_x
        y_canvas = (clientY - rect.top) * scale_y
        x_idx = int(x_canvas // PIXEL_SIZE)
        y_idx...

November 17, 2025 03:49 PM UTC


Real Python

How to Serve a Website With FastAPI Using HTML and Jinja2

By the end of this guide, you’ll be able to serve dynamic websites from FastAPI endpoints using Jinja2 templates, enhanced with CSS and JavaScript. By leveraging FastAPI’s HTMLResponse, StaticFiles, and Jinja2Templates classes, you’ll use FastAPI like a traditional Python web framework.

You’ll start by returning basic HTML from your endpoints, then add Jinja2 templating for dynamic content, and finally create a complete website with external CSS and JavaScript files to copy hex color codes.

To follow along, you should be comfortable with Python functions and have a basic understanding of HTML and CSS. Experience with FastAPI is helpful but not required.

Get Your Code: Click here to download the free sample code that shows you how to serve a website with FastAPI using HTML and Jinja2.

Take the Quiz: Test your knowledge with our interactive “How to Serve a Website With FastAPI Using HTML and Jinja2” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

How to Serve a Website With FastAPI Using HTML and Jinja2

Review how to build dynamic websites with FastAPI and Jinja2, and serve HTML, CSS, and JS with HTMLResponse and StaticFiles.

Prerequisites

Before you start building your HTML-serving FastAPI application, you’ll need to set up your development environment with the required packages. You’ll install FastAPI along with its standard dependencies, including the ASGI server you need to run your application.

Select your operating system below and install FastAPI with all the standard dependencies inside a virtual environment:

Windows PowerShell
PS> python -m venv venv
PS> .\venv\Scripts\activate
(venv) PS> python -m pip install "fastapi[standard]"
Shell
$ python -m venv venv
$ source venv/bin/activate
(venv) $ python -m pip install "fastapi[standard]"

These commands create and activate a virtual environment, then install FastAPI along with Uvicorn as the ASGI server, and additional dependencies that enhance FastAPI’s functionality. The standard option ensures you have everything you need for this tutorial, including Jinja2 for templating.

Step 1: Return Basic HTML Over an API Endpoint

When you take a close look at a FastAPI example application, you commonly encounter functions returning dictionaries, which the framework transparently serializes into JSON responses.

However, FastAPI’s flexibility allows you to serve various custom responses besides that—for example, HTMLResponse to return content as a text/html type, which your browser interprets as a web page.

To explore returning HTML with FastAPI, create a new file called main.py and build your first HTML-returning endpoint:

Python main.py
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()

@app.get("/", response_class=HTMLResponse)
def home():
    html_content = """
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>Home</title>
    </head>
    <body>
        <h1>Welcome to FastAPI!</h1>
    </body>
    </html>
    """
    return html_content

The HTMLResponse class tells FastAPI to return your content with the text/html content type instead of the default application/json response. This ensures that browsers interpret your response as HTML rather than plain text.

Before you can visit your home page, you need to start your FastAPI development server to see the HTML response in action:
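
With the standard install from the prerequisites, the FastAPI CLI can serve your app in development mode. A typical invocation looks like this (exact output may vary with your setup):

Shell
(venv) $ fastapi dev main.py

This starts a local auto-reloading server, by default at http://127.0.0.1:8000, where your browser should render the welcome page.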

Read the full article at https://realpython.com/fastapi-jinja2-template/ »



November 17, 2025 02:00 PM UTC

Quiz: How to Serve a Website With FastAPI Using HTML and Jinja2

In this quiz, you’ll test your understanding of building dynamic websites with FastAPI and Jinja2 Templates.

By working through this quiz, you’ll revisit how to return HTML with HTMLResponse, serve assets with StaticFiles, render Jinja2 templates with context, and include CSS and JavaScript for interactivity like copying hex color codes.

If you are new to FastAPI, review Get Started With FastAPI. You can also brush up on Python functions and HTML and CSS.



November 17, 2025 12:00 PM UTC


Python Bytes

#458 I will install Linux on your computer

Topics covered in this episode:

  • Possibility of a new website for Django
  • aiosqlitepool
  • deptry
  • browsr
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=s2HlckfeBCs

About the show

Sponsored by us! Support our work through:

  • Our courses at Talk Python Training (training.talkpython.fm)
  • The Complete pytest Course (courses.pythontest.com)
  • Patreon supporters (patreon.com/pythonbytes)

Connect with the hosts

  • Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
  • Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
  • Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (pythonbytes.fm/friends-of-the-show); we’ll never share it.

Brian #1: Possibility of a new website for Django

  • Current Django site: djangoproject.com
  • Adam Hill’s in-progress redesign idea: django-homepage.adamghill.com
  • Commentary in the “Want to work on a homepage site redesign?” discussion: forum.djangoproject.com/t/want-to-work-on-a-homepage-site-redesign/42909/35

Michael #2: aiosqlitepool (github.com/slaily/aiosqlitepool)

  • 🛡️ A resilient, high-performance asynchronous connection pool layer for SQLite, designed for efficient and scalable database operations.
  • About 2x better than regular SQLite.
  • Pairs with aiosqlite (github.com/omnilib/aiosqlite)
  • aiosqlitepool in three points:
    • Eliminates connection overhead: it avoids repeated database connection setup (syscalls, memory allocation) and teardown (syscalls, deallocation) by reusing long-lived connections.
    • Faster queries via a “hot” cache: long-lived connections keep SQLite’s in-memory page cache “hot.” This serves frequently requested data directly from memory, speeding up repetitive queries and reducing I/O operations.
    • Maximizes concurrent throughput: allows your application to process significantly more database queries per second under heavy load.

Brian #3: deptry (deptry.com)

  • “deptry is a command line tool to check for issues with dependencies in a Python project, such as unused or missing dependencies. It supports projects using Poetry, pip, PDM, uv, and more generally any project supporting the PEP 621 specification.”
  • “Dependency issues are detected by scanning for imported modules within all Python files in a directory and its subdirectories, and comparing those to the dependencies listed in the project’s requirements.”
  • Note: if you use project.optional-dependencies:

    [project.optional-dependencies]
    plot = ["matplotlib"]
    test = ["pytest"]

    you have to set a config setting to get it to work right:

    [tool.deptry]
    pep621_dev_dependency_groups = ["test", "docs"]

Michael #4: browsr (github.com/juftin/browsr)

  • browsr 🗂️ is a pleasant file explorer in your terminal. It’s a command line TUI (text-based user interface) application that empowers you to browse the contents of local and remote filesystems with your keyboard or mouse.
  • You can quickly navigate through directories and peek at files whether they’re hosted locally, in GitHub, over SSH, in AWS S3, Google Cloud Storage, or Azure Blob Storage.
  • View code files with syntax highlighting, format JSON files, render images, convert data files to navigable datatables, and more.

Extras

Brian:

  • Understanding the MICRO
  • TDD chapter coming out later today or maybe tomorrow, but it’s close.

Michael:

  • Peacock (VS Code extension) is excellent: marketplace.visualstudio.com/items?itemName=johnpapa.vscode-peacock

Joke: I will find you (x.com/thatstraw/status/1977317574779048171)

November 17, 2025 08:00 AM UTC

November 16, 2025


Ned Batchelder

Why your mock breaks later

In Why your mock doesn’t work I explained this rule of mocking:

Mock where the object is used, not where it’s defined.

That blog post explained why that rule was important: often a mock doesn’t work at all if you do it wrong. But in some cases, the mock will work even if you don’t follow this rule, and then it can break much later. Why?

Let’s say you have code like this:

# user.py

import json
from pathlib import Path

def get_user_settings():
    with open(Path("~/settings.json").expanduser()) as f:
        return json.load(f)

def add_two_settings():
    settings = get_user_settings()
    return settings["opt1"] + settings["opt2"]

You write a simple test:

def test_add_two_settings():
    # NOTE: need to create ~/settings.json for this to work:
    #   {"opt1": 10, "opt2": 7}
    assert add_two_settings() == 17

As the comment in the test points out, the test will only pass if you create the correct settings.json file in your home directory. This is bad: you don’t want to require finicky environments for your tests to pass.

The thing we want to avoid is opening a real file, so it’s a natural impulse to mock out open():

# test_user.py

from io import StringIO
from unittest.mock import patch

@patch("builtins.open")
def test_add_two_settings(mock_open):
    mock_open.return_value = StringIO('{"opt1": 10, "opt2": 7}')
    assert add_two_settings() == 17

Nice, the test works without needing to create a file in our home directory!

Much later...

One day your test suite fails with an error like:

...
  File ".../site-packages/coverage/python.py", line 55, in get_python_source
    source_bytes = read_python_source(try_filename)
  File ".../site-packages/coverage/python.py", line 39, in read_python_source
    return source.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
           ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
TypeError: replace() argument 1 must be str, not bytes

What happened!? Coverage.py code runs during your tests, invoked by the Python interpreter. The mock in the test changed the builtin open, so any use of it anywhere during the test is affected. In some cases, coverage.py needs to read your source code to record the execution properly. When that happens, coverage.py unknowingly uses the mocked open, and bad things happen.

When you use a mock, patch it where it’s used, not where it’s defined. In this case, the patch would be:

@patch("myproduct.user.open")
def test_add_two_settings(mock_open):
    ... etc ...

With a mock like this, the coverage.py code would be unaffected.

Keep in mind: it’s not just coverage.py that could trip over this mock. There could be other libraries used by your code, or you might use open yourself in another part of your product. Mocking the definition means anything using the object will be affected. Your intent is to only mock in one place, so target that place.

Postscript

I decided to add some code to coverage.py to defend against this kind of over-mocking. There is a lot of over-mocking out there, and this problem only shows up in coverage.py with Python 3.14. It’s not happening to many people yet, but it will happen more and more as people start testing with 3.14. I didn’t want to have to answer this question many times, and I didn’t want to force people to fix their mocks.

From a certain perspective, I shouldn’t have to do this. They are in the wrong, not me. But this will reduce the overall friction in the universe. And the fix was really simple:

open = open

This is a top-level statement in my module, so it runs when the module is imported, long before any tests are run. The assignment to open will create a global in my module, using the current value of open, the one found in the builtins. This saves the original open for use in my module later, isolated from how builtins might be changed later.

This is an ad-hoc fix: it only defends one builtin. Mocking other builtins could still break coverage.py. But open is a common one, and this will keep things working smoothly for those cases. And there’s precedent: I’ve already been using a more involved technique to defend against mocking of the os module for ten years.

Even better!

No blog post about mocking is complete without encouraging a number of other best practices, some of which could get you out of the mocking mess:

November 16, 2025 12:55 PM UTC

November 15, 2025


Kay Hayen

Nuitka Release 2.8

This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler, “download now”.

This release adds a ton of new features and corrections.

Bug Fixes

Package Support

New Features

Optimization

Anti-Bloat

Organizational

Tests

Cleanups

Summary

This release was supposed to focus on scalability, but once again that didn’t happen, due to a variety of important issues coming up as well as downtime after a planned surgery and the private difficulties that followed it. However, the upcoming release will finally have it.

The onefile DLL mode used on Windows has driven a lot of need for corrections, some of which only landed in the final release; this is probably the first time it should be usable for everything.

For compatibility, working with the popular (though not yet recommended) UV-Python, Windows UI fixes for temporary onefile, and macOS improvements, as well as improved Android support, are excellent.

The next release of Nuitka, however, will have to focus on scalability and maintenance only. But as usual, it is not certain that this will happen.

November 15, 2025 01:52 PM UTC

November 14, 2025


Real Python

The Real Python Podcast – Episode #274: Preparing Data Science Projects for Production

How do you prepare your Python data science projects for production? What are the essential tools and techniques to make your code reproducible, organized, and testable? This week on the show, Khuyen Tran from CodeCut discusses her new book, "Production Ready Data Science."


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

November 14, 2025 12:00 PM UTC


EuroPython Society

Recognising Michael Foord as an Honorary EuroPython Society Fellow

Hi everyone. Today, we are honoured to announce a very special recognition.

The EuroPython Society has posthumously elected Michael Foord (aka voidspace) as an Honorary EuroPython Society Fellow.


Michael Foord (1974–2025)

Michael was a long-time and deeply influential member of the Python community. He began using Python in 2002, became a Python core developer, and left a lasting mark on the language through his work on unittest and the creation of the mock library. He also started the tradition of the Python Language Summits at PyCon US, and he consistently supported and connected the Python community across Europe and beyond.

However, his legacy extends far beyond code. Many of us first met Michael through his writing and tools, but what stayed with people was the example he set through his contributions, and how he showed up for others. He answered questions with patience, welcomed newcomers, and cared about doing the right thing in small, everyday ways. He made space for people to learn. He helped the Python community in Europe grow stronger and more connected. He made our community feel like a community.

His impact was celebrated widely across the community, with many tributes reflecting his kindness, humour, and dedication:

At EuroPython 2025, we held a memorial and kept a seat for him in the Forum Hall:


A lasting tribute

EuroPython Society Fellows are people whose work and care move our mission forward. By naming Michael an Honorary Fellow, we acknowledge his technical contributions and also the kindness and curiosity that defined his presence among us. We are grateful for the example he set, and we miss him.

Our thoughts and thanks are with Michael’s friends, collaborators, and family. His work lives on in our tools. His spirit lives on in how we treat each other.

With gratitude,
Your friends at EuroPython Society

November 14, 2025 09:00 AM UTC

November 13, 2025


Paolo Melchiorre

How to use UUIDv7 in Python, Django and PostgreSQL

Learn how to use UUIDv7 today with stable releases of Python 3.14, Django 5.2, and PostgreSQL 18. A step-by-step guide showing how to generate UUIDv7 in Python, store them in Django models, use PostgreSQL’s native functions, and build time-ordered primary keys without writing SQL.
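
As a taste of what the guide covers: Python 3.14’s uuid module gains uuid7(), and its output can serve directly as a primary-key default in a Django model. A minimal sketch (the Event model is just an illustration):

Python
import uuid

from django.db import models

class Event(models.Model):
    # uuid.uuid7() is new in Python 3.14; UUIDv7 values embed a timestamp,
    # so freshly inserted rows sort roughly by creation time.
    id = models.UUIDField(primary_key=True, default=uuid.uuid7, editable=False)
    created_at = models.DateTimeField(auto_now_add=True)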

November 13, 2025 11:00 PM UTC


Python Engineering at Microsoft

Python in Visual Studio Code – November 2025 Release

We’re excited to announce that the November 2025 release of the Python extension for Visual Studio Code is now available!

This release includes the following announcements:

If you’re interested, you can check the full list of improvements in our changelogs for the Python and Pylance extensions.

Add Copilot Hover Summaries as docstring

You can now add your AI-generated documentation directly into your code as a docstring using the new Add as docstring command in Copilot Hover Summaries. When you generate a summary for a function or class, navigate to the symbol definition and hover over it to access the Add as docstring command, which inserts the summary below your cursor formatted as a proper docstring.

This streamlines the process of documenting your code, allowing you to quickly enhance readability and maintainability without retyping.

Add as docstring command in Copilot Hover Summaries

Localized Copilot Hover Summaries

GitHub Copilot Hover Summaries inside Pylance now respect your display language within VS Code. When you invoke an AI-generated summary, you’ll get strings in the language you’ve set for your editor, making it easier to understand the generated documentation.

Copilot Hover Summary generated in Portuguese

Convert wildcard imports into Code Action

Wildcard imports (from module import *) are often discouraged in Python because they can clutter your namespace and make it unclear where names come from, reducing code clarity and maintainability. Pylance now helps you clean up modules that still rely on from module import * via a new Code Action. It replaces the wildcard with the explicit symbols, preserving aliases and keeping the import to a single statement. To try it out, you can click on the line with the wildcard import and press Ctrl + . (or Cmd + . on macOS) to select the Convert to explicit imports Code Action.
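
Roughly speaking (a hypothetical illustration using the standard library), the Code Action turns this:

Python
from math import *

print(sqrt(pi))

into this, keeping only the names the module actually uses:

Python
from math import pi, sqrt

print(sqrt(pi))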

Convert wildcard imports Code Action

Debugger support for multiple interpreters via the Python Environments Extension

The Python Debugger extension now leverages the APIs from the Python Environments Extension (vscode-python-debugger#849). When enabled, the debugger can recognize and use different interpreters for each project within a workspace. If you have multiple folders configured as projects, each with its own interpreter, the debugger will now respect these selections and use the interpreter shown in the status bar when debugging.

To enable this functionality, set "python.useEnvironmentsExtension": true in your user settings. The new API integration is only active when this setting is turned on.

Please report any issues you encounter to the Python Debugger repository.

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include:

We would also like to extend special thanks to this month’s contributors:

Try out these new improvements by downloading the Python extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.

The post Python in Visual Studio Code – November 2025 Release appeared first on Microsoft for Python Developers Blog.

November 13, 2025 06:41 PM UTC

November 12, 2025


Python Software Foundation

Python is for everyone: Join in the PSF year-end fundraiser & membership drive!

The Python Software Foundation (PSF) is the charitable organization behind Python, dedicated to advancing, supporting, and protecting the Python programming language and the community that sustains it. That mission and cause are more than just words we believe in. Our tiny but mighty team works hard to deliver the projects and services that allow Python to be the thriving, independent, community-driven language it is today. Some of what the PSF does includes producing PyCon US, hosting the Python Package Index (PyPI), supporting 5 Developers-in-Residence, maintaining critical community infrastructure, and more.

Python is for teaching, learning, playing, researching, exploring, creating, working: the list goes on and on and on! Support this year’s fundraiser with your donations and memberships to help the PSF, the Python community, and the language stay strong and sustainable. Because Python is for everyone, thanks to you.

There are two direct ways to join through donate.python.org:

 

>>> Donate or Become a Member Today! <<<

 

If you already donated and/or you’re already a member, you can:

 

Your donations and support:

 

Highlights from 2025:

November 12, 2025 05:03 PM UTC


Real Python

The Python Standard REPL: Try Out Code and Ideas Quickly

The Python standard REPL (Read-Eval-Print Loop) lets you run code interactively, test ideas, and get instant feedback. You start it by running the python command, which opens an interactive shell included in every Python installation.

In this tutorial, you’ll learn how to use the Python REPL to execute code, edit and navigate code history, introspect objects, and customize the REPL for a smoother coding workflow.

By the end of this tutorial, you’ll understand that:

  • You can enter and run simple or compound statements in a REPL session.
  • The implicit _ variable stores the result of the last evaluated expression and can be reused in later expressions.
  • You can reload modules dynamically with importlib.reload() to test updates without restarting the REPL (see the sketch after this list).
  • The modern Python REPL supports auto-indentation, history navigation, syntax highlighting, quick commands, and autocompletion, which improves your user experience.
  • You can customize the REPL with a startup file, color themes, and third-party libraries like Rich for a better experience.
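
For instance, here’s how the implicit _ variable and importlib.reload() look in a session (mymodule stands in for any local module you’re editing, and the output is abbreviated):

Python
>>> 40 + 2
42
>>> _ * 2
84
>>> import importlib
>>> import mymodule
>>> importlib.reload(mymodule)
<module 'mymodule' from '.../mymodule.py'>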

With these skills, you can move beyond just running short code snippets and start using the Python REPL as a flexible environment for testing, debugging, and exploring new ideas.

Get Your Code: Click here to download the free sample code that you’ll use to explore the capabilities of Python’s standard REPL.

Take the Quiz: Test your knowledge with our interactive “The Python Standard REPL: Try Out Code and Ideas Quickly” quiz. You’ll receive a score upon completion to help you track your learning progress:


Interactive Quiz

The Python Standard REPL: Try Out Code and Ideas Quickly

Test your understanding of the Python standard REPL. The Python REPL allows you to run Python code interactively, which is useful for testing new ideas, exploring libraries, refactoring and debugging code, and trying out examples.

Getting to Know the Python Standard REPL

In computer programming, you’ll find two kinds of programming languages: compiled and interpreted languages. Compiled languages like C and C++ have an associated compiler program that converts the language’s code into machine code.

This machine code is typically saved in an executable file. Once you have an executable, you can run your program on any compatible computer system without needing the compiler or the source code.

In contrast, interpreted languages like Python need an interpreter program. This means that you need to have a Python interpreter installed on your computer to run Python code. Some may consider this characteristic a drawback because it can make your code distribution process much more difficult.

However, in Python, having an interpreter offers one significant advantage that comes in handy during your development and testing process. The Python interpreter allows for what’s known as an interactive Read-Eval-Print Loop (REPL), or shell, which reads a piece of code, evaluates it, and then prints the result to the console in a loop.

The Python REPL is a built-in interactive coding playground that you can start by typing python in your terminal. Once in a REPL session, you can run Python code:

Python
>>> "Python!" * 3
'Python!Python!Python!'
>>> 40 + 2
42

In the REPL, you can use Python as a calculator, but also try any Python code you can think of, and much more! Jump to starting and terminating REPL interactive sessions if you want to get your hands dirty right away, or keep reading to gather more background context first.

Note: In this tutorial, you’ll learn about the CPython standard REPL, which is available in all the installers of this Python distribution. If you don’t have CPython yet, then check out How to Install Python on Your System: A Guide for detailed instructions.

The standard REPL has changed significantly since Python 3.13 was released. Several limitations from earlier versions have been lifted. Throughout this tutorial, version differences are indicated when appropriate.

To dive deeper into the new REPL features, check out these resources:

The Python interpreter can execute Python code in two modes:

  1. Script, or program
  2. Interactive, or REPL

In script mode, you use the interpreter to run a source file—typically a .py file—as an executable program. In this case, Python loads the file’s content and runs the code line by line, following the script or program’s execution flow.

Alternatively, interactive mode is when you launch the interpreter using the python command and use it as a platform to run code that you type in directly.

In this tutorial, you’ll learn how to use the Python standard REPL to run code interactively, which allows you to try ideas and test concepts when using and learning Python. Are you ready to take a closer look at the Python REPL? Keep reading!

What Is Python’s Interactive Shell or REPL?

When you run the Python interpreter in interactive mode, you open an interactive shell, also known as an interactive session. In this shell, your keyboard is the input source, and your screen is the output destination.

Note: In this tutorial, the terms interactive shell, interactive session, interpreter session, and REPL session are used interchangeably.

Here’s how the REPL works: it takes input consisting of Python code, which the interpreter parses and evaluates. Next, the interpreter displays the result on your screen, and the process starts again as a loop.

Read the full article at https://realpython.com/python-repl/ »



November 12, 2025 02:00 PM UTC


Peter Bengtsson

Using AI to rewrite blog post comments

Using AI to correct and edit blog post comments as part of the moderation process.

November 12, 2025 12:42 PM UTC


Python Morsels

Unnecessary parentheses in Python

Python’s ability to use parentheses for grouping can often tempt new Python users into over-using parentheses in ways they shouldn’t.

Table of contents

  1. Parentheses can be used for grouping
  2. Python's if statements don't use parentheses
  3. Parentheses can go anywhere
  4. Parentheses for wrapping lines
  5. Parentheses that make statements look like functions
  6. Parentheses can go in lots of places
  7. Use parentheses sometimes
  8. Consider readability when adding or removing parentheses

Parentheses can be used for grouping

Parentheses are used for 3 things in Python: calling callables, creating empty tuples, and grouping.

Functions, classes, and other callable objects can be called with parentheses:

>>> print("I'm calling a function")
I'm calling a function

Empty tuples can be created with parentheses:

>>> empty = ()

Lastly, parentheses can be used for grouping:

>>> 3 * (4 + 7)
33

Sometimes parentheses are necessary to convey the order of execution for an expression. For example, 3 * (4 + 7) is different than 3 * 4 + 7:

>>> 3 * (4 + 7)
33
>>> 3 * 4 + 7
19

Those parentheses around 4 + 7 are for grouping that sub-expression, which changes the meaning of the larger expression.

All confusing and unnecessary uses of parentheses are caused by this third use: grouping parentheses.

Python's if statements don't use parentheses

In JavaScript if statements look …

Read the full article: https://www.pythonmorsels.com/unnecessary-parentheses/

November 12, 2025 03:30 AM UTC


Seth Michael Larson

Blogrolls are the Best(rolls)

Happy 6-year blogiversary to me! 🎉 To celebrate I want to talk about other people’s blogs, more specifically the magic of “blogrolls”. Blogrolls are “lists of other sites that you read, are a follower of, or recommend”. Any blog can host a blogroll, or sometimes websites can be one big blogroll.

I’ve hosted a blogroll on my own blog since 2023 and encourage other bloggers to do so. My own blogroll is generated from the list of RSS feeds I subscribe to and articles that I “favorite” within my RSS reader. If you want to be particularly fancy you can add an RSS feed (example) to your blogroll that provides readers a method to “subscribe” for future blogroll updates.

Blogrolls are like catnip for me: I cannot resist opening and Ctrl-clicking every link until I can’t see my tabs anymore. The feeling is akin to the first deep breath of air before starting a hike: there’s a rush of new information, topics, and potential new blogs to follow.

Blogrolls can bridge the “effort chasm” I frequently hear about when I recommend folks try an RSS feed reader. We’re not used to empty feeds anymore; self-curating blogs until you receive multiple articles per day takes time and effort. Blogrolls can help here, especially ones that publish using the importable OPML format.

You can instantly populate your feed reader app with hundreds of feeds from blogs that are likely relevant to you. Simply create an account on a feed reader, import the blogroll OPML document from a blogger you enjoy, and watch the articles “roll” in. Blogrolls are almost like Bluesky “Starter Packs” in this way!

Hopefully this has convinced you to either curate your own blogroll or to start looking for (or asking for!) blogrolls from your favorite writers on the Web. Share your favorite blogroll with me on email or social media. Title inspired by “Hexagons are the Best-agons”.



Thanks for keeping RSS alive! ♥

November 12, 2025 12:00 AM UTC

November 11, 2025


Ahmed Bouchefra

Let’s be honest. There’s a huge gap between writing code that works and writing code that’s actually good. It’s the number one thing that separates a junior developer from a senior, and it’s something a surprising number of us never really learn.

If you’re serious about your craft, you’ve probably felt this. You build something, it functions, but deep down you know it’s brittle. You’re afraid to touch it a year from now.

Today, we’re going to bridge that gap. I’m going to walk you through eight design principles that are the bedrock of professional, production-level code. This isn’t about fancy algorithms; it’s about a mindset. A way of thinking that prepares your code for the future.

And hey, if you want a cheat sheet with all these principles plus the code examples I’m referencing, you can get it for free. Just sign up for my newsletter from the link in the description, and I’ll send it right over.

Ready? Let’s dive in.

1. Cohesion & Single Responsibility

This sounds academic, but it’s simple: every piece of code should have one job, and one reason to change.

High cohesion means you group related things together. A function does one thing. A class has one core responsibility. A module contains related classes.

Think about a UserManager class. A junior dev might cram everything in there: validating user input, saving the user to the database, sending a welcome email, and logging the activity. At first glance, it looks fine. But what happens when you want to change your database? Or swap your email service? You have to rip apart this massive, god-like class. It’s a nightmare.

The senior approach? Break it up. You’d have a validator for user input, a repository that saves users to the database, an email service for the welcome message, and a logger for the activity.

Then, your main UserService class delegates the work to these other, specialized classes. Yes, it’s more files. It looks like overkill for a small project. I get it. But this is systems-level thinking. You’re anticipating future changes and making them easy. You can now swap out the database logic or the email provider without touching the core user service. That’s powerful.
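
Here’s a minimal sketch of that shape (the class names are illustrative, not from any specific framework):

class UserValidator:
    def validate(self, data: dict) -> None:
        # One job: reject bad input before anything else happens.
        if "@" not in data.get("email", ""):
            raise ValueError("invalid email")

class UserRepository:
    def save(self, data: dict) -> None:
        # One job: persistence. Swap databases without touching UserService.
        print(f"saving {data['email']} to the database")

class EmailService:
    def send_welcome(self, email: str) -> None:
        # One job: outbound email. Swap providers freely.
        print(f"sending welcome email to {email}")

class UserService:
    # The coordinator: delegates each responsibility to a specialist.
    def __init__(self, validator, repository, emailer):
        self.validator = validator
        self.repository = repository
        self.emailer = emailer

    def register(self, data: dict) -> None:
        self.validator.validate(data)
        self.repository.save(data)
        self.emailer.send_welcome(data["email"])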

2. Encapsulation & Abstraction

This is all about hiding the messy details. You want to expose the behavior of your code, not the raw data.

Imagine a simple BankAccount class. The naive way is to just have public attributes like balance and transactions. What could go wrong? Well, another developer (or you, on a Monday morning) could accidentally set the balance to a negative number. Or set the transactions list to a string. Chaos.

The solution is to protect your internal state. In Python, we use a leading underscore (e.g., _balance) as a signal: “Hey, this is internal. Please don’t touch it directly.”

Instead of letting people mess with the data, you provide methods: deposit(), withdraw(), get_balance(). Inside these methods, you can add protective logic. The deposit() method can check for negative amounts. The withdraw() method can check for sufficient funds.

The user of your class doesn’t need to know how it all works inside. They just need to know they can call deposit(), and it will just work. You’ve hidden the complexity and provided a simple, safe interface.
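
A minimal sketch of that idea, with the protective logic living inside the methods:

class BankAccount:
    def __init__(self, opening_balance: float = 0.0):
        self._balance = opening_balance  # leading underscore: internal, please don't touch

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit amount must be positive")
        self._balance += amount

    def withdraw(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("withdrawal amount must be positive")
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def get_balance(self) -> float:
        return self._balance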

3. Loose Coupling & Modularity

Coupling is how tightly connected your code components are. You want them to be as loosely coupled as possible. A change in one part shouldn’t send a ripple effect of breakages across the entire system.

Let’s go back to that email example. A tightly coupled OrderProcessor might create an instance of EmailSender directly inside itself. Now, that OrderProcessor is forever tied to that specific EmailSender class. What if you want to send an SMS instead? You have to change the OrderProcessor code.

The loosely coupled way is to rely on an “interface,” or what Python calls an Abstract Base Class (ABC). You define a generic Notifier class that says, “Anything that wants to be a notifier must have a send() method.”

Then, your OrderProcessor just asks for a Notifier object. It doesn’t care if it’s an EmailNotifier or an SmsNotifier or a CarrierPigeonNotifier. As long as the object you give it has a send() method, it will work. You’ve decoupled the OrderProcessor from the specific implementation of the notification. You can swap them in and out interchangeably.
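
Sketched in code, using nothing beyond the standard library’s abc module:

from abc import ABC, abstractmethod

class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> None: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"emailing: {message}")

class SmsNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"texting: {message}")

class OrderProcessor:
    # Depends only on the Notifier interface, never on a concrete sender.
    def __init__(self, notifier: Notifier):
        self.notifier = notifier

    def process(self, order_id: int) -> None:
        # ... handle the order itself, then notify ...
        self.notifier.send(f"order {order_id} confirmed")

OrderProcessor(SmsNotifier()).process(42)  # swap in any Notifier you like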


A quick pause. I want to thank boot.dev for sponsoring this discussion. It’s an online platform for backend development that’s way more interactive than just watching videos. You learn Python and Go by building real projects, right in your browser. It’s gamified, so you level up and unlock content, which is surprisingly addictive. The core content is free, and with the code techwithtim, you get 25% off the annual plan. It’s a great way to put these principles into practice. Now, back to it.

4. Reusability & Extensibility

This one’s a question you should always ask yourself: Can I add new functionality without editing existing code?

Think of a ReportGenerator function that has a giant if/elif/else block to handle different formats: if format == 'text', elif format == 'csv', elif format == 'html'. To add a JSON format, you have to go in and add another elif. This is not extensible.

The better way is, again, to use an abstract class. Create a ReportFormatter interface with a format() method. Then create separate classes: TextFormatter, CsvFormatter, HtmlFormatter, each with their own format() logic.

Your ReportGenerator now just takes any ReportFormatter object and calls its format() method. Want to add JSON support? You just create a new JsonFormatter class. You don’t have to touch the ReportGenerator at all. It’s extensible without being modified.
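
A compact sketch of the extensible version (formatter names are illustrative):

import json
from abc import ABC, abstractmethod

class ReportFormatter(ABC):
    @abstractmethod
    def format(self, data: dict) -> str: ...

class TextFormatter(ReportFormatter):
    def format(self, data: dict) -> str:
        return "\n".join(f"{key}: {value}" for key, value in data.items())

class JsonFormatter(ReportFormatter):
    # Adding JSON support means adding this class; ReportGenerator is untouched.
    def format(self, data: dict) -> str:
        return json.dumps(data)

class ReportGenerator:
    def generate(self, data: dict, formatter: ReportFormatter) -> str:
        return formatter.format(data)

print(ReportGenerator().generate({"total": 3}, JsonFormatter()))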

5. Portability

This is the one everyone forgets. Will your code work on a different machine? On Linux instead of Windows? Without some weird version of C++ installed?

The most common mistake I see is hardcoding file paths. If you write C:\Users\Ahmed\data\input.txt, that code is now guaranteed to fail on every other computer in the world.

The solution is to use libraries like Python’s os and pathlib to build paths dynamically. And for things like API keys, database URLs, and other environment-specific settings, use environment variables. Don’t hardcode them! Create a .env file and load them at runtime. This makes your code portable and secure.
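
For example, a short sketch of both habits (DATABASE_URL is just a conventional variable name):

import os
from pathlib import Path

# Build paths relative to this file instead of hardcoding C:\Users\...
DATA_FILE = Path(__file__).parent / "data" / "input.txt"

# Read environment-specific settings from the environment, not from source code.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///local.db")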

6. Defensibility

Write your code as if an idiot is going to use it. Because someday, that idiot will be you.

This means validating all inputs. Sanitizing data. Setting safe default values. Ask yourself, “What’s the worst that could happen if someone provides bad input?” and then guard against it.

In a payment processor, don’t have debug_mode=True as the default. Don’t set the maximum retries to 100. Don’t forget a timeout. These are unsafe defaults.

And for the love of all that is holy, validate your inputs! Don’t just assume the amount is a number or that the account_number is valid. Check it. Raise clear errors if it’s wrong. Protect your system from bad data.
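
A sketch of what that might look like for the hypothetical payment processor:

def process_payment(amount, account_number, *, debug_mode=False,
                    max_retries=3, timeout=10.0):
    # Safe defaults: debug off, bounded retries, a real timeout.
    if not isinstance(amount, (int, float)) or amount <= 0:
        raise ValueError(f"amount must be a positive number, got {amount!r}")
    if not (isinstance(account_number, str) and account_number.isdigit()):
        raise ValueError("account_number must be a string of digits")
    # ... charge the account, retrying at most max_retries times ...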

7. Maintainability & Testability

The most expensive part of software isn’t writing it; it’s maintaining it. And you can’t maintain what you can’t test.

Code that is easy to test is, by default, more maintainable.

Look at a complex calculate function that parses an expression, performs the math, handles errors, and writes to a log file all at once. How do you even begin to test that? There are a million edge cases.

The answer is to break it down. Have a separate OperationParser. Have simple add, subtract, multiply functions. Each of these small, pure components is incredibly easy to test. Your main calculate function then becomes a simple coordinator of these tested components.
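
Sketched out, the decomposed calculator might look like this (names are illustrative):

def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

def multiply(a, b):
    return a * b

OPERATIONS = {"+": add, "-": subtract, "*": multiply}

def parse(expression: str):
    # Tiny parser: "3 * 4" -> (3.0, "*", 4.0). Trivial to test on its own.
    left, op, right = expression.split()
    return float(left), op, float(right)

def calculate(expression: str) -> float:
    # The coordinator: just glue around small, already-tested pieces.
    left, op, right = parse(expression)
    return OPERATIONS[op](left, right)

assert calculate("3 * 4") == 12.0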

8. Simplicity (KISS, DRY, YAGNI)

Finally, after all that, the highest goal is simplicity: keep it simple (KISS), don’t repeat yourself (DRY), and don’t build things you aren’t going to need (YAGNI).

Phew, that was a lot. But these principles are what it takes to level up. It’s a shift from just getting things done to building things that last.

If you enjoyed this, let me know. I’d love to make more advanced videos like this one. See you in the next one.

November 11, 2025 09:03 PM UTC


PyCoder’s Weekly

Issue #708: Debugging Live Code, NiceGUI, Textual, and More (Nov. 11, 2025)

#708 – NOVEMBER 11, 2025



Debugging Live Code With CPython 3.14

Python 3.14 added new capabilities to attach to and debug a running process. Learn what this means for debugging and examining your running code.
SURISTER

NiceGUI Goes 3.0

Talk Python interviews Rodja Trappe and Falko Schindler, creators of the NiceGUI toolkit. They talk about what it can do and how it works.
TALK PYTHON

AI Code Reviews Without the Noise


Sentry’s AI Code Review has caught more than 30,000 bugs before they hit production. 🤯 What it hasn’t caught: about a million spammy style nitpicks. Plus, it now predicts bugs 50% faster, and provides agent prompts to automate your fixes. Learn more about Sentry’s AI Code Review →
SENTRY sponsor

Building UIs in the Terminal With Python Textual

Learn to build rich, interactive terminal UIs in Python with Textual: a powerful library for modern, event-driven TUIs.
REAL PYTHON course

PEP 810: Explicit Lazy Imports (Accepted)

PYTHON.ORG

PEP 791: math.integer — submodule for integer-specific mathematics functions (Final)

PYTHON.ORG

PyCon US, Long Beach CA, 2026: Call for Proposals Open

PYCON.BLOGSPOT.COM

Django Security Release: 5.2.8, 5.1.14, and 4.2.26

DJANGO SOFTWARE FOUNDATION

EuroPython 2025 Videos Available

YOUTUBE.COM video

Python Jobs

Python Video Course Instructor (Anywhere)

Real Python

Python Tutorial Writer (Anywhere)

Real Python

More Python Jobs >>>

Articles & Tutorials

How Often Does Python Allocate?

How often does Python allocate? The answer is “very often”. This post demonstrates how you can see that for yourself. See also the associated HN discussion.
ZACK RADISIC

Improving Security and Integrity of Python Package Archives

Python packages are built on top of archive formats like ZIP which can be problematic as features of the format can be abused. A recent white paper outlines dangers to PyPI and what can be done about it.
PYTHON SOFTWARE FOUNDATION

The 2025 AI Stack, Unpacked


Temporal’s industry report explores how teams like Snap, Descript, and ZoomInfo are building production-ready AI systems, including what’s working, what’s breaking, and what’s next. Download today to see how your stack compares →
TEMPORAL sponsor

10 Smart Performance Hacks for Faster Python Code

Some practical optimization hacks, from data structures to built-in modules, that boost speed, reduce overhead, and keep your Python code clean.
DIDO GRIGOROV

Understanding the PSF’s Current Financial Outlook

A summary of the Python Software Foundation’s current financial outlook and what that means to the variety of community groups it supports.
PYTHON SOFTWARE FOUNDATION

__dict__: Where Python Stores Attributes

Most Python objects store their attributes in a __dict__ dictionary. Modules and classes always use __dict__, but not everything does.
TREY HUNNER

My Favorite Django Packages

A descriptive list of Mattias’s favorite Django packages divided into areas, including core helpers, data structures, CMS, PDFs, and more.
MATTHIAS KESTENHOLZ

A Close Look at a FastAPI Example Application

Set up an example FastAPI app, add path and query parameters, and handle CRUD operations with Pydantic for clean, validated endpoints.
REAL PYTHON

Quiz: A Close Look at a FastAPI Example Application

Practice FastAPI basics with path parameters, request bodies, async endpoints, and CORS. Build confidence to design and test simple Python web APIs.
REAL PYTHON

An Annual Release Cycle for Django

Carlton wants Django to move to an annual release cycle. This post explains why he thinks this way and what the benefits might be.
CARLTON GIBSON

Behave: ML Tests With Behavior-Driven Development

This walkthrough shows how to use the Behave library to bring behavior-driven testing to data and machine learning Python projects.
CODECUT.AI • Shared by Khuyen Tran

Polars and Pandas: Working With the Data-Frame

This post compares the syntax of Polars and pandas with a quick peek at the changes coming in pandas 3.0.
JUMPINGRIVERS.COM • Shared by Aida Gjoka

Projects & Code

moneyflow: Personal Finance Data Interface for Power Users

GITHUB.COM/WESM

wove: Beautiful Python Async

GITHUB.COM/CURVEDINF

tiny8: A Tiny CPU Simulator Written in Python

GITHUB.COM/SQL-HKR

FuncToWeb: Transform Python Functions Into a Web Interface

GITHUB.COM/OFFERRALL

dj-spinners: Pure SVG Loading Spinners for Django

GITHUB.COM/ADAMGHILL

Events

Weekly Real Python Office Hours Q&A (Virtual)

November 12, 2025
REALPYTHON.COM

Python Leiden User Group

November 13, 2025
PYTHONLEIDEN.NL

Python Kino-Barcamp Südost

November 14 to November 17, 2025
BARCAMPS.EU

Python Atlanta

November 14, 2025
MEETUP.COM

PyCon Wroclaw 2025

November 15 to November 16, 2025
PYCONWROCLAW.COM

PyCon Ireland 2025

November 15 to November 17, 2025
PYCON.IE


Happy Pythoning!
This was PyCoder’s Weekly Issue #708.



November 11, 2025 07:30 PM UTC


Daniel Roy Greenfeld

Visiting Tokyo, Japan from November 12 to 24

I'm excited to announce that Audrey and I will be visiting Japan from November 12 to November 24, 2025! This will be our first time in Japan, and we can't wait to explore Tokyo. Yes, we'll be in Tokyo for most of it, near the Shinjuku area, working from coffee shops, meeting some colleagues, and exploring the city during our free time. Our six-year-old daughter is with us, so our explorations will be family-friendly.

Unfortunately, we'll be between Python meetups in the Tokyo area. However, if you are in Tokyo and write software in any shape or form, and would like to get together for coffee or a meal, please let me know!

If you do Brazilian Jiu-Jitsu in Tokyo, please let me know as well! I'd love to drop by a gym while I'm there.

November 11, 2025 02:45 PM UTC


Real Python

Python Operators and Expressions

Python operators enable you to perform computations by combining objects and operators into expressions. Understanding Python operators is essential for manipulating data effectively.

This video course covers arithmetic, comparison, Boolean, identity, membership, bitwise, concatenation, and repetition operators, along with augmented assignment operators. You’ll also learn how to build expressions using these operators and explore operator precedence to understand the order of operations in complex expressions.

By the end of this video course, you’ll understand that:



November 11, 2025 02:00 PM UTC


Python Bytes

#457 Tapping into HTTP

Topics covered in this episode:

  • httptap
  • 10 Smart Performance Hacks For Faster Python Code
  • FastRTC
  • Explore Python dependencies with pipdeptree and uv pip tree
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=YjoTi2hHZ-M

About the show

Sponsored by us! Support our work through:

  • Our courses at Talk Python Training (training.talkpython.fm)
  • The Complete pytest Course (courses.pythontest.com)
  • Patreon supporters (patreon.com/pythonbytes)

Connect with the hosts

  • Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
  • Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
  • Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (pythonbytes.fm/friends-of-the-show); we’ll never share it.

Michael #1: httptap (httptap.dev)

  • Rich-powered CLI that breaks each HTTP request into DNS, connect, TLS, wait, and transfer phases with waterfall timelines, compact summaries, or metrics-only output.
  • Features:
    • Phase-by-phase timing: precise measurements built from httpcore trace hooks (with sane fallbacks when metal-level data is unavailable).
    • All HTTP methods: GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS with request body support.
    • Request body support: send JSON, XML, or any data inline or from file with automatic Content-Type detection.
    • IPv4/IPv6 aware: the resolver and TLS inspector report both the address and its family.
    • TLS insights: certificate CN, expiry countdown, cipher suite, and protocol version are captured automatically.
    • Multiple output modes: rich waterfall view, compact single-line summaries, or metrics-only for scripting.
    • JSON export: persist full step data (including redirect chains) for later processing.
    • Extensible: clean Protocol interfaces for DNS, TLS, timing, visualization, and export so you can plug in custom behavior.
  • Example: (screenshot at blobs.pythonbytes.fm/httptap-tp-example.png)

Brian #2: 10 Smart Performance Hacks For Faster Python Code (blog.jetbrains.com/pycharm)

  • By Dido Grigorov
  • A few from the list:
    • Use math functions instead of operators
    • Avoid exception handling in hot loops
    • Use itertools for combinatorial operations (huge speedup)
    • Use bisect for sorted list operations (huge speedup)

Michael #3: FastRTC (fastrtc.org)

  • The Real-Time Communication Library for Python: turn any Python function into a real-time audio and video stream over WebRTC or WebSockets.
  • Features:
    • 🗣️ Automatic voice detection and turn taking built in; only worry about the logic for responding to the user.
    • 💻 Automatic UI: use the .ui.launch() method to launch the WebRTC-enabled built-in Gradio UI.
    • 🔌 Automatic WebRTC support: use the .mount(app) method to mount the stream on a FastAPI app and get a WebRTC endpoint for your own frontend!
    • ⚡️ WebSocket support: use the .mount(app) method to mount the stream on a FastAPI app and get a websocket endpoint for your own frontend!
    • 📞 Automatic telephone support: use the fastphone() method of the stream to launch the application and get a free temporary phone number!
    • 🤖 Completely customizable backend: a Stream can easily be mounted on a FastAPI app so you can easily extend it to fit your production application. See the Talk To Claude demo (huggingface.co/spaces/fastrtc/talk-to-claude) for an example of how to serve a custom JS frontend.

Brian #4: Explore Python dependencies with pipdeptree and uv pip tree (pythontest.com/pipdeptree-uv-pip-tree/)

  • Suggested by Nicholas Carsner
    • We have covered it before, but in 2017 on episode 17.
  • pipdeptree (github.com/tox-dev/pipdeptree)
    • Use pipdeptree --python auto to allow it to read your venv
  • uv pip tree (docs.astral.sh/uv/reference/cli/#uv-pip-tree)
    • Also check out uv pip tree and some useful flags:
      • --show-version-specifiers to show the rules
      • --outdated notes packages that need updating

Extras

Brian:

  • Lean TDD 0.1.1 (courses.pythontest.com/lean-tdd/) includes an updated intro and another chapter, “Essential Components”
  • VSCode Peacock Extension: color code your different projects

Joke: Sure Grandma (x.com/pr0grammerhum0r/status/1956264628960272542)

November 11, 2025 08:00 AM UTC


Glyph Lefkowitz

The “Dependency Cutout” Workflow Pattern, Part I

Tell me if you’ve heard this one before.

You’re working on an application. Let’s call it “FooApp”. FooApp has a dependency on an open source library, let’s call it “LibBar”. You find a bug in LibBar that affects FooApp.

To envisage the best possible version of this scenario, let’s say you actively like LibBar, both technically and socially. You’ve contributed to it in the past. But this bug is causing production issues in FooApp today, and LibBar’s release schedule is quarterly. FooApp is your job; LibBar is (at best) your hobby. Blocking on the full upstream contribution cycle and waiting for a release is an absolute non-starter.

What do you do?

There are a few common reactions to this type of scenario, all of which are bad options.

I will enumerate them specifically here, because I suspect that some of them may resonate with many readers:

  1. Find an alternative to LibBar, and switch to it.

    This is a bad idea because transitioning away from a core infrastructure component could be extremely expensive.

  2. Vendor LibBar into your codebase and fix your vendored version.

    This is a bad idea because carrying this one fix now requires you to maintain all the tooling associated with a monorepo [1]: you have to be able to start pulling in new versions from LibBar regularly, reconcile your changes even though you now have a separate version history on your imported version, and so on.

  3. Monkey-patch LibBar to include your fix.

    This is a bad idea because you are now extremely tightly coupled to a specific version of LibBar. By modifying LibBar internally like this, you’re inherently violating its compatibility contract, in a way which is going to be extremely difficult to test. You can test this change, of course, but as LibBar changes, you will need to replicate any relevant portions of its test suite (which may be its entire test suite) in FooApp. Lots of potential duplication of effort there.

  4. Implement a workaround in your own code, rather than fixing it.

    This is a bad idea because you are distorting the responsibility for correct behavior. LibBar is supposed to do LibBar’s job, and unless you have a full wrapper for it in your own codebase, other engineers (including “yourself, personally”) might later forget to go through the alternate, workaround codepath, and invoke the buggy LibBar behavior again in some new place.

  5. Implement the fix upstream in LibBar anyway, because that’s the Right Thing To Do, and burn credibility with management while you anxiously wait for a release with the bug in production.

    This is a bad idea because you are betraying your users — by allowing the buggy behavior to persist — for the workflow convenience of your dependency providers. Your users are probably giving you money, and trusting you with their data. This means you have both ethical and economic obligations to consider their interests.

    As much as it’s nice to participate in the open source community and take on an appropriate level of burden to maintain the commons, this cannot sustainably be at the explicit expense of the population you serve directly.

    Even if we only care about the open source maintainers here, there’s still a problem: as you are likely to come under immediate pressure to ship your changes, you will inevitably relay at least a bit of that stress to the maintainers. Even if you try to be exceedingly polite, the maintainers will know that you are coming under fire for not having shipped the fix yet, and are likely to feel an even greater burden of obligation to ship your code fast.

    Much as it’s good to contribute the fix, it’s not great to put this on the maintainers.

The respective incentive structures of software development — specifically, of corporate application development and open source infrastructure development — make options 1-4 very common.

On the corporate / application side, these issues are:

But there are problems on the open source side as well. Those problems are all derived from one big issue: because we’re often working with relatively small sums of money, it’s hard for upstream open source developers to consume either money or patches from application developers. It’s nice to say that you should contribute money to your dependencies, and you absolutely should, but the cost-benefit function is discontinuous. Before a project reaches the fiscal threshold where it can be at least one person’s full-time job to worry about this stuff, there’s often no-one responsible in the first place. Developers will therefore gravitate to the issues that are either fun, or relevant to their own job.

These mutually-reinforcing incentive structures are a big reason that users of open source infrastructure, even teams who work at corporate users with zillions of dollars, don’t reliably contribute back.

The Answer We Want

All those options are bad. If we had a good option, what would it look like?

It is both practically necessary [3] and morally required [4] for you to have a way to temporarily rely on a modified version of an open source dependency, without permanently diverging.

Below, I will describe a desirable abstract workflow for achieving this goal.

Step 0: Report the Problem

Before you get started with any of these other steps, write up a clear description of the problem and report it to the project as an issue; specifically, in contrast to writing it up as a pull request. Describe the problem before submitting a solution.

You may not be able to wait for a volunteer-run open source project to respond to your request, but you should at least tell the project what you’re planning on doing.

If you don’t hear back from them at all, you will have at least made sure to comprehensively describe your issue and strategy beforehand, which will provide some clarity and focus to your changes.

If you do hear back from them, in the worst case scenario, you may discover that a hard fork will be necessary because they don’t consider your issue valid, but even that information will save you time, if you know it before you get started. In the best case, you may get a reply from the project telling you that you’ve misunderstood its functionality and that there is already a configuration parameter or usage pattern that will resolve your problems with no new code. But in all cases, you will benefit from early coordination on what needs fixing before you get to how to fix it.

Step 1: Source Code and CI Setup

Fork the source code for your upstream dependency to a writable location where it can live at least for the duration of this one bug-fix, and possibly for the duration of your application’s use of the dependency. After all, you might want to fix more than one bug in LibBar.

You want to have a place where you can put your edits, that will be version controlled and code reviewed according to your normal development process. This probably means you’ll need to have your own main branch that diverges from your upstream’s main branch.

Remember: you’re going to need to deploy this to your production, so testing gates that your upstream only applies to final releases of LibBar will need to be applied to every commit here.

Depending on LibBar's own development process, this may result in slightly unusual configurations where, for example, your fixes are written against the last LibBar release tag, rather than its current [5] main; if the project has a branch-freshness requirement, you might need two branches, one for your upstream PR (based on main) and one for your own use (based on the release branch with your changes).

Ideally for projects with really good CI and a strong “keep main release-ready at all times” policy, you can deploy straight from a development branch, but it’s good to take a moment to consider this before you get started. It’s usually easier to rebase changes from an older HEAD onto a newer one than it is to go backwards.

Speaking of CI, you will want to have your own CI system. The fact that GitHub Actions has become a de-facto lingua franca of continuous integration means that this step may be quite simple, and your forked repo can just run its own instance.

Optional Bonus Step 1a: Artifact Management

If you have an in-house artifact repository, you should set that up for your dependency too, and upload your own build artifacts to it. You can often treat your modified dependency as an extension of your own source tree and install from a GitHub URL, but if you’ve already gone to the trouble of having an in-house package repository, you can pretend you’ve taken over maintenance of the upstream package temporarily (which you kind of have) and leverage those workflows for caching and build-time savings as you would with any other internal repo.

Step 2: Do The Fix

Now that you’ve got somewhere to edit LibBar’s code, you will want to actually fix the bug.

Step 2a: Local Filesystem Setup

Before you have a production version on your own deployed branch, you’ll want to test locally, which means having both repositories in a single integrated development environment.

At this point, you will want to have a local filesystem reference to your LibBar dependency, so that you can make real-time edits, without going through a slow cycle of pushing to a branch in your LibBar fork, pushing to a FooApp branch, and waiting for all of CI to run on both.

This is useful in both directions: as you prepare the FooApp branch that makes any necessary updates on that end, you’ll want to make sure that FooApp can exercise the LibBar fix in any integration tests. As you work on the LibBar fix itself, you’ll also want to be able to use FooApp to exercise the code and see if you’ve missed anything - and this, you wouldn’t get in CI, since LibBar can’t depend on FooApp itself.

In short, you want to be able to treat both projects as an integrated development environment, with support from your usual testing and debugging tools, just as much as you want your deployment output to be an integrated artifact.
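In Python terms, a minimal sketch of this setup is an editable install pointing at a local checkout (the paths here are hypothetical, and part 2 will get into concrete tooling):

# from the FooApp checkout, with your LibBar fork checked out next door
pip install -e ../libbar

# edits under ../libbar now take effect immediately in FooApp's environment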

Step 2b: Branch Setup for PR

However, for continuous integration to work, you will also need to have a remote resource reference of some kind from FooApp’s branch to LibBar. You will need 2 pull requests: the first to land your LibBar changes to your internal LibBar fork and make sure it’s passing its own tests, and then a second PR to switch your LibBar dependency from the public repository to your internal fork.
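For a Python project, one hedged way to express that remote reference is a direct git pin in FooApp's dependency metadata; the organization, repository, and branch names below are placeholders, not a prescription:

# excerpt from FooApp's pyproject.toml
[project]
dependencies = [
    "libbar @ git+https://github.com/yourorg/libbar@fooapp-fix",
]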

At this step it is very important to ensure that there is an issue filed on your own internal backlog to drop your LibBar fork. You do not want to lose track of this work; it is technical debt that must be addressed.

Until it’s addressed, automated tools like Dependabot will not be able to apply security updates to LibBar for you; you’re going to need to manually integrate every upstream change. This type of work is itself very easy to drop or lose track of, so you might just end up stuck on a vulnerable version.

Step 3: Deploy Internally

Now that you’re confident that the fix will work, and that your temporarily-internally-maintained version of LibBar isn’t going to break anything on your site, it’s time to deploy.

Some deployment heritage should help to provide some evidence that your fix is ready to land in LibBar, but at the next step, please remember that your production environment isn’t necessarily emblematic of that of all LibBar users.

Step 4: Propose Externally

You’ve got the fix, you’ve tested the fix, you’ve got the fix in your own production, you’ve told upstream you want to send them some changes. Now, it’s time to make the pull request.

You’re likely going to get some feedback on the PR, even if you think it’s already ready to go; as I said, despite having been proven in your production environment, you may get feedback about additional concerns from other users that you’ll need to address before LibBar’s maintainers can land it.

As you process the feedback, make sure that each new iteration of your branch gets re-deployed to your own production. It would be a huge bummer to go through all this trouble, and then end up unable to deploy the next publicly released version of LibBar within FooApp because you forgot to test that your responses to feedback still worked on your own environment.

Step 4a: Hurry Up And Wait

If you’re lucky, upstream will land your changes to LibBar. But, there’s still no release version available. Here, you’ll have to stay in a holding pattern until upstream can finalize the release on their end.

Depending on some particulars, it might make sense at this point to archive your internal LibBar repository and move your pinned release version to a git hash of the LibBar version where your fix landed, in their repository.

Before you do this, check in with the LibBar core team and make sure that they understand that’s what you’re doing and they don’t have any wacky workflows which may involve rebasing or eliding that commit as part of their release process.
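Concretely, for a Python dependency this might mean changing a pin like the following (all names and the hash are hypothetical):

# before: your internal fork, on your fix branch
libbar @ git+https://github.com/yourorg/libbar@fooapp-fix

# after: upstream's repository, pinned to the commit where your fix landed
libbar @ git+https://github.com/libbar/libbar@abc1234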

Step 5: Unwind Everything

Finally, you eventually want to stop carrying any patches and move back to an official released version that integrates your fix.

You want to do this because this is what the upstream will expect when you are reporting bugs. Part of the benefit of using open source is benefiting from the collective work to do bug-fixes and such, so you don’t want to be stuck off on a pinned git hash that the developers do not support for anyone else.

As I said in step 2b [6], make sure to maintain a tracking task for doing this work, because leaving this sort of relatively easy-to-clean-up technical debt lying around is something that can potentially create a lot of aggravation for no particular benefit. Make sure to put your internal LibBar repository into an appropriate state at this point as well.

Up Next

This is part 1 of a 2-part series. In part 2, I will explore in depth how to execute this workflow specifically for Python packages, using some popular tools. I’ll discuss my own workflow, standards like PEP 517 and pyproject.toml, and of course, by the popular demand that I just know will come, uv.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!


  1. if you already have all the tooling associated with a monorepo, including the ability to manage divergence and reintegrate patches with upstream, you already have the higher-overhead version of the workflow I am going to propose, so, never mind. but chances are you don’t have that, very few companies do. 

  2. In any business where one must wrangle with Legal, 3 hours is a wildly optimistic estimate. 

  3. c.f. @mcc@mastodon.social 

  4. c.f. @geofft@mastodon.social 

  5. In an ideal world every project would keep its main branch ready to release at all times, no matter what, but we do not live in an ideal world. 

  6. In this case, there is no question. It’s 2b only, no not-2b. 

November 11, 2025 01:44 AM UTC


Ahmed Bouchefra

Tired of Pip and Venv? Meet UV, Your New All-in-One Python Tool

Hey there, how’s it going?

Let’s talk about the Python world for a second. If you’ve been around for a while, you know the drill. You start a new project, and the ritual begins: create a directory, set up a virtual environment with venv, remember to activate it, pip install your packages, and then pip freeze everything into a requirements.txt file.

It works. It’s fine. But it always felt a bit… clunky. A lot of steps. A lot to explain to newcomers.

Well, I’ve been playing with a new tool that’s been gaining a ton of steam, and honestly? I don’t think I’m going back. It’s called UV, and it comes from Astral, the same team behind the super-popular linter, ruff.

The goal here is ambitious. UV wants to be the single tool that replaces pip, venv, pip-tools, and even pipx. It’s an installer, an environment manager, and a tool runner all rolled into one. And because it’s written in Rust, it’s ridiculously fast.

So, let’s walk through what a typical project setup looks like the old way… and then see how much simpler it gets with UV.

The Old Way: The Pip & Venv Dance

Okay, so let’s say we’re starting a new Flask app. The old-school workflow would look something like this:

  1. mkdir old-way-project && cd old-way-project
  2. python3 -m venv .venv (Create the virtual environment)
  3. source .venv/bin/activate (Activate it… don’t forget!)
  4. pip install flask requests (Install our packages)
  5. pip freeze > requirements.txt (Save our dependencies for later)

It’s a process we’ve all done a hundred times. But it’s also a process with a few different tools and concepts you have to juggle. For someone just starting out, it’s a lot to take in.

The New Way: Just uv

Now, let’s do the same thing with UV.

Instead of creating a directory myself, I can just run:

uv init new-app

This one command creates a new directory and scaffolds a modern Python project structure inside it (you just cd in afterwards). It initializes a Git repository, creates a sensible .gitignore, and gives us a pyproject.toml file. This is the modern way to manage project metadata and dependencies.
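At the time of writing, the generated layout looks roughly like this (treat the exact file list as an assumption; it varies a bit between UV versions):

new-app/
├── .git/
├── .gitignore
├── .python-version
├── README.md
├── main.py
└── pyproject.toml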

But wait… where’s the virtual environment? Where’s the activation step?

Here’s the magic. You don’t have to worry about it.

Let’s add Flask and Requests to our new project. Instead of pip, we use uv add:

uv add flask requests

When I run this, a few amazing things happen:

  1. UV sees I don’t have a virtual environment yet, so it creates one for me automatically.
  2. It installs Flask and Requests into that environment at lightning speed.
  3. It updates my pyproject.toml file to list flask and requests as dependencies.
  4. It creates a uv.lock file, which records the exact versions of every single package and sub-dependency. This is what solves the classic “but it works on my machine!” problem.

All of that, with one command, and I never had to type source ... activate.
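For the curious, the dependencies end up in pyproject.toml looking roughly like this; the version specifiers below are illustrative, not necessarily what you'll get:

[project]
name = "new-app"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "flask>=3.1.0",
    "requests>=2.32.3",
]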

Running Your Code (This Is the Coolest Part)

“Okay,” you might be thinking, “but how do I run my code if the environment isn’t active?”

Simple. You just tell UV to run it for you.

uv run main.py

UV finds the project’s virtual environment and runs your script inside it, even though your main shell doesn’t have it activated.

Now, get ready for the part that really blew my mind.

Let’s say I accidentally delete my virtual environment.

rm -rf .venv

Normally, this would be a disaster. I’d have to recreate the environment, activate it, and reinstall everything from my requirements.txt file. It would be a whole thing.

But with UV? I just run the same command again:

uv run main.py

UV sees the environment is gone. It reads the uv.lock file, instantly recreates the exact same environment with the exact same packages, and then runs the code. It all happens in a couple of seconds. It’s just… seamless.

If you’re sharing the project with a teammate, they just clone it and run uv sync. That’s it. Their environment is ready to go, perfectly matching yours.

It Even Replaces Pipx for Tools

Another thing I love is how it handles command-line tools. I used to use pipx to install global tools like linters and formatters. UV has that built-in, too.

Want to install ruff?

uv tool install ruff

This installs it in an isolated environment but makes it available everywhere.

But even better is the uvx command, which lets you run a tool without permanently installing it.

Let’s say I want to quickly check my code with ruff but I don’t want to install it.

uvx ruff check .

UV will download ruff to a temporary environment, run the command, and then clean up after itself. It’s perfect for trying out new tools or running one-off commands without cluttering your system.

My Takeaway

I know, I know… another new tool to learn. It can feel overwhelming. But this one is different. It doesn’t just add another layer; it simplifies and replaces a whole stack of existing tools with something faster, smarter, and more intuitive.

The smart caching alone is a huge win. If you have ten projects that all use Flask, UV only stores it once on your disk, saving a ton of space and making new project setups almost instantaneous.

I’ve fully switched my workflow over to UV, and I can’t see myself going back. It just gets out of the way and lets me focus on the code.

November 11, 2025 12:00 AM UTC

The Anatomy of a Scalable Python Project

Ever start a Python project that feels clean and simple, only to have it turn into a tangled mess a few months later? Yeah, I’ve been there more times than I can count.

Today, I want to pull back the curtain and show you the anatomy of a Python project that’s built to last. This is the setup I use for all my production projects. It’s a blueprint that helps keep things sane, organized, and ready to grow without giving you a massive headache.

We’ll walk through everything—folder structure, config, logging, testing, and tooling. The whole package.

So, What Does “Scalable” Even Mean?

It’s a word that gets thrown around a lot, right? “Scalable.” But what does it actually mean in practice?

For me, it boils down to a few things:

  1. Scales with Size: Your codebase is going to grow. That’s a good thing! It means you’re adding features. A scalable structure means you don’t have to constantly refactor everything just to add something new. The foundation is already there.
  2. Scales with Your Team: If you bring on another developer, they shouldn’t need a two-week onboarding just to figure out where to put a new function. The boundaries should be clear, and the layout should be predictable.
  3. Scales with Environments: Moving from your local machine to staging and then to production should be… well, boring. In a good way. Your config should be centralized, making environment switching a non-event.
  4. Scales with Speed: Your local setup should be a breeze. Tests should run fast. Docker should just work. You want to eliminate friction so you can actually focus on building things.

Over the years, I’ve worked with everything from TypeScript to Java to C++, and while the specifics change, the principles of good structure are universal. This is the flavor that I’ve found works beautifully for Python.

The Blueprint: A Balanced Folder Structure

You want just enough structure to keep things organized, but not so much that you’re digging through ten nested folders to find a single file. It’s a balance.

Here’s the high-level view:

/
├── app/      # Your application's source code
├── tests/    # Your tests
├── .env      # Environment variables (for local dev)
├── Dockerfile
├── docker-compose.yml
├── pyproject.toml
└── ... other config files

Right away, you see the most important separation: your app code and your tests live in their own top-level directories. This is crucial. Don’t mix them.

Diving Into the app Folder

This is where the magic happens. Inside app, I follow a simple pattern. For this example, we’re looking at a FastAPI app, but the concepts apply anywhere.

app/
├── api/
│   └── v1/
│       └── users.py   # The HTTP layer (routers)
├── core/
│   ├── config.py    # Centralized configuration
│   └── logging.py   # Logging setup
├── db/
│   └── schema.py    # Database models (e.g., SQLAlchemy)
├── models/
│   └── user.py      # Data contracts (e.g., Pydantic schemas)
├── services/
│   └── user.py      # The business logic!
└── main.py          # App entry point

Let’s break it down.

main.py - The Entry Point

This file is kept as lean as possible. Seriously, there’s almost nothing in it. It just initializes the FastAPI app and registers the routers from the api folder. That’s it.
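As a sketch, assuming the layout above (module names come from the tree; everything else is illustrative), the whole file can be as small as this:

from fastapi import FastAPI

from app.api.v1 import users

app = FastAPI()

# Wire up the HTTP layer; no business logic lives here.
app.include_router(users.router, prefix="/api/v1")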

api/ - The Thin HTTP Layer

This is where your routes live. If you look inside api/v1/users.py, you won’t find any business logic. You’ll just see the standard GET, POST, PUT, DELETE endpoints. Their only job is to handle the HTTP request and response. They act as a thin translator, calling into the real logic somewhere else.
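Here's a minimal sketch of such a router. The get_session dependency is hypothetical (a function that yields a database session), and UserRead is assumed to be a Pydantic schema configured with from_attributes=True so it can read ORM objects:

from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from app.db.session import get_session  # hypothetical session provider
from app.models.user import UserRead
from app.services.user import UserService

router = APIRouter()


@router.get("/users", response_model=list[UserRead])
def list_users(session: Session = Depends(get_session)) -> list[UserRead]:
    # Thin translation layer: HTTP in, service call out. Nothing else.
    return UserService(session).list_users()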

core/ - The Cross-Cutting Concerns

This folder is for things that are used all over your application: the centralized configuration in config.py and the logging setup in logging.py, so every other module gets them from one place.

db/ and models/ - The Data Layers

The db/ folder holds the persistence layer (the SQLAlchemy models in schema.py), while models/ holds the Pydantic schemas that act as your data contracts at the API boundary. Keeping the two separate means the database shape can evolve without silently changing your public API.

services/ - The Heart of Your Application

This is the most important folder, in my opinion. This is where your actual business logic lives. The UserService takes a database session and does the real work: querying for users, creating a new user, running validation logic, etc.
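For illustration, a bare-bones version of that service might look like this (the names are assumptions based on the tree above, not the author's actual code):

from sqlalchemy.orm import Session

from app.db.schema import User


class UserService:
    """Business logic for users; it knows nothing about HTTP."""

    def __init__(self, session: Session) -> None:
        self.session = session

    def list_users(self) -> list[User]:
        return self.session.query(User).all()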

Why is this so great? Because the service knows nothing about HTTP: you can unit test it with nothing but a database session, reuse it from a CLI script or a background job, and the web layer stays a thin shell around it.

Let’s Talk About Testing

Your tests folder should mirror your app folder’s structure. This makes it incredibly easy to find the tests for any given piece of code.

tests/
└── api/
    └── v1/
        └── test_users.py

For testing, I use an in-memory SQLite database. This keeps my tests completely isolated from my production database and makes them run super fast.

FastAPI has a fantastic dependency injection system that makes testing a dream. In my tests, I can just “override” the dependency that provides the database session and swap it with my in-memory test database. Now, when I run a test that hits my API, it’s running against a temporary, clean database every single time.
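A hedged sketch of that override, reusing the hypothetical get_session dependency and SQLAlchemy models from the earlier sketches:

from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import StaticPool

from app.db.schema import Base           # hypothetical declarative base
from app.db.session import get_session   # hypothetical session provider
from app.main import app

# One shared in-memory SQLite database: isolated from production and fast.
engine = create_engine(
    "sqlite://",
    connect_args={"check_same_thread": False},
    poolclass=StaticPool,  # keep one connection so the in-memory DB persists
)
TestingSession = sessionmaker(bind=engine)
Base.metadata.create_all(engine)


def override_get_session():
    session = TestingSession()
    try:
        yield session
    finally:
        session.close()


app.dependency_overrides[get_session] = override_get_session
client = TestClient(app)


def test_list_users_starts_empty():
    response = client.get("/api/v1/users")
    assert response.status_code == 200
    assert response.json() == []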

Tooling That Ties It All Together

How It All Flows Together

So, let’s trace a request:

  1. A GET /users request hits the router in api/v1/users.py.
  2. FastAPI’s dependency injection system automatically creates a UserService instance, giving it a fresh database session.
  3. The route calls the list_users method on the service.
  4. The service runs a query against the database, gets the results, and returns them.
  5. The router takes those results, formats them as a JSON response, and sends it back to the client.

The beauty of this is the clean separation of concerns. The API layer handles HTTP. The service layer handles business logic. The database layer handles persistence.

This structure lets you start small and add complexity later without making a mess. The boundaries are clear, which makes development faster, testing easier, and onboarding new team members a whole lot smoother.

Of course, this is a starting point. You might need a scripts/ folder for data migrations or other custom tasks. But this foundation… it’s solid. It’s been a game-changer for me, and I hope it can be for you too.

November 11, 2025 12:00 AM UTC

November 10, 2025


Brian Okken

Explore Python dependencies with `pipdeptree` and `uv pip tree`

Sometimes you just want to know about your dependencies, and their dependencies.

I’ve been using pipdeptree for a while, but recently switched to uv pip tree.
Let’s take a look at both tools.

pipdeptree

pipdeptree is pip installable, but I don't want pipdeptree itself to be reported alongside everything else installed, so I usually install it outside of any project, where it can be used system-wide.

usage:

pipdeptree --python auto

The --python auto tells pipdeptree to look at the current environment.

November 10, 2025 11:24 PM UTC


Patrick Altman

Using Vite with Vue and Django


I've been building web applications with Vue and Django for a long time. I don't remember my first one—certainly before Vite was available. As soon as I switched to using Vite, I ended up building a template tag to join the frontend and backend together rather than having separate projects. I've always found it simpler to have Django serve everything.

While preparing this post to share the latest version of what is essentially a small set of files we copy between projects, I started exploring the idea of open-sourcing the solution.

The goal was twofold:

  1. To create a reusable package instead of relying on copy-and-paste code, and
  2. To contribute something back to the open-source community.

In the process, I stumbled upon an excellent existing project — django-vite.

So now I think we might give django-vite a serious look, with an eye toward switching to it and adding a Redis backend.

For now though, I think it's still worth sharing our simple solution in case it's a better fit for you (I haven't fully examined django-vite yet).

The Problem

The problem we are trying to solve is using Vite to bundle and build our Vue frontend while still having Django serve the bundled JS and CSS entry points automatically. Running vite build will yield output like:

main-2uqS21f4.js
main-BCI6Z1XL.css

Without any extra tooling, we'd have to commit the build output and hard-code these cache-busting file names into the base template every time we made a change that could affect the bundle.

This was completely unacceptable.

The Solution

Vite offers the ability to generate a manifest file that maps each cache-busting file name to its base name in a machine-readable format. This allows us to run builds on CI/CD as part of our Docker image build, and then read the manifest produced by Vite, keeping everything neat and simple.

Here is the key setting in vite.config.ts:

{
  // ...
  build: {
    manifest: true,
    // ...
  }
  // ...
}

This will produce a file in your output folder (under .vite/) called manifest.json.

Here is a snippet; note that you typically won’t need to inspect it manually:

"main.ts": {
    "file": "assets/main-2uqS21f4.js",
    "name": "main",
    "src": "main.ts",
    "isEntry": true,
    "imports": [
      "_runtime-D84vrshd.js",
      "_forms-OJiVtksU.js",
      "_analytics-CCPQRNnj.js",
      "_forms-pro-qreHBaUb.js",
      "_icons-3wXMhf1p.js",
      "_pv-DzJUpav-.js",
      "_vue-mapbox-BRpo1ix7.js",
      "_mapbox--vATkUHK.js"
    ],
    "dynamicImports": [
      "views/HomeView.vue",
      "views/dispatch/DispatchNewOrdersView.vue",
      ...

This is the key to tying things together dynamically. We constructed a template tag so that we could dynamically add our entry point in our base template:

{% load vite %}

<html>
  <head>
    <!-- ... base head template stuff -->
    {% vite_styles 'main.ts' %}
  </head>
  <body>
    <!-- ... base template stuff -->
  
    {% vite_scripts 'main.ts' %}
  </body>
</html>

The idea behind this type of solution is conceptually pretty simple: the template tag needs to read manifest.json, find the referenced entry point main.ts, and return the staticfiles-based path to what's in the file key (e.g. assets/main-2uqS21f4.js) before rendering the template.

Given this, we need to optimize by reducing file I/O on every request, and since we'll use caching we must also handle cache invalidation. Every deployment is a candidate for invalidation, because the bundle can change at deployment time, but not in between.

We'll solve the caching using Redis; since we have multiple nodes in our web app cluster, local memory isn't an option. We'll solve the cache invalidation with a management command that runs at the end of each deployment, using a short stack (keeping only the latest n versions) instead of deleting outright.

We use a stack so we can push the new manifest version onto the top while leaving older references around. Requests to updated nodes can then fetch the latest bundle, while older nodes still work and serve up their existing (older) bundle. This enables random rolling upgrades on our cluster, allowing us to push updates in the middle of a work day without disrupting end users.

All of this is done with basically a template tag python module and a management command.

Template Tag

We have this template tag module stored as vite.py, so that you can load it with {% load vite %} which then exposes the {% vite_styles %} and {% vite_scripts %} template tags.

import json
import re
import typing

from django import template
from django.conf import settings
from django.core.cache import cache
from django.templatetags.static import static
from django.utils.safestring import mark_safe


if typing.TYPE_CHECKING:  # pragma: no cover
    from django.utils.safestring import SafeString

    ChunkType = typing.TypedDict("chunk", {"file": str, "css": list[str], "imports": list[str]})
    ManifestType = typing.Mapping[str, ChunkType]
    ScriptsStylesType = typing.Tuple[list[str], list[str]]


DEV_SERVER_ROOT = "http://localhost:3001/static"


register = template.Library()


def is_absolute_url(url: str) -> bool:
    return re.match("^https?://", url) is not None


def set_manifest() -> "ManifestType":
    with open(settings.MANIFEST_LOADER["output_path"]) as fp:
        manifest: "ManifestType" = json.load(fp)

    cache.set(settings.MANIFEST_LOADER["cache_key"], manifest, None)
    return manifest


def get_manifest() -> "ManifestType":
    if manifest := cache.get(settings.MANIFEST_LOADER["cache_key"]):
        if settings.MANIFEST_LOADER["cache"]:
            return manifest

    return set_manifest()


def vite_manifest(entries_names: typing.Sequence[str]) -> "ScriptsStylesType":
    if settings.DEBUG:
        scripts = [f"{DEV_SERVER_ROOT}/@vite/client"] + [
            f"{DEV_SERVER_ROOT}/{name}"
            for name in entries_names
        ]
        styles = []
        return scripts, styles

    manifest = get_manifest()

    _processed = set()

    def _process_entries(names: typing.Sequence[str]) -> "ScriptsStylesType":
        scripts = []
        styles = []

        for name in names:
            if name in _processed:
                continue
            chunk = manifest[name]

            import_scripts, import_styles = _process_entries(chunk.get("imports", []))
            scripts.extend(import_scripts)
            styles.extend(import_styles)

            scripts.append(chunk["file"])
            styles.extend(chunk.get("css", []))

            _processed.add(name)
        return scripts, styles

    return _process_entries(entries_names)


@register.simple_tag(name="vite_styles")
def vite_styles(*entries_names: str) -> "SafeString":
    _, styles = vite_manifest(entries_names)
    styles = map(lambda href: href if is_absolute_url(href) else static(href), styles)
    return mark_safe("\n".join(map(lambda href: f'<link rel="stylesheet" href="{href}" />', styles)))  # nosec


@register.simple_tag(name="vite_scripts")
def vite_scripts(*entries_names: str) -> "SafeString":
    scripts, _ = vite_manifest(entries_names)
    scripts = map(lambda src: src if is_absolute_url(src) else static(src), scripts)
    return mark_safe("\n".join(map(lambda src: f'<script type="module" src="{src}"></script>', scripts)))  # nosec

Here are a few features this supports:

  1. If running in local development, it will bypass the manifest entirely, load @vite/client, and point to the dev server running in a docker compose instance, so we get HMR (Hot Module Replacement).
  2. It relies on some settings that control whether caching is enabled and what the cache key is (we set it to the RELEASE_VERSION, which is pulled from the environment and tied to the git SHA or tag).
  3. We leverage the Django cache backend here for getting from and setting to the cache, independent of what the actual cache backend is. This layer of indirection only works for this tag, though, and not for our cache invalidation management command.

The settings we use:

MANIFEST_LOADER = {
    "cache": not DEBUG,
    "cache_key": f"vite_manifest:{RELEASE_VERSION}",
    "output_path": f"{STATIC_ROOT}/.vite/manifest.json",
}

The management command gets a bit fancy with invalidation mainly to support running a multi-node cluster.

If you run a single web instance, this probably doesn't offer a lot of benefit.

However, we encountered issues spinning up additional nodes: some were updated, others weren’t, and we were seeing 500 errors during deployment because we needed to support both versions in the cache.

Our short-term solution was to just put the entire site into maintenance mode during deploys, but that's kind of annoying for pushing out some simple fixes. This technique has solved that for us, via the management command below, which lives in post_deploy.py:

from django.conf import settings
from django.core.cache import cache
from django.core.management import BaseCommand
from redis.exceptions import RedisError

from ...templatetags.vite import set_manifest


class Command(BaseCommand):

    def success(self, message: str):
        self.stdout.write(self.style.SUCCESS(message))

    def warning(self, message: str):
        self.stdout.write(self.style.WARNING(message))

    def error(self, message: str):
        self.stdout.write(self.style.ERROR(message))

    def set_new_manifest_in_cache(self):
        current_version = settings.RELEASE_VERSION
        if not current_version:
            self.warning(
                "RELEASE_VERSION is empty; skipping cleanup to avoid deleting default keys."
            )
            return

        prefix = "vite_manifest:*"  # Match all versionsed keys
        recent_versions_key = "recent-manifest-versions"  # Redis key for tracking versions

        try:
            redis_client = cache._client.get_client()

            # Add current version to the front of the list (in bytes)
            redis_client.lpush(recent_versions_key, current_version.encode("utf-8"))

            # Keep only the six most recent versions (ltrim keeps indices 0-5)
            redis_client.ltrim(recent_versions_key, 0, 5)

            # Get recent versions as a set for quick lookup (decoding to strings)
            recent_versions = {
                v.decode("utf-8")
                for v in redis_client.lrange(recent_versions_key, 0, -1)
            }

            self.success(f"Recent versions: {recent_versions}")

            cursor = "0"
            deleted_count = 0
            while cursor != 0:
                cursor, keys = redis_client.scan(cursor=cursor, match=prefix, count=100)  # Batch scan
                for key in keys:
                    key_str = key.decode("utf-8")
                    self.success(f"Checking key: {key_str}")
                    # If the key&aposs version is not in recent versions, delete it
                    if not any(key_str.endswith(f":{version}") for version in recent_versions):
                        redis_client.delete(key)
                        deleted_count += 1
                        self.success(f"Deleted old manifest cache key: {key_str}")

            self.success(
                f"Added current version &apos{current_version}&apos and deleted {deleted_count} old manifest cache keys."
            )

            set_manifest()
            self.success("Updated Vite manifest in cache.")
        except RedisError as e:
            self.error(f"Redis error: {e}")

    def handle(self, *args, **options):
        self.set_new_manifest_in_cache()

This isn't the prettiest code. We could probably tidy it up by extracting the Redis operations and/or the main loop to make things more readable. But for now it's working, and we haven't had to touch it in a while.

The latest six versions in our cache:

[screenshot: the six most recent manifest version keys in Redis]

We had to break out of the pure Django cache backend here to get access to some Redis-specific operations for the stack handling. Again, this is something that might be worth tidying up if we build a cache backend for django-vite, though maybe not necessary if we build a Redis-specific backend.

Not only do we invalidate the latest cache entry by pushing the version key down the stack, but we also seed the cache with the current version to save some time on a lazy load.

Summary

Next up is for us to take a hard look at django-vite, as it seems to be a well-structured and maintained project. Perhaps we can move to using it, retire our custom code, and then contribute whatever remains lacking either to the project or via a sidecar package.

Have you dealt with these problems in a different way? If so, we'd love to hear from you and learn about your approach.

November 10, 2025 05:58 PM UTC