Planet Python
Last update: November 18, 2025 07:43 AM UTC
November 17, 2025
Rodrigo Girão Serrão
Floodfill algorithm in Python
Learn how to implement and use the floodfill algorithm in Python.
What is the floodfill algorithm?
Click the image below to randomly colour the region you click.
Go ahead, try it!
IMG_WIDTH = 160 IMG_HEIGHT = 160 PIXEL_SIZE = 2 import asyncio import collections import random from pyscript import display from pyodide.ffi import create_proxy import js from js import fetch canvas = js.document.getElementById("bitmap") ctx = canvas.getContext("2d") URL = "/blog/floodfill-algorithm-in-python/_python.txt" async def load_bitmap(url: str) -> list[list[int]]: # Fetch the text file from the URL response = await fetch(url) text = await response.text() bitmap: list[list[int]] = [] for line in text.splitlines(): line = line.strip() if not line: continue row = [int(ch) for ch in line if ch in "01"] if row: bitmap.append(row) return bitmap def draw_bitmap(bitmap): rows = len(bitmap) cols = len(bitmap[0]) if rows > 0 else 0 if rows == 0 or cols == 0: return for y, row in enumerate(bitmap): for x, value in enumerate(row): if value == 1: ctx.fillStyle = "black" else: ctx.fillStyle = "white" ctx.fillRect(x * PIXEL_SIZE, y * PIXEL_SIZE, PIXEL_SIZE, PIXEL_SIZE) _neighbours = [(1, 0), (-1, 0), (0, 1), (0, -1)] async def fill_bitmap(bitmap, x, y): if bitmap[y][x] == 1: return ctx = canvas.getContext("2d") r, g, b = (random.randint(0, 255) for _ in range(3)) ctx.fillStyle = f"rgb({r}, {g}, {b})" def draw_pixel(x, y): ctx.fillRect(x * PIXEL_SIZE, y * PIXEL_SIZE, PIXEL_SIZE, PIXEL_SIZE) pixels = collections.deque([(x, y)]) seen = set((x, y)) while pixels: nx, ny = pixels.pop() draw_pixel(nx, ny) for dx, dy in _neighbours: x_, y_ = nx + dx, ny + dy if x_ < 0 or x_ >= IMG_WIDTH or y_ < 0 or y_ >= IMG_HEIGHT or (x_, y_) in seen: continue if bitmap[y_][x_] == 0: seen.add((x_, y_)) pixels.appendleft((x_, y_)) await asyncio.sleep(0.0001) is_running = False def get_event_coords(event): """Return (clientX, clientY) for mouse/pointer/touch events.""" # PointerEvent / MouseEvent: clientX/clientY directly available if hasattr(event, "clientX") and hasattr(event, "clientY") and event.clientX is not None: return event.clientX, event.clientY # TouchEvent: use the first touch point if hasattr(event, "touches") and event.touches.length > 0: touch = event.touches.item(0) return touch.clientX, touch.clientY # Fallback: try changedTouches if hasattr(event, "changedTouches") and event.changedTouches.length > 0: touch = event.changedTouches.item(0) return touch.clientX, touch.clientY return None, None async def on_canvas_press(event): global is_running if is_running: return is_running = True try: # Avoid scrolling / zooming taking over on touch if hasattr(event, "preventDefault"): event.preventDefault() clientX, clientY = get_event_coords(event) if clientX is None: # Could not read coordinates; bail out gracefully return rect = canvas.getBoundingClientRect() # Account for CSS scaling: map from displayed size to canvas units scale_x = canvas.width / rect.width scale_y = canvas.height / rect.height x_canvas = (clientX - rect.left) * scale_x y_canvas = (clientY - rect.top) * scale_y x_idx = int(x_canvas // PIXEL_SIZE) y_idx...November 17, 2025 03:49 PM UTC
Real Python
How to Serve a Website With FastAPI Using HTML and Jinja2
By the end of this guide, you’ll be able to serve dynamic websites from FastAPI endpoints using Jinja2 templates powered by CSS and JavaScript. By leveraging FastAPI’s HTMLResponse, StaticFiles, and Jinja2Templates classes, you’ll use FastAPI like a traditional Python web framework.
You’ll start by returning basic HTML from your endpoints, then add Jinja2 templating for dynamic content, and finally create a complete website with external CSS and JavaScript files to copy hex color codes:
To follow along, you should be comfortable with Python functions and have a basic understanding of HTML and CSS. Experience with FastAPI is helpful but not required.
Get Your Code: Click here to download the free sample code that shows you how to serve a website with FastAPI using HTML and Jinja2.
Take the Quiz: Test your knowledge with our interactive “How to Serve a Website With FastAPI Using HTML and Jinja2” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Serve a Website With FastAPI Using HTML and Jinja2Review how to build dynamic websites with FastAPI and Jinja2, and serve HTML, CSS, and JS with HTMLResponse and StaticFiles.
Prerequisites
Before you start building your HTML-serving FastAPI application, you’ll need to set up your development environment with the required packages. You’ll install FastAPI along with its standard dependencies, including the ASGI server you need to run your application.
Select your operating system below and install FastAPI with all the standard dependencies inside a virtual environment:
These commands create and activate a virtual environment, then install FastAPI along with Uvicorn as the ASGI server, and additional dependencies that enhance FastAPI’s functionality. The standard option ensures you have everything you need for this tutorial, including Jinja2 for templating.
Step 1: Return Basic HTML Over an API Endpoint
When you take a close look at a FastAPI example application, you commonly encounter functions returning dictionaries, which the framework transparently serializes into JSON responses.
However, FastAPI’s flexibility allows you to serve various custom responses besides that—for example, HTMLResponse to return content as a text/html type, which your browser interprets as a web page.
To explore returning HTML with FastAPI, create a new file called main.py and build your first HTML-returning endpoint:
main.py
from fastapi import FastAPI
from fastapi.responses import HTMLResponse
app = FastAPI()
@app.get("/", response_class=HTMLResponse)
def home():
html_content = """
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Home</title>
</head>
<body>
<h1>Welcome to FastAPI!</h1>
</body>
</html>
"""
return html_content
The HTMLResponse class tells FastAPI to return your content with the text/html content type instead of the default application/json response. This ensures that browsers interpret your response as HTML rather than plain text.
Before you can visit your home page, you need to start your FastAPI development server to see the HTML response in action:
Read the full article at https://realpython.com/fastapi-jinja2-template/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
November 17, 2025 02:00 PM UTC
Quiz: How to Serve a Website With FastAPI Using HTML and Jinja2
In this quiz, you’ll test your understanding of building dynamic websites with FastAPI and Jinja2 Templates.
By working through this quiz, you’ll revisit how to return HTML with HTMLResponse, serve assets with StaticFiles, render Jinja2 templates with context, and include CSS and JavaScript for interactivity like copying hex color codes.
If you are new to FastAPI, review Get Started With FastAPI. You can also brush up on Python functions and HTML and CSS.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
November 17, 2025 12:00 PM UTC
Python Bytes
#458 I will install Linux on your computer
<strong>Topics covered in this episode:</strong><br> <ul> <li><strong>Possibility of a new website for Django</strong></li> <li><strong><a href="https://github.com/slaily/aiosqlitepool?featured_on=pythonbytes">aiosqlitepool</a></strong></li> <li><strong><a href="https://deptry.com?featured_on=pythonbytes">deptry</a></strong></li> <li><strong><a href="https://github.com/juftin/browsr?featured_on=pythonbytes">browsr</a></strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=s2HlckfeBCs' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="458">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a></li> </ul> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy">@mkennedy@fosstodon.org</a> / <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes">@mkennedy.codes</a> (bsky)</li> <li>Brian: <a href="https://fosstodon.org/@brianokken">@brianokken@fosstodon.org</a> / <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes">@brianokken.bsky.social</a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes">@pythonbytes@fosstodon.org</a> / <a href="https://bsky.app/profile/pythonbytes.fm">@pythonbytes.fm</a> (bsky)</li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually <strong>Monday</strong> at 10am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</p> <p><strong>Brian #1: Possibility of a new website for Django</strong></p> <ul> <li>Current Django site: <a href="https://www.djangoproject.com?featured_on=pythonbytes">djangoproject.com</a></li> <li>Adam Hill’s in progress redesign idea: <a href="https://django-homepage.adamghill.com?featured_on=pythonbytes">django-homepage.adamghill.com</a></li> <li>Commentary in the <a href="https://forum.djangoproject.com/t/want-to-work-on-a-homepage-site-redesign/42909/35?featured_on=pythonbytes">Want to work on a homepage site redesign? 
discussion</a></li> </ul> <p><strong>Michael #2: <a href="https://github.com/slaily/aiosqlitepool?featured_on=pythonbytes">aiosqlitepool</a></strong></p> <ul> <li>🛡️A resilient, high-performance asynchronous connection pool layer for SQLite, designed for efficient and scalable database operations.</li> <li>About 2x better than regular SQLite.</li> <li>Pairs with <a href="https://github.com/omnilib/aiosqlite?featured_on=pythonbytes">aiosqlite</a></li> <li><code>aiosqlitepool</code> in three points: <ul> <li><strong>Eliminates connection overhead</strong>: It avoids repeated database connection setup (syscalls, memory allocation) and teardown (syscalls, deallocation) by reusing long-lived connections.</li> <li><strong>Faster queries via "hot" cache</strong>: Long-lived connections keep SQLite's in-memory page cache "hot." This serves frequently requested data directly from memory, speeding up repetitive queries and reducing I/O operations.</li> <li><strong>Maximizes concurrent throughput</strong>: Allows your application to process significantly more database queries per second under heavy load.</li> </ul></li> </ul> <p><strong>Brian #3: <a href="https://deptry.com?featured_on=pythonbytes">deptry</a></strong></p> <ul> <li>“deptry is a command line tool to check for issues with dependencies in a Python project, such as unused or missing dependencies. It supports projects using Poetry, pip, PDM, uv, and more generally any project supporting PEP 621 specification.”</li> <li>“Dependency issues are detected by scanning for imported modules within all Python files in a directory and its subdirectories, and comparing those to the dependencies listed in the project's requirements.”</li> <li><p>Note if you use <code>project.optional-dependencies</code></p> <div class="codehilite"> <pre><span></span><code><span class="k">[project.optional-dependencies]</span> <span class="n">plot</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="p">[</span><span class="s2">"matplotlib"</span><span class="p">]</span> <span class="n">test</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="p">[</span><span class="s2">"pytest"</span><span class="p">]</span> </code></pre> </div></li> <li><p>you have to set a config setting to get it to work right:</p> <div class="codehilite"> <pre><span></span><code><span class="k">[tool.deptry]</span> <span class="n">pep621_dev_dependency_groups</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="p">[</span><span class="s2">"test"</span><span class="p">,</span><span class="w"> </span><span class="s2">"docs"</span><span class="p">]</span> </code></pre> </div></li> </ul> <p><strong>Michael #4: <a href="https://github.com/juftin/browsr?featured_on=pythonbytes">browsr</a></strong></p> <ul> <li><strong><code>browsr</code></strong> 🗂️ is a pleasant <strong>file explorer</strong> in your terminal. 
It's a command line <strong>TUI</strong> (text-based user interface) application that empowers you to browse the contents of local and remote filesystems with your keyboard or mouse.</li> <li>You can quickly navigate through directories and peek at files whether they're hosted <strong>locally</strong>, in <strong>GitHub</strong>, over <strong>SSH</strong>, in <strong>AWS S3</strong>, <strong>Google Cloud Storage</strong>, or <strong>Azure Blob Storage</strong>.</li> <li>View code files with syntax highlighting, format JSON files, render images, convert data files to navigable datatables, and more.</li> </ul> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li>Understanding the MICRO</li> <li>TDD chapter coming out later today or maybe tomorrow, but it’s close.</li> </ul> <p>Michael:</p> <ul> <li><a href="https://marketplace.visualstudio.com/items?itemName=johnpapa.vscode-peacock&featured_on=pythonbytes">Peacock</a> is excellent</li> </ul> <p><strong>Joke: <a href="https://x.com/thatstraw/status/1977317574779048171?featured_on=pythonbytes">I will find you</a></strong></p>
November 17, 2025 08:00 AM UTC
November 16, 2025
Ned Batchelder
Why your mock breaks later
In Why your mock doesn’t work I explained this rule of mocking:
Mock where the object is used, not where it’s defined.
That blog post explained why that rule was important: often a mock doesn’t work at all if you do it wrong. But in some cases, the mock will work even if you don’t follow this rule, and then it can break much later. Why?
Let’s say you have code like this:
# user.py
def get_user_settings():
with open(Path("~/settings.json").expanduser()) as f:
return json.load(f)
def add_two_settings():
settings = get_user_settings()
return settings["opt1"] + settings["opt2"]
You write a simple test:
def test_add_two_settings():
# NOTE: need to create ~/settings.json for this to work:
# {"opt1": 10, "opt2": 7}
assert add_two_settings() == 17
As the comment in the test points out, the test will only pass if you create the correct settings.json file in your home directory. This is bad: you don’t want to require finicky environments for your tests to pass.
The thing we want to avoid is opening a real file, so it’s a natural impulse
to mock out open():
# test_user.py
from io import StringIO
from unittest.mock import patch
@patch("builtins.open")
def test_add_two_settings(mock_open):
mock_open.return_value = StringIO('{"opt1": 10, "opt2": 7}')
assert add_two_settings() == 17
Nice, the test works without needing to create a file in our home directory!
Much later...
One day your test suite fails with an error like:
...
File ".../site-packages/coverage/python.py", line 55, in get_python_source
source_bytes = read_python_source(try_filename)
File ".../site-packages/coverage/python.py", line 39, in read_python_source
return source.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
TypeError: replace() argument 1 must be str, not bytes
What happened!? Coverage.py code runs during your tests, invoked by the
Python interpreter. The mock in the test changed the builtin open, so
any use of it anywhere during the test is affected. In some cases, coverage.py
needs to read your source code to record the execution properly. When that
happens, coverage.py unknowingly uses the mocked open, and bad things
happen.
When you use a mock, patch it where it’s used, not where it’s defined. In this case, the patch would be:
@patch("myproduct.user.open")
def test_add_two_settings(mock_open):
... etc ...
With a mock like this, the coverage.py code would be unaffected.
Keep in mind: it’s not just coverage.py that could trip over this mock. There
could be other libraries used by your code, or you might use open
yourself in another part of your product. Mocking the definition means
anything using the object will be affected. Your intent is to only
mock in one place, so target that place.
Postscript
I decided to add some code to coverage.py to defend against this kind of over-mocking. There is a lot of over-mocking out there, and this problem only shows up in coverage.py with Python 3.14. It’s not happening to many people yet, but it will happen more and more as people start testing with 3.14. I didn’t want to have to answer this question many times, and I didn’t want to force people to fix their mocks.
From a certain perspective, I shouldn’t have to do this. They are in the wrong, not me. But this will reduce the overall friction in the universe. And the fix was really simple:
open = open
This is a top-level statement in my module, so it runs when the module is
imported, long before any tests are run. The assignment to open will
create a global in my module, using the current value of open, the one
found in the builtins. This saves the original open for use in my module
later, isolated from how builtins might be changed later.
This is an ad-hoc fix: it only defends one builtin. Mocking other builtins
could still break coverage.py. But open is a common one, and this will
keep things working smoothly for those cases. And there’s precedent: I’ve
already been using a more involved technique to defend
against mocking of the os module for ten years.
Even better!
No blog post about mocking is complete without encouraging a number of other best practices, some of which could get you out of the mocking mess:
- Use
autospec=Trueto make your mocks strictly behave like the original object: see Why your mock still doesn’t work. - Make assertions about how your mock was called to be sure everything is connected up properly.
- Use verified fakes instead of auto-generated mocks: Fast tests for slow services: why you should use verified fakes.
- Separate your code so that computing functions like our
add_two_settingsdon’t also do I/O. This makes the functions easier to test in the first place. Take a look at Function Core, Imperative Shell. - Dependency injection lets you explicitly pass test-specific objects where they are needed instead of relying on implicit access to a mock.
November 16, 2025 12:55 PM UTC
November 15, 2025
Kay Hayen
Nuitka Release 2.8
This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler, “download now”.
This release adds a ton of new features and corrections.
Bug Fixes
Standalone: For the “Python Build Standalone” flavor ensured that debug builds correctly recognize all their specific built-in modules, preventing potential errors. (Fixed in 2.7.2 already.)
Linux: Fixed a crash when attempting to modify the RPATH of statically linked executables (e.g., from
imageio-ffmpeg). (Fixed in 2.7.2 already.)Anaconda: Updated
PySide2support to correctly handle path changes in newer Conda packages and improved path normalization for robustness. (Fixed in 2.7.2 already.)macOS: Corrected handling of
QtWebKitframework resources. Previous special handling was removed as symlinking is now default, which also resolved an issue of file duplication. (Fixed in 2.7.2 already.)Debugging: Resolved an issue in debug builds where an incorrect assertion was done during the addition of distribution metadata. (Fixed in 2.7.1 already.)
Module: Corrected an issue preventing
stubgenfrom functioning with Python versions earlier than 3.6. (Fixed in 2.7.1 already.)UI: Prevented Nuitka from crashing when
--include-modulewas used with a built-in module. (Fixed in 2.7.1 already.)Module: Addressed a compatibility issue where the
codemode for the constants blob failed with the C++ fallback. This fallback is utilized on very old GCC versions (e.g., default on CentOS7), which are generally not recommended. (Fixed in 2.7.1 already.)Standalone: Resolved an assertion error that could occur in certain Python setups due to extension module suffix ordering. The issue involved incorrect calculation of the derived module name when the wrong suffix was applied (e.g., using
.soto derive a module name likegdbmmoduleinstead of justgdbm). This was observed with Python 2 on CentOS7 but could potentially affect other versions with unconventional extension module configurations. (Fixed in 2.7.1 already.)Python 3.12.0: Corrected the usage of an internal structure identifier that is only available in Python 3.12.1 and later versions. (Fixed in 2.7.1 already.)
Plugins: Prevented crashes in Python setups where importing
pkg_resourcesresults in aPermissionError. This typically occurs in broken installations, for instance, where some packages are installed with root privileges. (Fixed in 2.7.1 already.)macOS: Implemented a workaround for data file names that previously could not be signed within app bundles. The attempt in release 2.7 to sign these files inadvertently caused a regression for cases involving illegal filenames. (Fixed in 2.7.1 already.)
Python 2.6: Addressed an issue where
staticmethodobjects lacked the__func__attribute. Nuitka now tracks the original function as a distinct value. (Fixed in 2.7.1 already.)Corrected behavior for
orderedsetimplementations that lack aunionmethod, ensuring Nuitka does not attempt to use it. (Fixed in 2.7.1 already.)Python 2.6: Ensured compatibility for setups where the
_PyObject_GC_IS_TRACKEDmacro is unavailable. This macro is now used beyond assertions, necessitating support outside of debug mode. (Fixed in 2.7.1 already.)Python 2.6: Resolved an issue caused by the absence of
sys.version_info.releaselevelby utilizing a numeric index instead and adding a new helper function to access it. (Fixed in 2.7.1 already.)Module: Corrected the
__compiled__.mainvalue to accurately reflects the package in which a module is loaded, this was not the case for Python versions prior to 3.12. (Fixed in 2.7.1 already.)Plugins: Further improved the
dill-compatplugin by preventing assertions related to empty annotations and by removing hard-coded module names for greater flexibility. (Fixed in 2.7.1 already.)Windows: For onefile mode using DLL mode, ensure all necessary environment variables are correctly set for
QtWebEngine. Previously, default Qt paths could point incorrectly near the onefile binary. (Fixed in 2.7.3 already.)PySide6: Fixed an issue with
PySide6where slots defined in base classes might not be correctly handled, leading to them only working for the first class that used them. (Fixed in 2.7.3 already.)Plugins: Enhanced Qt binding plugin support by checking for module presence without strictly requiring metadata. This improves compatibility with environments like Homebrew or
uvwhere package metadata might be absent. (Fixed in 2.7.3 already.)macOS: Ensured the
appletarget is specified during linking to prevent potential linker warnings about using anunknowntarget in certain configurations. (Fixed in 2.7.3 already.)macOS: Disabled the use of static
libpythonwithpyenvinstallations, as this configuration is currently broken. (Fixed in 2.7.3 already.)macOS: Improved error handling for the
--macos-app-protected-resourceoption by catching cases where a description is not provided. (Fixed in 2.7.3 already.)Plugins: Enhanced workarounds for
PySide6, now also covering single-shot timer callbacks. (Fixed in 2.7.4 already.)Plugins: Ensured that the Qt binding module is included when using accelerated mode with Qt bindings. (Fixed in 2.7.4 already.)
macOS: Avoided signing through symlinks and minimized their use to prevent potential issues, especially during code signing of application bundles. (Fixed in 2.7.4 already.)
Windows: Implemented path shortening for paths used in onefile DLL mode to prevent issues with long or Unicode paths. This also benefits module mode. (Fixed in 2.7.4 already.)
UI: The options nanny plugin no longer uses a deprecated option for macOS app bundles, preventing potential warnings or issues. (Fixed in 2.7.4 already.)
Plugins: Ensured the correct macOS target architecture is used. This particularly useful for
PySide2with universal CPython binaries, to prevent compile time crashes e.g. when cross-compiling for a different architecture. (Fixed in 2.7.4 already.)UI: Fixed a crash that occurred on macOS if the
ccachedownload was rejected by the user. (Fixed in 2.7.4 already.)UI: Improved the warning message related to macOS application icons for better clarity. (Added in 2.7.4 already.)
Standalone: Corrected an issue with QML plugins on macOS when using newer
PySide6versions. (Fixed in 2.7.4 already.)Python 3.10+: Fixed a memory leak where the matched value in pattern matching constructs was not being released. (Fixed in 2.7.4 already.)
Python3: Fixed an issue where exception exits for larger
rangeobjects, which are not optimized away, were not correctly annotated by the compiler. (Fixed in 2.7.4 already.)Windows: Corrected an issue with the automatic use of icons for
PySide6applications on non-Windows if Windows icon options were used. (Fixed in 2.7.4 already.)Onefile: When using DLL mode there was a load error for the DLL with MSVC 14.2 or earlier, but older MSVC is to be supported. (Fixed in 2.7.5 already.)
Onefile: Fix, the splash screen was showing in DLL mode twice or more; these extra copies couldn’t be stopped. (Fixed in 2.7.5 already.)
Standalone: Fixed an issue where data files were no longer checked for conflicts with included DLLs. The order of data file and DLL copying was restored, and macOS app signing was made a separate step to remove the order dependency. (Fixed in 2.7.6 already.)
macOS: Corrected our workaround using symlinks for files that cannot be signed. When
--output-directorywas used, as it made incorrect assumptions about thedistfolder path. (Fixed in 2.7.6 already.)UI: Prevented checks on onefile target specifications when not actually compiling in onefile mode, e.g. on macOS with
--mode=app. (Fixed in 2.7.6 already.)UI: Improved error messages for data directory options by include the relevant part in the output. (Fixed in 2.7.6 already.)
Plugins: Suppressed
UserWarningmessages from thepkg_resourcesmodule during compilation. (Fixed in 2.7.6 already.)Python3.11+: Fixed an issue where descriptors for compiled methods were incorrectly exposed for Python 3.11 and 3.12. (Fixed in 2.7.7 already.)
Plugins: Avoided loading modules when checking for data file existence. This prevents unnecessary module loading and potential crashes in broken installations. (Fixed in 2.7.9 already.)
Plugins: The
global_change_functionanti-bloat feature now operates on what should be the qualified names (__qualname__) instead of just function names, preventing incorrect replacements of methods with the same name in different classes. (Fixed in 2.7.9 already.)Onefile: The
containing_dirattribute of the__compiled__object was regressed in DLL mode on Windows, pointing to the temporary DLL directory instead of the directory containing the onefile binary. (Fixed in 2.7.10 already, note that the solution in 2.7.9 had a regression.)Compatibility: Fixed a crash that occurred when an import attempted to go outside its package boundaries. (Fixed in 2.7.11 already.)
macOS: Ignored a warning from
codesignwhen using self-signed certificates. (Fixed in 2.7.11 already.)Onefile: Fixed an issue in DLL mode where environment variables from other onefile processes (related to temporary paths and process IDs) were not being ignored, which could lead to conflicts. (Fixed in 2.7.12 already.)
Compatibility: Fixed a potential crash that could occur when processing an empty code body. (Fixed in 2.7.13 already.)
Plugins: Ensured that DLL directories created by plugins could be at the top level when necessary, improving flexibility. (Fixed in 2.7.13 already.)
Onefile: On Windows, corrected an issue in DLL mode where
original_argv0wasNone; it is now properly set. (Fixed in 2.7.13 already.)macOS: Avoided a warning that appeared on newer macOS versions. (Fixed in 2.7.13 already.)
macOS: Allowed another DLL to be missing for
PySide6to support more setups. (Fixed in 2.7.13 already.)Standalone: Corrected the existing import workaround for Python 3.12 that was incorrectly renaming existing modules of matching names into sub-modules of the currently imported module. (Fixed in 2.7.14 already.)
Standalone: On Windows, ensured that the DLL search path correctly uses the proper DLL directory. (Fixed in 2.7.14 already.)
Python 3.5+: Fixed a memory leak where the called object could be leaked in calls with keyword arguments following a star dict argument. (Fixed in 2.7.14 already.)
Python 3.13: Fixed an issue where
PyState_FindModulewas not working correctly with extension modules due to sub-interpreter changes. (Fixed in 2.7.14 already.)Onefile: Corrected an issue where the process ID (PID) was not set in a timely manner, which could affect onefile operations. (Fixed in 2.7.14 already.)
Compatibility: Fixed a crash that could occur when a function with both a star-list argument and keyword-only arguments was called without any arguments. (Fixed in 2.7.16 already.)
Standalone: Corrected an issue where distribution names were not checked case-insensitively, which could lead to metadata not being included. (Fixed in 2.7.16 already.)
Linux: Avoid using full zlib with extern declarations but instead only the CRC32 functions we need. Otherwise conflicts with OS headers could occur.
Standalone: Fixed an issue where scanning for standard library dependencies was unnecessarily performed.
Plugins: Made the runtime query code robust against modules that in stdout during import
This affected at least
togagiving some warnings on Windows with mere stdout prints. We now have a marker for the start of our output that we look for and safely ignore them.Windows: Do not attempt to attach to the console when running in DLL mode. For onefile with DLL mode, this was unnecessary as the bootstrap already handles it, and for pure DLL mode, it is not desired.
Onefile: Removed unnecessary parent process monitoring in onefile mode, as there is no child process launched.
Anaconda: Determine version and project name for conda packages more reliably
It seems Anaconda is giving variables in package metadata and often no project name, so we derive it from the conda files and its meta data in those cases.
macOS: Make sure the SSL certificates are found when downloading on macOS, ensuring successful downloads.
Windows: Fixed an issue where console mode
attachwas not working in onefile DLL mode.Scons: Fixed an issue where
pragmawas used with oldergccgcccan give warnings about them. This fixes building on older OSes with the system gcc.Compatibility: Fix, need to avoid using filenames with more than 250 chars for long module names.
For cache files, const files, and C files, we need to make sure, we don’t exceed the 255 char limits per path element that literally every OS has.
Also enhanced the check code for legal paths to cover this, so user options are covered from this errors too.
Moved file hashing to file operations where it makes more sense to allow module names to use hashing to provide a legal filename to refer to themselves.
Compatibility: Fixed an issue where walking included compiled packages through the Nuitka loader could produce incorrect names in some cases.
Windows: Fixed wrong calls made when checking
stderrproperties during launch if it wasNone.Debugging: Fixed an issue where the segfault non-deployment disable itself before doing anything else.
Plugins: Fix, the warning to choose a GUI plugin for
matplotlibwas given withtk-interplugin enabled still, which is of course not appropriate.Distutils: Fix, do not recreate the build folder with a
.gitignorefile.We were re-creating it as soon as we looked at what it would be, now it’s created only when asking for that to happen.
No-GIL: Addressed compile errors for the free-threaded dictionary implementation that were introduced by necessary hot-fixes in the version 2.7.
Compatibility: Fixed handling of generic classes and generic type declarations in Python 3.12.
macOS: Fixed an issue where entitlements were not properly provided for code signing.
Onefile: Fixed delayed shutdown for terminal applications in onefile DLL mode.
Was waiting for non-used child processes, which don’t exist and then the timeout for that operation, which is always happening on CTRL-C or terminal shutdown.
Python3.13: Fix, seems interpreter frames with None code objects exist and need to be handled as well.
Standalone: Fix, need to allow for
setuptoolspackage to be user provided.Windows: Avoided using non-encodable dist and build folder names.
Some paths don’t become short, but still be non-encodable from the file system for tools. In these cases, temporary filenames are used to avoid errors from C compilers and other tools.
Python3.13: Fix, ignore stdlib
cgimodule that might be left over from previous installsThe module was removed during development, and if you install over an old alpha version of 3.13 a newer Python, Nuitka would crash on it.
macOS: Allowed the
libfolder for the Python Build Standalone flavor, improving compatibility.macOS: Allowed libraries for
rpathresolution to be found in all Homebrew folders and not justlib.Onefile: Need to allow
..in paths to allow outside installation paths.
Package Support
Standalone: Introduced support for the
niceguipackage. (Added in 2.7.1 already.)Standalone: Extended support to include
xgboost.coreon macOS. (Added in 2.7.1 already.)Standalone: Added needed data files for
ursinapackage. (Added in 2.7.1 already.)Standalone: Added support for newer versions of the
pydanticpackage. (Added in 2.7.4 already.)Standalone: Extended
libonnxruntimesupport to macOS, enabling its use in compiled applications on this platform. (Added in 2.7.4 already.)Standalone: Added necessary data files for the
pygameextrapackage. (Added in 2.7.4 already.)Standalone: Included GL backends for the
arcadepackage. (Added in 2.7.4 already.)Standalone: Added more data directories for the
ursinaandpanda3dpackages, improving their out-of-the-box compatibility. (Added in 2.7.4 already.)Standalone: Added support for newer
skimagepackage. (Added in 2.7.5 already.)Standalone: Added support for the
PyTaskbarpackage. (Added in 2.7.6 already.)macOS: Added
tk-intersupport for Python 3.13 with official CPython builds, which now use framework files for Tcl/Tk. (Added in 2.7.6 already.)Standalone: Added support for the
paddlexpackage. (Added in 2.7.6 already.)Standalone: Added support for the
jinxedpackage, which dynamically loads terminal information. (Added in 2.7.6 already.)Windows: Added support for the
ansiconpackage by including a missing DLL. (Added in 2.7.6 already.)macOS: Enhanced configuration for the
pypylonpackage, however, it’s not sufficient. (Added in 2.7.6 already.)Standalone: Added support for newer
numpyversions. (Added in 2.7.7 already.)Standalone: Added support for older
vtkpackage. (Added in 2.7.8 already.)Standalone: Added support for newer
certifiversions that useimportlib.resources. (Added in 2.7.9 already.)Standalone: Added support for the
reportlab.graphics.barcodemodule. (Added in 2.7.9 already.)Standalone: Added support for newer versions of the
transformerspackage. (Added in 2.7.11 already.)Standalone: Added support for newer versions of the
sklearnpackage. (Added in 2.7.12 already.)Standalone: Added support for newer versions of the
scipypackage. (Added in 2.7.12 already.)Standalone: Added support for older versions of the
cv2package (specifically version 4.4). (Added in 2.7.12 already.)Standalone: Added initial support for the
vllmpackage. (Added in 2.7.12 already.)Standalone: Ensured all necessary DLLs for the
pygamepackage are included. (Added in 2.7.12 already.)Standalone: Added support for newer versions of the
zaber_motionpackage. (Added in 2.7.13 already.)Standalone: Added missing dependencies for the
pymediainfopackage. (Added in 2.7.13 already.)Standalone: Added support for newer versions of the
sklearnpackage by including a missing dependency. (Added in 2.7.13 already.)Standalone: Added support for newer versions of the
togapackage. (Added in 2.7.14 already.)Standalone: Added support for the
wordninja-enhancedpackage. (Added in 2.7.14 already.)Standalone: Added support for the
Fast-SSIMpackage. (Added in 2.7.14 already.)Standalone: Added a missing data file for the
rfc3987_syntaxpackage. (Added in 2.7.14 already.)Standalone: Added missing data files for the
trimeshpackage. (Added in 2.7.15 already.)Standalone: Added support for the
gdsfactory,klayout, andkfactorypackages. (Added in 2.7.15 already.)Standalone: Added support for the
vllmpackage. (Added in 2.7.16 already.)Standalone: Added support for newer versions of the
tkinterwebpackage. (Added in 2.7.15 already.)Standalone: Added support for newer versions of the
cmsis_pack_managerpackage. (Added in 2.7.15 already.)Standalone: Added missing data files for the
idlelibpackage. (Added in 2.7.15 already.)Standalone: Avoid including debug binary on non-Windows for Qt Webkit.
Standalone: Add dependencies for pymediainfo package.
Standalone: Added support for the
winptypackage.Standalone: Added support for newer versions of the
gipackage.Standalone: Added support for newer versions of the
litellmpackage.Standalone: Added support for the
traitsandpyfacepackages.Standalone: Added support for newer versions of the
transformerspackage.Standalone: Added data files for
rasteriopackage.Standalone: Added support for
ortoolspackage.Standalone: Added support newer “vtk” package
New Features
Python3.14: Added experimental support for Python3.14, not recommended for use yet, as this is very fresh and might be missing a lot of fixes.
Release: Added an extra dependency group for the Nuitka build-backend, intended for use in
pyproject.tomland other build-system dependencies. To use it depend inNuitka[build-wheel]instead of Nuitka. (Added in 2.7.7 already.)For release we also added
Nuitka[onefile],Nuitka[standalone],Nuitka[app]as extra dependency groups. If icon conversions are used, e.g.Nuitka[onefile,icon-conversion]adds the necessary packages for that. If you don’t care about what’s being pulled inNuitka[all]can be used, by defaultNuitkaonly comes with the bare minimum needed and will inform about missing packages.macOS: Added
--macos-sign-keyring-filenameand--macos-sign-keyring-passwordto automatically unlock a keyring for use during signing. This is very useful for CI where no UI prompt can be used.Windows: Detect when
inputcannot be used due to no console or the console not providing proper standard input and produce a dialog for entry instead. Shells likecmd.exeexecute inputs as commands entered when attaching to them. With this, the user is informed to make the input into the dialog instead. In case of no terminal, this just brings up the dialog for GUI mode.Plugins: Introduced
global_change_functionto the anti-bloat engine, allowing function replacements across all sub-modules of a package at once. (Added in 2.7.6 already.)Reports: For Python 3.13+, the compilation report now includes information on GIL usage. (Added in 2.7.7 already.)
macOS: Added an option to prevent an application from running in multiple instances. (Added in 2.7.7 already.)
AIX: Added support for this OS as well, now standalone and module mode work there too.
Scons: When C a compilation fails to due warnings in
--debugmode, recognize that and provide the proper extra options to use if you want to ignore that.Non-Deployment: Added a non-deployment handler to catch modules
Non-Deployment: Added non-deployment handler to catch modules that error exit on import, while assumed to work perfectly.
This will give people an indication that the
numpymodule is expected to work and that maybe just the newest version is not and we need to be told about it.Non-Deployment: Added a non-deployment handler for
DistributionNotFoundexceptions in the main program, which now points the user to the necessary metadata options.UI: Made
--include-data-files-externalthe primary option for placing data files alongside the created program.This now works with standalone mode too, and is no longer onefile specific, the name should reflect that and people can now use it more broadly.
Plugins: Added support for multiple warnings of the same kind. The
dill-compatplugin needs that as it supports multiple packages.Plugins: Added detector for the
dill-compatplugin that detects usages ofdill,cloudpickleandray.cloudpickle.Standalone: Add support for including Visual Code runtime dlls on Windows.
When MSVC (Visual Studio) is installed, we take the runtime DLLs from its folders. We cannot take the ones from the
redistpackages installed to system folders for license reasons.Gives a warning when these DLLs would be needed, but were not found.
We might want to add an option later to exclude them again, for size purposes, but correctness out of the box is more important for now.
UI: Make sure the distribution name is correct for
--include-distribution-metadataoption values.Plugins: Added support for configuring re-compilation of extension modules from their source code.
When we have both Python code and an extension module, we only had a global option available on the command line.
This adds
--recompile-extension-modulesfor more fine grained choices as it allows to specify names and patterns.For
zmq, we need to enforce it to never be compiled, as it checks if it is compiled with Cython at runtime, so re-compilation is never possible.
Reports: Include environment flags for C compiler and linker picked up for the compilation. Sometimes these cause compilation errors that and this will reveal there presence.
Optimization
Enhanced detection of
raisestatements that use compile-time constant values which are not actual exception instances.This improvement prevents Nuitka from crashing during code generation when encountering syntactically valid but semantically incorrect code, such as
raise NotImplemented. While such code is erroneous, it should not cause a compiler crash. (Added in 2.7.1 already.)With unknown locals dictionary variables trust very hard values there too.
With this using hard import names also optimize inside of classes.
This makes
gcloudmetadata work, which previously wasn’t resolved in their code.
macOS: Enhanced
PySide2support by removing the general requirement for onefile mode. Onefile mode is now only enforced forQtWebEnginedue to its specific stability issues when not bundled this way. (Added in 2.7.4 already.)Scons: Added support for C23 embedding of the constants blob with ClangCL, avoiding the use of resources. Since the onefile bootstrap does not yet honor this for its payload, this feature is not yet complete but could help with size limitations in the future.
Plugins: Overhauled the UPX plugin.
Use better compression than before, hint the user at disabling onefile compression where applicable to avoid double compression. Output warnings for files that are not considered compressible. Check for
upxbinary sooner.Scons: Avoid compiling
haclcode for macOS where it’s not needed.
Anti-Bloat
Improved handling of the
astropypackage by implementing global replacements instead of per-module ones. Similar global handling has also been applied toIPythonto reduce overhead. (Added in 2.7.1 already.)Avoid
docutilsusage in themarkdown2package. (Added in 2.7.1 already.)Reduced compiled size by avoiding the use of “docutils” within the
markdown2package. (Added in 2.7.1 already.)Avoid including the testing framework from the
langsmithpackage. (Added in 2.7.6 already.)Avoid including
setuptoolsfromjax.version. (Added in 2.7.6 already.)Avoid including
unittestfrom thereportlabpackage. (Added in 2.7.6 already.)Avoid including
IPythonfor thekeraspackage using a more global approach. (Added in 2.7.11 already.)Avoid including the
tritonpackage when compilingtransformers. (Added in 2.7.11 already.)Avoid a bloat warning for an optional import in the
seabornpackage. (Added in 2.7.13 already.)Avoid compiling generated
google.protobuf.*_pb2files. (Added in 2.7.7 already.)Avoid including
tritonandsetuptoolswhen using thexformerspackage. (Added in 2.7.16 already.)Refined
dasksupport to not removepandas.testingwhenpytestusage is allowed. (Added in 2.7.16 already.)Avoid compiling the
tensorflowmodule that is very slow and contains generated code.Avoid using
setuptoolsincupypackage.Avoid false bloat warning in
seadocpackage.Avoid using
daskinsklearnpackage.Avoid using
cupy.testingin thecupypackage.Avoid using
IPythonin theroboflowpackage.Avoid including
rayfor thevllmpackage.Avoid using
dillin thetorchpackage.
Organizational
UI: Remove obsolete options to control the compilation mode from help output. We are keeping them only to not break existing workflows, but
--mode=...should be used now, and these options will start triggering warnings soon.Python3.13.4: Reject broken CPython official release for Windows.
The link library included is not the one needed for GIL, and as such it breaks Nuitka heavily and must be errored out on, all smaller or larger micro versions work, but this one does not.
Release: Do not use Nuitka 2.7.9 as it broke data file access via
__file__in onefile mode on Windows. This is a brown paper bag release, with 2.7.10 containing only the fix for that. Sorry for the inconvenience.Release: Ensured proper handling of newer
setuptoolsversions during Nuitka installation. (Fixed in 2.7.4 already.)UI: Sort
--list-distribution-metadataoutput and remove duplicates. (Changed in 2.7.8 already.)Visual Code: Added a Python 2.6 configuration for Win32 to aid in comparisons and legacy testing.
UI: Now lists available Qt plugin families if
--include-qt-plugincannot find one.UI: Warn about compiling a file named
__main__.pywhich should be avoided, instead you should specify the package directory in that case.UI: Make it an error to compile a file named
__init__.pyfor standalone mode.
Debugging: The
--editoption now correctly finds files even when using long, non-shortened temporary file paths.Debugging: The
pyside6plugin now enforces--no-debug-immortal-assumptionswhen--debugis on because PySide6 violates these and we don’t need Nuitka to check for that then as it will abort when it finds them.Quality: Avoid writing auto-formatted files with same contents
That avoids stirring up tools that listen to changes.
For example the Nuitka website auto-builder otherwise rebuilt per release post on docs update.
Quality: Use latest version of
deepdiff.Quality: Added autoformat for JSON files.
Release: The man pages were using outdated options and had no example for standalone or app modes. Also the actual options were no longer included.
GitHub: Use the
--modeoptions in the issue template as well.GitHub: Enhanced wordings for bug report template to give more directions and more space for excellent reports to be made.
GitHub: The bug report template now requests the output of our package metadata listing tool, as it provides more insight into how Nuitka perceives the environment.
Debugging: Re-enabled important warnings for Clang, which had unnoticed for a long time and prevented a few things from being recognized.
Debugging: Support arbitrary debuggers through –debugger-choice.
Support arbitrary debuggers for use in the
--debuggermode, if you specify all of their command line you can do anything there.Also added predefined
valgrind-memcheckmode for memory checker tool of Valgrind to be used.UI: Added rich as a progress bar that can be used. Since it’s available via pip, it can likely be found and requires no inline copy. Added colors and similar behavior for
tqdmas well.UI: Remove obsolete warning for Linux with
upxplugin.We don’t use
appimageanymore for a while now, so its constraints no longer apply.UI: Add warnings for module specific options too. The logic to not warn on GitHub Actions was inverted, this restores warnings for normal users.
UI: Output the module name in question for
options-nannyplugin and parameter warnings.UI: When a forbidden import comes from an implicit import, report it properly.
Sometimes
.pyifiles from extension modules cause an import, but it was not clear which one; now it will indicate the module causing it.UI: More clear error message in case a Python for scons was not found.
Actions: Cover debug mode compilation at least once.
Quality: Resolve paths from all OSes in
--edit. Sometime I want to look at a file on a different OS, and there is no need to enforce being on the same one for path resolution to work.Actions: Updated to a newer Ubuntu version for testing, as to get
clang-formatinstalled anymore.Debugging: Allow for C stack output in signal handlers, this is most useful when doing the non-deployment handler that catches them to know where they came from more precisely.
UI: Show no-GIL in output of Python flavor in compilation if relevant.
Tests
Removed Azure CI configuration, as testing has been fully migrated to GitHub Actions. (Changed in 2.7.9 already.)
Improved test robustness against short paths for package-containing directories. (Added in 2.7.4 already.)
Prevented test failures caused by rejected download prompts during test execution, making CI more stable. (Added in 2.7.4 already.)
Refactored common testing code to avoid using
doctests, preventing warnings in specific standalone mode test scenarios related to reference counting. (Added in 2.7.4 already.)Tests: Cover the memory leaking call re-formulation with a reference count test.
Cleanups
Plugins: Improved
pkg_resourcesintegration by using the__loader__attribute of the registering module for loader type registration, avoiding modification of the globalbuiltinsdictionary. (Fixed in 2.7.2 already.)Improved the logging mechanism for module search scans. It is now possible to enable tracing for individual
locateModulecalls, significantly enhancing readability and aiding debugging efforts.Scons: Refactored architecture specific options into dedicated functions to improve code clarity.
Spelling: Various spelling and wording cleanups.
Avoid using
#ifdefin C code templates, and let’s just avoid it generally.Added missing slot function names to the ignored word list.
Renamed variables related to slots to be more verbose and proper spelling as a result, as that’s for better understanding of their use anyway.
Scons: Specify versions supported for Scons by excluding the ones that are not, rather than manually maintaining a list. This adds automatic support for Python 3.14.
Plugins: Removed a useless call to
internas it did not have thought it does.Attach copyright during code generation for code specializations
This also enhances the formatting for almost all files by making leading and trailing new lines more consistent.
One C file turns out unused and was removed as a left over from a previous refactoring.
Summary
This release was supposed to focus on scalability, but that didn’t happen again due to a variety of important issues coming up as well as a created downtime after high private difficulties after a planned surgery. However, the upcoming release will have it finally.
The onefile DLL mode as used on Windows has driven a lot of need for corrections, some of which are only in the final release, and this is probably the first time it should be usable for everything.
For compatibility, working with the popular (yet - not yes recommended UV-Python), Windows UI fixes for temporary onefile and macOS improvements, as well as improved Android support are excellent.
The next release of Nuitka however will have to focus on scalability and maintenance only. But as usual, not sure if it can happen.
November 15, 2025 01:52 PM UTC
November 14, 2025
Real Python
The Real Python Podcast – Episode #274: Preparing Data Science Projects for Production
How do you prepare your Python data science projects for production? What are the essential tools and techniques to make your code reproducible, organized, and testable? This week on the show, Khuyen Tran from CodeCut discusses her new book, "Production Ready Data Science."
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
November 14, 2025 12:00 PM UTC
EuroPython Society
Recognising Michael Foord as an Honorary EuroPython Society Fellow
Hi everyone. Today, we are honoured to announce a very special recognition.
The EuroPython Society has posthumously elected Michael Foord (aka voidspace) as an Honorary EuroPython Society Fellow.
Michael Foord (1974–2025)
Michael was a long-time and deeply influential member of the Python community. He began using Python in 2002, became a Python core developer, and left a lasting mark on the language through his work on unittest and the creation of the mock library. He also started the tradition of the Python Language Summits at PyCon US, and he consistently supported and connected the Python community across Europe and beyond.
However, his legacy extends far beyond code. Many of us first met Michael through his writing and tools, but what stayed with people was the example he set through his contributions, and how he showed up for others. He answered questions with patience, welcomed newcomers, and cared about doing the right thing in small, everyday ways. He made space for people to learn. He helped the Python community in Europe grow stronger and more connected. He made our community feel like a community.
His impact was celebrated widely across the community, with many tributes reflecting his kindness, humour, and dedication:
At EuroPython 2025, we held a memorial and kept a seat for him in the Forum Hall:
A lasting tribute
EuroPython Society Fellows are people whose work and care move our mission forward. By naming Michael an Honorary Fellow, we acknowledge his technical contributions and also the kindness and curiosity that defined his presence among us. We are grateful for the example he set, and we miss him.
Our thoughts and thanks are with Michael&aposs friends, collaborators, and family. His work lives on in our tools. His spirit lives on in how we treat each other.
With gratitude,
Your friends at EuroPython Society
November 14, 2025 09:00 AM UTC
November 13, 2025
Paolo Melchiorre
How to use UUIDv7 in Python, Django and PostgreSQL
Learn how to use UUIDv7 today with stable releases of Python 3.14, Django 5.2 and PostgreSQL 18. A step by step guide showing how to generate UUIDv7 in Python, store them in Django models, use PostgreSQL native functions and build time ordered primary keys without writing SQL.
November 13, 2025 11:00 PM UTC
Python Engineering at Microsoft
Python in Visual Studio Code – November 2025 Release
We’re excited to announce that the November 2025 release of the Python extension for Visual Studio Code is now available!
This release includes the following announcements:
- Add Copilot Hover Summaries as docstring
- Localized Copilot Hover Summaries
- Convert wildcard imports Code Action
- Debugger support for multiple interpreters via the Python Environments Extension
If you’re interested, you can check the full list of improvements in our changelogs for the Python and Pylance extensions.
Add Copilot Hover Summaries as docstring
You can now add your AI-generated documentation directly into your code as a docstring using the new Add as docstring command in Copilot Hover Summaries. When you generate a summary for a function or class, navigate to the symbol definition and hover over it to access the Add as docstring command, which inserts the summary below your cursor formatted as a proper docstring.
This streamlines the process of documenting your code, allowing you to quickly enhance readability and maintainability without retyping.

Localized Copilot Hover Summaries
GitHub Copilot Hover Summaries inside Pylance now respect your display language within VS Code. When you invoke an AI-generated summary, you’ll get strings in the language you’ve set for your editor, making it easier to understand the generated documentation.

Convert wildcard imports Code Action
Wildcard imports (from module import *) are often discouraged in Python because they can clutter your namespace and make it unclear where names come from, reducing code clarity and maintainability.
Pylance now helps you clean up modules that still rely on from module import * via a new Code Action. It replaces the wildcard with the explicit symbols, preserving aliases and keeping the import to a single statement. To try it out, you can click on the line with the wildcard import and press Ctrl + . (or Cmd + . on macOS) to select the Convert to explicit imports Code Action.
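As a rough before/after illustration of what this Code Action produces (using os.path as the example module):

# Before: the wildcard hides where join and exists come from.
from os.path import *

print(join("logs", "app.txt"), exists("logs"))

# After "Convert to explicit imports" (illustrative result): explicit
# symbols, aliases preserved, still a single import statement.
from os.path import exists, join

print(join("logs", "app.txt"), exists("logs"))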

Debugger support for multiple interpreters via the Python Environments Extension
The Python Debugger extension now leverages the APIs from the Python Environments Extension (vscode-python-debugger#849). When enabled, the debugger can recognize and use different interpreters for each project within a workspace. If you have multiple folders configured as projects, each with its own interpreter, the debugger will now respect these selections and use the interpreter shown in the status bar when debugging.
To enable this functionality, set "python.useEnvironmentsExtension": true in your user settings. The new API integration is only active when this setting is turned on.
Please report any issues you encounter to the Python Debugger repository.
Other Changes and Enhancements
We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include:
- Resolve unexpected blocking during PowerShell command activation (vscode-python-environments#952)
- The Python Environments Extension now respects the existing python.poetryPath user setting to specify which Poetry executable to use (vscode-python-environments#918)
- The Python Environments Extension now detects both requirements.txt and dev-requirements.txt files when creating a new virtual environment for automatic dependency installation (vscode-python-environments#506)
We would also like to extend special thanks to this month’s contributors:
- @iBug: Fixed Python REPL cursor drifting in vscode-python#25521
Try out these new improvements by downloading the Python extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.
The post Python in Visual Studio Code – November 2025 Release appeared first on Microsoft for Python Developers Blog.
November 13, 2025 06:41 PM UTC
November 12, 2025
Python Software Foundation
Python is for everyone: Join in the PSF year-end fundraiser & membership drive!
The Python Software Foundation (PSF) is the charitable organization behind Python, dedicated to advancing, supporting, and protecting the Python programming language and the community that sustains it. That mission and cause are more than just words we believe in. Our tiny but mighty team works hard to deliver the projects and services that allow Python to be the thriving, independent, community-driven language it is today. Some of what the PSF does includes producing PyCon US, hosting the Python Package Index (PyPI), supporting 5 Developers-in-Residence, maintaining critical community infrastructure, and more.
Python is for teaching, learning, playing, researching, exploring, creating, working – the list goes on and on and on! Support this year's fundraiser with your donations and memberships to help the PSF, the Python community, and the language stay strong and sustainable. Because Python is for everyone, thanks to you.
There are two direct ways to join through donate.python.org:
- Donate directly to the PSF! Your donation is a direct way to support and power the future of the Python programming language and community you love. Every donation makes a difference, and we work hard to make a little go a long way.
- Become a PSF Supporting Member! When you sign up as a Supporting Member of the PSF, you become a part of the PSF, are eligible to vote in PSF elections, and help us sustain our mission with your annual support. You can sign up as a Supporting Member at the usual annual rate ($99 USD), or you can take advantage of our sliding scale option (starting at $25 USD)!
>>> Donate or Become a Member Today! <<<
If you already donated and/or you’re already a member, you can:
- Share the fundraiser with your regional and project-based communities: Share this blog post in your Python-related Discords, Slacks, social media accounts – wherever your Python community is! Keep an eye on our social media accounts to see the latest stories and news for the campaign.
- Share your Python story with a call to action: We invite you to share your personal Python, PyCon, or PSF story. What impact has it made in your life, in your community, in your career? Share your story in a blog post or on your social media platform of choice and add a link to donate.python.org.
- Ask your employer to sponsor: If your company is using Python to build its products and services, check to see if they already sponsor the PSF on our Sponsors page. If not, reach out to your organization's internal decision-makers and impress on them just how important it is for us to power the future of Python together, and send them our sponsor prospectus.
Your donations and support:
- Keep Python thriving
- Support CPython and PyPI progress
- Increase security across the Python ecosystem
- Bring the global Python community together
- Make our community more diverse and robust every year
Highlights from 2025:
- Producing another wonderful PyCon US: We welcomed 2,225 attendees for PyCon US 2025 – 1,404 of whom were newcomers – at the David L. Lawrence Convention Center in beautiful downtown Pittsburgh. PyCon US 2025 was packed with 9 days of content, education, and networking for the Python community, including 6 Keynote Sessions, 91 Talks (including the Charlas Spanish track), 24 Tutorials, 20 Posters, 30+ Sprint Projects, 146 Open Spaces, and 60 Booths!
- Continuing to enhance Python and PyPI’s security through Developers-in-Residence: The PSF’s PyPI Safety and Security Engineer, Mike Fiedler, has implemented new safeguards, including automation to detect expiring email domains and prevent impersonation attacks, as well as guidance for maintainers to use more secure authentication methods like WebAuthn and Trusted Publishers. The PSF’s Security Developer-in-Residence, Seth Larson, continues to lead efforts to strengthen Python’s security and transparency. His work on PEP 770 introduces standardized Software Bill-of-Materials (SBOMs) within Python packages, improving visibility into dependencies for stronger supply chain security. A new white paper co-authored with Alpha-Omega outlines how these improvements enhance trust and measurability across the ecosystem.
- Adoption of pypistats.org: The PSF infrastructure team has officially adopted the operation of pypistats.org, which had been run by volunteer Christopher Flynn for over six years (thank you, Christopher!). The PSF's Infrastructure Team now handles the service's infrastructure, costs, and domain registration – and the service itself remains open source and community-maintained.
- Advancing PyPI Organizations: The rollout of PyPI Organizations is now well underway, marking a major milestone in improving project management and collaboration across the Python ecosystem. With new Terms of Service finalized and supporting tools in place, the PSF has cleared its backlog of requests and approved thousands of organizations – including 2,409 Community and 4,979 Company organizations as of today. Hundreds of these organizations have already begun adding members, transferring projects, and subscribing to the new Company tier, generating sustainable support for the PSF. We're excited to see how teams are using these new features to better organize and maintain their projects on PyPI.
- Empowering the Python community through Fiscal Sponsorship: We are proud to continue supporting our 20 fiscal sponsoree organizations with their initiatives and events all year round. The PSF provides 501(c)(3) tax-exempt status to fiscal sponsorees such as PyLadies and Pallets, and provides back office support so they can focus on their missions. Consider donating to your favorite PSF Fiscal Sponsoree and check out our Fiscal Sponsorees page to learn more about what each of these awesome organizations is all about!
- Serving our community with grants: The PSF Grants Program awarded approximately $340K to 86 grantees around the world; supporting local conferences, workshops, and community initiatives that keep Python growing and accessible to all. While we had to make the difficult decision to pause the program early to ensure financial sustainability, we would love to reopen it as soon as possible. Your participation in this year’s fundraiser fuels that effort!
- Honoring community leaders: The PSF honored three leaders with Distinguished Service Awards this year. Ewa Jodlowska helped transform the PSF into a professional, globally supportive organization. Thomas Wouters has contributed decades of leadership, guidance, and institutional knowledge. Van Lindberg provided essential legal expertise that guided the PSF through growth and governance. Their dedication has left a lasting impact on the PSF, Python, and its community. The PSF was also thrilled to recognize Katie McLaughlin, Sarah Kuchinsky, and Rodrigo Girão Serrão with Community Service Awards (CSAs) for their outstanding contributions to the Python community. Their dedication, creativity, and generosity embody the spirit of Python and strengthen our global community. We recognized Jay Miller with a CSA for his work to improve diversity, inclusion, and equity in the global Python community through founding and sustaining Black Python Devs. We also honored Matt Lebrun and Micaela Reyes with CSAs for their efforts to grow and support the Python community in the Philippines through conferences, meetups, and volunteer programs.
- Finding strength in the Python community: When the PSF shared the news about turning down an NSF grant, the outpouring of support from the Python community was nothing short of incredible. In just one day, you helped raise over $60K and welcomed 125 new Supporting Members – in the week after, that number jumped to $150K+ and 270+ new Supporting Members! A community-led matching campaign and countless messages of support, solidarity, and encouragement reminded us that while some choices are tough, we never face them alone. The PSF Board & Staff are deeply moved and energized by your words, actions, and continued belief in our shared mission. This moment has set the stage for a record-breaking end-of-year fundraiser, and we are so incredibly grateful to be in community with each of you.
November 12, 2025 05:03 PM UTC
Real Python
The Python Standard REPL: Try Out Code and Ideas Quickly
The Python standard REPL (Read-Eval-Print Loop) lets you run code interactively, test ideas, and get instant feedback. You start it by running the python command, which opens an interactive shell included in every Python installation.
In this tutorial, you’ll learn how to use the Python REPL to execute code, edit and navigate code history, introspect objects, and customize the REPL for a smoother coding workflow.
By the end of this tutorial, you’ll understand that:
- You can enter and run simple or compound statements in a REPL session.
- The implicit _ variable stores the result of the last evaluated expression and can be reused in later expressions.
- You can reload modules dynamically with importlib.reload() to test updates without restarting the REPL.
- The modern Python REPL supports auto-indentation, history navigation, syntax highlighting, quick commands, and autocompletion, which improves your user experience.
- You can customize the REPL with a startup file, color themes, and third-party libraries like Rich for a better experience.
With these skills, you can move beyond just running short code snippets and start using the Python REPL as a flexible environment for testing, debugging, and exploring new ideas.
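For instance, here's a short session showing two of those features in action; mymodule and its path are placeholders for a module of your own:

>>> 40 + 2
42
>>> _ * 2
84
>>> import importlib
>>> import mymodule
>>> importlib.reload(mymodule)  # picks up your edits without restarting
<module 'mymodule' from '/path/to/mymodule.py'>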
Get Your Code: Click here to download the free sample code that you’ll use to explore the capabilities of Python’s standard REPL.
Take the Quiz: Test your knowledge with our interactive “The Python Standard REPL: Try Out Code and Ideas Quickly” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
The Python Standard REPL: Try Out Code and Ideas QuicklyTest your understanding of the Python standard REPL. The Python REPL allows you to run Python code interactively, which is useful for testing new ideas, exploring libraries, refactoring and debugging code, and trying out examples.
Getting to Know the Python Standard REPL
In computer programming, you’ll find two kinds of programming languages: compiled and interpreted languages. Compiled languages like C and C++ have an associated compiler program that converts the language’s code into machine code.
This machine code is typically saved in an executable file. Once you have an executable, you can run your program on any compatible computer system without needing the compiler or the source code.
In contrast, interpreted languages like Python need an interpreter program. This means that you need to have a Python interpreter installed on your computer to run Python code. Some may consider this characteristic a drawback because it can make your code distribution process much more difficult.
However, in Python, having an interpreter offers one significant advantage that comes in handy during your development and testing process. The Python interpreter allows for what’s known as an interactive Read-Eval-Print Loop (REPL), or shell, which reads a piece of code, evaluates it, and then prints the result to the console in a loop.
The Python REPL is a built-in interactive coding playground that you can start by typing python in your terminal. Once in a REPL session, you can run Python code:
>>> "Python!" * 3
'Python!Python!Python!'
>>> 40 + 2
42
In the REPL, you can use Python as a calculator, but also try any Python code you can think of, and much more! Jump to starting and terminating REPL interactive sessions if you want to get your hands dirty right away, or keep reading to gather more background context first.
Note: In this tutorial, you’ll learn about the CPython standard REPL, which is available in all the installers of this Python distribution. If you don’t have CPython yet, then check out How to Install Python on Your System: A Guide for detailed instructions.
The standard REPL has changed significantly since Python 3.13 was released. Several limitations from earlier versions have been lifted. Throughout this tutorial, version differences are indicated when appropriate.
To dive deeper into the new REPL features, check out these resources:
The Python interpreter can execute Python code in two modes:
- Script, or program
- Interactive, or REPL
In script mode, you use the interpreter to run a source file—typically a .py file—as an executable program. In this case, Python loads the file’s content and runs the code line by line, following the script or program’s execution flow.
Alternatively, interactive mode is when you launch the interpreter using the python command and use it as a platform to run code that you type in directly.
In this tutorial, you’ll learn how to use the Python standard REPL to run code interactively, which allows you to try ideas and test concepts when using and learning Python. Are you ready to take a closer look at the Python REPL? Keep reading!
What Is Python’s Interactive Shell or REPL?
When you run the Python interpreter in interactive mode, you open an interactive shell, also known as an interactive session. In this shell, your keyboard is the input source, and your screen is the output destination.
Note: In this tutorial, the terms interactive shell, interactive session, interpreter session, and REPL session are used interchangeably.
Here’s how the REPL works: it takes input consisting of Python code, which the interpreter parses and evaluates. Next, the interpreter displays the result on your screen, and the process starts again as a loop.
Read the full article at https://realpython.com/python-repl/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
November 12, 2025 02:00 PM UTC
Peter Bengtsson
Using AI to rewrite blog post comments
Using AI to correct and edit blog post comments as part of the moderation process.
November 12, 2025 12:42 PM UTC
Python Morsels
Unnecessary parentheses in Python
Python's ability to use parentheses for grouping can often confuse new Python users into over-using parentheses in places where they shouldn't be used.
Table of contents
- Parentheses can be used for grouping
- Python's if statements don't use parentheses
- Parentheses can go anywhere
- Parentheses for wrapping lines
- Parentheses that make statements look like functions
- Parentheses can go in lots of places
- Use parentheses sometimes
- Consider readability when adding or removing parentheses
Parentheses can be used for grouping
Parentheses are used for 3 things in Python: calling callables, creating empty tuples, and grouping.
Functions, classes, and other callable objects can be called with parentheses:
>>> print("I'm calling a function")
I'm calling a function
Empty tuples can be created with parentheses:
>>> empty = ()
Lastly, parentheses can be used for grouping:
>>> 3 * (4 + 7)
33
Sometimes parentheses are necessary to convey the order of execution for an expression.
For example, 3 * (4 + 7) is different than 3 * 4 + 7:
>>> 3 * (4 + 7)
33
>>> 3 * 4 + 7
19
Those parentheses around 4 + 7 are for grouping that sub-expression, which changes the meaning of the larger expression.
All confusing and unnecessary uses of parentheses are caused by this third use: grouping parentheses.
Python's if statements don't use parentheses
In JavaScript if statements look …
Read the full article: https://www.pythonmorsels.com/unnecessary-parentheses/
November 12, 2025 03:30 AM UTC
Seth Michael Larson
Blogrolls are the Best(rolls)
Happy 6-year blogiversary to me! 🎉 To celebrate I want to talk about other peoples’ blogs, more specifically the magic of “blogrolls”. Blogrolls are “lists of other sites that you read, are a follower of, or recommend”. Any blog can host a blogroll, or sometimes websites can be one big blogroll.
I’ve hosted a blogroll on my own blog since 2023 and encourage other bloggers to do so. My own blogroll is generated from the list of RSS feeds I subscribe to and articles that I “favorite” within my RSS reader. If you want to be particularly fancy you can add an RSS feed (example) to your blogroll that provides readers a method to “subscribe” for future blogroll updates.
Blogrolls are like catnip for me: I cannot resist opening and Ctrl-clicking every link until I can't see my tabs anymore. The feeling is akin to the first deep breath of air before starting a hike: there's a rush of new information, topics, and potential new blogs to follow.
Blogrolls can bridge the "effort chasm" I frequently hear about when I recommend folks try an RSS feed reader. We're not used to empty feeds anymore; self-curating blogs until you receive multiple articles per day takes time and effort. Blogrolls can help here, especially ones that publish using the importable OPML format.
You can instantly populate your feed reader app with hundreds of feeds from blogs that are likely relevant to you. Simply create an account on a feed reader, import the blogroll OPML document from a blogger you enjoy, and watch the articles “roll” in. Blogrolls are almost like Bluesky “Starter Packs” in this way!
Hopefully this has convinced you to either curate your own blogroll or to start looking for (or asking for!) blogrolls from your favorite writers on the Web. Share your favorite blogroll with me on email or social media. Title inspired by “Hexagons are the Best-agons”.
Thanks for keeping RSS alive! ♥
November 12, 2025 12:00 AM UTC
November 11, 2025
Ahmed Bouchefra
Let’s be honest. There’s a huge gap between writing code that works and writing code that’s actually good. It’s the number one thing that separates a junior developer from a senior, and it’s something a surprising number of us never really learn.
If you’re serious about your craft, you’ve probably felt this. You build something, it functions, but deep down you know it’s brittle. You’re afraid to touch it a year from now.
Today, we’re going to bridge that gap. I’m going to walk you through eight design principles that are the bedrock of professional, production-level code. This isn’t about fancy algorithms; it’s about a mindset. A way of thinking that prepares your code for the future.
And hey, if you want a cheat sheet with all these principles plus the code examples I’m referencing, you can get it for free. Just sign up for my newsletter from the link in the description, and I’ll send it right over.
Ready? Let’s dive in.
1. Cohesion & Single Responsibility
This sounds academic, but it’s simple: every piece of code should have one job, and one reason to change.
High cohesion means you group related things together. A function does one thing. A class has one core responsibility. A module contains related classes.
Think about a UserManager class. A junior dev might cram everything in there: validating user input, saving the user to the database, sending a welcome email, and logging the activity. At first glance, it looks fine. But what happens when you want to change your database? Or swap your email service? You have to rip apart this massive, god-like class. It’s a nightmare.
The senior approach? Break it up. You'd have:
- An EmailValidator class.
- A UserRepository class (just for database stuff).
- An EmailService class.
- A UserActivityLogger class.
Then, your main UserService class delegates the work to these other, specialized classes. Yes, it’s more files. It looks like overkill for a small project. I get it. But this is systems-level thinking. You’re anticipating future changes and making them easy. You can now swap out the database logic or the email provider without touching the core user service. That’s powerful.
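Here's a minimal sketch of that delegation, using the class names above; the method names and signatures are my own illustration:

class EmailValidator:
    def is_valid(self, email: str) -> bool:
        return "@" in email  # real validation would be stricter

class UserRepository:
    def save(self, email: str) -> None:
        print(f"INSERT user {email}")  # stand-in for real database code

class EmailService:
    def send_welcome(self, email: str) -> None:
        print(f"Welcome email sent to {email}")

class UserActivityLogger:
    def log(self, message: str) -> None:
        print(f"LOG: {message}")

class UserService:
    # The coordinator delegates; each collaborator has one reason to change.
    def __init__(self, validator, repository, emailer, logger):
        self.validator, self.repository = validator, repository
        self.emailer, self.logger = emailer, logger

    def register(self, email: str) -> None:
        if not self.validator.is_valid(email):
            raise ValueError(f"Invalid email: {email}")
        self.repository.save(email)
        self.emailer.send_welcome(email)
        self.logger.log(f"Registered {email}")

Swapping the database or email provider now means passing a different collaborator into UserService, with no changes to its code.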
2. Encapsulation & Abstraction
This is all about hiding the messy details. You want to expose the behavior of your code, not the raw data.
Imagine a simple BankAccount class. The naive way is to just have public attributes like balance and transactions. What could go wrong? Well, another developer (or you, on a Monday morning) could accidentally set the balance to a negative number. Or set the transactions list to a string. Chaos.
The solution is to protect your internal state. In Python, we use a leading underscore (e.g., _balance) as a signal: “Hey, this is internal. Please don’t touch it directly.”
Instead of letting people mess with the data, you provide methods: deposit(), withdraw(), get_balance(). Inside these methods, you can add protective logic. The deposit() method can check for negative amounts. The withdraw() method can check for sufficient funds.
The user of your class doesn’t need to know how it all works inside. They just need to know they can call deposit(), and it will just work. You’ve hidden the complexity and provided a simple, safe interface.
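A minimal sketch of what that protected BankAccount might look like; the guard conditions are examples of the protective logic described above:

class BankAccount:
    def __init__(self):
        self._balance = 0.0       # leading underscore: internal, hands off
        self._transactions = []

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("Deposit must be positive")
        self._balance += amount
        self._transactions.append(("deposit", amount))

    def withdraw(self, amount: float) -> None:
        if amount > self._balance:
            raise ValueError("Insufficient funds")
        self._balance -= amount
        self._transactions.append(("withdraw", amount))

    def get_balance(self) -> float:
        return self._balance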
3. Loose Coupling & Modularity
Coupling is how tightly connected your code components are. You want them to be as loosely coupled as possible. A change in one part shouldn’t send a ripple effect of breakages across the entire system.
Let’s go back to that email example. A tightly coupled OrderProcessor might create an instance of EmailSender directly inside itself. Now, that OrderProcessor is forever tied to that specific EmailSender class. What if you want to send an SMS instead? You have to change the OrderProcessor code.
The loosely coupled way is to rely on an “interface,” or what Python calls an Abstract Base Class (ABC). You define a generic Notifier class that says, “Anything that wants to be a notifier must have a send() method.”
Then, your OrderProcessor just asks for a Notifier object. It doesn’t care if it’s an EmailNotifier or an SmsNotifier or a CarrierPigeonNotifier. As long as the object you give it has a send() method, it will work. You’ve decoupled the OrderProcessor from the specific implementation of the notification. You can swap them in and out interchangeably.
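A sketch of that decoupling with Python's abc module, using the class names from above:

from abc import ABC, abstractmethod

class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> None: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"Email: {message}")

class SmsNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"SMS: {message}")

class OrderProcessor:
    def __init__(self, notifier: Notifier):
        self.notifier = notifier  # depends on the interface, not a concrete class

    def process(self, order_id: int) -> None:
        self.notifier.send(f"Order {order_id} processed")

OrderProcessor(SmsNotifier()).process(42)  # swap in any Notifier freely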
A quick pause. I want to thank boot.dev for sponsoring this discussion. It's an online platform for backend development that's way more interactive than just watching videos. You learn Python and Go by building real projects, right in your browser. It's gamified, so you level up and unlock content, which is surprisingly addictive. The core content is free, and with the code techwithtim, you get 25% off the annual plan. It's a great way to put these principles into practice. Now, back to it.
4. Reusability & Extensibility
This one’s a question you should always ask yourself: Can I add new functionality without editing existing code?
Think of a ReportGenerator function that has a giant if/elif/else block to handle different formats: if format == 'text', elif format == 'csv', elif format == 'html'. To add a JSON format, you have to go in and add another elif. This is not extensible.
The better way is, again, to use an abstract class. Create a ReportFormatter interface with a format() method. Then create separate classes: TextFormatter, CsvFormatter, HtmlFormatter, each with their own format() logic.
Your ReportGenerator now just takes any ReportFormatter object and calls its format() method. Want to add JSON support? You just create a new JsonFormatter class. You don’t have to touch the ReportGenerator at all. It’s extensible without being modified.
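A compact sketch of that design; JsonFormatter shows the "new format, new class" extension point:

from abc import ABC, abstractmethod
import json

class ReportFormatter(ABC):
    @abstractmethod
    def format(self, data: dict) -> str: ...

class TextFormatter(ReportFormatter):
    def format(self, data: dict) -> str:
        return "\n".join(f"{k}: {v}" for k, v in data.items())

class JsonFormatter(ReportFormatter):
    # Adding JSON support is a new class, not an edit to ReportGenerator.
    def format(self, data: dict) -> str:
        return json.dumps(data)

class ReportGenerator:
    def generate(self, data: dict, formatter: ReportFormatter) -> str:
        return formatter.format(data)

print(ReportGenerator().generate({"total": 3}, JsonFormatter()))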
5. Portability
This is the one everyone forgets. Will your code work on a different machine? On Linux instead of Windows? Without some weird version of C++ installed?
The most common mistake I see is hardcoding file paths. If you write C:\Users\Ahmed\data\input.txt, that code is now guaranteed to fail on every other computer in the world.
The solution is to use libraries like Python’s os and pathlib to build paths dynamically. And for things like API keys, database URLs, and other environment-specific settings, use environment variables. Don’t hardcode them! Create a .env file and load them at runtime. This makes your code portable and secure.
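A small sketch of both habits; DATABASE_URL is an example variable name, not a requirement:

import os
from pathlib import Path

# Build paths relative to this file instead of hardcoding C:\Users\...
DATA_FILE = Path(__file__).parent / "data" / "input.txt"

# Read environment-specific settings from the environment, not the source;
# in development, a .env loader can populate these before startup.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///dev.db")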
6. Defensibility
Write your code as if an idiot is going to use it. Because someday, that idiot will be you.
This means validating all inputs. Sanitizing data. Setting safe default values. Ask yourself, “What’s the worst that could happen if someone provides bad input?” and then guard against it.
In a payment processor, don’t have debug_mode=True as the default. Don’t set the maximum retries to 100. Don’t forget a timeout. These are unsafe defaults.
And for the love of all that is holy, validate your inputs! Don’t just assume the amount is a number or that the account_number is valid. Check it. Raise clear errors if it’s wrong. Protect your system from bad data.
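A sketch of what safe defaults plus input validation might look like; this function signature is my own illustration:

def process_payment(amount, account_number, *, debug_mode=False,
                    max_retries=3, timeout=30):
    # Safe defaults: debugging off, bounded retries, an explicit timeout.
    if not isinstance(amount, (int, float)) or amount <= 0:
        raise ValueError(f"Invalid amount: {amount!r}")
    if not str(account_number).strip():
        raise ValueError("Account number is required")
    ...  # charge the account, retrying at most max_retries times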
7. Maintainability & Testability
The most expensive part of software isn’t writing it; it’s maintaining it. And you can’t maintain what you can’t test.
Code that is easy to test is, by default, more maintainable.
Look at a complex calculate function that parses an expression, performs the math, handles errors, and writes to a log file all at once. How do you even begin to test that? There are a million edge cases.
The answer is to break it down. Have a separate OperationParser. Have simple add, subtract, multiply functions. Each of these small, pure components is incredibly easy to test. Your main calculate function then becomes a simple coordinator of these tested components.
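A sketch of that decomposition; the parsing rules here are deliberately simplistic:

def add(a, b): return a + b
def subtract(a, b): return a - b

class OperationParser:
    # Turns "3 + 4" into operands and an operator; trivial to test alone.
    def parse(self, expression: str):
        left, op, right = expression.split()
        return float(left), op, float(right)

OPERATIONS = {"+": add, "-": subtract}

def calculate(expression: str) -> float:
    # The coordinator stays thin; every piece it uses is tested separately.
    left, op, right = OperationParser().parse(expression)
    return OPERATIONS[op](left, right)

assert calculate("3 + 4") == 7.0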
8. Simplicity (KISS, DRY, YAGNI)
Finally, after all that, the highest goal is simplicity.
- KISS (Keep It Simple, Stupid): Simple code is harder to write than complex code, but it’s a million times easier to understand and maintain. Swallow your ego and write the simplest thing that works.
- DRY (Don’t Repeat Yourself): If you’re doing something more than once, wrap it in a reusable function or component.
- YAGNI (You Aren’t Gonna Need It): This is the counter-balance to all the principles above. Don’t over-engineer. Don’t add a flexible, extensible system if you’re just building a quick prototype to validate an idea. When I was coding my startup, I ignored a lot of these patterns at first because speed was more important. Always ask what the business need is before you start engineering a masterpiece.
Phew, that was a lot. But these patterns are what it takes to level up. It’s a shift from just getting things done to building things that last.
If you enjoyed this, let me know. I’d love to make more advanced videos like this one. See you in the next one.
November 11, 2025 09:03 PM UTC
PyCoder’s Weekly
Issue #708: Debugging Live Code, NiceGUI, Textual, and More (Nov. 11, 2025)
#708 – NOVEMBER 11, 2025
View in Browser »
Debugging Live Code With CPython 3.14
Python 3.14 added new capabilities to attach to and debug a running process. Learn what this means for debugging and examining your running code.
SURISTER
NiceGUI Goes 3.0
Talk Python interviews Rodja Trappe and Falko Schindler, creators of the NiceGUI toolkit. They talk about what it can do and how it works.
TALK PYTHON
AI Code Reviews Without the Noise
Sentry’s AI Code Review has caught more than 30,000 bugs before they hit production. 🤯 What it hasn’t caught: about a million spammy style nitpicks. Plus, it now predicts bugs 50% faster, and provides agent prompts to automate your fixes. Learn more about Sentry’s AI Code Review →
SENTRY sponsor
Building UIs in the Terminal With Python Textual
Learn to build rich, interactive terminal UIs in Python with Textual: a powerful library for modern, event-driven TUIs.
REAL PYTHON course
Python Jobs
Python Video Course Instructor (Anywhere)
Python Tutorial Writer (Anywhere)
Articles & Tutorials
How Often Does Python Allocate?
How often does Python allocate? The answer is "very often". This post demonstrates how you can see that for yourself. See also the associated HN discussion.
ZACK RADISIC
Improving Security and Integrity of Python Package Archives
Python packages are built on top of archive formats like ZIP which can be problematic as features of the format can be abused. A recent white paper outlines dangers to PyPI and what can be done about it.
PYTHON SOFTWARE FOUNDATION
The 2025 AI Stack, Unpacked
Temporal’s industry report explores how teams like Snap, Descript, and ZoomInfo are building production-ready AI systems, including what’s working, what’s breaking, and what’s next. Download today to see how your stack compares →
TEMPORAL sponsor
10 Smart Performance Hacks for Faster Python Code
Some practical optimization hacks, from data structures to built-in modules, that boost speed, reduce overhead, and keep your Python code clean.
DIDO GRIGOROV
Understanding the PSF’s Current Financial Outlook
A summary of the Python Software Foundation’s current financial outlook and what that means to the variety of community groups it supports.
PYTHON SOFTWARE FOUNDATION
__dict__: Where Python Stores Attributes
Most Python objects store their attributes in a __dict__ dictionary. Modules and classes always use __dict__, but not everything does.
TREY HUNNER
My Favorite Django Packages
A descriptive list of Mattias’s favorite Django packages divided into areas, including core helpers, data structures, CMS, PDFs, and more.
MATTHIAS KESTENHOLZ
A Close Look at a FastAPI Example Application
Set up an example FastAPI app, add path and query parameters, and handle CRUD operations with Pydantic for clean, validated endpoints.
REAL PYTHON
Quiz: A Close Look at a FastAPI Example Application
Practice FastAPI basics with path parameters, request bodies, async endpoints, and CORS. Build confidence to design and test simple Python web APIs.
REAL PYTHON
An Annual Release Cycle for Django
Carlton wants Django to move to an annual release cycle. This post explains why he thinks this way and what the benefits might be.
CARLTON GIBSON
Behave: ML Tests With Behavior-Driven Development
This walkthrough shows how to use the Behave library to bring behavior-driven testing to data and machine learning Python projects.
CODECUT.AI • Shared by Khuyen Tran
Polars and Pandas: Working With the Data-Frame
This post compares the syntax of Polars and pandas with a quick peek at the changes coming in pandas 3.0.
JUMPINGRIVERS.COM • Shared by Aida Gjoka
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
November 12, 2025
REALPYTHON.COM
Python Leiden User Group
November 13, 2025
PYTHONLEIDEN.NL
Python Kino-Barcamp Südost
November 14 to November 17, 2025
BARCAMPS.EU
Python Atlanta
November 14, 2025
MEETUP.COM
PyCon Wroclaw 2025
November 15 to November 16, 2025
PYCONWROCLAW.COM
PyCon Ireland 2025
November 15 to November 17, 2025
PYCON.IE
Happy Pythoning!
This was PyCoder’s Weekly Issue #708.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
November 11, 2025 07:30 PM UTC
Daniel Roy Greenfeld
Visiting Tokyo, Japan from November 12 to 24
I'm excited to announce that Audrey and I will be visiting Japan from November 12 to November 24, 2025! This will be our first time in Japan, and we can't wait to explore Tokyo. Yes, we'll be in Tokyo for most of it, near the Shinjuku area, working from coffee shops, meeting some colleagues, and exploring the city during our free time. Our six-year-old daughter is with us, so our explorations will be family-friendly.
Unfortunately, we'll be between Python meetups in the Tokyo area. However, if you are in Tokyo and write software in any shape or form, and would like to get together for coffee or a meal, please let me know!
If you do Brazilian Jiu-Jitsu in Tokyo, please let me know as well! I'd love to drop by a gym while I'm there.
November 11, 2025 02:45 PM UTC
Real Python
Python Operators and Expressions
Python operators enable you to perform computations by combining objects and operators into expressions. Understanding Python operators is essential for manipulating data effectively.
This video course covers arithmetic, comparison, Boolean, identity, membership, bitwise, concatenation, and repetition operators, along with augmented assignment operators. You’ll also learn how to build expressions using these operators and explore operator precedence to understand the order of operations in complex expressions.
By the end of this video course, you’ll understand that:
- Arithmetic operators perform mathematical calculations on numeric values.
- Comparison operators evaluate relationships between values, returning Boolean results.
- Boolean operators create compound logical expressions.
- Identity operators determine if two operands refer to the same object.
- Membership operators check for the presence of a value in a container.
- Bitwise operators manipulate data at the binary level.
- Concatenation and repetition operators manipulate sequence data types.
- Augmented assignment operators simplify expressions involving the same variable.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
November 11, 2025 02:00 PM UTC
Python Bytes
#457 Tapping into HTTP
Topics covered in this episode:
- httptap
- 10 Smart Performance Hacks For Faster Python Code
- FastRTC
- Explore Python dependencies with pipdeptree and uv pip tree
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=YjoTi2hHZ-M

About the show

Sponsored by us! Support our work through:
- Our courses at Talk Python Training
- The Complete pytest Course
- Patreon Supporters

Connect with the hosts
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Michael #1: httptap (https://httptap.dev)
- Rich-powered CLI that breaks each HTTP request into DNS, connect, TLS, wait, and transfer phases with waterfall timelines, compact summaries, or metrics-only output.
- Features:
  - Phase-by-phase timing: precise measurements built from httpcore trace hooks (with sane fallbacks when metal-level data is unavailable).
  - All HTTP methods: GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS with request body support.
  - Request body support: send JSON, XML, or any data inline or from file with automatic Content-Type detection.
  - IPv4/IPv6 aware: the resolver and TLS inspector report both the address and its family.
  - TLS insights: certificate CN, expiry countdown, cipher suite, and protocol version are captured automatically.
  - Multiple output modes: rich waterfall view, compact single-line summaries, or --metrics-only for scripting.
  - JSON export: persist full step data (including redirect chains) for later processing.
  - Extensible: clean Protocol interfaces for DNS, TLS, timing, visualization, and export so you can plug in custom behavior.

Brian #2: 10 Smart Performance Hacks For Faster Python Code
- By Dido Grigorov
- A few from the list:
  - Use math functions instead of operators
  - Avoid exception handling in hot loops
  - Use itertools for combinatorial operations (huge speedup)
  - Use bisect for sorted list operations (huge speedup)

Michael #3: FastRTC (https://fastrtc.org)
- The Real-Time Communication Library for Python: turn any Python function into a real-time audio and video stream over WebRTC or WebSockets.
- Features:
  - 🗣️ Automatic voice detection and turn taking built in, so you only worry about the logic for responding to the user.
  - 💻 Automatic UI: use the .ui.launch() method to launch the WebRTC-enabled built-in Gradio UI.
  - 🔌 Automatic WebRTC support: use the .mount(app) method to mount the stream on a FastAPI app and get a WebRTC endpoint for your own frontend.
  - ⚡️ WebSocket support: use the .mount(app) method to mount the stream on a FastAPI app and get a websocket endpoint for your own frontend.
  - 📞 Automatic telephone support: use the fastphone() method of the stream to launch the application and get a free temporary phone number.
  - 🤖 Completely customizable backend: a Stream can easily be mounted on a FastAPI app so you can extend it to fit your production application. See the Talk To Claude demo for an example of how to serve a custom JS frontend.

Brian #4: Explore Python dependencies with pipdeptree and uv pip tree
- Suggested by Nicholas Carsner. We have covered pipdeptree before, back in 2017 on episode 17.
- pipdeptree: use pipdeptree --python auto to allow it to read your venv.
- uv pip tree: also check out uv pip tree and some useful flags:
  - --show-version-specifiers to show the rules
  - --outdated notes packages that need updating

Extras

Brian:
- Lean TDD 0.1.1 includes an updated intro and another chapter, "Essential Components"
- VSCode Peacock Extension: color code your different projects

Joke: Sure Grandma
November 11, 2025 08:00 AM UTC
Glyph Lefkowitz
The “Dependency Cutout” Workflow Pattern, Part I
Tell me if you’ve heard this one before.
You’re working on an application. Let’s call it “FooApp”. FooApp has a dependency on an open source library, let’s call it “LibBar”. You find a bug in LibBar that affects FooApp.
To envisage the best possible version of this scenario, let’s say you actively like LibBar, both technically and socially. You’ve contributed to it in the past. But this bug is causing production issues in FooApp today, and LibBar’s release schedule is quarterly. FooApp is your job; LibBar is (at best) your hobby. Blocking on the full upstream contribution cycle and waiting for a release is an absolute non-starter.
What do you do?
There are a few common reactions to this type of scenario, all of which are bad options.
I will enumerate them specifically here, because I suspect that some of them may resonate with many readers:
-
Find an alternative to LibBar, and switch to it.
This is a bad idea because a transition to a core infrastructure component could be extremely expensive.
-
Vendor LibBar into your codebase and fix your vendored version.
This is a bad idea because carrying this one fix now requires you to maintain all the tooling associated with a monorepo[1]: you have to be able to start pulling in new versions from LibBar regularly, reconcile your changes even though you now have a separate version history on your imported version, and so on.
-
Monkey-patch LibBar to include your fix.
This is a bad idea because you are now extremely tightly coupled to a specific version of LibBar. By modifying LibBar internally like this, you’re inherently violating its compatibility contract, in a way which is going to be extremely difficult to test. You can test this change, of course, but as LibBar changes, you will need to replicate any relevant portions of its test suite (which may be its entire test suite) in FooApp. Lots of potential duplication of effort there.
-
Implement a workaround in your own code, rather than fixing it.
This is a bad idea because you are distorting the responsibility for correct behavior. LibBar is supposed to do LibBar’s job, and unless you have a full wrapper for it in your own codebase, other engineers (including “yourself, personally”) might later forget to go through the alternate, workaround codepath, and invoke the buggy LibBar behavior again in some new place.
-
Implement the fix upstream in LibBar anyway, because that’s the Right Thing To Do, and burn credibility with management while you anxiously wait for a release with the bug in production.
This is a bad idea because you are betraying your users — by allowing the buggy behavior to persist — for the workflow convenience of your dependency providers. Your users are probably giving you money, and trusting you with their data. This means you have both ethical and economic obligations to consider their interests.
As much as it’s nice to participate in the open source community and take on an appropriate level of burden to maintain the commons, this cannot sustainably be at the explicit expense of the population you serve directly.
Even if we only care about the open source maintainers here, there’s still a problem: as you are likely to come under immediate pressure to ship your changes, you will inevitably relay at least a bit of that stress to the maintainers. Even if you try to be exceedingly polite, the maintainers will know that you are coming under fire for not having shipped the fix yet, and are likely to feel an even greater burden of obligation to ship your code fast.
Much as it’s good to contribute the fix, it’s not great to put this on the maintainers.
The respective incentive structures of software development — specifically, of corporate application development and open source infrastructure development — make options 1-4 very common.
On the corporate / application side, these issues are:
-
it's difficult for corporate developers to get clearance to spend even small amounts of their work hours on upstream open source projects, but clearance to spend time on the project they actually work on is implicit. If it takes 3 hours of wrangling with Legal[2] and 3 hours of implementation work to fix the issue in LibBar, but 0 hours of wrangling with Legal and 40 hours of implementation work in FooApp, a FooApp developer will often perceive it as "easier" to fix the issue downstream.
-
it’s difficult for corporate developers to get clearance from management to spend even small amounts of money sponsoring upstream reviewers, so even if they can find the time to contribute the fix, chances are high that it will remain stuck in review unless they are personally well-integrated members of the LibBar development team already.
-
even assuming there’s zero pressure whatsoever to avoid open sourcing the upstream changes, there’s still the fact inherent to any development team that FooApp’s developers will be more familiar with FooApp’s codebase and development processes than they are with LibBar’s. It’s just easier to work there, even if all other things are equal.
-
systems for tracking risk from open source dependencies often lack visibility into vendoring, particularly if you’re doing a hybrid approach and only vendoring a few things to address work in progress, rather than a comprehensive and disciplined approach to a monorepo. If you fully absorb a vendored dependency and then modify it, Dependabot isn’t going to tell you that a new version is available any more, because it won’t be present in your dependency list. Organizationally this is bad of course but from the perspective of an individual developer this manifests mostly as fewer annoying emails.
But there are problems on the open source side as well. Those problems are all derived from one big issue: because we’re often working with relatively small sums of money, it’s hard for upstream open source developers to consume either money or patches from application developers. It’s nice to say that you should contribute money to your dependencies, and you absolutely should, but the cost-benefit function is discontinuous. Before a project reaches the fiscal threshold where it can be at least one person’s full-time job to worry about this stuff, there’s often no-one responsible in the first place. Developers will therefore gravitate to the issues that are either fun, or relevant to their own job.
These mutually-reinforcing incentive structures are a big reason that users of open source infrastructure, even teams who work at corporate users with zillions of dollars, don’t reliably contribute back.
The Answer We Want
All those options are bad. If we had a good option, what would it look like?
It is both practically necessary[3] and morally required[4] for you to have a way to temporarily rely on a modified version of an open source dependency, without permanently diverging.
Below, I will describe a desirable abstract workflow for achieving this goal.
Step 0: Report the Problem
Before you get started with any of these other steps, write up a clear description of the problem and report it to the project as an issue; specifically, in contrast to writing it up as a pull request. Describe the problem before submitting a solution.
You may not be able to wait for a volunteer-run open source project to respond to your request, but you should at least tell the project what you’re planning on doing.
If you don’t hear back from them at all, you will have at least made sure to comprehensively describe your issue and strategy beforehand, which will provide some clarity and focus to your changes.
If you do hear back from them, in the worst case scenario, you may discover that a hard fork will be necessary because they don’t consider your issue valid, but even that information will save you time, if you know it before you get started. In the best case, you may get a reply from the project telling you that you’ve misunderstood its functionality and that there is already a configuration parameter or usage pattern that will resolve your problems with no new code. But in all cases, you will benefit from early coordination on what needs fixing before you get to how to fix it.
Step 1: Source Code and CI Setup
Fork the source code for your upstream dependency to a writable location where it can live at least for the duration of this one bug-fix, and possibly for the duration of your application’s use of the dependency. After all, you might want to fix more than one bug in LibBar.
You want to have a place where you can put your edits, that will be version controlled and code reviewed according to your normal development process. This probably means you’ll need to have your own main branch that diverges from your upstream’s main branch.
Remember: you’re going to need to deploy this to your production, so testing gates that your upstream only applies to final releases of LibBar will need to be applied to every commit here.
Depending on LibBar's own development process, this may result in slightly unusual configurations where, for example, your fixes are written against the last LibBar release tag, rather than its current[5] main; if the project has a branch-freshness requirement, you might need two branches: one for your upstream PR (based on main) and one for your own use (based on the release branch with your changes).
Ideally for projects with really good CI and a strong “keep main release-ready at all times” policy, you can deploy straight from a development branch, but it’s good to take a moment to consider this before you get started. It’s usually easier to rebase changes from an older HEAD onto a newer one than it is to go backwards.
Speaking of CI, you will want to have your own CI system. The fact that GitHub Actions has become a de-facto lingua franca of continuous integration means that this step may be quite simple, and your forked repo can just run its own instance.
Optional Bonus Step 1a: Artifact Management
If you have an in-house artifact repository, you should set that up for your dependency too, and upload your own build artifacts to it. You can often treat your modified dependency as an extension of your own source tree and install from a GitHub URL, but if you’ve already gone to the trouble of having an in-house package repository, you can pretend you’ve taken over maintenance of the upstream package temporarily (which you kind of have) and leverage those workflows for caching and build-time savings as you would with any other internal repo.
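For instance, in a Python project using standard pyproject.toml metadata, the temporary switch to your fork might look like this; the org, repo, and branch names are placeholders:

[project]
name = "fooapp"
version = "0.1.0"
dependencies = [
  # Temporary: track our LibBar fork until the fix is released upstream.
  "libbar @ git+https://github.com/yourorg/libbar@fooapp-hotfix",
]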
Step 2: Do The Fix
Now that you’ve got somewhere to edit LibBar’s code, you will want to actually fix the bug.
Step 2a: Local Filesystem Setup
Before you have a production version on your own deployed branch, you’ll want to test locally, which means having both repositories in a single integrated development environment.
At this point, you will want to have a local filesystem reference to your LibBar dependency, so that you can make real-time edits, without going through a slow cycle of pushing to a branch in your LibBar fork, pushing to a FooApp branch, and waiting for all of CI to run on both.
This is useful in both directions: as you prepare the FooApp branch that makes any necessary updates on that end, you’ll want to make sure that FooApp can exercise the LibBar fix in any integration tests. As you work on the LibBar fix itself, you’ll also want to be able to use FooApp to exercise the code and see if you’ve missed anything - and this, you wouldn’t get in CI, since LibBar can’t depend on FooApp itself.
In short, you want to be able to treat both projects as an integrated development environment, with support from your usual testing and debugging tools, just as much as you want your deployment output to be an integrated artifact.
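Assuming both checkouts sit side by side on disk, one simple way to get that integrated setup in Python is an editable install of the fork into FooApp's virtual environment:

# Run from the FooApp checkout, with its virtualenv active.
pip install -e ../libbar   # edits to LibBar are picked up immediately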
Step 2b: Branch Setup for PR
However, for continuous integration to work, you will also need to have a remote resource reference of some kind from FooApp’s branch to LibBar. You will need 2 pull requests: the first to land your LibBar changes to your internal LibBar fork and make sure it’s passing its own tests, and then a second PR to switch your LibBar dependency from the public repository to your internal fork.
At this step it is very important to ensure that there is an issue filed on your own internal backlog to drop your LibBar fork. You do not want to lose track of this work; it is technical debt that must be addressed.
Until it’s addressed, automated tools like Dependabot will not be able to apply security updates to LibBar for you; you’re going to need to manually integrate every upstream change. This type of work is itself very easy to drop or lose track of, so you might just end up stuck on a vulnerable version.
Step 3: Deploy Internally
Now that you’re confident that the fix will work, and that your temporarily-internally-maintained version of LibBar isn’t going to break anything on your site, it’s time to deploy.
Some deployment heritage should help to provide some evidence that your fix is ready to land in LibBar, but at the next step, please remember that your production environment isn’t necessarily emblematic of that of all LibBar users.
Step 4: Propose Externally
You’ve got the fix, you’ve tested the fix, you’ve got the fix in your own production, you’ve told upstream you want to send them some changes. Now, it’s time to make the pull request.
You’re likely going to get some feedback on the PR, even if you think it’s already ready to go; as I said, despite having been proven in your production environment, you may get feedback about additional concerns from other users that you’ll need to address before LibBar’s maintainers can land it.
As you process the feedback, make sure that each new iteration of your branch gets re-deployed to your own production. It would be a huge bummer to go through all this trouble, and then end up unable to deploy the next publicly released version of LibBar within FooApp because you forgot to test that your responses to feedback still worked on your own environment.
Step 4a: Hurry Up And Wait
If you're lucky, upstream will land your changes to LibBar. But there's still no released version available. Here, you'll have to stay in a holding pattern until upstream can finalize the release on their end.
Depending on some particulars, it might make sense at this point to archive your internal LibBar repository and move your pinned release version to a git hash of the LibBar version where your fix landed, in their repository.
Before you do this, check in with the LibBar core team and make sure that they understand that’s what you’re doing and they don’t have any wacky workflows which may involve rebasing or eliding that commit as part of their release process.
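If they're comfortable with it, that pin is again a one-line change; a sketch, with the commit hash left as a placeholder:
# Pin to the exact upstream commit containing the fix, until a release exists.
libbar @ git+https://github.com/upstream/libbar.git@<commit-sha>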
Step 5: Unwind Everything
Finally, you eventually want to stop carrying any patches and move back to an official released version that integrates your fix.
You want to do this because an official release is what upstream will expect when you report bugs. Part of the benefit of using open source is sharing in the collective work of bug-fixing, so you don't want to be stranded on a pinned git hash that the developers don't support for anyone else.
As I said in step 2b, make sure to maintain a tracking task for this work, because leaving this sort of relatively easy-to-clean-up technical debt lying around can create a lot of aggravation for no particular benefit. Make sure to put your internal LibBar repository into an appropriate state at this point as well.
Up Next
This is part 1 of a 2-part series. In part 2, I will explore in depth how to execute this workflow specifically for Python packages, using some popular tools. I'll discuss my own workflow, standards like PEP 517 and pyproject.toml, and of course, by the popular demand that I just know will come, uv.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!
- If you already have all the tooling associated with a monorepo, including the ability to manage divergence and reintegrate patches with upstream, you already have the higher-overhead version of the workflow I am going to propose, so never mind. But chances are you don't have that; very few companies do. ↩
- In any business where one must wrangle with Legal, 3 hours is a wildly optimistic estimate. ↩
- In an ideal world, every project would keep its main branch ready to release at all times, no matter what, but we do not live in an ideal world. ↩
- In this case, there is no question. It's 2b only, no not-2b. ↩
November 11, 2025 01:44 AM UTC
Ahmed Bouchefra
Tired of Pip and Venv? Meet UV, Your New All-in-One Python Tool
Hey there, how’s it going?
Let’s talk about the Python world for a second. If you’ve been around for a while, you know the drill. You start a new project, and the ritual begins: create a directory, set up a virtual environment with venv, remember to activate it, pip install your packages, and then pip freeze everything into a requirements.txt file.
It works. It’s fine. But it always felt a bit… clunky. A lot of steps. A lot to explain to newcomers.
Well, I’ve been playing with a new tool that’s been gaining a ton of steam, and honestly? I don’t think I’m going back. It’s called UV, and it comes from Astral, the same team behind the super-popular linter, ruff.
The goal here is ambitious. UV wants to be the single tool that replaces pip, venv, pip-tools, and even pipx. It’s an installer, an environment manager, and a tool runner all rolled into one. And because it’s written in Rust, it’s ridiculously fast.
So, let’s walk through what a typical project setup looks like the old way… and then see how much simpler it gets with UV.
The Old Way: The Pip & Venv Dance
Okay, so let’s say we’re starting a new Flask app. The old-school workflow would look something like this:
- mkdir old-way-project && cd old-way-project
- python3 -m venv .venv (create the virtual environment)
- source .venv/bin/activate (activate it… don't forget!)
- pip install flask requests (install our packages)
- pip freeze > requirements.txt (save our dependencies for later)
It’s a process we’ve all done a hundred times. But it’s also a process with a few different tools and concepts you have to juggle. For someone just starting out, it’s a lot to take in.
The New Way: Just uv
Now, let’s do the same thing with UV.
Instead of creating a directory myself, I can just run:
uv init new-app
This one command creates a new directory and sets up a modern Python project structure inside it. It initializes a Git repository, creates a sensible .gitignore, and gives us a pyproject.toml file. This is the modern way to manage project metadata and dependencies.
But wait… where’s the virtual environment? Where’s the activation step?
Here’s the magic. You don’t have to worry about it.
Let’s add Flask and Requests to our new project. Instead of pip, we use uv add:
uv add flask requests
When I run this, a few amazing things happen:
- UV sees I don’t have a virtual environment yet, so it creates one for me automatically.
- It installs Flask and Requests into that environment at lightning speed.
- It updates my pyproject.toml file to list flask and requests as dependencies.
- It creates a uv.lock file, which records the exact versions of every single package and sub-dependency. This is what solves the classic "but it works on my machine!" problem.
All of that, with one command, and I never had to type source ... activate.
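For illustration, the generated pyproject.toml ends up recording the dependencies roughly like this (an assumption about the output; exact names and version specifiers will vary):
[project]
name = "new-app"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "flask>=3.1.0",
    "requests>=2.32.0",
]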
Running Your Code (This Is the Coolest Part)
“Okay,” you might be thinking, “but how do I run my code if the environment isn’t active?”
Simple. You just tell UV to run it for you.
uv run main.py
UV finds the project’s virtual environment and runs your script inside it, even though your main shell doesn’t have it activated.
Now, get ready for the part that really blew my mind.
Let’s say I accidentally delete my virtual environment.
rm -rf .venv
Normally, this would be a disaster. I’d have to recreate the environment, activate it, and reinstall everything from my requirements.txt file. It would be a whole thing.
But with UV? I just run the same command again:
uv run main.py
UV sees the environment is gone. It reads the uv.lock file, instantly recreates the exact same environment with the exact same packages, and then runs the code. It all happens in a couple of seconds. It’s just… seamless.
If you’re sharing the project with a teammate, they just clone it and run uv sync. That’s it. Their environment is ready to go, perfectly matching yours.
It Even Replaces Pipx for Tools
Another thing I love is how it handles command-line tools. I used to use pipx to install global tools like linters and formatters. UV has that built-in, too.
Want to install ruff?
uv tool install ruff
This installs it in an isolated environment but makes it available everywhere.
But even better is the uvx command, which lets you run a tool without permanently installing it.
Let’s say I want to quickly check my code with ruff but I don’t want to install it.
uvx ruff check .
UV will download ruff to a temporary environment, run the command, and then clean up after itself. It’s perfect for trying out new tools or running one-off commands without cluttering your system.
My Takeaway
I know, I know… another new tool to learn. It can feel overwhelming. But this one is different. It doesn’t just add another layer; it simplifies and replaces a whole stack of existing tools with something faster, smarter, and more intuitive.
The smart caching alone is a huge win. If you have ten projects that all use Flask, UV only stores it once on your disk, saving a ton of space and making new project setups almost instantaneous.
I’ve fully switched my workflow over to UV, and I can’t see myself going back. It just gets out of the way and lets me focus on the code.
November 11, 2025 12:00 AM UTC
The Anatomy of a Scalable Python Project
Ever start a Python project that feels clean and simple, only to have it turn into a tangled mess a few months later? Yeah, I’ve been there more times than I can count.
Today, I want to pull back the curtain and show you the anatomy of a Python project that’s built to last. This is the setup I use for all my production projects. It’s a blueprint that helps keep things sane, organized, and ready to grow without giving you a massive headache.
We’ll walk through everything—folder structure, config, logging, testing, and tooling. The whole package.
So, What Does “Scalable” Even Mean?
It’s a word that gets thrown around a lot, right? “Scalable.” But what does it actually mean in practice?
For me, it boils down to a few things:
- Scales with Size: Your codebase is going to grow. That’s a good thing! It means you’re adding features. A scalable structure means you don’t have to constantly refactor everything just to add something new. The foundation is already there.
- Scales with Your Team: If you bring on another developer, they shouldn’t need a two-week onboarding just to figure out where to put a new function. The boundaries should be clear, and the layout should be predictable.
- Scales with Environments: Moving from your local machine to staging and then to production should be… well, boring. In a good way. Your config should be centralized, making environment switching a non-event.
- Scales with Speed: Your local setup should be a breeze. Tests should run fast. Docker should just work. You want to eliminate friction so you can actually focus on building things.
Over the years, I’ve worked with everything from TypeScript to Java to C++, and while the specifics change, the principles of good structure are universal. This is the flavor that I’ve found works beautifully for Python.
The Blueprint: A Balanced Folder Structure
You want just enough structure to keep things organized, but not so much that you’re digging through ten nested folders to find a single file. It’s a balance.
Here’s the high-level view:
/
├── app/ # Your application's source code
├── tests/ # Your tests
├── .env # Environment variables (for local dev)
├── Dockerfile
├── docker-compose.yml
├── pyproject.toml
└── ... other config files
Right away, you see the most important separation: your app code and your tests live in their own top-level directories. This is crucial. Don’t mix them.
Diving Into the app Folder
This is where the magic happens. Inside app, I follow a simple pattern. For this example, we’re looking at a FastAPI app, but the concepts apply anywhere.
app/
├── api/
│ └── v1/
│ └── users.py # The HTTP layer (routers)
├── core/
│ ├── config.py # Centralized configuration
│ └── logging.py # Logging setup
├── db/
│ └── schema.py # Database models (e.g., SQLAlchemy)
├── models/
│ └── user.py # Data contracts (e.g., Pydantic schemas)
├── services/
│ └── user.py # The business logic!
└── main.py # App entry point
Let’s break it down.
main.py - The Entry Point
This file is kept as lean as possible. Seriously, there’s almost nothing in it. It just initializes the FastAPI app and registers the routers from the api folder. That’s it.
api/ - The Thin HTTP Layer
This is where your routes live. If you look inside api/v1/users.py, you won’t find any business logic. You’ll just see the standard GET, POST, PUT, DELETE endpoints. Their only job is to handle the HTTP request and response. They act as a thin translator, calling into the real logic somewhere else.
core/ - The Cross-Cutting Concerns
This folder is for things that are used all over your application.
- config.py: I use Pydantic's Settings for this. It's amazing. You define your config as a class, and it can automatically pull in values from environment variables (like from that .env file). This makes managing settings for different environments a piece of cake.
- logging.py: A simple, standardized logging setup. You configure it once here, and then you can just import and use it anywhere.
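As a rough sketch of what such a config.py can look like (assuming the pydantic-settings package; the field names are illustrative, not from this post):
# app/core/config.py (illustrative sketch)
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    # Defaults are overridden by environment variables or a local .env file.
    model_config = SettingsConfigDict(env_file=".env")

    app_name: str = "my-app"
    debug: bool = False
    database_url: str = "sqlite:///./dev.db"

settings = Settings()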
db/ and models/ - The Data Layers
- db/schema.py: This is where you define your database tables, probably using something like SQLAlchemy. It describes the shape of your data in the database.
- models/user.py: These are your Pydantic models that define the contracts for your API. What JSON should a user send to create a user? What JSON will they get back? This is where you define that, and you get free data validation out of it.
services/ - The Heart of Your Application
This is the most important folder, in my opinion. This is where your actual business logic lives. The UserService takes a database session and does the real work: querying for users, creating a new user, running validation logic, etc.
Why is this so great?
- Your API layer stays clean and simple.
- You can test your business logic directly, without needing to spin up a web server.
- Want to switch from PostgreSQL to a different database? Or even an external API? You only have to change it here. The rest of your app doesn’t care.
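A minimal sketch of such a service, assuming SQLAlchemy sessions (the class body is illustrative, not this post's actual code):
# app/services/user.py (illustrative sketch)
from sqlalchemy.orm import Session

from app.db.schema import User  # assumed SQLAlchemy model


class UserService:
    def __init__(self, db: Session) -> None:
        self.db = db

    def list_users(self) -> list[User]:
        # Queries and business rules live here, not in the router.
        return self.db.query(User).all()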
Let’s Talk About Testing
Your tests folder should mirror your app folder’s structure. This makes it incredibly easy to find the tests for any given piece of code.
tests/
└── api/
└── v1/
└── test_users.py
For testing, I use an in-memory SQLite database. This keeps my tests completely isolated from my production database and makes them run super fast.
FastAPI has a fantastic dependency injection system that makes testing a dream. In my tests, I can just “override” the dependency that provides the database session and swap it with my in-memory test database. Now, when I run a test that hits my API, it’s running against a temporary, clean database every single time.
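Here's a minimal sketch of that override pattern; app and get_db are assumed names, so adapt them to your project's actual dependency:
# tests/api/v1/test_users.py (illustrative sketch)
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from app.main import app  # assumed FastAPI entry point
from app.db.session import get_db  # hypothetical session dependency

# An in-memory SQLite engine keeps tests isolated and fast.
engine = create_engine("sqlite://")
TestingSession = sessionmaker(bind=engine)


def override_get_db():
    db = TestingSession()
    try:
        yield db
    finally:
        db.close()


app.dependency_overrides[get_db] = override_get_db
client = TestClient(app)


def test_list_users():
    response = client.get("/users")
    assert response.status_code == 200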
Tooling That Ties It All Together
- pyproject.toml: This is where your project dependencies and settings (like for pytest) live. I use uv these days; it's incredibly fast.
- Dockerfile & docker-compose.yml: This is how you guarantee that your local development environment is exactly like your production environment. When I run docker-compose up, it spins up my app in a container, using the same Dockerfile that will eventually be deployed to the cloud. No more "but it works on my machine!"
- .env file: This holds your local environment variables, like database passwords. Crucially, you never commit this file to Git. It's for your machine only.
How It All Flows Together
So, let’s trace a request:
- A GET /users request hits the router in api/v1/users.py.
- FastAPI's dependency injection system automatically creates a UserService instance, giving it a fresh database session.
- The route calls the list_users method on the service.
- The service runs a query against the database, gets the results, and returns them.
- The router takes those results, formats them as a JSON response, and sends it back to the client.
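Put together, the router at the top of that chain can be as thin as this sketch (get_user_service is a hypothetical dependency provider, not something shown in the post):
# app/api/v1/users.py (illustrative sketch)
from fastapi import APIRouter, Depends

from app.services.user import UserService
from app.core.deps import get_user_service  # hypothetical provider

router = APIRouter()


@router.get("/users")
def list_users(service: UserService = Depends(get_user_service)):
    # The router only translates HTTP into a service call and back.
    return service.list_users()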
The beauty of this is the clean separation of concerns. The API layer handles HTTP. The service layer handles business logic. The database layer handles persistence.
This structure lets you start small and add complexity later without making a mess. The boundaries are clear, which makes development faster, testing easier, and onboarding new team members a whole lot smoother.
Of course, this is a starting point. You might need a scripts/ folder for data migrations or other custom tasks. But this foundation… it’s solid. It’s been a game-changer for me, and I hope it can be for you too.
November 11, 2025 12:00 AM UTC
November 10, 2025
Brian Okken
Explore Python dependencies with `pipdeptree` and `uv pip tree`
Sometimes you just want to know about your dependencies, and their dependencies.
I’ve been using pipdeptree for a while, but recently switched to uv pip tree.
Let’s take a look at both tools.
pipdeptree
pipdeptree is pip-installable, but I don't want pipdeptree itself to be reported alongside everything else installed, so I usually install it outside of a project. We can use it system-wide by:
- Installing with uv tool install pipdeptree, then running with pipdeptree --python auto
- Or running without installing, with uvx pipdeptree --python auto
Usage
The --python auto flag tells pipdeptree to look at the current environment.
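For comparison, the uv equivalent needs no separate install; inside a uv-managed project you can simply run:
uv pip tree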
November 10, 2025 11:24 PM UTC
Patrick Altman
Using Vite with Vue and Django
I've been building web applications with Vue and Django for a long time. I don't remember my first one; it was certainly before Vite was available. As soon as I switched to using Vite, I ended up building a template tag to join the frontend and backend together rather than having separate projects. I've always found it simpler to have Django serve everything.
While preparing this post to share the latest version of what is essentially a small set of files we copy between projects, I started exploring the idea of open-sourcing the solution.
The goal was twofold:
- To create a reusable package instead of relying on copy-and-paste code, and
- To contribute something back to the open-source community.
In the process, I stumbled upon an excellent existing project — django-vite.
So now I think we'll give it a serious look as something to switch to, and perhaps add a Redis backend.
For now though, I think it's still worth sharing our simple solution in case it's a better fit for you (I haven't fully examined django-vite yet).
The Problem
The problem we are trying to solve: use Vite to bundle and build our Vue frontend while still having Django serve the bundle's JS and CSS entry points automatically. Running vite build will yield output like:
main-2uqS21f4.js
main-BCI6Z1XL.css
Without any extra tooling, we'd have to commit build output and hard-code these cache-busting file names into the base template every time we made a change that could affect the bundle.
This was completely unacceptable.
The Solution
Vite offers the ability to generate a manifest file that maps the cache-busting file names to their base names in a machine-readable format. This lets us run builds in CI/CD as part of our Docker image build and then read the manifest Vite produces, keeping everything neat and simple.
Here is the setting in the vite.config.ts key to this:
{
// ...
build: {
manifest: true,
// ...
}
// ...
}
This will produce a file in your output folder (under .vite/) called manifest.json.
Here is a snippet; note that you typically won’t need to inspect it manually:
"main.ts": {
"file": "assets/main-2uqS21f4.js",
"name": "main",
"src": "main.ts",
"isEntry": true,
"imports": [
"_runtime-D84vrshd.js",
"_forms-OJiVtksU.js",
"_analytics-CCPQRNnj.js",
"_forms-pro-qreHBaUb.js",
"_icons-3wXMhf1p.js",
"_pv-DzJUpav-.js",
"_vue-mapbox-BRpo1ix7.js",
"_mapbox--vATkUHK.js"
],
"dynamicImports": [
"views/HomeView.vue",
"views/dispatch/DispatchNewOrdersView.vue",
...
This is the key to tying things together dynamically. We constructed a template tag so that we could dynamically add our entry point in our base template:
{% load vite %}
<html>
<head>
<!-- ... base head template stuff -->
{% vite_styles 'main.ts' %}
</head>
<body>
<!-- ... base template stuff -->
{% vite_scripts 'main.ts' %}
</body>
</html>
The idea behind this type of solution is conceptually pretty simple. The template tag needs to read manifest.json, find the referenced entry point main.ts, and return the staticfiles-based path from the file key (e.g. assets/main-2uqS21f4.js) before rendering the template.
Given this, we need to avoid file I/O on every request, and since we'll cache the manifest, we must also handle cache invalidation. Every deployment is a candidate for invalidation, because the bundle can change at deployment time but not in between.
We'll solve the caching with Redis; since we have multiple nodes in our web app cluster, local memory isn't an option. We'll solve cache invalidation with a management command that runs at the end of each deployment. It uses a short stack (keeping only the latest n versions) instead of deleting outright.
We use a stack so we can push the new manifest version onto the top while leaving older references around. Requests to updated nodes can then fetch the latest bundle, while older nodes still work and serve their existing (older) bundle. This enables random rolling upgrades on our cluster, allowing us to push updates in the middle of a work day without disrupting end users.
All of this is done with essentially one template-tag Python module and one management command.
Template Tag
We have this template tag module stored as vite.py, so that you can load it with {% load vite %} which then exposes the {% vite_styles %} and {% vite_scripts %} template tags.
import json
import re
import typing
from django import template
from django.conf import settings
from django.core.cache import cache
from django.templatetags.static import static
from django.utils.safestring import mark_safe
if typing.TYPE_CHECKING: # pragma: no cover
from django.utils.safestring import SafeString
ChunkType = typing.TypedDict("chunk", {"file": str, "css": list[str], "imports": list[str]})
ManifestType = typing.Mapping[str, ChunkType]
ScriptsStylesType = typing.Tuple[list[str], list[str]]
DEV_SERVER_ROOT = "http://localhost:3001/static"
register = template.Library()
def is_absolute_url(url: str) -> bool:
return re.match("^https?://", url) is not None
def set_manifest() -> "ManifestType":
with open(settings.MANIFEST_LOADER["output_path"]) as fp:
manifest: "ManifestType" = json.load(fp)
cache.set(settings.MANIFEST_LOADER["cache_key"], manifest, None)
return manifest
def get_manifest() -> "ManifestType":
if manifest := cache.get(settings.MANIFEST_LOADER["cache_key"]):
if settings.MANIFEST_LOADER["cache"]:
return manifest
return set_manifest()
def vite_manifest(entries_names: typing.Sequence[str]) -> "ScriptsStylesType":
if settings.DEBUG:
scripts = [f"{DEV_SERVER_ROOT}/@vite/client"] + [
f"{DEV_SERVER_ROOT}/{name}"
for name in entries_names
]
styles = []
return scripts, styles
manifest = get_manifest()
_processed = set()
def _process_entries(names: typing.Sequence[str]) -> "ScriptsStylesType":
scripts = []
styles = []
for name in names:
if name in _processed:
continue
chunk = manifest[name]
import_scripts, import_styles = _process_entries(chunk.get("imports", []))
scripts.extend(import_scripts)
styles.extend(import_styles)
scripts.append(chunk["file"])
styles.extend(chunk.get("css", []))
_processed.add(name)
return scripts, styles
return _process_entries(entries_names)
@register.simple_tag(name="vite_styles")
def vite_styles(*entries_names: str) -> "SafeString":
_, styles = vite_manifest(entries_names)
styles = map(lambda href: href if is_absolute_url(href) else static(href), styles)
return mark_safe("\n".join(map(lambda href: f'<link rel="stylesheet" href="{href}" />', styles)))  # nosec
@register.simple_tag(name="vite_scripts")
def vite_scripts(*entries_names: str) -> "SafeString":
scripts, _ = vite_manifest(entries_names)
scripts = map(lambda src: src if is_absolute_url(src) else static(src), scripts)
return mark_safe("\n".join(map(lambda src: f'<script type="module" src="{src}"></script>', scripts)))  # nosec
Here are a few features this supports:
- If running in local development, it bypasses the manifest entirely, loading the @vite/client and pointing at the dev server running in a docker compose instance, so we get HMR (Hot Module Replacement).
- It relies on some settings that control whether caching is enabled and what the cache key is (we set it to the RELEASE_VERSION, which is pulled from the environment and tied to the git sha or tag).
- We leverage the Django cache backend for getting from and setting to the cache independently of what the actual cache backend is. This layer of indirection only works for this tag, though, and not for our cache invalidation management command.
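One assumption worth spelling out: the cache here is Redis-backed. The post doesn't show the cache settings, but since the management command below reaches into cache._client, a django-redis style configuration is implied; an illustrative version:
# settings.py (illustrative); cache._client.get_client() in the management
# command implies a django-redis style backend.
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://redis:6379/1",
    }
}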
The settings we use:
MANIFEST_LOADER = {
"cache": not DEBUG,
"cache_key": f"vite_manifest:{RELEASE_VERSION}",
"output_path": f"{STATIC_ROOT}/.vite/manifest.json",
}
The management command gets a bit fancy with invalidation, mainly to support running a multi-node cluster.
If you run a single web instance, this probably doesn't buy you much.
However, we encountered issues when spinning up additional nodes: some were updated and others weren't, and we were seeing 500 errors during deployment because we needed to support both versions in the cache.
Our short-term solution was to put the entire site into maintenance mode during deploys, but that's kind of annoying for pushing out simple fixes. This technique solved that for us, with a management command that lives in post_deploy.py:
from django.conf import settings
from django.core.cache import cache
from django.core.management import BaseCommand
from redis.exceptions import RedisError
from ...templatetags.vite import set_manifest
class Command(BaseCommand):
def success(self, message: str):
self.stdout.write(self.style.SUCCESS(message))
def warning(self, message: str):
self.stdout.write(self.style.WARNING(message))
def error(self, message: str):
self.stdout.write(self.style.ERROR(message))
def set_new_manifest_in_cache(self):
current_version = settings.RELEASE_VERSION
if not current_version:
self.warning(
"RELEASE_VERSION is empty; skipping cleanup to avoid deleting default keys."
)
return
prefix = "vite_manifest:*"  # Match all versioned keys
recent_versions_key = "recent-manifest-versions" # Redis key for tracking versions
try:
redis_client = cache._client.get_client()
# Add current version to the front of the list (in bytes)
redis_client.lpush(recent_versions_key, current_version.encode("utf-8"))
# Keep only the last 5 versions
redis_client.ltrim(recent_versions_key, 0, 5)
# Get recent versions as a set for quick lookup (decoding to strings)
recent_versions = {
v.decode("utf-8")
for v in redis_client.lrange(recent_versions_key, 0, -1)
}
self.success(f"Recent versions: {recent_versions}")
cursor = "0"
deleted_count = 0
while cursor != 0:
cursor, keys = redis_client.scan(cursor=cursor, match=prefix, count=100) # Batch scan
for key in keys:
key_str = key.decode("utf-8")
self.success(f"Checking key: {key_str}")
# If the key's version is not in recent versions, delete it
if not any(key_str.endswith(f":{version}") for version in recent_versions):
redis_client.delete(key)
deleted_count += 1
self.success(f"Deleted old manifest cache key: {key_str}")
self.success(
f"Added current version &apos{current_version}&apos and deleted {deleted_count} old manifest cache keys."
)
set_manifest()
self.success("Updated Vite manifest in cache.")
except RedisError as e:
self.error(f"Redis error: {e}")
def handle(self, *args, **options):
self.set_new_manifest_in_cache()
This isn't the prettiest code. We could probably tidy it up by extracting the Redis operations and/or the main while loop to make things more readable. But for now it's working, and we haven't had to touch it in a while.
The latest six versions remain in our cache.
We had to break out of the pure Django cache backend here to get access to some Redis-specific operations for the stack. Again, this is something that might be worth tidying up if we build a cache backend for django-vite, though maybe not necessary if we build a Redis-specific backend.
Not only do we invalidate old entries by pushing the version key down the stack, we also seed the cache with the current version to save time on a lazy load.
Summary
Next up is for us to take a hard look at django-vite, as it seems to be a well-structured and maintained project. Perhaps we can move to it, retire our custom code, and then contribute whatever is still missing either to the project or via a sidecar package.
Have you dealt with these problems in a different way? If so, we'd love to hear from you and learn about your approach.