
Planet Python

Last update: March 24, 2026 01:44 AM UTC

March 23, 2026


Talk Python Blog

Updates from Talk Python - March 2026

There have been a bunch of changes to make the podcast and courses at Talk Python just a little bit better. And I wrote a few interesting articles that might pique your interest. So I thought it was time to send you all a quick little update and let you know what’s new and improved.

Talk Python Courses

Account Dashboard for courses

I spoke to a lot of users who said that it’s a bit difficult to jump back into your account and see which courses you were last taking. I can certainly appreciate that, especially if you have the bundle with every class available. So I added this cool new dashboard that sorts and displays your progress through your most recent course activity as well as courses that you have finished.

March 23, 2026 09:04 PM UTC


"Michael Kennedy's Thoughts on Technology"

Replacing Flask with Robyn wasn't worth it

TL;DR: I converted Python Bytes from Quart/Flask to the Rust-backed Robyn framework and benchmarked it with Locust. There was no meaningful speed improvement, and Robyn actually used more memory. Framework maturity, ecosystem depth, and app server flexibility still matter more than raw benchmark numbers.

Last week I played with the idea of replacing Quart (async Flask) with Robyn for our bigger web apps. Robyn is built almost entirely in Rust, and in the benchmarks, it looks dramatically better. Not just a little bit faster, but 25 times faster. However, if you’ve been around the block for a while, you know that benchmarks and how things work for your app and your situation are not always the same thing.

So I picked the simplest complex app that I run, Python Bytes, and converted it entirely to run on the Robyn framework. This took a few hours of careful work and experimenting, and I even had to create a Python package to allow Robyn to run the Chameleon template language.

When I was done, it was time to fire up Locust and see if there were any dramatic performance improvements. I certainly wasn’t expecting 25x, but 2x? 1.5x? That would have been really impressive.

Did Robyn improve speed or memory over Flask?

The results came in, and the answer was essentially no difference in RPS or latency. It turns out that almost all the computational time is spent in the logic of our app, which of course doesn’t change, and I never intended to change it.

Another area I was hoping to optimize was memory. Our web apps use a lot of memory for what they are. They’re certainly not trivial. But running a couple of copies of the app in a web garden was using way more than I expected they should. And I thought moving closer to Rust might have a positive influence on memory too.

It turns out the Robyn fork actually used more memory, not less, than the current setup. After all, our web apps already run on Granian, which is mostly Rust right up to the Flask framework itself.

Why Flask’s maturity still beats Robyn’s speed

So our fun little spike to explore the Robyn framework is going to remain just that. I’m sticking with Flask. I’ve talked about this before, but maturity in a library or framework is a big plus. The ecosystem for Flask/Quart is much bigger and more polished than for the smaller Robyn framework.

More than that, the app server runtime for Robyn is much less polished than some of the pluggable app servers out there. Think Granian, Gunicorn, uvicorn, etc. For example, Robyn does not support web garden process recycling. Many servers let you say: after five hours, or 10,000 requests, or some similar threshold, gradually drain requests from a worker process, spin up a new one, and shut down the old one to keep things fresh. This helps if you’re using a library that holds on to too many caches or has some other weird memory behavior.
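With a standalone app server, that recycling is usually a one-flag affair. For example, Gunicorn exposes it as --max-requests, with --max-requests-jitter to stagger the restarts; the module path, worker count, and numbers below are illustrative:

```shell
# Restart each worker after roughly 10,000 requests; the jitter spreads
# the restarts out so all workers don't recycle at the same moment.
# (Illustrative app module path and worker count.)
gunicorn app:app --workers 4 --max-requests 10000 --max-requests-jitter 500
```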

Was the Robyn experiment a waste of time?

Even though I spent maybe close to six hours working on this exploration and decided not to use it, I still found it super valuable. I created the fun Chameleon Robyn package to help people using Robyn have a greater choice of template languages. I got to see my apps from multiple perspectives. I built out some tooling for Claude that I’m going to write about later that is generally really awesome. And I ended up saving significant memory for some of my biggest web apps by just spending more time thinking about how I’m running them currently in Granian and Flask.

March 23, 2026 04:31 PM UTC


Antonio Cuni

My first OSS commit turns 20 today

My first OSS commit turns 20 today

Some time ago I realized that it was 20 years since I started to contribute to Open Source. It's easy to remember, because I started to work on PyPy as part of my master's thesis and I graduated in 2006.

So, I did a bit of archeology to find the first commit:

$ cd ~/pypy/pypy && git show 1a086d45d9 --no-patch
commit 1a086d45d9
Author: Antonio Cuni <anto.cuni@gmail.com>
Date:   Wed Mar 22 14:01:42 2006 +0000

    Initial commit of the CLI backend

!!! note "svn, hg, git"

Funny thing, the original commit was not in `git`, which was just a few months old at the time. In 2006 PyPy was using `subversion`, then a few years later [migrated to mercurial](../../2010/12/pypy-migrates-to-mercurial-3308736161543832134.md), and many years later [migrated to git](https://pypy.org/posts/2023/12/pypy-moved-to-git-github.html). I managed to find traces of the original `svn` commit in the archives of the [pypy-svn](https://marc.info/?l=pypy-svn&m=118495688023240) mailing list.

March 23, 2026 04:09 PM UTC


PyCharm

OpenAI Acquires Astral: What It Means for PyCharm Users

On March 19, OpenAI announced that it would acquire Astral, the company behind uv, Ruff, and ty. The Astral team, led by founder Charlie Marsh, will join OpenAI’s Codex team. The deal is subject to regulatory approval.

First and foremost: congratulations to Charlie Marsh and the entire Astral team. They shipped some of the most beloved tools in the Python ecosystem and raised the bar for what developer tooling can be. This acquisition is a reflection of the impact they’ve had.

This is big news for the Python ecosystem, and it matters to us at JetBrains. Here’s our perspective.

What Astral built

In just two years, Astral transformed Python tooling. Their tools now see hundreds of millions of downloads every month, and for good reason.

This is foundational infrastructure that millions of developers rely on every day. We’ve integrated both Ruff and uv into PyCharm because they substantially make Python development better.

The risks are real, but manageable

Change always carries risk, and acquisitions are no exception. The main concern here is straightforward: if Astral’s engineers get reassigned to OpenAI’s more commercial priorities, these tools could stagnate over time.

The good news is that Astral’s tools are open-source under permissive licenses. The community can fork them if it ever comes to that. As Armin Ronacher has noted, uv is “very forkable and maintainable.” There’s no possible future where these tools go backwards.

Both OpenAI and Astral have committed to continued open-source development. We take them at their word, and we hope for the best.

Our commitment hasn’t changed

JetBrains already has great working relationships with both the Astral and the Codex teams. We’ve been integrating Ruff and uv into PyCharm, and we will continue to do so. We’ve submitted some upstream improvements to ty. Regardless of who owns these tools, our commitment to supporting the best Python tooling for our users stays the same. We’ll keep working with whoever maintains them.

The Python ecosystem is stronger because of the work Astral has done. We hope this acquisition amplifies that work, not diminishes it. We’ll be watching closely, and we’ll keep building the best possible experience for Python developers in PyCharm.

March 23, 2026 04:04 PM UTC


James Bennett

Rewriting a 20-year-old Python library

Way back in 2005, lots of people (ordinary people, not just people who work in tech) used to have personal blogs where they wrote about things, rather than using third-party short-form social media sites. I was one of those people (though I wasn’t yet blogging on this specific site, which launched the following year). And back in 2005, and even earlier, people liked to have comment sections on their blogs where readers could leave their thoughts on posts. And that was an absolute magnet for spam.

There were a few attempts to do something about this. One of them was Akismet, which launched that year and provided a web service you could send a comment (or other user-generated-content) submission to, and get back a classification of spam or not-spam. It turned out to be moderately popular, and is still around today.

The folks behind Akismet also documented their API and set up an API key system so people could write their own clients/plugins for various programming languages and blog engines and content-management systems. And so pretty quickly after the debut of the Akismet service, Michael Foord, whom the Python community, and the world, tragically lost at the beginning of 2025, wrote and published a Python library, which he appropriately called akismet, that acted as an API client for it.

He published a total of five releases of his Python Akismet library over the next few years, and people started using it. Including me, because I had several use cases for spam filtering as a service. And for a while, things were good. But then Python 3 was released, and people started getting serious about migrating to it, and Michael, who had been promoted into the Python core team, didn’t have a ton of time to work on it. So I met up with him at a conference in 2015, and offered to maintain the Akismet library, and he graciously accepted the offer, imported a copy of his working tree into a GitHub repository for me, and gave me access to publish new packages.

In the process of porting the code to support both Python 2 and 3 (as was the fashion at the time), I did some rewriting and refactoring, mostly focused on simplifying the configuration process and the internals. Some configuration mechanisms were deprecated in favor of either explicitly passing in the appropriate values, or else using the 12-factor approach of storing configuration in environment variables, and the internal HTTP request stack, based entirely on the somewhat-cumbersome (at that time) Python standard library, was replaced with a dependency on requests. The result was akismet 1.0, published in 2017.

Over the next six years, I periodically pushed out small releases of akismet, mostly focused on keeping up with upstream Python version support (and finally going Python-3-only, in 2020 when Python 2.7 reached its end of upstream support). But beginning in 2024, I embarked on a more ambitious project which spanned multiple releases and turned into a complete rewrite of akismet which finished a few months ago. So today I’d like to talk about why I chose to do that, how the process went, and what it produced.

Why?

Although I’m not generally a believer in the concept of software projects being “done” and thus no longer needing active work (in the same sense as “a person isn’t really dead as long as their name is still spoken”, I believe a piece of software isn’t really “done” as long as it has at least one user), a major rewrite is still something that needs a justification. In the case of akismet, there were two specific things I wanted to accomplish that led me to this point.

One was support for a specific feature of the Akismet API. The akismet Python client’s implementation of the most important API method—the one that tells you whether Akismet thinks content is spam, called comment-check—had, since the very first version, always returned a bool. Which at first sight makes sense, because the Akismet web service’s response body for that endpoint is plain text and is either the string true (Akismet thinks the content is spam) or the string false (Akismet thinks it isn’t spam). Except actually Akismet supports a third option: “blatant” spam, meaning Akismet is so confident in its determination that it thinks you can throw away the content without further review (while a normal “spam” determination might still need a human to look at it and double-check). It signals this by returning the true text response and also setting a custom HTTP response header (X-Akismet-Pro-Tip: discard). But the akismet Python client couldn’t usefully expose this, since the original API design of the client chose to have this method return a two-value bool instead of some other type that could handle a three-value situation. And any attempt to fix it would necessarily change the return type, which would be a breaking change.

The other big motivating factor for a rewrite was the rise of asynchronous Python via async and await, originally introduced in Python 3.5. The async Python ecosystem has grown tremendously, and I wanted to have a version of akismet that could support async/non-blocking HTTP requests to the Akismet web service.

Keep it classy?

The first thing I did was spend a bit of time exploring whether I could replace the entire class-based design of the library. Since the very first version back in 2005, the akismet library had always provided its client as a class (named Akismet) with one method for each supported Akismet HTTP API method. But it’s always worth asking if a class is actually the right abstraction. Very often it’s not! And while Python is an object-oriented language and allows you to write classes, it doesn’t require you to write them. So I spent a little while sketching out a purely function-based API.

One immediate issue with this was how to handle the API credentials. Akismet requires you to obtain an API key and to register one or more sites which will use that API key, and most Akismet web API operations require that both the API key and the current site be sent with the request. There’s also a verify-key API operation which lets you submit a key and site and tells you if they’re valid; if you don’t use this, and accidentally start trying to use the rest of the Akismet API with an invalid key and/or site, the other Akismet API operations send back responses with a body of invalid.

As noted above, the 1.0 release already nudged users of akismet in the direction of putting config in the environment, so reading the key and site from env variables was already well-supported. But some people probably can’t, or won’t want to, use environment variables for configuration. For example: they might have multiple sets of Akismet credentials in a multi-tenant application, and need to explicitly pass different sets of credentials depending on which site they’re performing checks for. So in any function-based interface, all the functions would not only need to be able to read configuration from the environment (which at least could be factored out into a helper function), they’d also need to explicitly accept credentials as optional arguments. That complicates the argument signatures (which are already somewhat gnarly because of all the optional information you can provide to Akismet to help with spam determinations), and makes the API start to look cumbersome.

This was a clue that the function-based approach was probably not the right one: if a bunch of functions all have to accept extra arguments for a common piece of data they all need, it’s a sign that they may really want to be a class which just has the necessary data available internally.

The other big sticking point was how to handle credential verification. It requires an HTTP request/response to Akismet, so ideally you’d do this once (per set of credentials per process). Say, if you’re using Akismet in a web application, you’d want to check your credentials at process startup, and then just treat them as known-good for the lifetime of the process after that. Which is what the existing class-based code did: it performed a verify-key on instantiation and then could re-use the verified credentials after that point (or raise an immediate exception if the credentials were missing or invalid). I really like the ergonomics of that, since it makes it much more difficult to create an Akismet client in an invalid/misconfigured state, but it basically requires some sort of shared state. Even if the API key and site URL are read from the environment or passed as arguments every time, there needs to be some sort of additional information kept by the client code to indicate they’ve been validated.

It still would be possible to do this in a function-based interface. It could implicitly verify each new key/site pair on first use, and either keep a full list of ones that had been verified or maybe some sort of LRU cache of them. Or there could be an explicit function for introducing a new key/site pair and verifying them. But the end result of that is a secretly-stateful module full of functions that rely on (and in some cases act on) the state; at that point the case for it being a class is pretty overwhelming.
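The pull toward a class can be sketched in a few lines. This is a hypothetical sketch, not the real akismet code, with the verify-key HTTP request stubbed out: the point is that the credentials and their verified-ness live together as instance state, so no method needs extra arguments for them.

```python
# Hypothetical sketch, not the real akismet internals: the API key/site
# pair and the "already verified" flag live together as instance state.
class SpamClient:
    def __init__(self, api_key: str, site_url: str):
        self.api_key = api_key
        self.site_url = site_url
        self._verified = False

    def _ensure_verified(self) -> None:
        # The real client issues a verify-key HTTP request here; this
        # sketch just checks that the credentials are present.
        if not self._verified:
            if not (self.api_key and self.site_url):
                raise ValueError("missing or invalid Akismet credentials")
            self._verified = True

    def comment_check(self, content: str) -> bool:
        self._ensure_verified()
        # Placeholder classification logic, purely for the sketch.
        return "unmissable offer" in content.lower()
```

Every method shares the same credentials and verification state with zero extra arguments, which is exactly what the function-based design was fighting against.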

As an aside, I find that spending a bit of time thinking about, or perhaps even writing sample documentation for, how to use a hypothetical API often uncovers issues like this one. Also, for a lot of people it’s seemingly a lot easier, psychologically, to throw away documentation than to throw away even barely-working code.

One class or two?

Another idea that I rejected pretty quickly was trying to stick to a single Akismet client class. There is a trend of libraries and frameworks providing both sync and async code paths in the same class, often using a naming scheme which prefixes the async versions of the methods with an a (like method_name() for the sync version and amethod_name() for async), but it wasn’t really compatible with what I wanted to do. As mentioned above, I liked the ergonomics of having the client automatically validate your API key and site URL, but doing that in a single class supporting both sync and async has a problem: which code path to use to perform the automatic credential validation? Users who want async wouldn’t be happy about a synchronous/blocking request being automatically issued. And trying to choose the async path by default would introduce issues of how to safely obtain a running event loop (and not just any event loop, but an instance of the particular event loop implementation the end user of the library actually wants).

So I made the decision to have two client classes, one sync and one async. As a nice bonus, this meant I could do all the work of rewriting in new classes with new names. That would let me mark the old Akismet class as deprecated but not have to immediately remove it or break its API, giving users of akismet plenty of notice of what was going on and a chance to migrate to the new clients. So I started working on the new client classes, calling them akismet.SyncClient and akismet.AsyncClient to be as boringly clear as possible about what they’re for.

How to handle async, part one

Unfortunately, the two-class solution didn’t fully solve the issue of how to handle the automatic credential validation. On the old Akismet client class it had been easy, and on the new SyncClient class it would still be easy, because the __init__() method could perform a verify-key operation before returning, and raise an exception if the credentials weren’t found or were invalid.

But in Python, __init__() cannot be (usefully) async, which posed the tricky question of how to perform automatic credential validation at instantiation time for AsyncClient.

As I dug into this I considered a few different options, and at one point even thought about going back to the one-class approach just to be able to issue a single HTTP request at instantiation without needing an event loop. But I wanted AsyncClient to be truly and thoroughly async, so I ended up settling for a compromise solution, implemented in two phases:

  1. Both SyncClient and AsyncClient were given an alternate constructor method named validated_client(). Alternate constructors can be usefully async, so the AsyncClient version could be implemented as an async method. I documented that if you’re directly constructing a client instance you intend to keep around for a while, this is the preferred constructor since it will perform automatic credential validation for you (direct instantiation via __init__() will not, on either class). And then…
  2. I implemented the context-manager protocol for SyncClient and the async context-manager protocol for AsyncClient. This allows constructing the sync client in a with statement, or an async with statement for AsyncClient. And since async with is an async execution context, it can issue an async HTTP request for credential validation.

So you can get automatic credential validation from either approach, depending on your needs:

import akismet


# Long-lived client object you'll keep around:
sync_client = akismet.SyncClient.validated_client()
async_client = await akismet.AsyncClient.validated_client()  # inside an async function

# Or for the duration of a "with" block, cleaned up at exit:
with akismet.SyncClient() as sync_client:
    ...  # Do things...

# Inside an async function:
async with akismet.AsyncClient() as async_client:
    ...  # Do things...

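The async half of this works because the validation can happen in `__aenter__`, which, unlike `__init__()`, is allowed to await. A minimal sketch (hypothetical class and method names, with the real verify-key HTTP request replaced by a stub) of that shape:

```python
import asyncio


# Hypothetical sketch, not the real akismet internals: credential
# validation happens in __aenter__, since __init__ cannot await.
class AsyncClientSketch:
    async def _verify_key(self) -> bool:
        # Stand-in for the real async verify-key HTTP request.
        return True

    async def __aenter__(self):
        if not await self._verify_key():
            raise RuntimeError("invalid Akismet credentials")
        return self

    async def __aexit__(self, exc_type, exc, tb):
        return False  # don't suppress exceptions


async def main() -> bool:
    async with AsyncClientSketch() as client:
        return client is not None

validated = asyncio.run(main())
```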
Most Python libraries can benefit from these sorts of conveniences, so I’d recommend investing time into learning how to implement them. If you’re looking for ideas, Lynn Root’s “The Design of Everyday APIs” covers a lot of ways to make your own code easier to use.

How to handle async, part deux

The other thing about writing code that supports both sync and async operations is how to handle the things they have in common. There are a few different ways to do this: you can write one implementation and have the other one call it. Or you can write two full implementations and live with the duplication. Or you can try to separate the I/O and the pure logic as much as possible, and reuse the logic while duplicating only the I/O code (or, since the two implementations aren’t perfect duplicates, writing two I/O implementations which heavily rhyme).

For akismet, I went with a hybrid of the last two of these approaches. I started out with my two classes each fully implementing everything they needed, including a lot of duplicate code between them (in fact, the first draft was just one class which was then copy/pasted and async-ified to produce the other). Then I gradually extracted the non-I/O bits into a common module they could both import from and use, building up a library of helpers for things like validating arguments, preparing requests, processing the responses, and so on.

One final object-oriented design decision here (or, I guess, not object-oriented decision): that common code is a set of functions in a module. It’s not a class. It’s not stateful the way the clients themselves are: turning an Akismet web API response into the desired Python return value, or validating a set of arguments and turning them into the correct request parameters (to pick a couple examples) are literally pure functions, whose outputs are dependent solely on their inputs.
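That split can be illustrated with a pure helper like the following (a hypothetical function name, not the real module's API), which both clients could call after doing their own flavor of I/O. It depends only on its inputs: the response body and headers described earlier in the post.

```python
# Hypothetical shared helper: turn an Akismet-style comment-check
# response into a three-valued result. Pure function -- the output
# depends only on the body ("true"/"false") and the headers
# (X-Akismet-Pro-Tip: discard for blatant spam).
def interpret_check_response(body: str, headers: dict) -> int:
    if body.strip() == "true":
        if headers.get("X-Akismet-Pro-Tip") == "discard":
            return 2  # blatant spam: safe to discard
        return 1      # spam
    return 0          # ham
```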

And the common code also isn’t some sort of abstract base class that the two concrete clients would inherit from. An akismet.SyncClient and an akismet.AsyncClient are not two different subtypes of a parent “Akismet client” class or interface! Because of the different calling conventions of sync and async Python, there is no public parent interface that they share or could be substitutable for.

The current code of akismet still has some duplication, primarily around error handling since the try/except blocks need to wrap the correct version of their respective I/O operations, and I might be able to achieve some further refactoring to reduce that to the absolute minimum (for example, by splitting out a bunch of duplicated except clauses into a single common pattern-matching implementation now that Python 3.10 is the minimum supported version). But I’m not in a big hurry to do that; the current code is, I think, in a pretty reasonable state.

Enumerating the options

As I mentioned back at the start of this post, the akismet library historically used a Python bool to indicate the result of a spam-checking operation: either the content was spam (True) or it wasn’t (False). Which makes a lot of sense at first glance, and also matches the way the Akismet web service behaves: for content it thinks is spam, the HTTP response has a body consisting of the string true, and for content that it doesn’t think is spam the response body is the string false.

But for many years now, the Akismet web service has actually supported three possible values, with the third option being “blatant” spam, spam so obvious that it can simply be thrown away with no further human review. Akismet signals this by returning the true response body, and then adding a custom HTTP header to the response: X-Akismet-Pro-Tip, with a value of discard.

Python has had support for enums (via the enum module in the standard library) since Python 3.4, so that seemed the most natural way to represent the possible results. The enum module lets you use lots of different data types for enum values, but I went with an integer-valued enum (enum.IntEnum) for this, because it lets developers still work with the result as a pseudo-boolean type if they don’t care about the extra information from the third option (since in Python 0 is false and all other integers are true).

Python historical trivia

Originally, Python did not have a built-in boolean type, and the typical convention was similar to C, using the integers 0 and 1 to indicate false/true.

Python phased in a real boolean type early in the Python 2 days. First, the Python 2.2 release series (technically, Python 2.2.1) assigned the built-in names False and True to the integer values 0 and 1, and introduced a built-in bool() function which returned the integer truth value of its argument. Then in Python 2.3, the bool type was formally introduced, and was implemented as a subclass of int, constrained to have only two instances. Those instances are bound to the names False and True and have the integer values 0 and 1.

That’s how Python’s bool still works today: it’s still a subclass of int, and so you can use a bool anywhere an int is called for, and do arithmetic with booleans if you really want to, though this isn’t really useful except for writing deliberately-obfuscated code.
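A quick check in a REPL confirms this behavior:

```python
# bool is a subclass of int, so booleans participate in arithmetic:
print(isinstance(True, int))             # True
print(True + True)                       # 2
print(sum(x > 1 for x in [0, 2, 3]))     # 2 -- counting matches this way is common
```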

For more details on the history and decision process behind Python’s bool type, check out PEP 285 and this blog post from Guido van Rossum.

The only tricky thing here was how to name the third enum member. The first two were HAM and SPAM to match the way Akismet describes them. The third value is described as “blatant spam” in some documentation, but is represented by the string “discard” in responses, so BLATANT_SPAM and DISCARD both seemed like reasonable options. I ended up choosing DISCARD; it probably doesn’t matter much, but I like having the name match the actual value of the response header.

The enum itself is named CheckResponse since it represents the response values of the spam-checking operation (Akismet actually calls it comment-check because that’s what its original name was, despite the fact Akismet now supports sending other types of content besides comments).
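Shape-wise, the enum looks roughly like this sketch (the member names come from the post; the specific integer values here are illustrative, chosen so that the old boolean-style checks keep working, with HAM falsy and both spam values truthy):

```python
from enum import IntEnum


# Sketch of the three-valued spam-check result as an IntEnum.
# HAM is 0 (falsy); the spam values are nonzero (truthy), so code that
# treated the old bool result as a truth value still behaves sensibly.
class CheckResponse(IntEnum):
    HAM = 0
    SPAM = 1
    DISCARD = 2
```

Callers that only care about "spam or not" can still write `if result:`, while callers that want the third state can compare against `CheckResponse.DISCARD` explicitly.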

Bring your own HTTP client

Back when I put together the 1.0 release, akismet adopted the requests library as a dependency, which greatly simplified the process of issuing HTTP requests to the Akismet web API. As part of the more recent rewrite, I switched instead to the Python HTTPX library, which has an API broadly compatible with requests but also, importantly, provides both sync and async implementations.

Async httpx requires the use of a client object (the equivalent of a requests.Session), so the Akismet client classes each internally construct the appropriate type of httpx object: httpx.Client for akismet.SyncClient, and httpx.AsyncClient for akismet.AsyncClient.

And since the internal usage was switching from directly calling the function-based API of requests to using HTTP client objects, it seemed like a good idea to also allow passing in your own HTTP client object in the constructors of the Akismet client classes. These are annotated as httpx.Client/httpx.AsyncClient, but as a practical matter anything with a compatible API will work.

One immediate benefit of this is it’s easier to accommodate situations like HTTP proxies, and server environments where all outbound HTTP requests must go through a particular proxy. You can just create the appropriate type of HTTP client object with the correct proxy settings, and pass it to the constructor of the Akismet client class:

import akismet
import httpx

from your_app.config import settings

akismet_client = akismet.SyncClient.validated_client(
    http_client=httpx.Client(
        proxy=settings.PROXY_URL,
        headers={"User-Agent": akismet.USER_AGENT}
    )
)

But an even bigger benefit came a little bit later on, when I started working on improvements to akismet’s testing story.

Testing should be easy

Right here, right now, I’m not going to get into a deep debate about how to define “unit” versus “integration” tests or which types you should be writing. I’ll just say that historically, libraries which make HTTP requests have been some of my least favorite code to test, whether as the author of the library or as a user of it verifying my usage. Far too often this ends up with fragile piles of patched-in mock objects to try to avoid the slowdowns (and other potential side effects and even dangers) of making real requests to a live, remote service during a test run.

I do think some fully end-to-end tests making real requests are necessary and valuable, but they probably should not be used as part of the main test suite that you run every time you’re making changes in local development.

Fortunately, httpx offers a feature that I wrote about a few years ago, which greatly simplifies both akismet’s own test suite, and your ability to test your usage of it: swappable HTTP transports which you can drop in to affect HTTP client behavior, including a MockTransport that doesn’t make real requests but lets you programmatically supply responses.

So akismet ships with two testing variants of its API clients: akismet.TestSyncClient and akismet.TestAsyncClient. They’re subclasses of the real ones, but they use the ability to swap out HTTP clients (covered above) to plug in custom HTTP clients with MockTransport and hard-coded stock responses. This lets you write code like:

import akismet


class AlwaysSpam(akismet.TestSyncClient):
    comment_check_response = akismet.CheckResponse.SPAM

and then use it in tests. That test client above will never issue a real HTTP request, and will always label any content you check with it as spam. You can also set the attribute verify_key_response to False on a test client to have it always fail API key verification, if you want to test your handling of that situation.

This means you can test your use of akismet without having to build piles of custom mocks and patch them in to the right places. You can just drop in instances of appropriately-configured test clients, and rely on their behavior.

If I ever became King of Programming, with the ability to issue enforceable decrees, requiring every network-interacting library to provide this kind of testing-friendly version of its core constructs would be among them. But since I don’t have that power, I do what I can by providing it in my own libraries.

(py)Testing should be easy

In the Python ecosystem there are two major testing frameworks: unittest, which ships in the standard library, and the third-party pytest.

For a long time I stuck to unittest, or unittest-derived testing tools like the ones that ship with Django. Although I understand and appreciate the particular separation of concerns pytest is going for, I found its fixture system a bit too magical for my taste; I personally prefer dependency injection to use explicit registration so I can know what’s available, versus the implicit way pytest discovers fixtures based on their presence or absence in particularly-named locations.

But pytest pretty consistently shows up as more popular and more broadly used in surveys of the Python community, and every place I’ve worked for the last decade or so has used it. So I decided to port akismet’s tests to pytest, and in the process decided to write a pytest plugin to help users of akismet with their own tests.

That meant writing a pytest plugin to automatically provide a set of dependency-injection fixtures. There are four fixtures: two sync and two async, with each flavor getting a fixture to provide a client class object (which lets you test instantiation-time behavior like API key verification failures), and a fixture to provide an already-constructed client object. Configuration is through a custom pytest mark called akismet_client, which accepts arguments specifying the desired behavior. For example:

import akismet
import pytest


@pytest.mark.akismet_client(comment_check_response=akismet.CheckResponse.DISCARD)
def test_akismet_discard_response(akismet_sync_client: akismet.SyncClient):
    # Inside this test, akismet_sync_client's comment_check() will always
    # return DISCARD.
    ...


@pytest.mark.akismet_client(verify_key_response=False)
def test_akismet_fails_key_verification(akismet_sync_class: type[akismet.SyncClient]):
    # API key verification will always fail on this class.
    with pytest.raises(akismet.APIKeyError):
        akismet_sync_class.validated_client()

Odds and ends

Python has had the ability to add annotations to function and method signatures since 3.0, and more recently gained the ability to annotate attributes as well; originally, no specific use case was mandated for this feature, but everybody used it for type hints, so now that’s the official use case for annotations. I’ve had a lot of concerns about the way type hinting and type checking have been implemented for Python, largely around the fact that idiomatic Python really wants to be a structurally-typed language, or as some people have called it “interfacely-typed”, rather than nominally-typed. Which is to say: in Python you almost never care about the actual exact type name of something, you care about the interfaces (nowadays, called “protocols” in Python typing-speak) it implements. So you don’t care whether something is precisely an instance of list, you care about it being iterable or indexable or whatever.
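Structural typing is expressible in Python's type system via typing.Protocol. For example, "anything with a read() method" can be spelled out like this (a stdlib-only sketch; the names are mine):

```python
import io
from typing import Protocol


class SupportsRead(Protocol):
    def read(self, size: int = -1) -> str: ...


def consume(source: SupportsRead) -> str:
    # Any object with a compatible read() method is accepted;
    # no inheritance from SupportsRead is required.
    return source.read()


print(consume(io.StringIO("hello")))  # hello
```

A type checker accepts io.StringIO, an open file, or any user-defined class here, based purely on the shape of its methods rather than its name.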

On top of which, some design choices made in the development of type-hinted Python have made it (as I understand it) impossible to distribute a single-file module with type hints and have type checkers actually pick them up. Which was a problem for akismet, because traditionally it was a single-file module, installing a file named akismet.py containing all its code.

But as part of the rewrite I was reorganizing akismet into multiple files, so that objection no longer held, and eventually I went ahead and began running mypy as a type checker as part of the CI suite for akismet. The type annotations had been added earlier, because I find them useful as inline documentation even if I'm not running a type checker (and the Sphinx documentation tool, which all my projects use, will automatically extract them to document argument signatures for you). I did have to make some changes to work around mypy, though. It didn't find any bugs, but it did uncover a few things that were written in ways it couldn't handle, and maybe I'll write about those in more detail another time.

As part of splitting akismet up into multiple files, I also went with an approach I’ve used on a few other projects, of prefixing most file names with an underscore (i.e., the async client is defined in a file named _async_client.py, not async_client.py). By convention, this marks the files in question as “private”, and though Python doesn’t enforce that, many common Python linters will flag it. The things that are meant to be supported public API are exported via the __all__ declaration of the akismet package.
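A sketch of what that layout implies for the package's __init__.py (the file names are from the post; the exact contents are my assumption, not copied from akismet):

```python
# akismet/__init__.py (sketch)
# Private modules use underscore-prefixed names; the package re-exports
# the supported public API and declares it via __all__.
from ._async_client import AsyncClient
from ._sync_client import SyncClient

__all__ = ["AsyncClient", "SyncClient"]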

I also switched the version numbering scheme to Calendar Versioning. I don’t generally trust version schemes that try to encode information about API stability or breaking changes into the version number, but a date-based version number at least tells you how old something is and gives you a general idea of whether it’s still being actively maintained.

There are also a few dev-only changes:
* Local dev environment management and packaging are handled by PDM and its package-build backend. Of the current crop of clean-sheet modern Python packaging tools, PDM is my personal favorite, so it's what my personal projects are using.
* I added a Makefile which can execute a lot of common developer tasks, including setting up the local dev environment with proper dependencies, and running the full CI suite or subsets of its checks.
* As mentioned above, the test suite moved from unittest to pytest, using AnyIO's plugin for supporting async tests in pytest. There's a lot of use of pytest parametrization to generate test cases, so the number of test cases grew a lot, but it's still pretty fast—around half a second for each Python version being tested, on my laptop. The full CI suite, testing every supported Python version and running a bunch of linters and packaging checks, takes around 30 seconds on my laptop, and about a minute and a half on GitHub CI.

That’s it (for now)

In October of last year I released akismet 25.10.0 (and then 25.10.1 to fix a documentation error, because there’s always something wrong with a big release), which completed the rewrite process by finally removing the old Akismet client class. At this point I think akismet is feature-complete unless the Akismet web service itself changes, so although there were more frequent releases over a period of about a year and a half as I did the rewrite, it’s likely the cadence will settle down now to one a year (to handle supporting new Python versions as they come out) unless someone finds a bug.

Overall, I think the rewrite was an interesting process, because it was pretty drastic (I believe it touched literally every pre-existing line of code, and added a lot of new code), but also… not that drastic? If you were previously using akismet with your configuration in environment variables (as recommended), I think the only change you’d need to make is rewriting imports from akismet.Akismet to akismet.SyncClient. The mechanism for manually passing in configuration changed, but I believe that and the new client class names were the only actual breaking changes in the entire rewrite; everything else was adding features/functionality or reworking the internals in ways that didn’t affect public API.

I had hoped to write this up sooner, but I’ve struggled with this post for a while now, because I still have trouble with the fact that Michael’s gone, and every time I sat down to write I was reminded of that. It’s heartbreaking to know I’ll never run into him at a conference again. I’ll miss chatting with him. I’ll miss his energy. I’m thankful for all he gave to the Python community over many years, and I wish I could tell him that one more time. And though it’s a small thing, I hope I’ve managed to honor his work and to repay some of his kindness and his trust in me by being a good steward of his package. I have no idea whether Akismet the service will still be around in another 20 years, or whether I’ll still be around or writing code or maintaining this Python package in that case, but I’d like to think I’ve done my part to make sure it’s on sound footing to last that long, or longer.

March 23, 2026 02:09 PM UTC


Real Python

How to Use Note-Taking to Learn Python

Learning Python can be genuinely hard, and it’s normal to struggle with fundamental concepts. Research has shown that note-taking is invaluable when learning new things. This guide will help you get the most out of your learning efforts by showing you how to take better notes as you walk through an existing tutorial and keep handwritten notes on the side:

Photo of handwritten Python Learning Notes

In this guide, you’ll begin by briefly learning about the benefits of note-taking. Then, you’ll follow along with an existing Real Python tutorial as you perform note-taking steps to help make the information in the tutorial really stick. To help you stay organized as you practice, download the Python Note-Taking Worksheet below. It outlines the process you’ll learn here and provides a repeatable framework you can use with future tutorials:

Get Your PDF: Click here to download your free Python Note-Taking Worksheet that outlines that note-taking process.

Take the Quiz: Test your knowledge with our interactive “How to Use Note-Taking to Learn Python” quiz. You’ll receive a score upon completion to help you track your learning progress:


What Is Python Note-Taking?

In the context of learning, note-taking is the process of recording information from a source while you’re consuming it. A traditional example is a student jotting down key concepts during a lecture. Another example is typing out lines of code or unfamiliar words while watching a video course, listening to a presentation, or reading a learning resource.

In this guide, Python note-taking refers to taking notes specific to learning Python.

People take notes for a variety of reasons. Usually, the intent is to return to the notes at a later time to remind the note-taker of the information covered during the learning session.

In addition to the value of having a physical set of notes to refer back to, studies have found that the act of taking notes alone improves a student’s ability to recall information on a topic.

This guide focuses on handwritten note-taking—that is, using a writing utensil and paper. Several studies suggest that this form of note-taking is especially effective for understanding a topic and remembering it later. If taking notes by hand isn’t viable for you, don’t worry! The concepts presented here should be applicable to other forms of note-taking as well.

Prerequisites

Since this guide focuses on taking notes while learning Python programming, you’ll start by referencing the Real Python tutorial Python for Loops: The Pythonic Way. This resource is a strong choice because it clearly explains a fundamental programming concept that you’ll use throughout your Python journey.

Once you have the resource open in your browser, set aside a few pieces of paper and have a pen or pencil ready. Alternatively, you can take notes on a tablet with a stylus or another writing tool.

Generally, taking notes by hand has a stronger impact on learning than other methods, such as typing into a text document. For more information on the effectiveness of taking notes by hand versus typing, see this article from the Harvard Graduate School of Education.

Step 1: Write Down Major Concepts

With your note-taking tools ready, start by skimming the learning resource. Usually, you want to look at the major headings to see what topics the material covers. For Real Python content, you can instead just look at the table of contents at the top of the page, since this lists the main sections.

The major headings for your example resource are as follows:

The list above doesn’t include subheadings like “Sequences: Lists, Tuples, Strings, and Ranges” under “Traversing Built-In Collections in Python”. For now, stick to top-level headings.

Read the full article at https://realpython.com/python-note-taking-guide/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

March 23, 2026 02:00 PM UTC

Quiz: Strings and Character Data in Python

In this quiz, you’ll test your understanding of Python Strings and Character Data.

This quiz helps you deepen your understanding of Python’s string and byte data types. You’ll explore core concepts like string immutability, interpolation with f-strings, Unicode handling, key string methods, and working with bytes objects.
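The concepts the quiz covers can be illustrated in a few lines (a quick stdlib-only sketch, not taken from the quiz itself):

```python
# f-string interpolation
name = "Python"
greeting = f"Hello, {name}!"
assert greeting == "Hello, Python!"

# Strings are immutable: methods like upper() return a new object.
s = "spam"
t = s.upper()
assert (s, t) == ("spam", "SPAM")

# str vs bytes: encode/decode round-trips through UTF-8.
data = "café".encode("utf-8")
assert isinstance(data, bytes)
assert data.decode("utf-8") == "café"
```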



March 23, 2026 12:00 PM UTC


PyPy

Using Claude to fix PyPy3.11 test failures securely

I got access to Claude Max for 6 months, as a promotional move Anthropic made to Open Source Software contributors. My main OSS impact is as a maintainer for NumPy, but I decided to see what claude-code could do for PyPy's failing 3.11 tests. Most of these failures are edge cases: error messages that differ from CPython, or debugging tools that fail in certain cases. I was worried about letting an AI agent loose on my development machine. I noticed a post by Patrick McCanna (thanks Patrick!) that pointed to using bubblewrap to sandbox the agent. So I set it all up and (hopefully securely) pointed claude-code at some tests.

Setting up

There were a few steps to make sure I didn't open myself up to obvious gotchas. There are stories about agents wiping out databases, or deleting mailboxes.

Bubblewrap

First I needed to see what bubblewrap does. I followed the instructions in the blog post to set things up with some minor variations:

sudo apt install bubblewrap

I couldn't run bwrap. After digging around a bit, I found I needed to add an exception for AppArmor on Ubuntu 24.04:

sudo bash -c 'cat > /etc/apparmor.d/bwrap << EOF
abi <abi/4.0>,
include <tunables/global>

profile bwrap /usr/bin/bwrap flags=(unconfined) {
  userns,
}
EOF'
sudo apparmor_parser -r /etc/apparmor.d/bwrap

Then bwrap would run. It is all locked down by default, so I opened up some exceptions. The arguments are pretty self-explanatory. Ubuntu spreads the executables around the operating system, so I needed access to various directories. I wanted a /tmp for running pytest. I also wanted the prompt to reflect the use of bubblewrap, so I changed the hostname:

cat << 'EOL' >> ./run_bwrap.sh
  function call_bwrap() {
    bwrap \
      --ro-bind /usr /usr \
      --ro-bind /etc /etc \
      --ro-bind /run /run \
      --symlink usr/lib /lib \
      --symlink usr/lib64 /lib64 \
      --symlink usr/bin /bin \
      --proc /proc \
      --dev /dev \
      --bind $(pwd) $(pwd) \
      --chdir $(pwd) \
      --unshare-user --unshare-pid --unshare-ipc --unshare-uts --unshare-cgroup \
      --die-with-parent \
      --hostname bwrap \
      --tmpfs /tmp \
      /bin/bash "$@"
  }
EOL

source ./run_bwrap.sh
call_bwrap
# now I am in a sandboxed bash shell
# play around, try seeing other directories, getting sudo, or writing outside
# the sandbox
exit

I did not use --unshare-net since, after all, I want to use claude and that needs network access. I did add rw access to $(pwd) since I want it to edit code in the current directory; that is the whole point.

Basic claude

After trying out bubblewrap and convincing myself it does actually work, I installed claude code:

curl -fsSL https://claude.ai/install.sh | bash

Really Anthropic, this is the best way to install claude? No dpkg?

I ran claude once (unsafely) to get logged in. It opened a webpage, and saved the login to the oauthAccount field in ~/.claude.json. Then I changed my bash script so claude would run inside the bubblewrap sandbox:

cat << 'EOL' >> ./run_claude.sh
  claude-safe() {
    bwrap \
      --ro-bind /usr /usr \
      --ro-bind /etc /etc \
      --ro-bind /run /run \
      --ro-bind "$HOME/.local/share/claude" "$HOME/.local/share/claude" \
      --symlink usr/lib /lib \
      --symlink usr/lib64 /lib64 \
      --symlink usr/bin /bin \
      --symlink "$HOME/.local/share/claude/versions/2.1.81" "$HOME/.local/bin/claude" \
      --proc /proc \
      --dev /dev \
      --bind $(pwd) $(pwd) \
      --bind "$HOME/.claude" "$HOME/.claude" \
      --bind "$HOME/.claude.json" "$HOME/.claude.json" \
      --chdir $(pwd) \
      --unshare-user --unshare-pid --unshare-ipc --unshare-uts --unshare-cgroup \
      --die-with-parent \
      --hostname bwrap \
      --tmpfs /tmp \
      --setenv PATH "$HOME/.local/bin:$PATH" \
      claude "$@"
  }
EOL

source ./run_claude.sh
claude-safe

Now I can use claude. Note it needs some more directories in order to run. This script hard-codes the version; in the future YMMV. I want it to be able to look at GitHub, and also at my local checkout of CPython so it can examine differences. I created a read-only token by clicking on my avatar in the upper right corner of a GitHub web page, then going to Settings → Developer settings → Personal access tokens → Fine-grained tokens → Generate new token. Since pypy is in the pypy org, I used "Repository owner: pypy", "Repository access: pypy (only)" and "Permissions: Contents". Then I made doubly sure the token permissions were read-only. And checked again. Then I copied the token into the bash script. I also added a ro-bind for the cpython checkout, so I could tell claude code where to look for CPython implementations of missing PyPy functionality:

--ro-bind "$HOME/oss/cpython" "$HOME/oss/cpython" \
--setenv GH_TOKEN "hah, sharing my token would not have been smart" \

Claude /sandbox

Claude comes with its own sandbox, configured by using the /sandbox command. I chose the defaults, which prevents malicious code in the repo from accessing the file system and the network. I was missing some packages to get this to work. Claude would hang until I installed them, and I needed to kill it with kill.

sudo apt install socat
sudo npm install -g @anthropic-ai/sandbox-runtime

Final touches

One last thing that I discovered later: I needed to give claude access to some grepping and git tools. While git should be locked down externally so it cannot push to the repo, I do want claude to look at other issues and pull requests in read-only mode. So I added a local .claude/settings.json file inside the repo (see below for which directory to do this):

{
  "permissions": {
    "allow": [
      "Bash(sed*)",
      "Bash(grep*)",
      "Bash(cat*)",
      "Bash(find*)",
      "Bash(rg*)",
      "Bash(python*)",
      "Bash(pytest*)"
    ]
  }
}

Then I made git ignore it, even when doing a git clean, in a local (not part of the repo) configuration

echo .claude >> ~/.config/git/ignore

What about git push?

I don't want claude messing around with the upstream repo, only read access. But I did not actively prevent git push. So instead of using my actual pypy repo, I cloned it to a separate directory and did not add a remote pointing to github.com.

Fixing tests - easy

Now that everything is set up (I hope I remembered everything), I could start asking questions. The technique I chose was to feed claude the whole test failure from the buildbot. So starting from the buildbot py3.11 summary, click on one of the F links and copy-paste all that into the claude prompt. It didn't take long for claude to come up with solutions for the long-standing missing ctypes exception, which turned out to be due to a missing error trap when already handling an error.

Also a CTYPES_MAX_ARGCOUNT check was missing. At first, claude wanted to change the ctypes code from CPython's stdlib, and so I had to make it clear that claude was not to touch the files in lib-python. They are copied verbatim from CPython and should not be modified without really good reasons.

The fix to raise TypeError rather than AttributeError for deleting a ctypes object's value was maybe a little trickier: claude needed to create its own property class and use it in assignments.
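A hypothetical sketch of that kind of fix: a property-like descriptor whose deleter raises TypeError instead of the default AttributeError (the names and details here are my invention, not PyPy's actual code):

```python
class TypeErrorOnDelete:
    # Hypothetical property-like descriptor: deleting the managed
    # attribute raises TypeError rather than AttributeError.
    def __init__(self, fget, fset):
        self._fget = fget
        self._fset = fset

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return self._fget(obj)

    def __set__(self, obj, value):
        self._fset(obj, value)

    def __delete__(self, obj):
        raise TypeError("cannot delete this attribute")


class FakeCData:
    # Stand-in for a ctypes-like object exposing .value
    value = TypeErrorOnDelete(
        lambda self: self._value,
        lambda self, v: setattr(self, "_value", v),
    )
```

With this in place, `del obj.value` raises TypeError while reads and writes keep working.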

The fix for a failing test for a correct repr of a ctypes array was a little more involved. Claude needed to figure out that newmemoryview was raising an exception, dive into the RPython implementation and fix the problem, and then also fix a pure-python __buffer__ shape edge case error.

There were more, but you get the idea. With a little bit of coaching, and by showing claude where the CPython implementation was, more tests are now passing.

Fixing tests - harder

PyPy has a HPy backend. There were some test failures that were easy to fix (a handle not being closed, an annotation warning). But the big one was a problem with the context tracking before and after ffi function calls. In debug mode there is a check that the ffi call is done using the correct HPy context. It turns out to be tricky to hang on to a reference to a context in RPython since the context RPython object is pre-built. The solution, which took quite a few tokens and translation cycles to work out, was to assign the context on the C level, and have a getter to fish it out in RPython.

Conclusion

I started this journey not more than 24 hours ago, after some successful sessions using claude to refactor some web sites off hosting platforms and make them static pages. I was impressed enough to try coding with it from the terminal. It helps that I was given a generous budget to use Anthropic's tool.

Claude seems capable of understanding the layers of PyPy: from the pure python stdlib to RPython and into the small amount of C code. I even asked it to examine a segfault in the recently released PyPy 7.3.21, and it seems to have found the general area where there was a latent bug in the JIT.

Like any tool, agentic programming must be used carefully to make sure it cannot do damage. I hope I closed the most obvious foot-guns; if you have other ideas of things I should do to protect myself while using an agent like this, I would love to hear about them.

March 23, 2026 10:27 AM UTC


Tryton News

Release 0.8.0 of mt940

We are proud to announce the release of version 0.8.0 of mt940.

mt940 is a library to parse MT940 files. MT940 is a specific SWIFT message type used by the SWIFT network to send and receive end-of-day bank account statements.

In addition to bug-fixes, this release contains the following improvements:

mt940 is available on PyPI: mt940 0.8.0.

1 post - 1 participant

Read full topic

March 23, 2026 09:33 AM UTC

Release 0.4.0 of febelfin-coda

We are proud to announce the release of version 0.4.0 of febelfin-coda.

febelfin-coda is a library to parse CODA files. This bank standard (also called CODA) specifies the layout of the electronic files that banks send to customers, containing the account transactions and the information concerning the enclosures in connection with each movement.

In addition to bug-fixes, this release contains the following improvements:

febelfin-coda is available on PyPI: febelfin-coda 0.4.0.

1 post - 1 participant

Read full topic

March 23, 2026 09:29 AM UTC

Release 0.2.0 of aeb43

We are proud to announce the release of version 0.2.0 of aeb43.

aeb43 is a library to parse AEB43 files. AEB43 is a standard, fixed-length 80-character file format used by Spanish banks for transmitting bank statements, transaction details, and account balances.

In addition to bug-fixes, this release contains the following improvements:

aeb43 is available on PyPI: aeb43 0.2.0.

1 post - 1 participant

Read full topic

March 23, 2026 09:24 AM UTC

Release 0.12.0 of Relatorio

We are proud to announce the release of Relatorio version 0.12.0.

Relatorio is a templating library mainly for OpenDocument, which also uses OpenDocument as its source format.

In addition to bug-fixes, this release contains the following improvements:

The package is available at relatorio · PyPI
The documentation is available at Relatorio — A templating library able to output odt and pdf files

1 post - 1 participant

Read full topic

March 23, 2026 09:16 AM UTC


Python Bytes

#474 Astral to join OpenAI

Topics covered in this episode:

* Starlette 1.0.0 (https://starlette.dev/release-notes/#100rc1-february-23-2026)
* Astral to join OpenAI (https://astral.sh/blog/openai)
* uv audit
* Fire and forget (or never) with Python's asyncio (https://mkennedy.codes/posts/fire-and-forget-or-never-with-python-s-asyncio/)
* Extras
* Joke

Watch on YouTube: https://www.youtube.com/watch?v=k8BJzKSMwvQ

About the show

Sponsored by us! Support our work through:

* Our courses at Talk Python Training (https://training.talkpython.fm/)
* The Complete pytest Course (https://courses.pythontest.com/p/the-complete-pytest-course)
* Patreon Supporters (https://www.patreon.com/pythonbytes)

Connect with the hosts:

* Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
* Brian: @brianokken@fosstodon.org / @brianokken.bsky.social (bsky)
* Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Brian #1: Starlette 1.0.0 (https://starlette.dev/release-notes/#100rc1-february-23-2026)

* As a reminder, Starlette is the foundation for FastAPI
* "Starlette 1.0 is here!" - fun blog post from Marcello Trylesinski (https://marcelotryle.com/blog/2026/03/22/starlette-10-is-here/)
* "The changes in 1.0 were limited to removing old deprecated code that had been on the way out for years, along with a few bug fixes. From now on we'll follow SemVer strictly."
* Fun comment in the "What's next?" section: "Oh, and Sebastián, Starlette is now out of your way to release FastAPI 1.0. 😉"
* Related: Experimenting with Starlette 1.0 with Claude skills (https://simonwillison.net/2026/Mar/22/starlette/) - Simon Willison
  * Example of the new lifespan mechanism, very pytest fixture-like:

        @contextlib.asynccontextmanager
        async def lifespan(app):
            async with some_async_resource():
                print("Run at startup!")
                yield
                print("Run on shutdown!")

        app = Starlette(routes=routes, lifespan=lifespan)

Michael #2: Astral to join OpenAI (https://astral.sh/blog/openai)

* via John Hagen, thanks
* Astral has agreed to join OpenAI (https://openai.com/) as part of the Codex team (https://chatgpt.com/codex)
* Congrats Charlie and team
* Seems like Ruff (https://github.com/astral-sh/ruff) and uv (https://github.com/astral-sh/uv) play an important role.
* Perhaps ty (https://github.com/astral-sh/ty) holds the most value to directly boost Codex (understanding codebases for the AI)
* All that said, these were open source, so there is way more to the motivations than just using the tools.
* "After joining the Codex team, we'll continue building our open source tools."
* Simon Willison has thoughts (https://simonwillison.net/2026/Mar/19/openai-acquiring-astral/)
* discuss.python.org also has thoughts (https://discuss.python.org/t/openai-to-acquire-astral/106605)
* The Ars Technica article (https://arstechnica.com/ai/2026/03/openai-is-acquiring-open-source-python-tool-maker-astral/) has interesting comments too
* It's probably the death of pyx (https://astral.sh/pyx)
  * Simon points out "pyx is notably absent from both the Astral and OpenAI announcement posts."

Brian #3: uv audit

* Submitted by Owen Lemont
* Pieces of uv audit have been trickling in. uv 0.10.12 exposes it to the CLI help (https://github.com/astral-sh/uv/releases)
* Here's the roadmap for uv audit (https://github.com/astral-sh/uv/issues/18506)
* I tried it out on a package and found a security issue with a dependency
  * not of the project, but of the testing dependencies
  * but only if using Python < 3.10, even though I'm using 3.14
* Kinda cool
* Looks like it generates a uv.lock file, which includes dependencies for all project-supported versions of Python and systems, which is a very thorough way to check for vulnerabilities.
* But also, maybe some pointers on how to fix the problem would be good. No --fix yet.

Michael #4: Fire and forget (or never) with Python's asyncio (https://mkennedy.codes/posts/fire-and-forget-or-never-with-python-s-asyncio/)

* Python's asyncio.create_task() can silently garbage collect your fire-and-forget tasks starting in Python 3.12
* Formerly fine async code can now stop working, so heads up
* The fix? Use a set to upgrade to a strong ref and a callback to remove it
* Is there a chance of task-based memory leaks? Yeah, maybe.

Extras

Brian:

* Nobody Gets Promoted for Simplicity (https://terriblesoftware.org/2026/03/03/nobody-gets-promoted-for-simplicity/) - interesting read and unfortunate truth in too many places.
* pytest-check (https://github.com/okken/pytest-check) - all built-in check helper functions also accept an optional xfail reason.
  * example: check.equal(actual, expected, xfail="known issue #123")
  * Allows some checks to still cause a failure to happen, because you no longer have to mark the whole test as xfail

Michael:

* TurboAPI (https://x.com/rachpradhan/status/2034191434182738096) - FastAPI + Pydantic compatible framework in Zig (see follow-up: https://x.com/rachpradhan/status/2035928730242371716)
* Pyramid 2.1 is out (https://docs.pylonsproject.org/projects/pyramid/en/2.1-branch/whatsnew-2.1.html) (yes really! :) first release in 3 years)
* Vivaldi 7.9 (https://vivaldi.com/blog/vivaldi-on-desktop-7-9/) adds minimalist hide mode.
* Migrated pythonbytes.fm and talkpython.fm to the Raw+DC design pattern (https://mkennedy.codes/posts/raw-dc-the-orm-pattern-of-2026/)
* Robyn + Chameleon package (https://mkennedy.codes/posts/use-chameleon-templates-in-the-robyn-web-framework/)

Joke: We now have translation services (https://translate.kagi.com)
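The strong-reference fix mentioned in Michael's asyncio item matches the pattern recommended in the asyncio documentation: keep tasks alive in a set and discard them when they finish. Roughly:

```python
import asyncio

# The event loop only holds a weak reference to scheduled tasks,
# so we keep strong references here until each task completes.
background_tasks: set[asyncio.Task] = set()


def fire_and_forget(coro) -> asyncio.Task:
    task = asyncio.create_task(coro)
    background_tasks.add(task)                        # strong reference
    task.add_done_callback(background_tasks.discard)  # drop it when done
    return task
```

The done-callback removes the task from the set once it finishes, so completed tasks don't pile up as a memory leak.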

March 23, 2026 08:00 AM UTC


Antonio Cuni

Inside SPy, part 1: Motivations and Goals

Inside SPy🥸, part 1: Motivations and Goals

This is the first of a series of posts in which I will try to give a deep explanation of SPy, including motivations, goals, rules of the language, differences from Python and implementation details.

This post focuses primarily on the problem space: why Python is fundamentally hard to optimize, what trade-offs existing solutions require, and where current approaches fall short. Subsequent posts in this series will explore the solutions in depth. For now, let's start with the essential question: what is SPy?

Before diving in, I want to express my gratitude to my employer, Anaconda, for giving me the opportunity to dedicate 100% of my time to this open-source project.

March 23, 2026 07:58 AM UTC

March 22, 2026


Reuven Lerner

Do you teach Python? Then check out course-setup

TL;DR: If you teach Python, then you should check out course-setup (https://pypi.org/project/course-setup/) at PyPI!

I’ve been teaching Python and Pandas for many years. And while I started my teaching career like many other instructors, with slides, I quickly discovered that it was better for my students — and for me! — to replace them with live coding.

Every day I start teaching, I open a new Jupyter or Marimo notebook, and I type. I type the day’s agenda. I type the code that I want to demonstrate, and then do it right there, in front of people. I type the instructions for each exercise we’re going to use. I type explanatory notes. If people have questions, I type those, along with my answers.

In other words, every day’s notebook contains a combination of documentation, demonstration, explanation, and exercise solutions. That combination is unique to the group I’m teaching. If we get sidetracked with someone’s question, that’s OK — I include whatever I can in each day’s notebook.

Teaching in this way raises some issues. Among the biggest: If I’m working on my own computer, then how can someone see the notebook that I’m writing? Obviously, I could scroll my screen up and down, but that’s frustrating for everyone, especially when we’re doing an exercise.

I was thus delighted to learn, years ago, about “gitautopush” (https://pypi.org/project/gitautopush/), a simple PyPI project that takes a local Git repository and monitors it for any changes. When something changes, it commits those changes to Git and then pushes them to a remote repository. The fact that GitHub renders Jupyter notebooks into HTML made this a perfect solution for me.
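The core of what gitautopush does, watch a repository and commit-and-push when something changes, can be approximated with stdlib Python (my own sketch of the idea, not gitautopush's actual implementation; the commit message is a placeholder):

```python
import subprocess
import time

def has_changes(repo):
    """True if the working tree has uncommitted changes."""
    out = subprocess.run(
        ["git", "status", "--porcelain"],
        cwd=repo, capture_output=True, text=True, check=True,
    )
    return bool(out.stdout.strip())

def autopush(repo, interval=5.0):
    """Poll the repo; stage, commit, and push whenever something changed."""
    while True:
        if has_changes(repo):
            subprocess.run(["git", "add", "-A"], cwd=repo, check=True)
            subprocess.run(["git", "commit", "-m", "autosave"], cwd=repo, check=True)
            subprocess.run(["git", "push"], cwd=repo, check=True)
        time.sleep(interval)
```

With GitHub rendering the pushed notebooks to HTML, students always see a near-live copy of the instructor's file.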

For years, then, my setup has been:

This worked fine for many years, but it took about 10 minutes of prep before each class. I finally realized that this was silly: I’m a programmer, and shouldn’t I be automating repetitive tasks that take a long time?

That’s where course-setup started. I wrote two Python programs that would let me create a new course (doing all of the setup tasks I mentioned above) or retire an existing one. Did it do everything I wanted? No, but it was good enough.

Once I started to use uv, I turned these programs into uv tools, always available in my shell. I made some additional progress with course-setup, but most of my thoughts about improvements stayed on the back burner.

And then? I started to use Claude Code. I decided to see just how far I could improve course-setup with Claude Code — and to be honest, the improvements were beyond my wildest dreams:

It’s hard to exaggerate how much of this work was done by Claude Code. I supervised, checked things, added new functionality, pushed back on a number of things it suggested, and am ultimately responsible. But really, the code itself was largely written by Claude, often using a number of agents working in parallel, and I couldn’t be happier with the result. I’ve included the CLAUDE.md file in the GitHub repo, if you’re interested in learning from it and/or using it.

This suite of utilities is now available on PyPI as “course-setup” (https://pypi.org/project/course-setup/). It includes a ton of functionality, and I’m always looking to improve it — so tell me how, or send a PR my way at https://github.com/reuven/course-setup!

The post Do you teach Python? Then check out course-setup appeared first on Reuven Lerner.

March 22, 2026 03:10 PM UTC


EuroPython

Humans of EuroPython: Niklas Mertsch

EuroPython runs on people power—real people giving their time to make it happen. No flashy titles, just real work: setting up rooms, guiding speakers, helping attendees find their way, or making sure everyone feels welcome. Some help run sessions, others support accessibility needs or troubleshoot the Wi-Fi. 

It’s all about showing up, pitching in, and sharing a passion for Python. This is what a community looks like.

Today we’d like to introduce you to Niklas Mertsch, member of the Operations team at EuroPython 2025. Check out what he has to say about the volunteering experience.

Niklas Mertsch, member of the Operations team at EuroPython 2025

EP: What's one thing about the programming community that made you want to give back by volunteering?

For me, it is not about “giving back” but about “participating”. I started volunteering out of curiosity, and continued because of the people and interactions. It started with a conversation, and it led to many more.

EP: Did you learn any new skills while volunteering at EuroPython? If so, which ones?

I can't name a “new” skill, but working with an intrinsically motivated, international and intercultural team definitely improved my social and communication skills.

EP: Did you have any unexpected or funny experiences during the EuroPython?

Tons of them, you never know what happens before or during the event. One time I just tried to print a WiFi QR code, then spent the next hours talking to someone I now call a good friend. And some months later that friend nudged me to answer these questions. You never know what you get and where it will lead you, but you know it will be good.

EP: Thank you for your work, Niklas!

March 22, 2026 01:53 PM UTC


Tryton News

Release 1.7.0 of python-sql

We are proud to announce the release of version 1.7.0 of python-sql.

python-sql is a library to write SQL queries in a pythonic way. It is mainly developed for Tryton but it has no external dependencies and is agnostic to any framework or SQL database.

In addition to bug-fixes, this release contains the following improvements:

python-sql is available on PyPI: python-sql 1.7.0.

1 post - 1 participant

Read full topic

March 22, 2026 09:18 AM UTC

March 20, 2026


Real Python

The Real Python Podcast – Episode #288: Automate Exploratory Data Analysis & Invent Python Comprehensions

How do you quickly get an understanding of what's inside a new set of data? How can you share an exploratory data analysis with your team? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

March 20, 2026 12:00 PM UTC

Quiz: Python Decorators 101

In this quiz, you’ll test your understanding of Python Decorators 101.

Work through this quiz to review first-class functions, inner functions, and decorators, and learn how to create, reuse, and apply them to extend behavior cleanly in Python.
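As a quick refresher on the material the quiz covers, a decorator is just a function that takes a function and returns a wrapped replacement (a generic example, not taken from the quiz itself):

```python
import functools

def twice(func):
    """Decorator that calls the wrapped function two times."""
    @functools.wraps(func)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        func(*args, **kwargs)
        return func(*args, **kwargs)
    return wrapper

@twice
def greet(name):
    print(f"Hello, {name}!")
    return name

result = greet("World")  # prints the greeting twice, returns "World"
```

Thanks to `functools.wraps`, `greet.__name__` is still `"greet"` rather than `"wrapper"`, one of the details the quiz checks.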



March 20, 2026 12:00 PM UTC


Armin Ronacher

Some Things Just Take Time

Trees take quite a while to grow. If someone 50 years ago planted a row of oaks or a chestnut tree on your plot of land, you have something that no amount of money or effort can replicate. The only way is to wait. Tree-lined roads, old gardens, houses sheltered by decades of canopy: if you want to start fresh on an empty plot, you will not be able to get that.

Because some things just take time.

We know this intuitively. We pay premiums for Swiss watches, Hermès bags and old properties precisely because of the time embedded in them. Either because of the time it took to build them or because of their age. We require age minimums for driving, voting, and drinking because we believe maturity only comes through lived experience.

Yet right now we also live in a time of instant gratification, and it’s entering how we build software and companies. As much as we can speed up code generation, the real defining element of a successful company or an Open Source project will continue to be tenacity. The ability of leadership or the maintainers to stick to a problem for years, to build relationships, to work through challenges fundamentally defined by human lifetimes.

Friction Is Good

The current generation of startup founders and programmers is obsessed with speed. Fast iteration, rapid deployment, doing everything as quickly as possible. For many things, that’s fine. You can go fast, leave some quality on the table, and learn something along the way.

But there are things where speed is actively harmful, where the friction exists for a reason. Compliance is one of those cases. There’s a strong desire to eliminate everything that processes like SOC2 require, and an entire industry of turnkey solutions has sprung up to help — Delve just being one example, there are more.

There’s a feeling that all the things that create friction in your life should be automated away. That human involvement should be replaced by AI-based decision-making. Because it is the friction of the process that is the problem. When in fact many times the friction, or that things just take time, is precisely the point.

There’s a reason we have cooling-off periods for some important decisions in one’s life. We recognize that people need time to think about what they’re doing, and that doing something right once doesn’t mean much because you need to be able to do it over a longer period of time.

Vibe Slop At Inference Speeds

AI writes code fast which isn’t news anymore. What’s interesting is that we’re pushing this force downstream: we seemingly have this desire to ship faster than ever, to run more experiments and that creates a new desire, one to remove all the remaining friction of reviews, designing and configuring infrastructure, anything that slows the pipeline. If the machines are so great, why do we even need checklists or permission systems? Express desire, enjoy result.

Because we now believe it is important for us to just do everything faster. But increasingly, I also feel like this means that the shelf life of much of the software being created today — software that people and businesses should depend on — can be measured only in months rather than decades, and the relationships alongside.

In one of last year’s earlier YC batches, there was already a handful that just disappeared without even saying what they learned or saying goodbye to their customers. They just shut down their public presence and moved on to other things. And to me, that is not a sign of healthy iteration. That is a sign of breaking the basic trust you need to build a relationship with customers. A proper shutdown takes time and effort, and our current environment treats that as time not wisely spent. Better to just move on to the next thing.

This is extending to Open Source projects as well. All of a sudden, everything is an Open Source project, but many of them only have commits for a week or so, and then they go away because the motivation of the creator already waned. And in the name of experimentation, that is all good and well, but what makes a good Open Source project is that you think and truly believe that the person that created it is either going to stick with it for a very long period of time, or they are able to set up a strategy for succession, or they have created enough of a community that these projects will stand the test of time in one form or another.

My Time

Relatedly, I’m also increasingly skeptical of anyone who sells me something that supposedly saves my time. When all that I see is that everybody who is like me, fully onboarded into AI and agentic tools, seemingly has less and less time available because we fall into a trap where we’re immediately filling it with more things.

We all sell each other the idea that we’re going to save time, but that is not what’s happening. Any time saved gets immediately captured by competition. Someone who actually takes a breath is outmaneuvered by someone who fills every freed-up hour with new output. There is no easy way to bank the time and it just disappears.

I feel this acutely. I’m very close to the red-hot center of where economic activity around AI is taking place, and more than anything, I have less and less time, even when I try to purposefully scale back and create the space. For me this is a problem. It’s a problem because even with the best intentions, I actually find it very hard to create quality when we are quickly commoditizing software, and the machines make it so appealing.

I keep coming back to the trees. I’ve been maintaining Open Source projects for close to two decades now. The last startup I worked on, I spent 10 years at. That’s not because I’m particularly disciplined or virtuous. It’s because I or someone else, planted something, and then I kept showing up, and eventually the thing had roots that went deeper than my enthusiasm on any given day. That’s what time does! It turns some idea or plan into a commitment and a commitment into something that can shelter and grow other people.

Nobody is going to mass-produce a 50-year-old oak. And nobody is going to conjure trust, or quality, or community out of a weekend sprint. The things I value most — the projects, the relationships, the communities — are all things that took years to become what they are. No tool, no matter how fast, was going to get them there sooner.

We recently planted a new tree with Colin. I want it to grow into a large one. I know that’s going to take time, and I’m not in a rush.

March 20, 2026 12:00 AM UTC

March 19, 2026


"Michael Kennedy's Thoughts on Technology"

Use Chameleon templates in the Robyn web framework

TL;DR; Chameleon-robyn is a new Python package I created that brings Chameleon template support to the Robyn web framework. If you prefer Chameleon’s structured, HTML-first approach over Jinja and want to try Robyn’s Rust-powered performance, this package bridges the two.


People who have known me for a while know that I’m very much not a fan of the Jinja templating language, nor of the Django templating language, since it’s very similar. I dislike that you end up programming in HTML interlaced with code, rather than writing mostly HTML with a tightly restricted amount of code in it. While nowhere near perfect, I prefer Chameleon because it requires you to write well-structured templates. Sadly, I think Jinja won exactly because it allows you to write whatever Python you want in your HTML. For most frameworks, Jinja is the only templating language they support.
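To illustrate the difference: Chameleon’s TAL keeps control flow inside the markup’s own attribute syntax, so the template remains well-formed HTML, while Jinja interlaces a separate mini-language with the markup. A small sketch (hypothetical template, not from any of my apps):

```html
<!-- Chameleon/TAL: the loop is an attribute; the file stays valid HTML -->
<ul>
  <li tal:repeat="episode episodes">
    <a href="${episode.url}">${episode.title}</a>
  </li>
</ul>

<!-- The Jinja equivalent mixes in {% ... %} block tags -->
<ul>
  {% for episode in episodes %}
  <li><a href="{{ episode.url }}">{{ episode.title }}</a></li>
  {% endfor %}
</ul>
```

Because the TAL version is still parseable HTML, editors and validators can work with it directly, which is a big part of the appeal.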

Why migrate Chameleon templates to a new framework?

I’d love to try out some new frameworks, but I have so much existing Chameleon code that any migration will never include converting to Jinja, if I have a say in it. Not because of my dislike for it, but because such a conversion is incredibly error prone, and it would mean changing my entire web design, not just my code.

Here’s the code breakdown for just Talk Python Training.

That design category is 14,650 lines of HTML and 11,104 lines of CSS! If I can get Chameleon running on a framework, it will 100% reuse every line of that to perfection. If I cannot, I’m rewriting them all. No thanks.

How does Robyn use Rust to speed up Python web apps?

I’ve been thinking a lot about what would happen if our web frameworks actually ran in Rust. Right now I’m running Quart (async Flask) on top of Granian, so Rust is already the base of my web server and request processing. But there is a lot of infrastructure provided by Flask and Werkzeug, leading up to my code actually running, that is all based on Python.

Would it be a lot faster? Maybe. My exploration of this idea was inspired by TurboAPI, which did exactly what I was thinking about, but with Zig and for FastAPI. While I am not recommending people leave FastAPI, its headline, “FastAPI-compatible. Zig HTTP core. 22x faster,” does catch one’s attention.

Eventually I found my way to Robyn. Robyn merges Python’s async capabilities with a Rust runtime for reliable, scalable web solutions. Here are a few key highlights:

There’s this quite interesting performance graph from Robyn’s benchmarks. Of course, take it with all the caveats that benchmarks come with.

Benchmark comparing Robyn, FastAPI, Flask, and Django on request throughput

How to use Chameleon templates with Robyn

I want to try this framework on real projects that I’m running in production to see how they size up. However, given all of my web UI is written in Chameleon, there’s absolutely no way I’m converting to Jinja. I can hear everyone now, “So just use it for something simple and new, Michael.” For me that defeats the point. Thus, my obsession with getting Chameleon to work.

I created the integration for Chameleon for Flask with my chameleon-flask package. Could I do the same thing for Robyn?

It turns out that I can! Without further ado, introducing chameleon-robyn:

It’s super early days and I’m just starting to use this package for my prototype. I’m sure as I put it into production in a real app, I’ll see if it’s feature complete or not.

For now, it’s out there on GitHub and on PyPI. If Chameleon + Robyn sounds like an interesting combo to you as well, give this a try. PRs are welcome.

March 19, 2026 11:18 PM UTC


Talk Python to Me

#541: Monty - Python in Rust for AI

When LLMs write code to accomplish a task, that code has to actually run somewhere. And right now, the options aren't great. Spin up a sandboxed container and you're paying a full second of cold start overhead plus the complexity of another service. Let the LLM loose on your actual machine and... well, you'd better be watching. <br/> <br/> On this episode, I sit down with Samuel Colvin, creator of Pydantic, now at 10 billion downloads, to explore Monty, a Python interpreter written from scratch in Rust, purpose-built to run LLM-generated code. It starts in microseconds, is completely sandboxed by design, and can even serialize its entire state to a database and resume later. We dig into why this deliberately limited interpreter might be exactly what the AI agent era needs.<br/> <br/> <strong>Episode sponsors</strong><br/> <br/> <a href='https://talkpython.fm/training'>Talk Python Courses</a><br> <a href='https://talkpython.fm/devopsbook'>Python in Production</a><br/> <br/> <h2 class="links-heading mb-4">Links from the show</h2> <div><strong>Guest</strong><br/> <strong>Samuel Colvin</strong>: <a href="https://github.com/samuelcolvin?featured_on=talkpython" target="_blank" >github.com</a><br/> <br/> <strong>CPython</strong>: <a href="https://github.com/python/cpython?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>IronPython</strong>: <a href="https://ironpython.net?featured_on=talkpython" target="_blank" >ironpython.net</a><br/> <strong>Jython</strong>: <a href="https://www.jython.org?featured_on=talkpython" target="_blank" >www.jython.org</a><br/> <strong>Pyodide</strong>: <a href="https://pyodide.com?featured_on=talkpython" target="_blank" >pyodide.com</a><br/> <strong>monty</strong>: <a href="https://github.com/pydantic/monty?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>Pydantic AI</strong>: <a href="https://pydantic.dev/pydantic-ai?featured_on=talkpython" target="_blank" >pydantic.dev</a><br/> <strong>Python AI 
conference</strong>: <a href="https://pyai.events?featured_on=talkpython" target="_blank" >pyai.events</a><br/> <strong>bashkit</strong>: <a href="https://github.com/everruns/bashkit?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>just-bash</strong>: <a href="https://github.com/vercel-labs/just-bash?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>Narwhals</strong>: <a href="https://narwhals-dev.github.io/narwhals/?featured_on=talkpython" target="_blank" >narwhals-dev.github.io</a><br/> <strong>Polars</strong>: <a href="https://pola.rs?featured_on=talkpython" target="_blank" >pola.rs</a><br/> <strong>Strands Agents</strong>: <a href="https://aws.amazon.com/blogs/opensource/introducing-strands-agents-an-open-source-ai-agents-sdk/?featured_on=talkpython" target="_blank" >aws.amazon.com</a><br/> <strong>Running Pydantic’s Monty Rust sandboxed Python subset in WebAssembly</strong>: <a href="https://simonwillison.net/2026/Feb/6/pydantic-monty/?featured_on=talkpython" target="_blank" >simonwillison.net</a><br/> <strong>RustPython</strong>: <a href="https://github.com/RustPython/RustPython?featured_on=talkpython" target="_blank" >github.com</a><br/> <strong>Valgrind</strong>: <a href="https://valgrind.org?featured_on=talkpython" target="_blank" >valgrind.org</a><br/> <strong>CodSpeed</strong>: <a href="https://codspeed.io?featured_on=talkpython" target="_blank" >codspeed.io</a><br/> <br/> <strong>Watch this episode on YouTube</strong>: <a href="https://www.youtube.com/watch?v=TjTV4jlMcRw" target="_blank" >youtube.com</a><br/> <strong>Episode #541 deep-dive</strong>: <a href="https://talkpython.fm/episodes/show/541/monty-python-in-rust-for-ai#takeaways-anchor" target="_blank" >talkpython.fm/541</a><br/> <strong>Episode transcripts</strong>: <a href="https://talkpython.fm/episodes/transcript/541/monty-python-in-rust-for-ai" target="_blank" >talkpython.fm</a><br/> <br/> <strong>Theme Song: Developer Rap</strong><br/>
<strong>🥁 Served in a Flask 🎸</strong>: <a href="https://talkpython.fm/flasksong" target="_blank" >talkpython.fm/flasksong</a><br/> <br/> <strong>---== Don't be a stranger ==---</strong><br/> <strong>YouTube</strong>: <a href="https://talkpython.fm/youtube" target="_blank" ><i class="fa-brands fa-youtube"></i> youtube.com/@talkpython</a><br/> <br/> <strong>Bluesky</strong>: <a href="https://bsky.app/profile/talkpython.fm" target="_blank" >@talkpython.fm</a><br/> <strong>Mastodon</strong>: <a href="https://fosstodon.org/web/@talkpython" target="_blank" ><i class="fa-brands fa-mastodon"></i> @talkpython@fosstodon.org</a><br/> <strong>X.com</strong>: <a href="https://x.com/talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @talkpython</a><br/> <br/> <strong>Michael on Bluesky</strong>: <a href="https://bsky.app/profile/mkennedy.codes?featured_on=talkpython" target="_blank" >@mkennedy.codes</a><br/> <strong>Michael on Mastodon</strong>: <a href="https://fosstodon.org/web/@mkennedy" target="_blank" ><i class="fa-brands fa-mastodon"></i> @mkennedy@fosstodon.org</a><br/> <strong>Michael on X.com</strong>: <a href="https://x.com/mkennedy?featured_on=talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @mkennedy</a><br/></div>

March 19, 2026 07:38 PM UTC


Real Python

Quiz: How to Add Python to PATH

In this quiz, you’ll test your understanding of Add Python to PATH.

By working through this quiz, you’ll review what the PATH environment variable is and how the shell searches its directories in order.

You’ll also practice adding Python to PATH on Windows, Linux, and macOS, prepending directories with export, refreshing your session with source, and managing unwanted entries so the python command works as expected.
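The search behavior the quiz covers can be poked at from Python itself with only the standard library (the directory prepended below is a made-up placeholder):

```python
import os
import shutil

# PATH is one string of directories separated by os.pathsep
# (":" on Linux/macOS, ";" on Windows). The shell checks them in order.
path_dirs = os.environ["PATH"].split(os.pathsep)

# shutil.which performs the same front-to-back search the shell does,
# returning the first matching executable or None.
python_exe = shutil.which("python") or shutil.which("python3")

# Prepending a directory makes its executables win the search.
os.environ["PATH"] = os.pathsep.join(["/opt/mytools/bin"] + path_dirs)
```

This is also why the order in which you `export PATH=...` matters: the first directory containing a matching name wins.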



March 19, 2026 12:00 PM UTC


Tryton News

Release 0.1.0 of hatch-tryton

We are proud to announce version 0.1.0, the first release of hatch-tryton.

hatch-tryton is a hatchling plugin that manages Tryton dependencies.

We will rely on this tool to upgrade Tryton’s packages to pyproject.toml for future releases.

hatch-tryton is available on PyPI as hatch-tryton 0.1.0

1 post - 1 participant

Read full topic

March 19, 2026 10:18 AM UTC


Nicola Iarocci

Eve 2.3.0

Eve v2.3.0 was just released on PyPI. It adds optimize_pagination_for_speed as a resource-level setting, giving you granular control by overriding the global option of the same name. Many thanks to Emanuele Di Giacomo for contributing to the project.
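In an Eve settings module, the new override might look roughly like this (a hedged sketch based on the release note; the resource names are made up):

```python
# Global default: use fast pagination everywhere.
OPTIMIZE_PAGINATION_FOR_SPEED = True

DOMAIN = {
    "people": {},  # inherits the global setting
    "invoices": {
        # New in 2.3.0: per-resource override, e.g. where exact
        # document counts matter more than response speed.
        "optimize_pagination_for_speed": False,
    },
}
```

Previously the trade-off could only be made once, for the whole API; the resource-level key lets each endpoint choose.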

March 19, 2026 09:38 AM UTC