Planet Python
Last update: April 01, 2026 04:46 PM UTC
April 01, 2026
"Michael Kennedy's Thoughts on Technology"
Cutting Python Web App Memory Over 31%
tl;dr: I cut 3.2 GB of memory usage from our Python web apps using five techniques: async workers, import isolation, the Raw+DC database pattern, local imports for heavy libraries, and disk-based caching. Here are the exact before-and-after numbers for each optimization.
Over the past few weeks, I’ve been ruthlessly focused on reducing memory usage on my web apps, APIs, and daemons. I’ve been following the one big server pattern for deploying all the Talk Python web apps, APIs, background services, and supporting infrastructure.
There are a ridiculous number of containers running to make everything go around here at Talk Python (23 apps, APIs, and database servers in total).
Even with that many apps running, the actual server CPU load is quite low. But memory usage is creeping up. The server was running at 65% memory usage on a 16GB server. While that may be fine - the server’s not that expensive - I decided to take some time and see if there were some code level optimizations available.
What I learned was interesting and much of it was a surprise to me. So, I thought I’d share it here with you. I was able to drop the memory usage by 3.2GB basically for free just by changing some settings, changing how I import packages in Python, and proper use of offloading some caching to disk.
How much memory were the Python apps using before optimization?
For this blog post, I’m going to focus on just two applications. However, I applied this to most of the apps that we own the source code for (as opposed to Umami, etc). Take these as concrete examples more than the entire use case.
Here are the initial stats we’ll be improving on along the way.
| Application | Starting Memory |
|---|---|
| Talk Python Training | 1,280 MB |
| Training Search Indexer Daemon | 708 MB |
| Total | 1,988 MB |
How async workers and Quart cut Python web app memory in half
I knew that starting with a core architectural change in how we run our apps and access our database would have huge implications. You see, we’re running our web apps as a web garden, one orchestrator, multiple worker processes via the lovely Granian.
I’ve wanted to migrate our remaining web applications to some fully asynchronous application framework. See Talk Python rewritten in Quart (async Flask) for a detailed discussion on this topic. If we have a truly async-capable application server (Granian) and a truly async web framework (Quart), then we can change our deployment style to one worker running fully asynchronous code. Much less blocking code means a single worker is more responsive now. Thus we can work with a single worker instance.
This one change alone would cut the memory usage nearly in half. To facilitate this, we needed two actions:
Action 1: Rewrite Talk Python Training in Quart
The first thing I had to do was rewrite Talk Python Training, the app I was mostly focused on at the time, in Quart. This was a lot of work. You might not know it from the outside, but Talk Python Training is a significant application.

178,000 lines of code! Rewriting this from the older framework, Pyramid, to async Flask (aka Quart), was a lot of work, but I pulled it off last week.
Action 2: Rewrite data access to raw + dc design pattern
Data access was based on MongoEngine, a barely maintained older ODM for talking to MongoDB that does not support async code and never will. Even though we have Quart as a runtime option, we can hardly do anything async without an async-capable data access layer.
So I spent some time removing MongoEngine and implementing the Raw + DC design pattern. That saved us a ton of memory, facilitated writing async queries, and almost doubled our requests per second.
I actually wrote this up in isolation here with some nice graphs: Raw+DC Database Pattern: A Retrospective. Switching from a formalized ODM to raw database queries along with data classes with slots saved us 100 MB per worker process, or in this case, 200 MB of working memory. Given that it also sped up the app significantly, that’s a serious win.
| Change | Memory Saved | Bonus |
|---|---|---|
| Rewrite to Quart (async Flask) | Enabled single-worker mode | Async capable |
| Raw + DC database pattern | 200 MB (100 MB per worker) | Almost 2x requests/sec |
How switching to a single async Granian worker saved 542 MB
Now that our web app runs asynchronously and our database queries fully support it, we could trim our web garden down to a single, fully asynchronous worker process using Granian. When every request runs in blocking mode, one worker is not ideal. But now the requests all interleave using Python concurrency.
This brought things down to a whopping 536 MB in total (a savings of 542 MB!) I could have stopped there, and things would have been excellent compared to where we were before, but I wanted to see what else was possible.
| Metric | Value |
|---|---|
| Before (multi-worker) | 1,280 MB |
| After (single async worker and raw+dc) | 536 MB |
| Savings | 542 MB |
How isolating Python imports in a subprocess cut memory from 708 MB to 22 MB
The next biggest problem was the Talk Python Training search indexer. It reads literally everything from the multi-gigabyte database backing Talk Python Training, indexes it, and stores it in a custom data structure that we use for our ultra-fast search. It was running at 708 MB in its own container.
Surely, this could be more efficient.
And boy, was it. There were two main takeaways here. I noticed first that even if no indexing ran, just at startup, this process was using almost 200 megabytes of memory. Why? Import chains.
The short version is that it was importing almost all of the files of Talk Python Training and their third-party dependencies, because that was just the easiest way to write the code and because of PEP 8. When the app starts, it imports a few utilities from Talk Python Training. That, in turn, pulls in the entire mega application plus all of the dependencies the application itself uses, bloating the memory way, way up.
All this little daemon needs to do is every few hours re-index the site. It sits there, does nothing in particular related to our app, loops around, waits for exit commands from Docker, and if enough time has elapsed, then it runs the search process with our code.
We could move all of that search indexing code into a subprocess. And only that subprocess’s code actually imports anything of significance. When the search index has to run, that process kicks off for maybe 30 seconds, builds the index, uses a bunch of memory, but once the indexing is done, it shuts down and even the imports are unloaded.
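The shape of that isolation, sketched with a trivial stand-in script (in the real daemon you would pass your indexer module instead; all names here are hypothetical):

```python
import subprocess
import sys

def run_in_subprocess(script: str) -> str:
    # The parent daemon stays tiny: heavy imports happen only inside
    # the child process, and the OS reclaims all of that memory,
    # imports included, the moment the child exits.
    result = subprocess.run(
        [sys.executable, "-c", script],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

# In the real daemon this would be something like
# "import indexer; indexer.rebuild()" (hypothetical names);
# here a trivial script stands in.
output = run_in_subprocess("import json; print(json.dumps({'indexed': True}))")
```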
The result? Amazing. The search indexer went from 708 MB to just 22 MB! All we had to do was isolate the imports into their own separate file and then run that file in a Python subprocess. That’s it: 32x less memory used.
| Metric | Value |
|---|---|
| Before (monolithic process) | 708 MB |
| After (subprocess isolation) | 22 MB |
| Reduction | 32x |
How much memory do Python imports like boto3, pandas, and matplotlib use?
When we write simple code such as `import boto3`, it looks like no big deal. You’re just telling Python you need to use this library. But as I hinted at above, what it actually does is load that library in its entirety: any static or singleton-style data is created, along with all of that library’s transitive dependencies.
Unbeknownst to me, boto3 takes a ton of memory.
| Import Statement | Memory Cost (Python 3.14) |
|---|---|
| `import boto3` | 25 MB |
| `import matplotlib` | 17 MB |
| `import pandas` | 44 MB |
Yet for our application, these are very rarely used. Maybe we need to upload a file to blob storage using boto3, or use matplotlib and pandas to generate some report that we rarely run.
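If you want to estimate an import's cost on your own machine, the standard library's tracemalloc gives a rough number. It only tracks Python-level allocations, so real RSS growth for C-extension-heavy packages like pandas will be higher than what it reports:

```python
import tracemalloc

def import_cost_mb(module_name: str) -> float:
    # Measure Python-level allocations caused by an import.
    # Caveat: tracemalloc misses native (C extension) allocations,
    # so treat the result as a lower bound on real memory growth.
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    __import__(module_name)
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return (after - before) / 1024 / 1024

# A stdlib module stands in here for boto3/pandas/matplotlib.
cost = import_cost_mb("decimal")
```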
By moving these to be local imports, we are able to save a ton of memory. What do I mean by that? Simply don’t follow PEP 8 here - instead of putting these at the top of your file, put them inside of the functions that use them, and they will only be imported if those functions are called.
```python
def generate_usage_report():
    import matplotlib
    import pandas

    # Write code with these libs...
```
Now eventually, this generate_usage_report function probably will get called, but that’s where you go back to DevOps. We can simply set a time-to-live on the worker process. Granian will gracefully shut down the worker process and start a new one every six hours or once a day or whatever you choose.
PEP 810 – Explicit lazy imports
This makes me very excited for Python 3.15. That’s where the lazy imports feature will land. That should make this behavior entirely automatic without the need to jump through hoops.
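Sketched from the PEP (the syntax is proposed, not yet shipping, so treat this as illustrative only):

```
lazy import pandas  # binds a lazy reference; nothing loads yet

def generate_usage_report():
    # The actual import work happens here, on first use.
    return pandas.DataFrame(...)
```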
How moving Python caches to diskcache reduced memory usage
Finally I addressed our caches. This was probably the smallest of the improvements, but still relevant. We had quite a few things that were small to medium-sized caches being kept in memory. For example, the site takes a fragment of markdown which is repeatedly used, and instead of regenerating it every time, we would stash the generated markdown and just return that from cache.
We moved most of this caching to diskcache. If you want to hear me and Vincent nerd out on how powerful this little library is, listen to the Talk Python episode diskcache: Your secret Python perf weapon.
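diskcache exposes this through its Cache object and memoize() decorator; as a dependency-free sketch of the underlying idea (a hypothetical renderer, with stdlib shelve standing in for diskcache), the rendered fragment is keyed by a hash of its source and kept on disk rather than in the worker's heap:

```python
import hashlib
import os
import shelve
import tempfile

CACHE_PATH = os.path.join(tempfile.gettempdir(), "markdown_cache")

def render_markdown(source: str) -> str:
    # Stand-in for a real markdown renderer.
    return f"<p>{source}</p>"

def render_cached(source: str) -> str:
    # Key the cache on a content hash so edits invalidate naturally.
    key = hashlib.sha256(source.encode()).hexdigest()
    with shelve.open(CACHE_PATH) as cache:
        if key not in cache:
            cache[key] = render_markdown(source)
        return cache[key]

html = render_cached("Hello **world**")
```

The trade is deliberate: a disk read per cache hit in exchange for keeping the fragments out of resident memory.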
Total memory savings: from 1,988 MB to 472 MB

So where are things today after applying these optimizations?
| Application | Before | After | Savings |
|---|---|---|---|
| Talk Python Training | 1,280 MB | 450 MB | 1.8x |
| Training Search Indexer Daemon | 708 MB | 22 MB | 32x |
| Total | 1,988 MB | 472 MB | 3.2x |
Applying these techniques and more to all of our web apps reduced our server load by 3.2 GB of memory. Memory is often the most expensive and scarce resource in production servers. This is a huge win for us.
Real Python
Python Classes: The Power of Object-Oriented Programming
Python classes are blueprints for creating objects that bundle data and behavior together. Using the class keyword, you define attributes to store state and methods to implement behavior, then create as many instances as you need. Classes are the foundation of object-oriented programming (OOP) in Python and help you write organized, reusable, and maintainable code.
By the end of this tutorial, you’ll understand that:
- A Python class is a reusable blueprint that defines object attributes and methods.
- Instance attributes hold data unique to each object, while class attributes are shared across all instances.
- Python classes support single and multiple inheritance, enabling code reuse through class hierarchies.
- Abstract base classes (ABCs) define formal interfaces that subclasses must implement.
- Classes enable polymorphism, allowing you to use different object types interchangeably through shared interfaces.
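The attribute distinction in the second bullet fits in a few lines (an illustrative class, not taken from the tutorial):

```python
class Dog:
    # Class attribute: one value shared by every instance.
    species = "Canis familiaris"

    def __init__(self, name):
        # Instance attribute: unique to each object.
        self.name = name

fido = Dog("Fido")
rex = Dog("Rex")
```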
To get the most out of this tutorial, you should be familiar with Python variables, data types, and functions. Some experience with object-oriented programming (OOP) is a plus, but you’ll cover all the key concepts you need here.
Get Your Code: Click here to download your free sample code that shows you how to build powerful object blueprints with classes in Python.
Take the Quiz: Test your knowledge with our interactive “Python Classes - The Power of Object-Oriented Programming” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Python Classes - The Power of Object-Oriented Programming
In this quiz, you'll test your understanding of Python classes, including attributes, methods, inheritance, and object-oriented programming concepts.
Getting Started With Python Classes
Python is a multiparadigm programming language that supports object-oriented programming (OOP) through classes that you can define with the class keyword. You can think of a class as a piece of code that specifies the data and behavior that represent and model a particular type of object.
What is a class in Python? A common analogy is that a class is like the blueprint for a house. You can use the blueprint to create several houses and even a complete neighborhood. Each concrete house is an object or instance that’s derived from the blueprint.
Each instance can have its own properties, such as color, owner, and interior design. These properties carry what’s commonly known as the object’s state. Instances can also have different behaviors, such as locking the doors and windows, opening the garage door, turning the lights on and off, watering the garden, and more.
In OOP, you commonly use the term attributes to refer to the properties or data associated with a specific object of a given class. In Python, attributes are variables defined inside a class with the purpose of storing all the required data for the class to work.
Similarly, you’ll use the term methods to refer to the different behaviors that objects will show. Methods are functions that you define within a class. These functions typically operate on or with the attributes of the underlying instance or class. Attributes and methods are collectively referred to as members of a class or object.
You can write classes to model the real world. These classes will help you better organize your code and solve complex programming problems.
For example, you can use classes to create objects that emulate people, animals, vehicles, books, buildings, cars, or other objects. You can also model virtual objects, such as a web server, directory tree, chatbot, file manager, and more.
Finally, you can use classes to build class hierarchies. This way, you’ll promote code reuse and remove repetition throughout your codebase.
In this tutorial, you’ll learn a lot about classes and all the cool things that you can do with them. To kick things off, you’ll start by defining your first class in Python. Then you’ll dive into other topics related to instances, attributes, and methods.
Defining a Class in Python
To define a class, you need to use the class keyword followed by the class name and a colon, just like you’d do for other compound statements in Python. Then you must define the class body, which will start at the next indentation level:
```python
class ClassName:
    <body>
```
In a class’s body, you can define attributes and methods as needed. As you already learned, attributes are variables that hold the class data, while methods are functions that provide behavior and typically act on the class data.
Note: In Python, the body of a given class works as a namespace where attributes and methods live. You can only access those attributes and methods through the class or its objects.
As an example of how to define attributes and methods, say that you need a Circle class to model different circles in a drawing application. Initially, your class will have a single attribute to hold the radius. It’ll also have a method to calculate the circle’s area:
circle.py
```python
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def calculate_area(self):
        return math.pi * self.radius ** 2
```
In this code snippet, you define Circle using the class keyword. Inside the class, you write two methods. The .__init__() method has a special meaning in Python classes. This method is known as the object initializer because it defines and sets the initial values for the object’s attributes. You’ll learn more about this method in the Instance Attributes section.
The second method of Circle is conveniently named .calculate_area() and will compute the area of a specific circle by using its radius. In this example, you’ve used the math module to access the pi constant as it’s defined in that module.
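For example, you can instantiate the class and call its method (the class is repeated here so the snippet runs on its own):

```python
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def calculate_area(self):
        return math.pi * self.radius ** 2

# Create an instance with a radius of 2 and compute its area.
circle = Circle(radius=2)
area = circle.calculate_area()  # pi * 2**2
```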
Read the full article at https://realpython.com/python-classes/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Quiz: Exploring Keywords in Python
In this quiz, you’ll test your understanding of Exploring Keywords in Python.
By working through this quiz, you’ll revisit how to identify Python keywords, understand the difference between regular and soft keywords, categorize keywords by purpose, and avoid common pitfalls with deprecated keywords.
Tryton News
Tryton News April 2026
During the last month we focused on fixing bugs, improving behaviour, and speeding up slow operations - building on the changes from our last release. We also added some new features which we would like to introduce to you in this newsletter.
For an in depth overview of the Tryton issues please take a look at our issue tracker or see the issues and merge requests filtered by label.
Changes for the User
Sales, Purchases and Projects
Now we add support for the pick-up delivery method from Shopify.
We now add a number field to the project work efforts, based on a default sequence.
Now we use the party lang functionality to set the language for the taxes and chats.
We now add support for gift card products on Shopify.
We now add a field that indicates if an invoice is sent via Peppol to help on filtering.
Now we unpublish products from the web-shops when they are deactivated.
We now allow the Peppol admin access group to create incoming documents.
Now we improve the address schema for web-shop usage and add attn to Party Addresses.
We now allow copying the resources from sale rental to its invoices.
Now we move the Purchase Lines in its own tab on the Production form.
We now allow the manual method on sale.
Now the sale is always confirmed when a Shopify order has payment terms.
Accounting, Invoicing and Payments
Now we set the related-to field of an account statement line to invoice when the statement line overpays an invoice.
Now we forbid a zero amount in the draft and processing states for manual payments.
We now accrue and allocate rounding errors when using multiple taxes to calculate correct total amounts.
Now we allow cash-rounding with the opposite method.
We now add UNECE codes to Belgian taxes.
Now we add the VAT exemption code on tax for electronic invoices in the EU.
We now add payment means to invoices to be used in UBL and UNCEFACT.
Now we add an optional account to the list of payable/receivable lines to be able to reconcile lines with the same accounts.
Stock, Production and Shipments
Now we implement the management of ethanol in stock.
We now set the default locations when adding manually stock moves to production.
Now we add a reference field on quality inspections.
We now add a wizard to pack shipments.
User Interface
As a first step to add labels inside the widgets, we now add a generic management for the label style in form widgets.
Now we strip the user field from white-space characters in the login window.
We now add a new right drop-down menu in SAO, including Logout and Help menu items.
Now we add a visual hint on widgets of modified fields.
New Modules
We now support the common European requirements for excise products.
Now we add the sale_project_task module which creates tasks for services sold.
Now we add the account_payment_check module which manages and prints checks.
New Releases
We released bug fixes for the currently maintained long term support series 7.0 and 6.0, and for the penultimate series 7.8 and 7.6.
Security
Please update your systems to take care of a security related bug we found last month.
Changes for the System Administrator
We now use cookies to store authentication information in Sao.
Now we order the cron logs by descending ID to show the last cron-runs first.
Changes for Implementers and Developers
Now we allow specifying subdirectories in the tryton.cfg file and include them when activating a module.
We now replace the ShopifyAPI library by shopifyapp.
Now we add a contextual _log key to force the logging of events.
We now add notify_user to ModelStorage to store them in an efficient way at the end of the transaction.
Now we add a deprecation warning when a new API version for Stripe is available.
We now show missing modules when running tests.
Now we remove the obsolete methods dump_values and load_values in ir.ModelData.
We now also check the button states when testing access.
Now we introduced the option model.fields.Binary.queue_for_removal which makes it possible to remove a file from the filestore on setting a binary field to None.
We now make the retrieval of metadata via documentation build very quiet.
Now we remove the default import of wizards in our cookiecutter module template.
Now we upgrade to Psycopg 3.
We now upgrade our setup to pyproject using hatchling as build-system.
Now we replace C3.js by its fork billboard.js as it seems better maintained.
We now remove the Sao dependency to bower.
Now we remove the internal name from the record name of the trytond.model.fields.
We now implement a recursive search for unused XML views in our tests.
Changes for Translators
Now the translation mechanism in Tryton finds the translations of type view.
Python⇒Speed
Timesliced reservoir sampling: a new(?) algorithm for profilers
Imagine you are processing a stream of events, of unknown length. It could end in 3 seconds, it could run for 3 months; you simply don’t know. As a result, storing the whole stream in memory or even on disk is not acceptable, but you still need to extract relevant information.
Depending on what information you need, choosing a random sample of the stream will give you almost as good information as storing all the data. For example, consider a performance profiler, used to find which parts of your running code are slowest. Many profilers record a program’s callstack every few microseconds, resulting in a stream of unlimited size: you don’t know how long the program will run. For this use case, a random sample of callstacks, say 2000 of them, can usually give you sufficient information to do performance optimization.
Why does this work?
- Slow code will result in the same callstack being repeated.
- A random sample of callstacks is more likely to contain callstacks that repeat a lot.
- Thus, a random sample is more likely to include slow code, the code you specifically want to identify with your profiler.
When you need to extract a random sample from a stream of unknown length, a common solution is the family of algorithms known as reservoir sampling. In this article you will learn:
- How basic reservoir sampling works.
- Some problems with reservoir sampling, motivated by a profiler that wants to generate a timeline.
- A (new?) variant of reservoir sampling that allows you to ensure samples are spread evenly across time.
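As a primer before the article's variant, the classic form (commonly known as Algorithm R) can be sketched in a few lines; it keeps a k-item sample in which every stream element ends up with equal probability, without knowing the stream's length in advance:

```python
import random

def reservoir_sample(stream, k):
    # Algorithm R: fill the reservoir with the first k items, then
    # replace a random slot with probability k/i for the i-th item.
    reservoir = []
    for i, item in enumerate(stream, start=1):
        if i <= k:
            reservoir.append(item)
        else:
            j = random.randrange(i)  # uniform in [0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# A fixed-size sample of a long "stream" of callstack stand-ins.
sample = reservoir_sample(range(10_000), k=20)
```

Memory stays O(k) no matter how long the stream runs, which is exactly the property a profiler needs.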
March 31, 2026
PyCoder’s Weekly
Issue #728: Django With Alpine, Friendly Classes, SQLAlchemy, and More (March 31, 2026)
#728 – MARCH 31, 2026
View in Browser »
Django Apps With Alpine AJAX, Revisited
The author has been modifying his approach to Django projects with Alpine AJAX over the last nine months. This post describes what he’s changed and how his process has improved.
LOOPWERK
Making Friendly Classes
What’s a friendly class? One that accepts sensible arguments, has a nice string representation, and supports equality checks. Read on to learn how to write them.
TREY HUNNER
Right-Size Your Celery & RQ Workers
CPU doesn’t tell you if tasks are piling up. Queue latency does. Autoscale your workers based on the metric that matters →
JUDOSCALE sponsor
Understanding CRUD Operations in SQL
Learn how CRUD operations work in SQL by writing raw SQL queries with SQLite and using SQLAlchemy as an ORM in Python.
REAL PYTHON course
PyCon Austria April 19-20, Registrations Open
PYCON.AT • Shared by Horst JENS
PyOhio 2026 Call for Proposals Now Open!
PRETALX.COM • Shared by Anurag Saxena
Articles & Tutorials
When Vectorized Arrays Aren’t Enough
This is a deep dive post about vectorized arrays in NumPy and how some optimizations work and some do not. There is also a follow-up as well: Vectorized Hardware Instructions Rule Everything Around Me.
NRPOSNER
Zensical: A Modern Static Site Generator
Talk Python interviews Martin Donath a contributor to MKDocs and recent creator of the new Zensical package. They talk about why he has built something new and what lessons he’s applied to the new project.
TALK PYTHON podcast
Dignified Python: 10 Rules to Improve your LLM Agents
At Dagster, we created “Dignified Python” to improve LLM-generated code by embedding clear coding principles into prompts. Instead of messy, pattern-based output, our agents produce code that reflects intent, consistency, and team standards. Here are the 10 rules from our Claude prompt →
DAGSTER LABS sponsor
Smello for HTTP Requests
Roman built Smello, an open-source tool that captures outgoing HTTP requests from your Python code and displays them in a local web dashboard. Learn why he did it and how he uses it to debug API access.
ROMAN IMANKULOV
Gotchas With SQLite in Production
What you need to know before putting a Django project that uses SQLite in production. This is part 5 of a series that includes information on write-ahead logging, locking errors, performance, and more.
ANŽE
Comparing Portable DataFrame Tools in Python
This article explores three tools for DataFrame portability in Python: Ibis, Narwhals, and Fugue. Learn when to use each to write code that runs across multiple backends.
CODECUT.AI • Shared by Khuyen Tran
Lessons From Pyre That Shaped Pyrefly
Pyrefly is a Python type checker from the same team that developed pyre. This article discusses lessons from developing Pyre that influenced how they designed Pyrefly.
PYREFLY
Connecting MongoDB to Python
This tutorial is a hands-on introduction to connecting MongoDB with Python using PyMongo, guiding readers through the essential first steps in just 10 minutes.
ANAIYA RAISINGHANI • Shared by Tony Kim
How Do Large Companies Manage CI/CD at Scale?
What changes for CI/CD when your company grows to hundreds of developers, dozens of services, and thousands of daily builds?
PETE MILORAVAC
Apply to Join the PSF Meetup Pro Network
The PSF helps support approved Python Meetup groups and the process to become one has recently been re-opened.
PYTHON SOFTWARE FOUNDATION
Inspect a Lazy Import in Python 3.15
This quick “things I learned” post shows you how to inspect a lazy import object in Python 3.15.
MATHSPP.COM
Projects & Code
Events
Weekly Real Python Office Hours Q&A (Virtual)
April 1, 2026
REALPYTHON.COM
Canberra Python Meetup
April 2, 2026
MEETUP.COM
Sydney Python User Group (SyPy)
April 2, 2026
SYPY.ORG
Python Leiden User Group
April 2, 2026
PYTHONLEIDEN.NL
PyDelhi User Group Meetup
April 4, 2026
MEETUP.COM
Melbourne Python Users Group, Australia
April 6, 2026
J.MP
PyBodensee Monthly Meetup
April 6, 2026
PYBODENSEE.COM
PyCon Lithuania 2026
April 8 to April 11, 2026
PYCON.LT
Happy Pythoning!
This was PyCoder’s Weekly Issue #728.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Real Python
Adding Python to PATH
You may need to add Python to PATH if you’ve installed Python, but typing python on the command line doesn’t seem to work. You might see a message saying that python isn’t recognized, or you might end up running the wrong version of Python.
A common fix for these problems is adding Python to the PATH environment variable. In this video course, you’ll learn how to add Python to PATH. You’ll also learn what PATH is and why it’s essential for tools like the command line to be able to find your Python installation.
Quiz: Test-Driven Development With pytest
In this quiz, you’ll test your understanding of Test-Driven Development With pytest.
By working through this quiz, you’ll revisit creating and executing Python unit tests with pytest, practicing test-driven development, finding bugs before users, and checking code coverage.
Use this quiz to confirm what you learned and spot gaps to review. Return to the video course for hands-on examples and guidance.
PyCon
Introducing the 8 Companies on Startup Row at PyCon US 2026
Each year at PyCon US, Startup Row highlights a select group of early-stage companies building ambitious products with Python at their core. The 2026 cohort reflects a rapidly evolving landscape, where advances in AI, data infrastructure, and developer tooling are reshaping how software is built, deployed, and secured.
This year’s companies aim to solve an evolving set of problems facing independent developers and large-scale organizations alike: securing AI-driven applications, managing multimodal data, orchestrating autonomous agents, automating complex workflows, and extracting insight from increasingly unstructured information. Across these domains, Python continues to serve as a unifying layer: encouraging experimentation, enabling systems built to scale, and connecting open-source innovation with real-world impact.
Startup Row brings these emerging teams into direct conversation with the Python community at PyCon US. Throughout the conference, attendees can meet founders, explore new tools, and see firsthand how these companies are applying Python to solve meaningful problems. For the startups in attendance, it’s an opportunity to share their work, connect with users and collaborators, and contribute back to the ecosystem that helped shape them. Register now to experience Startup Row and much more at PyCon US 2026.
Supporting Startups at PyCon US
There are many ways to support Startup Row companies, during PyCon US and long after the conference wraps:
- Stop by Startup Row: Spend a few minutes with each team, ask what they’re building, and see their products in action.
- Try their tools: Whether it’s an open-source library or a hosted service, hands-on usage (alongside constructive feedback) is one of the most valuable forms of support. If a startup seems compelling, consider a pilot project and become a design partner.
- Share feedback: Early-stage teams benefit enormously from thoughtful questions, real-world use cases, and honest perspectives from the community.
- Contribute to their open source projects: Many Startup Row companies are deeply rooted in open source and welcome bug reports, documentation improvements, and pull requests. Contributions and constructive feedback are always appreciated.
- Help spread the word: If you find something interesting, tell a friend, post about it, or share it with your team. (And if you're posting to social media, consider using tags like #PyConUS and #StartupRow to share the love.)
- Explore opportunities to work together: Many of these companies are hiring, looking for design partners, or open to collaborations; don’t hesitate to ask.
- But, most importantly, be supportive. Building a startup is hard, and every team is learning in real time. Curiosity, patience, and encouragement make a meaningful difference.
Meet Startup Row at PyCon US 2026
We’re excited to introduce the companies selected for Startup Row at PyCon US 2026.
Arcjet
Embedding security directly into application code is fast becoming as indispensable as logging, especially as AI services open new attack surfaces. Arcjet offers a developer‑first platform that lets teams add bot detection, rate limiting and data‑privacy checks right where the request is processed.
The service ships open‑source JavaScript and Python SDKs that run a WebAssembly module locally before calling Arcjet’s low‑latency decision API, ensuring full application context informs every security verdict. Both SDKs are released under a permissive open‑source license, letting developers integrate the primitives without vendor lock‑in while scaling usage through Arcjet’s SaaS tiered pricing.
The JavaScript SDK alone has earned ≈1.7 k GitHub stars and the combined libraries have attracted over 1,000 developers protecting more than 500 production applications. Arcjet offers a free tier and usage‑based paid plans, mirroring Cloudflare’s model to serve startups and enterprises alike.
Arcjet is rolling out additional security tools and deepening integrations with popular frameworks such as FastAPI and Flask, aiming to broaden adoption across AI‑enabled services. In short, Arcjet aims to be the security‑as‑code layer every modern app ships with.
CapiscIO
As multi‑agent AI systems become the backbone of emerging digital workflows, developers lack a reliable way to verify agent identities and enforce governance. CapiscIO steps into that gap, offering an open‑core trust layer built for the nascent agent economy.
CapiscIO offers cryptographic Trust Badges, policy enforcement, and tamper‑evident chain‑of‑custody wrapped in a Python SDK. Released under Apache 2.0, it ships a CLI, LangChain integration, and an MCP SDK that let agents prove identity without overhauling existing infrastructure.
The capiscio‑core repository on GitHub hosts the open‑source core and SDKs under Apache 2.0, drawing early contributors building agentic pipelines.
Beon de Nood, Founder & CEO, brings two decades of enterprise development experience and a prior successful startup to the table. “AI governance should be practical, not bureaucratic. Organizations need visibility into what they have, confidence in what they deploy, and control over how agents behave in production,” he says.
CapiscIO is continuously adding new extensions, expanding its LangChain and MCP SDKs, and preparing a managed agent‑identity registry for enterprises. In short, CapiscIO aims to be the passport office of the agent economy, handing each autonomous component an unspoofable ID and clear permissions.
Chonkie
The explosion of retrieval‑augmented generation (RAG) is unlocking AI’s ability to reason over ever‑larger knowledge bases. Yet the first step of splitting massive texts into meaningful pieces still lags behind.
Chonkie offers an open‑core suite centered on Memchunk, a Python library with Cython acceleration that delivers up to 160 GB/s throughput and ten chunking strategies under a permissive license. It also ships Catsu, a unified embeddings client for nine providers, and a lightweight ingestion layer; the commercial Chonkie Labs service combines them into a SaaS that monitors the web and synthesises insights.
Co‑founder and CEO Shreyash Nigam, who grew up in India and met his business partner in eighth grade, reflects the team’s open‑source ethos, saying “It’s fun to put a project on GitHub and see a community of developers crowd around it.” That enthusiasm underpins Chonkie’s decision to release its core tooling openly while building a commercial deep‑research service.
Backed by Y Combinator’s Summer 2025 batch, Chonkie plans to grow from four to six engineers and launch the next version of Chonkie Labs later this year, adding real‑time web crawling and multi‑modal summarization. In short, Chonkie aims to be the Google of corporate intelligence.
Pixeltable
Multimodal generative AI is turning simple datasets into sprawling collections of video, images, audio and text, forcing engineers to stitch together ad‑hoc pipelines just to keep data flowing. That complexity has created a new bottleneck for teams trying to move from prototype to production.
The open‑source Python library from Pixeltable offers a declarative table API that lets developers store, query and version multimodal assets side by side while embedding custom Python functions. Built with incremental update capabilities, combined lineage and schema tracking, and a development‑to‑production mirror, the platform also provides orchestration capabilities that keep pipelines reproducible without rewriting code.
The project has earned ≈1.6 k GitHub stars and a growing contributor base, closed a $5.5 million seed round in December 2024, and is already used by early adopters such as Obvio and Variata to streamline computer‑vision workflows.
Co‑founder and CTO Marcel Kornacker, who previously founded Apache Impala and co-founded Apache Parquet, says “Just as relational databases revolutionized web development, Pixeltable is transforming AI application development.”
The company's roadmap centers on launching Pixeltable Cloud, a serverless managed service that will extend the open core with collaborative editing, auto‑scaling storage and built‑in monitoring. In short, Pixeltable aims to be the relational database of multimodal AI data.
Skyvern
Manual browser work remains a hidden bottleneck for many teams, turning simple data‑entry tasks into fragile scripts that break on the slightest UI change. Skyvern’s open‑source agent is one of the tools reshaping how developers and non‑technical users automate the web.
The Skyvern library lets anyone build a no‑code browser agent that combines computer‑vision models with a large language model to see, plan, act, and validate each step of a web workflow. Its planner–actor–validator loop compiles successful runs into deterministic code, while the free open‑source core can be run locally or via Skyvern Cloud on a per‑automation pricing model.
The GitHub repository has attracted ≈20 k stars, drawing an active community of contributors who extend the framework and share evaluation datasets. The company monetizes through Skyvern Cloud, letting teams run agents without managing infrastructure.
Skyvern is preparing a release that tightens vision‑model integration, adds support for additional LLM providers, and launches a self‑serve dashboard aimed at non‑technical teams. In short, Skyvern aspires to be the Django of browser‑automation, pairing developer friendliness with production reliability.
SubImage
The sheer complexity of modern multi‑cloud environments turns security visibility into a labyrinth, and SubImage offers a graph‑first view that cuts through the noise.
It builds an infrastructure graph using the open‑source Cartography library (Apache‑2.0, Python), then highlights exploit chains as attack paths and applies AI models to prioritize findings based on ownership and contextual risk.
Cartography, originally developed at Lyft and now a Cloud Native Computing Foundation (CNCF) sandbox project, has ≈3.7 k GitHub stars and is used by over 70 organizations. SubImage’s managed service already protects security teams at Veriff and Neo4j, and the company closed a $4.2 million seed round in November 2025.
Co‑founder Alex Chantavy, an offensive‑security engineer, says “The most important tool was our internal cloud knowledge graph because it showed us a map of the easiest attack paths … One of the most effective ways to defend an environment is to see it the same way an attacker would.”
The startup is focusing on scaling its managed service and deepening AI integration as it targets larger enterprise customers. In short, SubImage aims to be the map of the cloud for defenders.
Tetrix
Private‑market data pipelines still rely on manual downloads and spreadsheet gymnastics, leaving analysts chasing yesterday’s numbers. Tetrix’s AI investment intelligence platform is part of a wave that brings automation to this lagging workflow.
Built primarily in Python, Tetrix automates document collection from fund portals and other sources, extracts structured data from PDFs and other unstructured sources using tool-using language models, then presents exposures, cash flows, and benchmarks through an interactive dashboard that also accepts natural‑language queries.
The company is growing quickly, doubling revenue quarter over quarter and, at least so far, maintains an impressive record of zero customer churn. In the coming year or so, Tetrix plans to triple its headcount from fifteen to forty‑five employees.
TimeCopilot
Time‑series forecasting has long been a tangled mix of scripts, dashboards, and domain expertise, and the recent surge in autonomous agents is finally giving it a unified voice. Enter TimeCopilot, an open‑source framework that brings agentic reasoning to the heart of forecasting.
The platform, built in Python under a permissive open‑source license, lets users request forecasts in plain English. It automatically orchestrates more than thirty models from seven families, including Chronos and TimesFM, while weaving large language model reasoning into each prediction. Its declarative API was born from co‑founder Azul Garza‑Ramírez’s economics background and her earlier work on TimeGPT for Nixtla (featured SR'23), evolving from a weekend experiment started nearly seven years ago.
The TimeCopilot/timecopilot repository has amassed roughly 420 stars on GitHub, with the release of OpenClaw marking a notable spike in community interest.
Upcoming plans include a managed SaaS offering with enterprise‑grade scaling and support, the rollout of a benchmarking suite to measure agentic forecast quality, and targeted use cases such as predicting cloud‑compute expenses for AI workloads.
Thank You's and Acknowledgements
Startup Row is a volunteer-driven program, co-led by Jason D. Rowley and Shea Tate-Di Donna (SR'15; Zana, acquired Startups.com), in collaboration with the PyCon US organizing team. Thanks to everyone who makes PyCon US possible.
We also extend a gracious thank-you to all startup founders who submitted applications to Startup Row at PyCon US this year. Thanks again for taking the time to share what you're building. We hope to help out in whatever way we can.
Good luck to everyone, and see you in Long Beach, CA!
March 30, 2026
"Michael Kennedy's Thoughts on Technology"
Raw+DC Database Pattern: A Retrospective
TL;DR; After migrating three production Python web apps from MongoEngine to the Raw+DC database pattern, I measured nearly 2x the requests per second, 18% less memory, and gained native async support. Raw+DC delivered real-world performance gains, not just synthetic benchmarks.
About a month ago, I wrote about a new design pattern I’m seeing gain traction in the software space: Raw+DC: The ORM pattern of 2026. This article generated a lot of interest and a lot of debate. The short version: instead of using an ORM or ODM, you write raw database queries paired with Python dataclasses for type safety. This gives AI coding assistants a much larger training base to work from, reduces dependency risk, and delivers comparable or better performance.
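To make the pattern concrete, here is a minimal sketch using the standard library's sqlite3 module. The `episodes` table, field names, and queries are my own illustration of the pattern, not code from the Talk Python apps (which use MongoDB):

```python
import sqlite3
from dataclasses import dataclass


# A hypothetical schema for illustration; the table and field
# names are invented, not taken from the article.
@dataclass
class Episode:
    id: int
    title: str
    views: int


def latest_episodes(conn: sqlite3.Connection, limit: int = 5) -> list[Episode]:
    # Raw SQL instead of an ORM query builder...
    rows = conn.execute(
        "SELECT id, title, views FROM episodes ORDER BY id DESC LIMIT ?",
        (limit,),
    ).fetchall()
    # ...paired with a dataclass for type safety at the boundary.
    return [Episode(*row) for row in rows]


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE episodes (id INTEGER PRIMARY KEY, title TEXT, views INTEGER)")
conn.executemany(
    "INSERT INTO episodes (title, views) VALUES (?, ?)",
    [("Intro to asyncio", 1200), ("Raw SQL revisited", 800)],
)

for ep in latest_episodes(conn):
    print(ep.title, ep.views)
```

The queries stay plain strings any driver (or AI assistant) understands, while everything after the fetch is typed Python.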
Putting Raw+DC into practice
Now that some time has passed and I’ve thought about it more, I’ve had a chance to migrate three of my most important web apps to Raw+DC: Talk Python the podcast, Talk Python Courses, and Python Bytes.
So how did it go? From a pure functionality perspective, it went great. There were maybe one to three problems per web app. That might not sound great, and I didn’t love it, but across thousands and thousands of lines of code per app, that’s a tiny percentage of issues compared with how much went right.
More importantly, I was able to remove a dependency on two faltering database libraries. MongoEngine, the library I’ll pull numbers from for Talk Python Training below, has not had a meaningful release in years. It was one of the two core blockers that prevented me from using async programming patterns on the website at all.
How much faster is Raw+DC than MongoEngine?
I said I imagined that we would save on memory and CPU costs, but did it actually pan out in a practical application? Synthetic benchmarks don’t always translate: we saw that Robyn, the web framework, benchmarks at 25 times faster than Flask, yet in practice it was almost a dead-even heat.
I’m thrilled to report that yes, the web app is much faster using Raw+DC.
Below is an apples-to-apples comparison for Talk Python Training using MongoEngine and the Raw+DC pattern.
| Metric | MongoEngine (ODM) | Raw+DC | Improvement |
|---|---|---|---|
| Requests/sec | baseline | ~1.75x | 1.75x faster |
| Response time | baseline | ~50% less | ~50% faster |
| Memory usage | baseline | 200 MB less | 18% less |

The memory story is really great as well. After letting the web app run for over 24 hours for each mode, we saw a 200 MB memory usage decrease using Raw+DC.

That amount of memory might still look high to you. But this Raw+DC transformation lays the groundwork for future work that should cut it in half again, down to about 500 MB for the full app running in production at equilibrium.
Is Raw+DC worth migrating to?
To me, this seems 100% worth it. I’ve gained four important things with Raw+DC.
- 1.75x the requests per second on the exact same hardware and codebase (sans data layer swap)
- 18% less memory usage with much more savings on the horizon
- New data layer natively supports async/await
- Removal of problematic, core data access library
All of those benefits, and we haven’t even touched on whether this new programming model is better for AI (it is).
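On the async point specifically: because queries are just driver calls, the pattern drops straight into async code. A rough sketch of the shape, where `fetch_one` is a stub standing in for a real async driver call (PyMongo's async client, asyncpg, etc.) and the `Course` fields are hypothetical:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Course:
    slug: str
    title: str


# Stand-in for an async driver call; a real app would await the
# database client here. Returns a raw record as a dict.
async def fetch_one(query: dict) -> dict:
    await asyncio.sleep(0)  # simulate I/O
    return {"slug": query["slug"], "title": "Async Python Course"}


async def get_course(slug: str) -> Course:
    doc = await fetch_one({"slug": slug})
    # Convert the raw record into a typed dataclass at the boundary.
    return Course(slug=doc["slug"], title=doc["title"])


course = asyncio.run(get_course("async-python"))
print(course)
```

No ORM layer has to grow async support first; the data layer is only as async as the driver you choose.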
PyCharm
What’s New in PyCharm 2026.1
Welcome to PyCharm 2026.1. This release doesn’t just add features – it rethinks how you build, debug, and scale Python projects. From a brand-new debugging engine powered by debugpy to first-class uv support on remote targets and expanded JavaScript support in the free tier, this version is all about removing friction and letting you focus on your code. Whether you’re working locally, over SSH, or inside Docker, PyCharm now adapts to your setup instead of the other way around.
In this post, we’ll explore the highlights of this update and show you how these improvements can streamline your daily workflow.
Standardizing the future of debugging with debugpy
PyCharm now offers the option to use debugpy as the default debugger backend, providing the industry-standard Debug Adapter Protocol (DAP) that aligns the IDE with the broader Python ecosystem. By replacing complex, legacy socket-waiting logic with a more stable connection model, race conditions and timing edge cases will no longer interfere with your debugging experience.
A modern foundation for Python development
The new engine provides full native support for PEP 669, utilizing Python 3.12’s low-impact monitoring API to significantly reduce debugger overhead compared to the legacy sys.settrace() approach. This ensures that your debugging sessions are faster and less intrusive. Furthermore, the migration introduces comprehensive asyncio support. You can now use the full suite of debugger tools, such as the debug console and expression evaluation, directly within async contexts for modern frameworks like FastAPI and aiohttp.
Reliability across environments
Beyond performance improvements, debugpy simplifies the Attach to Process experience by providing a standardized approach for Docker containers, remote servers on AWS, Azure, or GCP, and local running processes. For specialized workflows, we have introduced a new Attach to DAP run configuration. This allows you to connect to targets using the debugpy.listen() command, eliminating the friction of manual connection management and allowing you to focus on your code instead of debugging infrastructure.
Support for uv as a remote interpreter
Many developers work on projects where the code and dependencies live on a remote server – whether via SSH, in WSL, or inside Docker. By connecting PyCharm to a remote machine and using uv as the interpreter, you can keep the environment fully synchronized, ensure package management works as expected, and run projects smoothly – just as if everything were local.
Free professional web development for everyone
With PyCharm 2026.1, the core IDE experience continues to evolve as we bring a broader set of professional-grade web tools to all users for free. Everyone, from beginners to backend-first developers, now has access to a substantial set of JavaScript, TypeScript, and CSS features, as well as advanced navigation and code intelligence previously available only with a Pro subscription.
For a complete breakdown of all new features, check out this blog post.
Advancements in AI integration
PyCharm is evolving into an open platform that gives you the freedom to bring the AI tools of your choice directly into your professional development workflow. This release focuses on providing a flexible ecosystem where you can orchestrate the best models and agents available today.
The ACP Registry: Your gateway to new agents
Keeping up with the rapid pace of AI development can be a challenge, with new coding agents appearing almost daily. To help you navigate this dynamic landscape, we’ve launched the ACP Registry – a built-in directory of AI coding agents integrated directly into your IDE via the Agent Client Protocol.
Whether you want to experiment with open-source agents like OpenCode or specialized tools like Gemini CLI, you can now discover and install them in just a few clicks. If you have a custom setup or an agent that isn’t listed yet, you can easily add it via the acp.json configuration, giving you the flexibility to use your favorite tools, with no strings attached.
Native OpenAI Codex integration and BYOK
OpenAI Codex is now natively integrated into the JetBrains AI chat. This means you can tackle complex development tasks without switching to a browser or copy-pasting code between windows.
We’ve also introduced Bring Your Own Key (BYOK) support. You can now connect your own API keys from OpenAI, Anthropic, or other compatible providers – including local models – directly in the IDE settings. This allows you to choose the setup that fits your workflow and budget best, while keeping all your AI-powered development inside PyCharm.
Stay in the flow with next edit suggestions
Small changes in your code often trigger a cascade of mechanical follow-up edits. Adding a parameter to a function or renaming a symbol can lead to errors popping up across your entire file.
Next edit suggestions (NES) offer a smarter, lightweight alternative to asking an AI agent for a full rewrite. As you modify your code, PyCharm proactively predicts the most likely next changes and suggests them inline.
- Effortless consistency: Update all call sites across a file with a simple Tab Tab experience.
- Stay in control: Move step by step through changes rather than reviewing large, automated diffs.
- No quota required: Use NES without consuming the AI quota of your JetBrains AI Pro subscription.
This natural evolution of code completion keeps you in the flow, making those small cascading fixes feel almost effortless.
All of the updates mentioned above are just a glimpse of what’s new in PyCharm 2026.1.
There is even more under the hood, including performance improvements, stability upgrades, and thoughtful refinements across the IDE that make everyday development smoother and faster.
To explore the full list of updates, check out our What’s New page.
As always, we would love to hear your feedback. Your insights help us shape the future of PyCharm – and we cannot wait to see what you build next.
Real Python
How to Use Ollama to Run Large Language Models Locally
Running Ollama in your terminal allows you to start chatting with a local large language model (LLM) quickly. You won’t need API keys, cloud services, or ongoing costs. Ollama is a free, open-source tool that lets you download and run models directly on your machine. By following this guide, you’ll install Ollama, chat with local models from your terminal, and use them to power agentic coding tools:
Example of Using Ollama to Run an LLM Locally
Large language models traditionally require expensive API subscriptions and a constant internet connection. Ollama eliminates both requirements by running models directly on your hardware. Because everything runs locally, your prompts stay on your machine, and no per-token fees apply.
Get Your Cheat Sheet: Click here to download your free Ollama cheat sheet and keep the essential steps and commands for running LLMs locally at your fingertips.
Take the Quiz: Test your knowledge with our interactive “How to Use Ollama to Run Large Language Models Locally” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Use Ollama to Run Large Language Models LocallyTest your knowledge of running LLMs locally with Ollama. Install it, pull models, chat, and connect coding tools from your terminal.
Prerequisites
To follow this guide, you’ll need the following software and hardware:
- macOS 14 Sonoma or newer, Windows 10 or newer, or a recent Linux distribution
- At least 8 GB of RAM, or 16 GB or more for larger models
- 5–16 GB of free disk space to store models
- Basic skills with the command line or terminal, including opening a terminal and running commands
No Python installation is required for this guide, and no prior experience with LLMs or AI is needed. If you want to integrate Ollama with Python after finishing here, check out How to Integrate Local LLMs With Ollama and Python.
Step 1: Install Ollama and Pull Your First Model
To quickly install Ollama on your operating system, run the following command based on your platform:
Once this command finishes, Ollama will be installed on your system.
Note: In some Linux distributions, you may need to install curl to download the installer and the zstd library for extraction. On Debian/Ubuntu, you can install them with the following command:
$ sudo apt update && sudo apt install curl zstd
Alternatively, you can download a dedicated installer for Windows and macOS. Visit Ollama’s download page to get the installer for those operating systems.
Note: Ollama has a GUI application for macOS and Windows users. This quick guide focuses solely on the command-line (CLI) tool. See Ollama’s app announcement if you want to explore that option.
After installation, you can verify that the CLI is available with the following command:
$ ollama -v
ollama version is 0.17.7
The Ollama service should be running in the background. Normally, you don’t need to start it manually. It runs on port 11434 by default. If you get a warning after running the command above, then you may need to run the background server manually:
$ ollama serve
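If you would rather check the background server programmatically than read the CLI warning, a quick probe of the default port works too. This is my own standard-library sketch, not part of the official Ollama tooling; it only reports whether something is answering on 11434:

```python
import urllib.error
import urllib.request


def ollama_is_running(host: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if the Ollama server answers on its default port."""
    try:
        with urllib.request.urlopen(host, timeout=timeout) as resp:
            # The root endpoint replies with a simple "Ollama is running" page.
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


print(ollama_is_running())
```

If this prints `False`, start the server with `ollama serve` and try again.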
Read the full article at https://realpython.com/ollama/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
PyCon
Support PyLadies: Donate to the PyLadies Auction at PyCon US 2026!
PyCon US 2026 is quickly approaching and we can’t wait to see all of you there! This year, we are reaching out to our community to help support one of our favorite events: the PyLadies Auction. Our wonderful community is bursting with creative, artsy, and generous folks and we invite you to donate items to this year’s Auction. If you’d like to support PyLadies by making a donation, please fill out the PyLadies Charity Auction donation form. To learn more about the event and what to donate, read on!
What to Donate
Need inspiration? Over the years, we’ve seen that people love to bid on items that are artsy, nerdy, and Python-specific. Art, decor, customized tech, and vintage PyCon or Python-related swag are all examples of things that will inspire a friendly bidding war. For more inspiration, search ‘PyLadies Auction’ on social media or check out the PyLadies Instagram.
Items that are personalized or have a story tend to attract bidders – DIYers, artists, crafters, and collectors, this is your time to shine! It's also worth thinking about what will show up well on stage. We have a magnifying camera, but smaller and less colorful items like jewelry and watches might be harder for bidders to see than a big piece of art or an adorable, brightly colored snake stuffy! If you have smaller items – like vintage swag, stickers, or even pens (iykyk) – they can go into the raffle.
Please note that we cannot accept donations of guns, alcohol, cars, real estate, or houses. We may decline other gifts that are inappropriate or impractical at the PSF’s discretion (for example, sorry, no real pythons!). Although we would love to accept signed books, we will also not be accepting those this year.
If you’ve got an item to contribute that you’d like to share in a sneak preview, feel free to post a photo and tag us on the social media platform of your choice!
Donation details
To donate an item, please fill out the PyLadies Charity Auction donation form. This form asks for required information about the person or organization donating, item specifics, and logistics. You will need to indicate whether you will ship the item in advance or deliver it in person. You will receive shipping information once you complete the form. The deadline to drop off donations in person is May 16th, 2026, at 4 pm PST (that’s just about 3 hours before the event!).
About the PyLadies Charity Auction
The fourteenth PyLadies Charity Auction will be held in person during PyCon US 2026 in Long Beach, California. It’s an evening of fun and refreshments while supporting the PyLadies community!
The Entry cover charge of $50.00 includes dinner and a drink ticket, while the Supporter cover charge of $100.00 includes dinner, a drink ticket, and a $50.00 contribution to PyLadies. We have increased the cover charge this year to help offset rising food and venue costs. Everyone who attends will also have access to the cash bar. To save your place at the Auction, add the PyLadies Auction to your PyCon US registration via your dashboard. Last year we sold out quickly, so if you’d like to go, reserve your ticket soon!
Mike Driscoll
Vibe Coding Pong with Python and pygame
Pong is one of the first computer games ever created, way back in 1972. If you have never heard of Pong, you can think of it as a kind of “tennis” game. There are two paddles, one on each side of the screen. They move up and down. The goal is to bounce a ball between the two paddles. If you miss the ball, your opponent scores a point.
Atari created Pong and put it in arcades. The name of the game is derived from the game of ping-pong. Pong has been remade many times, with different features and gameplay elements added or removed.
Want to know more? Get the Book!
You will use Anthropic’s Claude AI to create this game in Python using the pygame package.
Let’s get started!
Creating the Game
Prompt: Please create the pong videogame using Python and pygame. Give the user a way to start a new game and keep score.
I’ll create a classic Pong game for you using Python and pygame. This will include two paddles, a ball, scoring, and the ability to start new games.
import pygame
import sys
import random

# Initialize Pygame
pygame.init()

# Constants
SCREEN_WIDTH = 800
SCREEN_HEIGHT = 600
PADDLE_WIDTH = 15
PADDLE_HEIGHT = 90
BALL_SIZE = 15
PADDLE_SPEED = 7
BALL_SPEED_X = 6
BALL_SPEED_Y = 6

# Colors
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
GRAY = (128, 128, 128)

class Paddle:
    def __init__(self, x, y):
        self.rect = pygame.Rect(
            x, y, PADDLE_WIDTH, PADDLE_HEIGHT)
        self.speed = PADDLE_SPEED

    def move_up(self):
        if self.rect.top > 0:
            self.rect.y -= self.speed

    def move_down(self):
        if self.rect.bottom < SCREEN_HEIGHT:
            self.rect.y += self.speed

    def draw(self, screen):
        pygame.draw.rect(screen, WHITE, self.rect)

class Ball:
    def __init__(self):
        self.rect = pygame.Rect(
            SCREEN_WIDTH // 2,
            SCREEN_HEIGHT // 2,
            BALL_SIZE,
            BALL_SIZE)
        self.reset_ball()

    def reset_ball(self):
        self.rect.center = (SCREEN_WIDTH // 2,
                            SCREEN_HEIGHT // 2)
        self.speed_x = BALL_SPEED_X * random.choice([-1, 1])
        self.speed_y = BALL_SPEED_Y * random.choice([-1, 1])

    def move(self):
        self.rect.x += self.speed_x
        self.rect.y += self.speed_y

        # Bounce off top and bottom walls
        if (self.rect.top <= 0
                or self.rect.bottom >= SCREEN_HEIGHT):
            self.speed_y = -self.speed_y

    def draw(self, screen):
        pygame.draw.rect(screen, WHITE, self.rect)

class PongGame:
    def __init__(self):
        self.screen = pygame.display.set_mode(
            (SCREEN_WIDTH, SCREEN_HEIGHT))
        pygame.display.set_caption("Pong Game")
        self.clock = pygame.time.Clock()

        # Create game objects
        self.left_paddle = Paddle(
            30, SCREEN_HEIGHT // 2 - PADDLE_HEIGHT // 2)
        self.right_paddle = Paddle(
            SCREEN_WIDTH - 30 - PADDLE_WIDTH,
            SCREEN_HEIGHT // 2 - PADDLE_HEIGHT // 2)
        self.ball = Ball()

        # Scores
        self.left_score = 0
        self.right_score = 0
        self.font = pygame.font.Font(None, 74)
        self.small_font = pygame.font.Font(None, 36)

        # Game state
        self.game_state = "menu"  # "menu", "playing", "paused"

    def handle_collision(self):
        # Ball collision with paddles
        if self.ball.rect.colliderect(self.left_paddle.rect):
            # Only bounce if moving toward paddle
            if self.ball.speed_x < 0:
                self.ball.speed_x = -self.ball.speed_x
                # Add some variation based on where ball hits paddle
                hit_pos = (self.ball.rect.centery - self.left_paddle.rect.centery
                           ) / (PADDLE_HEIGHT / 2)
                self.ball.speed_y += hit_pos * 2

        if self.ball.rect.colliderect(self.right_paddle.rect):
            # Only bounce if moving toward paddle
            if self.ball.speed_x > 0:
                self.ball.speed_x = -self.ball.speed_x
                # Add some variation based on where ball hits paddle
                hit_pos = (self.ball.rect.centery - self.right_paddle.rect.centery
                           ) / (PADDLE_HEIGHT / 2)
                self.ball.speed_y += hit_pos * 2

        # Limit ball speed
        max_speed = 12
        if abs(self.ball.speed_y) > max_speed:
            self.ball.speed_y = max_speed if self.ball.speed_y > 0 else -max_speed

    def check_scoring(self):
        # Check if ball went off screen
        if self.ball.rect.left <= 0:
            self.right_score += 1
            self.ball.reset_ball()
        elif self.ball.rect.right >= SCREEN_WIDTH:
            self.left_score += 1
            self.ball.reset_ball()

    def handle_input(self):
        keys = pygame.key.get_pressed()
        if self.game_state == "playing":
            # Left paddle controls (W/S)
            if keys[pygame.K_w]:
                self.left_paddle.move_up()
            if keys[pygame.K_s]:
                self.left_paddle.move_down()
            # Right paddle controls (UP/DOWN arrows)
            if keys[pygame.K_UP]:
                self.right_paddle.move_up()
            if keys[pygame.K_DOWN]:
                self.right_paddle.move_down()

    def draw_menu(self):
        self.screen.fill(BLACK)
        title_text = self.font.render("PONG", True, WHITE)
        title_rect = title_text.get_rect(
            center=(SCREEN_WIDTH // 2, 150))
        self.screen.blit(title_text, title_rect)

        start_text = self.small_font.render(
            "Press SPACE to Start", True, WHITE)
        start_rect = start_text.get_rect(
            center=(SCREEN_WIDTH // 2, 250))
        self.screen.blit(start_text, start_rect)

        controls_text = [
            "Controls:",
            "Left Player: W (Up) / S (Down)",
            "Right Player: Arrow Keys",
            "Press R to restart during game",
            "Press ESC to return to menu"
        ]
        for i, text in enumerate(controls_text):
            rendered_text = self.small_font.render(
                text, True, GRAY if i == 0 else WHITE)
            text_rect = rendered_text.get_rect(
                center=(SCREEN_WIDTH // 2, 320 + i * 40))
            self.screen.blit(rendered_text, text_rect)

    def draw_game(self):
        self.screen.fill(BLACK)

        # Draw center line
        for i in range(0, SCREEN_HEIGHT, 20):
            if i % 40 == 0:
                pygame.draw.rect(
                    self.screen,
                    WHITE,
                    (SCREEN_WIDTH // 2 - 2, i, 4, 10))

        # Draw paddles and ball
        self.left_paddle.draw(self.screen)
        self.right_paddle.draw(self.screen)
        self.ball.draw(self.screen)

        # Draw scores
        left_score_text = self.font.render(
            str(self.left_score), True, WHITE)
        right_score_text = self.font.render(
            str(self.right_score), True, WHITE)
        self.screen.blit(left_score_text, (SCREEN_WIDTH // 4, 50))
        self.screen.blit(
            right_score_text,
            (3 * SCREEN_WIDTH // 4 - right_score_text.get_width(),
             50))

        # Draw instructions
        instruction_text = self.small_font.render(
            "Press ESC for menu, R to restart", True, GRAY)
        instruction_rect = instruction_text.get_rect(
            center=(SCREEN_WIDTH // 2, SCREEN_HEIGHT - 30))
        self.screen.blit(instruction_text, instruction_rect)

    def reset_game(self):
        self.left_score = 0
        self.right_score = 0
        self.ball.reset_ball()
        self.left_paddle.rect.y = (SCREEN_HEIGHT // 2 -
                                   PADDLE_HEIGHT // 2)
        self.right_paddle.rect.y = (SCREEN_HEIGHT // 2 -
                                    PADDLE_HEIGHT // 2)

    def run(self):
        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
                if event.type == pygame.KEYDOWN:
                    if (event.key == pygame.K_SPACE
                            and self.game_state == "menu"):
                        self.game_state = "playing"
                        self.reset_game()
                    elif event.key == pygame.K_ESCAPE:
                        self.game_state = "menu"
                    elif (event.key == pygame.K_r
                          and self.game_state == "playing"):
                        self.reset_game()

            if self.game_state == "menu":
                self.draw_menu()
            elif self.game_state == "playing":
                self.handle_input()
                self.ball.move()
                self.handle_collision()
                self.check_scoring()
                self.draw_game()

            pygame.display.flip()
            self.clock.tick(60)

        pygame.quit()
        sys.exit()

if __name__ == "__main__":
    game = PongGame()
    game.run()
I’ve created a complete Pong game in Python using pygame! Here are the key features:
Game Features:
- Classic Pong gameplay with two paddles and a bouncing ball
- Score tracking for both players
- Menu system with game start option
- Ability to restart games and return to menu
Controls:
- Left Player: W (up) / S (down)
- Right Player: Arrow keys (up/down)
- SPACE: Start new game from menu
- R: Restart current game
- ESC: Return to main menu
Game Mechanics:
- Ball bounces off top/bottom walls and paddles
- Ball speed varies slightly based on where it hits the paddle
- Scores increase when ball goes off either side
- Ball resets to center after each score
To run the game:
- Make sure you have pygame installed:
pip install pygame
- Save the code to a .py file (e.g., pong.py)
- Run it with: python pong.py
The game includes a clean menu system, smooth 60 FPS gameplay, and responsive controls. The ball physics include some randomization to keep gameplay interesting, and the paddle collisions add a bit of strategy by allowing players to influence the ball’s trajectory based on where they hit it.
Prompt: Can you explain how the code works?
I’ll break down how the Pong game code works, explaining each major component and how they work together.
Code Structure Overview
The code is organized using object-oriented programming with three main classes:
- Paddle – represents the player paddles
- Ball – handles the bouncing ball
- PongGame – manages the overall game logic and flow
1. Initialization and Constants
pygame.init()

# Constants define game dimensions and speeds
SCREEN_WIDTH = 800
SCREEN_HEIGHT = 600
PADDLE_SPEED = 7
The game starts by initializing pygame and defining constants for screen size, object dimensions, and movement speeds. Using constants makes the code easier to modify and maintain.
2. The Paddle Class
class Paddle:
    def __init__(self, x, y):
        self.rect = pygame.Rect(x, y, PADDLE_WIDTH, PADDLE_HEIGHT)
Each paddle is represented as a pygame.Rect object, which provides built-in collision detection and positioning. The paddle has methods to:
- move_up() and move_down() – handle movement with boundary checking
- draw() – renders the paddle as a white rectangle
The boundary checking ensures paddles can’t move off-screen:
if self.rect.top > 0:  # Don't go above screen
if self.rect.bottom < SCREEN_HEIGHT:  # Don't go below screen
3. The Ball Class
class Ball:
    def reset_ball(self):
        self.speed_x = BALL_SPEED_X * random.choice([-1, 1])
        self.speed_y = BALL_SPEED_Y * random.choice([-1, 1])
The ball maintains its position and velocity. Key features:
- reset_ball() – centers the ball and gives it a random direction
- move() – updates position and handles wall bouncing
- Wall collision detection flips the Y velocity:
self.speed_y = -self.speed_y
4. Main Game Class Structure
The PongGame class manages everything:
Game States
self.game_state = "menu" # "menu", "playing", "paused"
The game uses states to control what’s displayed and what input is accepted.
Game Loop
def run(self):
    while running:
        # Handle events (keyboard, quit)
        # Update game logic based on current state
        # Draw everything
        # Control frame rate (60 FPS)
5. Collision Detection
The most complex part handles ball-paddle collisions:
def handle_collision(self):
    if self.ball.rect.colliderect(self.left_paddle.rect):
        # Only bounce if moving toward paddle
        if self.ball.speed_x < 0:
            self.ball.speed_x = -self.ball.speed_x
Key points:
- Uses pygame’s built-in colliderect() for collision detection
- Checks ball direction to prevent multiple bounces
- Adds spin based on where the ball hits the paddle:
hit_pos = (self.ball.rect.centery -
           self.left_paddle.rect.centery) / (PADDLE_HEIGHT / 2)
self.ball.speed_y += hit_pos * 2
6. Input Handling
def handle_input(self):
    keys = pygame.key.get_pressed()
    if keys[pygame.K_w]:
        self.left_paddle.move_up()
The game checks which keys are currently pressed each frame, allowing for smooth continuous movement. Different keys are handled based on the game state.
7. Scoring System
def check_scoring(self):
    if self.ball.rect.left <= 0:  # Ball went off left side
        self.right_score += 1
        self.ball.reset_ball()
Scoring is checked every frame by seeing if the ball has moved past the screen edges.
8. Rendering System
The game uses different drawing functions for different states:
- draw_menu() – shows title and instructions
- draw_game() – renders paddles, ball, scores, and game elements
The center dashed line is drawn using a loop:
for i in range(0, SCREEN_HEIGHT, 20):
    if i % 40 == 0:  # Only draw every other dash
        pygame.draw.rect(
            self.screen,
            WHITE,
            (SCREEN_WIDTH // 2 - 2, i, 4, 10))
9. Game Flow
- Initialization: Create paddles, ball, set up pygame
- Main Loop:
- Process input events
- Update game logic (if playing)
- Draw current state
- Maintain 60 FPS with clock.tick(60)
- State Management: Switch between menu and gameplay based on user input
Key Programming Concepts Used
- Object-Oriented Design: Separate classes for different game elements
- Event-Driven Programming: Responding to keyboard and quit events
- Game Loop Pattern: Continuous update-draw cycle
- State Management: Different behaviors based on current game state
- Collision Detection: Using pygame’s rectangle collision methods
- Vector Math: Ball movement with X/Y velocity components
The code is structured to be easily extensible – you could add features like AI opponents, power-ups, or different game modes by building on this foundation.
Running Pong
Now that you have the code and understand how it works, you are ready to run it. You can run it from your Python IDE or on the command line like this:
python pong.py
When you first launch your game, you will see the following welcome screen:

If you press the SPACE key, you will see the actual game, which will look something like this:

Grab a friend or play against yourself. The game works pretty well, although playing solo can be rather challenging.
Pong is a fun little game. Much like ping-pong or table tennis, the game is designed to be played with someone else. Technically, you could also create a computer opponent, but you usually play Pong between two people instead.
You should play the game with a friend or family member. You can also study the code to discover how it works. Then, when you are ready, you can start improving the game.
Get the Book
This article is from my latest Python book, Vibe Coding Video Games with Python.
You can get Vibe Coding Video Games with Python at the following websites:
The post Vibe Coding Pong with Python and pygame appeared first on Mouse Vs Python.
Real Python
Quiz: Using Jupyter Notebooks
In this quiz, you’ll test your understanding of Using Jupyter Notebooks.
By working through this quiz, you’ll review Jupyter Notebook basics: running code cells, switching modes, using shortcuts, formatting Markdown, managing the server, and exporting notebooks.
Use it to check your workflow knowledge and spot areas to review.
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Python Bytes
#475 Haunted warehouses
Topics covered in this episode:

- Lock the Ghost
- Fence for Sandboxing
- MALUS: Liberate Open Source
- Harden your GitHub Actions Workflows with zizmor, dependency pinning, and dependency cooldowns
- Extras
- Joke

Watch on YouTube: https://www.youtube.com/watch?v=4OAoEBfNzNQ

About the show

Sponsored by us! Support our work through:

- Our courses at Talk Python Training: https://training.talkpython.fm/
- The Complete pytest Course: https://courses.pythontest.com/p/the-complete-pytest-course
- Patreon Supporters: https://www.patreon.com/pythonbytes

Connect with the hosts:

- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Michael #1: Lock the Ghost (https://www.cert.at/en/blog/2026/3/lock-the-ghost?featured_on=pythonbytes)

The five core takeaways:

1. PyPI "removal" doesn't delete distribution files. When a package is removed from PyPI, it disappears from the index and project page, but the actual distribution files remain accessible if you have a direct URL to them.
2. uv.lock uniquely preserves access to ghost packages. Because uv.lock stores direct URLs to distribution files rather than relying on the index API at install time, uv sync can successfully install packages that have already been removed, even with cache disabled. No other Python lock file implementation tested behaved this way.
3. This creates a supply chain attack vector. An attacker could upload a malicious package, immediately remove it to dodge automated security scanning, and still have it installable via a uv.lock file, or combine this with the xz-style strategy of hiding malicious additions in large, auto-generated lock files that nobody reviews.
4. Removed package names can be hijacked with version collisions. When an owner removes a package, the name can be reclaimed by someone else who can upload different distribution types under the same version number, as happened with "umap." Lock files help until you regenerate them, then you're exposed.
5. Your dependency scanning needs to cover lock files, not just manifest files. Scanning only pyproject.toml or requirements.txt misses threats embedded in lock files, which is where the actual resolved URLs and hashes live.

Brian #2: Fence for Sandboxing (https://fencesandbox.com?featured_on=pythonbytes)

- Suggested by Martin Häcker
- "Some coding platforms have since integrated built-in sandboxing (e.g., Claude Code) to restrict write access to directories and/or network connectivity. However, these safeguards are typically optional and not enabled by default."
- "JY Tan (on cc) has extracted the sandboxing logic from Claude Code and repackaged it into a standalone Go binary."
- Source code on GitHub: https://github.com/Use-Tusk/fence
- Related:
  - Simon Willison's lethal trifecta for AI agents article from June 2025: https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/?featured_on=pythonbytes
  - Claude Code Sandboxing: https://code.claude.com/docs/en/sandboxing?featured_on=pythonbytes

Michael #3: MALUS: Liberate Open Source (https://malus.sh/?featured_on=pythonbytes)

- via Paul Bauer
- The service will generate the specs of a library with one AI and build the newly licensed library using the specs with another AI, circumventing the licensing and copyright rules.
- AI that has not been trained on open source reads the docs and API signature and creates a spec. Another AI processes that spec into working software.
- Is it a real site? Are they accepting real money, or are they just trying to cause a stir around copyright?

Brian #4: Harden your GitHub Actions Workflows with zizmor, dependency pinning, and dependency cooldowns (https://mattsch.com/blog/2026/03/28/harden-your-github-actions-workflows-with-zizmor-dependency-pinning-and-dependency-cooldowns/?featured_on=pythonbytes)

- Matthias Schoettle
- Avoid things like this: hackerbot-claw: An AI-Powered Bot Actively Exploiting GitHub Actions - Microsoft, DataDog, and CNCF Projects Hit So Far (https://www.stepsecurity.io/blog/hackerbot-claw-github-actions-exploitation?featured_on=pythonbytes)

Extras

Brian:

- GitHub is asking to spy on us, that's nice (https://gh.io/AAzfaht?featured_on=pythonbytes)

Michael:

- Michael's new SaaS for podcasters: InterviewCue (https://interviewcue.com?featured_on=pythonbytes)
- DigitalOcean's Spaces cold storage for infrequently accessed data (https://www.digitalocean.com/blog/nfs-cold-storage-backups#spaces-cold-storage-for-infrequently-accessed-data)
- Minor issue about my fire and forget post (https://mkennedy.codes/posts/fire-and-forget-or-never-with-python-s-asyncio/?featured_on=pythonbytes), was a latent bug?
- Fire and Forget at Textual follow up article (https://mkennedy.codes/posts/fire-and-forget-at-textual/?featured_on=pythonbytes)

Joke: Can you? (https://x.com/McGrewSecurity/status/2034030394262110593?featured_on=pythonbytes)
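For takeaway 2, it helps to see why uv.lock behaves this way: each locked package records the direct file URLs and hashes it resolved, not just a name and version. Below is a rough, hypothetical sketch of such an entry (the package name, URL path, and hash are invented, and the exact schema may differ between uv versions):

```toml
# Hypothetical uv.lock excerpt. The direct `url` fields are what let
# `uv sync` fetch a file even after the package has been removed from
# the PyPI index.
[[package]]
name = "example-pkg"
version = "1.0.0"
source = { registry = "https://pypi.org/simple" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/ab/cd/example_pkg-1.0.0-py3-none-any.whl", hash = "sha256:0000000000000000000000000000000000000000000000000000000000000000" },
]
```

Because installation goes straight to the recorded URL, removal from the index page alone does not make the file uninstallable.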
March 29, 2026
"Michael Kennedy's Thoughts on Technology"
Fire and Forget at Textual
If you read my Fire and Forget (or Never) about Python and asynchronous programming, you could think it’s a super odd edge case. But a reader/listener, Richard, pointed me at Will McGugan’s article The Heisenbug lurking in your async code. This is basically the same article, but in Will-style.
Will does say “This behavior is well documented, as you can see from this excerpt.” True, but the documentation got this emphasis and warning in Python 3.12, whereas create_task was added back in the Python 3.5/3.6 timeframe. So it’s not just a matter of whether we read the docs carefully. It’s a matter of whether we reread the docs carefully, years later.
Luckily Will added some nice concrete numbers I didn’t have:
https://github.com/search?q=%22asyncio.create_task%28%22&type=code
This appears in over 0.5M separate code files on GitHub. To be clear, not every search result for create_task uses the fire-and-forget pattern, but just on the first page of results there are 5 instances.
If the design pattern to fix this is to:
- Create a global set
- When a task is added to the event loop, add it to the set
- Remove it from the set when it’s done
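Those three steps can be sketched in a few lines (a minimal sketch using standard asyncio names, not anyone's exact production code):

```python
import asyncio

# The event loop only keeps a weak reference to tasks, so a task created
# with create_task() and never awaited can be garbage collected before it
# finishes. The global set below holds a strong reference until it's done.
background_tasks: set = set()

def fire_and_forget(coro) -> asyncio.Task:
    task = asyncio.create_task(coro)                   # 1. schedule the task
    background_tasks.add(task)                         # 2. add it to the set
    task.add_done_callback(background_tasks.discard)   # 3. remove when done
    return task

async def main() -> list:
    results = []

    async def work():
        await asyncio.sleep(0)
        results.append("done")

    fire_and_forget(work())
    await asyncio.sleep(0.01)  # keep doing other work; the task runs on its own
    return results

print(asyncio.run(main()))  # ['done']
```

Unlike a task group, `fire_and_forget` returns immediately, so the caller keeps working while the task runs in the background.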
Wouldn’t it have been better for the Python team to add this to the event loop internally once and solve this problem for everyone globally across the entire Python ecosystem?
It doesn’t look like that’s going to happen. So make sure you double check your code for create_task. And don’t let the Heisenbugs bite.
And yes, I know about task groups. Several people told me that we could use task groups to hang on to the task. Yes, that’s true. But task groups are incongruent with the fire-and-forget design pattern. Why? Because you create the group in a context manager and then you wait for all the tasks in the group to be finished. That doesn’t allow you to fire off a task and then continue working. So task groups may or may not have fixed Will’s problem, but they don’t solve the one I was originally talking about.
March 28, 2026
EuroPython
Humans of EuroPython: Jodie Burchell
What does it take to run Europe’s largest Python conference? 🐍 Not budgets or venues—it’s people.
EuroPython isn’t powered by code alone, but by a vibrant network of volunteers who shape every session and welcome every attendee. From ensuring talks run seamlessly to curating world-class content these are the unsung heroes building community, one contribution at a time.
We’re shining a spotlight on the people behind the magic. Read our full conversation with Jodie Burchell, co-lead of the EuroPython 2025 Programme Team and discover what drives those who give their time to grow our community.
Jodie Burchell, Co-Lead of the Programme Team at EuroPython 2025
EP: What first inspired you to volunteer for EuroPython?
I first attended EuroPython in 2023, and was asked by my friends Cheuk and Lais to help run the Humble Data workshop. I had so much fun, and really liked all the people I met, so I decided to help out with comms and other things in 2024, and ended up working on the Programme Team and helping run the Beginners’ Day in 2025.
EP: What was your primary role as a volunteer, and what did a typical day look like for you?
I was one of two co-team leads of the Programme Team in 2025. This team tends to touch a lot of the conference, although the tasks vary from week to week. We actually started work all the way back in December, and worked up until the end of the conference! My team’s role included running the CfP, selecting talks, assembling the schedule, finding keynote speakers, organising special events, coordinating the open spaces, and finding last-minute speakers when people cannot make it. It involved a lot of logistics, following up with other teams at EuroPython, and communicating with speakers.
I think one of my favourite things I organised at the conference was the international snack exchange. Seeing people sharing snacks from their home countries was so much fun, and really made us feel like a big international family.
EP: What’s your favorite memory from volunteering at EuroPython?
I actually can’t pick just one!
One of my favourites was seeing the programme team in person after so many months of working together, and sharing some international snacks together to celebrate.
Watching the excerpt of “Python: the Documentary” that CultRepo created for us, and seeing the reaction of the audience to the film and the panel was very moving.
And of course, running Humble Data at the Beginner’s Day during the sprints. As someone with a non-traditional path into tech myself, I am really passionate about helping beginners and making them feel welcome, and seeing beginners start to learn Python and then speak with core developers of well-established projects was really special.
EP: Did you make any lasting friendships or professional connections through volunteering?
Many! The Python community is incredible, and I am lucky to have found some of my closest friends through the EuroPython, Humble Data and wider Python community. I look forward to EuroPython every year (in whatever capacity I attend) so I can see all of these amazing, special friends.
EP: What’s one misconception about conference volunteering you’d like to clear up?
I think one of the biggest misconceptions that people have about community conferences like EuroPython is that they’re run by professionals. While the EuroPython Society does have one (very talented) paid employee, most of the work you see at these conferences is done by members of the community, just like you and me. So if you feel inspired to contribute to EuroPython or another Python conference, reach out and find out how you can help! Although it can be a lot of work, it’s also very meaningful to know you’ve shaped an event that means a lot to the Python community.
EP: Thank you for your work, Jodie!
March 27, 2026
Real Python
The Real Python Podcast – Episode #289: Limitations in Human and Automated Code Review
With the mountains of Python code that it's possible to generate now, how's your code review going? What are the limitations of human review, and where does machine review excel? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.
Quiz: Interacting With REST APIs and Python
In this quiz, you’ll test your understanding of Interacting with REST APIs in Python.
This quiz reviews REST principles, HTTP methods, status codes, and Python tools like requests, Flask, FastAPI, and Django REST Framework.
Test your understanding of consuming and designing REST APIs, Pydantic validation, and endpoint design. For more practice, revisit the course page for guided lessons and examples.
March 26, 2026
Real Python
Quiz: Getting Started With Django: Building a Portfolio App
In this quiz, you’ll test your understanding of Getting Started With Django: Building a Portfolio App.
By working through this quiz, you’ll review core Django concepts like projects, apps, URL routing, views, templates, models, and migrations.
You’ll reinforce how Django’s framework pieces fit together to handle requests and interact with the database. This quiz helps solidify the foundations needed to build and structure real-world Django applications.
EuroPython
March Newsletter: Sponsorship Early Bird Ending, Programme Due Soon
Hey there! 👋
Hope you’re all having a fantastic March. We sure have been busy and we’ve got some exciting updates for you as we gear up for EuroPython 2026. This year the conference will take place in Kraków, the city of castles and dragons 🐉 Let’s dive right in!
🗃️ Programme Updates Coming Shortly
Our Programme Team is hard at work putting the final touches on an amazing lineup. We received 635 fantastic proposals this year, a whopping record high at EuroPython. We are incredibly grateful to everyone who took the time to write a proposal — the conference wouldn’t be possible without you.
We closed the Community Voting phase with 20,799 individual reviews! Being able to use the views of the community to inform our selection process is super important to us, and such a great turnout really helps us create the best conference possible. Anyone who participated in EuroPython in the past 3 years or submitted a proposal this year was eligible to vote, and almost 400 people took part in the review process.
We are currently finalising the selection and will start sending emails out to all submitters in the coming days. Hang in there!
🤖 Beyond Python
EuroPython is not just a conference focused on Python internals, tooling, and libraries - although we’ll have plenty of those sessions of course! For a few years now, the conference has included two other major topics related to our favourite programming language:
➡️ Data Science
Tutorials and talks in this area have been covering data preparation and visualisation, data engineering, MLOps, as well as Jupyter and scientific Python.
➡️ Machine Learning & AI
From machine learning research and application, through to natural language processing, computer vision, ethics, and beyond — we’re covering it all at EuroPython.
Do you know someone who is interested in these topics? Let them know and we’ll see you all in Kraków!
💰 Sponsorship: Early Bird Ends on March 31st
If you’re passionate about supporting EuroPython and helping us make the event accessible to all, consider becoming a sponsor or asking your employer to join us in this effort.
By sponsoring EuroPython, you’re not just backing an event – you’re gaining highly targeted visibility that will present your company or personal brand to one of the largest and most diverse Python communities in the world!
But don’t take our word for it; hear what sponsors from 2025 have to say about EuroPython:
EuroPython 2025 Sponsor Highlights
We offer a range of sponsorship tiers, some with limited slots available. Along with our main packages, there are optional add-ons and extras to craft your brand messaging in exactly the way that you need.
Don’t wait too long if you’d like to sponsor the conference. We’ve already sold out 80% of the Gold packages!
🐦 We have an Early Bird 10% discount for companies that sign up by March 31st.
👉 More information at: https://ep2026.europython.eu/sponsorship/sponsor/
👉 Contact us at sponsoring@europython.eu
💰 Financial Aid
Our Financial Aid Program is here to make sure that your financial situation never stands between you and EuroPython.
We want to support those who make the Python community a positive, inclusive, and welcoming space. We especially encourage applications from speakers, students, volunteers, community organizers, and open-source contributors who need financial support to attend.
👉 For more information about our Financial Aid programme and our selection criteria, please visit https://ep2026.europython.eu/finaid/
🤝 Community Partners
EuroPython organisers are dedicated to supporting and growing the European Python community — even beyond the conference. We are proud that our Community Partners, who all share our mission, include some of the most established meetups and organisations across Europe. Meet some of the local communities that will be joining us in Kraków this year:
🚣 Python Łódź
Python Łódź is a local community of Python enthusiasts organizing regular meetups in Łódź, Poland. They bring together developers of all experience levels to share knowledge and connect.
Their events take place on the 13th floor with a beautiful view of the city, are streamed live on YouTube, and - for a bit of fun and nostalgia - even recorded on cassette tapes.
Python Łódź is currently looking for additional sponsors to support their events, as well as people interested in helping with organisation and community building. Any support - financial, logistical, or promotional - would help them grow and keep the meetups accessible and high-quality.
👉 Reach out to Python Łódź directly https://www.linkedin.com/company/python-lodz/
🐴 Pykonik
Python user group based in Kraków organizing regular meetups for enthusiasts. The main goal of the initiative is to exchange experiences face-to-face and channel the energy of enthusiasts of this dynamic programming language into concrete actions for the good of humanity.
At their latest meetup, Pykonik members discussed what there should be more of at EuroPython 2026.
Fun fact: the name “Pykonik” is a play on the words Python and Lajkonik, an unofficial symbol of Kraków.
🌐 You can find more information about EuroPython 2026 Community Partners at https://ep2026.europython.eu/community-partners/
📣 Community Outreach
The EuroPython Society isn’t just present at other Python events—we actively support them too! As a community partner, we love helping local PyCons grow and thrive. We love giving back to the community and strengthening Python events across Europe! 🐍💙
🍕Python Pizza Ostrava
In February we joined our friends in Ostrava to support Python Pizza. Organized entirely by volunteers, the event brought the Python Community together for some amazing original Pizza™ and Python Talks. It was hosted by Impact Hub Ostrava and each talk was 10 minutes long.
Moisés Guimarães, as usual, was channeling his inner Cthulhu. This time at Python Pizza Ostrava.
📽️ Watch the stream from the micro conference Ostrava Python Pizza 2026
🌐 Have a look at their website: https://ostrava.python.pizza/
🔓FOSDEM 2026
FOSDEM is Europe’s largest free and open source software conference, bringing together thousands of developers every year in Brussels. Members of the EuroPython Society have been attending for a long time, and this year marked the first time in many years that the Python community had a stand at the event.
We were so pleased to share a spot in the exhibition space with our friends at the Django Software Foundation and the Python Software Foundation. Thanks to everyone who came to say hello!
ABP! Always Be Promoting EuroPython CfP :) Angel Ramboi was the ultimate recruiter of speakers.
🌐 You can find our full report at https://europython-society.org/europython-society-at-fosdem-2026/
💝 Humans of EuroPython
EuroPython – the oldest Python conference – is led by volunteers. In 2025, EuroPython was a gathering of over 1,500 attendees and 160+ speakers across six tracks. We hosted tutorials, summits, workshops, and a full day of events for beginners.
Organising all of this took a lot of time and care: building the programme, ordering catering, answering help desk questions, MCing each stage, and much more. None of this would have been possible without the (huge!) group of committed volunteers who dedicated their time to making EuroPython happen.
Here’s what two of them said about their experience volunteering last year:
“I’ve built lasting friendships, met people I had only known from online talks and tutorials, and had the chance to become a familiar face in the community myself. Every EuroPython, and every local conference, strengthens those connections and leaves you with renewed energy and inspiration to keep contributing.”
Jakub Červinka, Operations Team
“Seeing how an event like this is created from the inside is incredible, plus the team is lovely, and you’ll learn a lot because you’ll be surrounded by top people from the community.”
Marina Moro López, Programme Team
We wanted to shine a light on some of the members of the EuroPython 2025 organising team in our interview series Humans of EuroPython.
📰 Read what Jakub, Marina, and others gained whilst contributing to the conference https://blog.europython.eu/tag/humans-of-europython/
👋 Stay Connected
Follow us on social media for all the updates:
- LinkedIn: https://www.linkedin.com/company/europython/
- X/Twitter: https://x.com/europython
- Mastodon: https://fosstodon.org/@europython
- Bluesky: https://bsky.app/profile/europython.eu
- Instagram: https://www.instagram.com/europython/
- YouTube: https://www.youtube.com/@EuroPythonConference
That’s all for now! Keep an eye on your inbox, our website, and social media for more news and announcements. We’re counting down the days until we can come together in Kraków to celebrate our shared love for Python. 🐍❤️
Cheers,
The EuroPython Team
The official blog of everything & anything EuroPython! EuroPython 2026 13-19 July, Kraków
Brett Cannon
Why pylock.toml includes digital attestations
A Python project got hacked where malicious releases were directly uploaded to PyPI. I said on Mastodon that had the project used trusted publishing with digital attestations, then people using a pylock.toml file would have noticed something odd was going on thanks to the lock file including attestation data. That led to someone asking for a link to something to explain what I meant. I didn&apost have a link handy since it&aposs buried in 4 years and over 1,800 comments of discussion, so I figured I would write a blog post. 😁
Since trusted publishing is a prerequisite for digital attestations, I’ll cover it quickly. Basically, you can set up a project on PyPI so that a continuous deployment (CD) system can upload releases to PyPI on your behalf. Because PyPI has to trust the CD system to handle security correctly (it is, after all, letting another site upload under your name), not every CD system out there is supported, but the big ones are, and others get added as appropriate. Since this automates releases without long-lived API keys that someone might try to steal, it makes your life as a maintainer easier while being safer; win-win!
Digital attestations are a way for a CD system to attest that a file came from that CD system. That’s handy: once you know where a file should come from, you can verify that fact and make sure nothing nefarious is going on. And since this is just a thing to flip on, it’s extremely simple to do. If you use the official PyPA publish action for GitHub Actions, you get it automatically. For other CD systems, it should be a copy-and-paste addition to your CD configuration.
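For GitHub Actions, a trusted-publishing job using the official PyPA action looks roughly like the sketch below. The workflow file name, environment name, and build steps are illustrative and must match whatever you configure as the trusted publisher on PyPI; recent versions of the action generate and upload digital attestations by default.

```yaml
# .github/workflows/publish.yml (illustrative sketch, not a drop-in file)
name: Publish to PyPI

on:
  release:
    types: [published]

jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    environment: pypi        # must match the environment in PyPI's trusted publisher config
    permissions:
      id-token: write        # required for trusted publishing (OIDC token exchange)
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
        # no API token needed; attestations are generated and uploaded by default
```

Note there is no password or API token anywhere in the workflow; PyPI trusts the OIDC identity of the workflow run itself.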
Now, the thing that pylock.toml records is who the publisher is for a file. Taking packaging as an example, you can look at the provenance for packaging-26.0-py3-none-any.whl that comes from the digital attestation, and you will notice it tells you the file came from GitHub via the pypa/packaging repo, using the publish.yml workflow run in the "pypi" environment (which you can also see via the file’s details on PyPI):
"publisher": {
    "environment": "pypi",
    "kind": "GitHub",
    "repository": "pypa/packaging",
    "workflow": "publish.yml"
}

So what can you do with this information once it’s recorded in your pylock.toml? Well, the publisher details are stored for each package in the lock file. That lets code check that any files listed in the lock file for a given package version were published from the same publisher that PyPI (or whatever index you’re using) says the file came from. So if the lock file and the index differ on where they say a file came from, something bad may have happened.
What can you do as a person if you don’t have code to check that things line up? (It isn’t a lot of code: the lock file should record the index server for the package, so you follow the index server’s API to get the digital attestation for each file and compare.) There are two things you can do manually. One, if you know that a project uses trusted publishing, then the digital attestation details should be in the lock file (you can check manually by looking at the file’s details on PyPI); if they’re missing or changed to something suspicious, something bad may have happened. Two, when reviewing a PR that updates your lock file (pylock.toml was designed to be human-readable), if the digital attestation details suddenly disappear, something bad probably happened.
So to summarize:
- Use trusted publishing if you’re a maintainer
- Upload digital attestations if you’re a maintainer
- Use lock files where appropriate (and I’m partial to pylock.toml 😁)
- If you’re using pylock.toml, have code check that the recorded attestations are consistent
- When reviewing lock file diffs (which you should do!), make sure the digital attestations don’t look weird and weren’t suddenly deleted
A special thanks to William Woodruff, Facundo Tuesca, Dustin Ingram, and Donald Stufft for helping to make trusted publishers and digital attestations happen.
March 25, 2026
Talk Python to Me
#542: Zensical - a modern static site generator
If you've built documentation in the Python ecosystem, chances are you've used Martin Donath's work. His Material for MkDocs powers docs for FastAPI, uv, AWS, OpenAI, and tens of thousands of other projects. But when MkDocs 2.0 took a direction that would break Material and 300 ecosystem plugins, Martin went back to the drawing board. The result is Zensical: a new static site generator with a Rust core, differential builds in milliseconds instead of minutes, and a migration path designed to bring the whole community along.

Episode sponsors:
- Sentry Error Monitoring (code talkpython26): https://talkpython.fm/sentry
- Talk Python Courses: https://talkpython.fm/training

Links from the show:
- Guest, Martin Donath: https://github.com/squidfunk?featured_on=talkpython
- Zensical: https://zensical.org?featured_on=talkpython
- Material for MkDocs: https://squidfunk.github.io/mkdocs-material/?featured_on=talkpython
- Getting started with Zensical: https://zensical.org/docs/get-started/?featured_on=talkpython
- GitHub Pages: https://docs.github.com/en/pages?featured_on=talkpython
- Cloudflare Pages: https://pages.cloudflare.com?featured_on=talkpython
- Michael's example: https://gist.github.com/mikeckennedy/f03686c4c4ce7ce88b41c6b91c3226cf?featured_on=talkpython
- Transitioning from MkDocs: https://zensical.org/docs/setup/basics/#transition-from-mkdocs
- Hugo shortcodes: https://gohugo.io/content-management/shortcodes/?featured_on=talkpython
- A sense of the size of the project: https://blobs.talkpython.fm/zensical-size.webp?cache_id=fe7bda
- Zensical Spark: https://zensical.org/spark/?featured_on=talkpython
- Watch this episode on YouTube: https://www.youtube.com/watch?v=V1BvvIPUzes
- Episode #542 deep-dive: https://talkpython.fm/episodes/show/542/zensical-a-modern-static-site-generator#takeaways-anchor
- Episode transcripts: https://talkpython.fm/episodes/transcript/542/zensical-a-modern-static-site-generator
- Theme song, 🥁 Served in a Flask 🎸: https://talkpython.fm/flasksong

Don't be a stranger:
- YouTube: https://talkpython.fm/youtube
- Bluesky: https://bsky.app/profile/talkpython.fm
- Mastodon: https://fosstodon.org/web/@talkpython
- X.com: https://x.com/talkpython
- Michael on Bluesky: https://bsky.app/profile/mkennedy.codes?featured_on=talkpython
- Michael on Mastodon: https://fosstodon.org/web/@mkennedy
- Michael on X.com: https://x.com/mkennedy?featured_on=talkpython
PyCharm
Expanding Our Core Web Development Support in PyCharm 2026.1
With PyCharm 2026.1, our core IDE experience continues to evolve as we’re bringing a broader set of professional-grade web tools to all users for free. Everyone, from beginners to backend-first developers, is getting access to a substantial set of JavaScript, TypeScript, and CSS features that were previously only available with a Pro subscription.
React, JavaScript, TypeScript, and CSS support
Leverage a comprehensive set of editing and formatting tools for modern web languages within PyCharm, including:
- Basic React support with code completion, component and attribute navigation, and React component and prop rename refactorings.
- Advanced import management:
- Enjoy automatic JavaScript and TypeScript imports as you work.
- Merge or remove unnecessary references via the Optimize imports feature.
- Get required imports automatically when you paste code into the editor.
- Enhanced styling: Access CSS-tailored code completion, inspections, and quick-fixes, and view any changes in real time via the built-in web preview.
- Smart editor behavior: Utilize smart keys, code vision inlay hints, and postfix code completions designed for web development.
Navigation and code intelligence
Finding your way around web projects is now even more efficient with tools that allow for:
- Pro-grade navigation: Use dedicated gutter icons for Jump to… actions, recursive calls, and TypeScript source mapping.
- Core web refactorings: Perform essential code changes with reliable Rename refactorings and actions (Introduce variable, Change signature, Move members, and more).
- Quality control: Maintain high code standards with professional-grade inspections, intentions, and quick-fixes.
- Code cleanup: Identify redundant code blocks through JavaScript and TypeScript duplicate detection.
Frameworks and integrated tools
With the added essential support for some of the most popular frontend frameworks and tools, you will have access to:
- Project initialization: Create new web projects quickly using the built-in Vite generator.
- Standard tooling: Standardize code quality with integrated support for Prettier, ESLint, TSLint, and StyleLint.
- Script management: Discover and execute npm scripts directly from your package.json.
- Security: Check project dependencies for security vulnerabilities.
We’re excited to bring these tried-and-true features to the core PyCharm experience for free! We’re certain these tools will help beginners, students, and hobbyists tackle real-world tasks within a single, powerful IDE. Best of all, core PyCharm can be used for both commercial and non-commercial projects, so it will grow with you as you move from learning to professional development.


