
Planet Python

Last update: December 02, 2020 07:47 AM UTC

December 02, 2020

Marius Gedminas

Switching to GitHub Actions

I am grateful to Travis CI for providing many years of free CI service to all of my FOSS projects. However, the free lunch is over, and I don’t want to constantly ask for free build credits by email (the first 10,000 ran out in 10 days).

I’ve chosen to migrate to GitHub Actions. There are already helpful resources about this:

In fact, part of the difficulty with the migration is that there’s too much documentation available! And all the examples do things slightly differently.

So here’s a bunch of tips and discoveries from converting over 20 repositories:

If you’re curious, you can look at my workflow file for findimports (very simple test suite, no linters) or workflow file for project-summary (tox, two matrices), or workflow file for check-manifest (a real 2D matrix of python-version × version control system).

December 02, 2020 07:23 AM UTC


📖 👆🏻 Making the Printed Links Clickable Using TensorFlow 2 Object Detection API

📃 TL;DR In this article we will start solving the issue of making printed links (e.g. in a book or in a magazine) clickable via your smartphone camera. We will use TensorFlow 2 Object Detection...

December 02, 2020 06:55 AM UTC


Python Docstrings

In this tutorial, we will learn about Python docstrings. More specifically, we will learn how and why docstrings are used with the help of examples.

December 02, 2020 05:33 AM UTC

December 01, 2020

Stack Abuse

Seaborn Bar Plot - Tutorial and Examples


Seaborn is one of the most widely used data visualization libraries in Python, as an extension to Matplotlib. It offers a simple, intuitive, yet highly customizable API for data visualization.

In this tutorial, we'll take a look at how to plot a Bar Plot in Seaborn.

Bar graphs display numerical quantities on one axis and categorical variables on the other, letting you see how many occurrences there are for the different categories.

Bar charts can be used for visualizing a time series, as well as just categorical data.

Plot a Bar Plot in Seaborn

Plotting a Bar Plot in Seaborn is as easy as calling the barplot() function, passing in the categorical and continuous variables that we'd like to visualize.

import matplotlib.pyplot as plt
import seaborn as sns


x = ['A', 'B', 'C']
y = [1, 5, 3]

sns.barplot(x, y)

Here, we've got a few categorical variables in a list - A, B and C. We've also got a couple of continuous variables in another list - 1, 5 and 3. The relationship between these two is then visualized in a Bar Plot by passing these two lists to sns.barplot().

This results in a clean and simple bar graph:

basic bar plot in seaborn

Though, more often than not, you'll be working with datasets that contain much more data than this. Sometimes, operations are applied to this data, such as averaging or counting certain occurrences.

Whenever you're dealing with means of data, there's some margin of error to account for. Thankfully, Seaborn has us covered and applies error bars automatically, since by default it calculates the mean of the data we provide.

Let's import the classic Titanic Dataset and visualize a Bar Plot with data from there:

import matplotlib.pyplot as plt
import seaborn as sns

# Import Data
titanic_dataset = sns.load_dataset("titanic")

# Construct plot
sns.barplot(x = "sex", y = "survived", data = titanic_dataset)

This time around, we've assigned x and y to the sex and survived columns of the dataset, instead of the hard-coded lists.

If we print the head of the dataset:

print(titanic_dataset.head())
We're greeted with:

   survived  pclass     sex   age  sibsp  parch     fare  ...
0         0       3    male  22.0      1      0   7.2500  ...
1         1       1  female  38.0      1      0  71.2833  ...
2         1       3  female  26.0      0      0   7.9250  ...
3         1       1  female  35.0      1      0  53.1000  ...
4         0       3    male  35.0      0      0   8.0500  ...

[5 rows x 15 columns]

Make sure you match the names of these features when you assign x and y variables.

Finally, we use the data argument and pass in the dataset we're working with, from which the features are extracted. This results in:

plot bar plot from dataset in seaborn

Plot a Horizontal Bar Plot in Seaborn

To plot a Bar Plot horizontally, instead of vertically, we can simply switch the places of the x and y variables.

This will make the categorical variable be plotted on the Y-axis, resulting in a horizontal plot:

import matplotlib.pyplot as plt
import seaborn as sns

x = ['A', 'B', 'C']
y = [1, 5, 3]

sns.barplot(y, x)

This results in:

plot horizontal bar plot seaborn

Going back to the Titanic example, this is done in much the same way:

import matplotlib.pyplot as plt
import seaborn as sns

titanic_dataset = sns.load_dataset("titanic")

sns.barplot(x = "survived", y = "class", data = titanic_dataset)

Which results in:

plot horizontal bar plot of dataset seaborn

Change Bar Plot Color in Seaborn

Changing the color of the bars is fairly easy. The color argument accepts a Matplotlib color and applies it to all elements.

Let's change them to blue:

import matplotlib.pyplot as plt
import seaborn as sns

x = ['A', 'B', 'C']
y = [1, 5, 3]

sns.barplot(x, y, color='blue')

This results in:

change bar plot color in seaborn

Or, better yet, you can set the palette argument, which accepts a wide variety of palettes. A pretty common one is hls:

import matplotlib.pyplot as plt
import seaborn as sns

titanic_dataset = sns.load_dataset("titanic")

sns.barplot(x = "embark_town", y = "survived", palette = 'hls', data = titanic_dataset)

This results in:

set color palette in seaborn bar plot

Plot Grouped Bar Plot in Seaborn

Grouping Bars in plots is a common operation. Say you wanted to compare some common data, like, the survival rate of passengers, but would like to group them with some criteria.

Say, we want to visualize the relationship of passengers who survived, segregated into classes (first, second and third), but also factor in which town they embarked from.

This is a fair bit of information in a plot, and it can easily all be put into a simple Bar Plot.

To group bars together, we use the hue argument. Technically, as the name implies, the hue argument tells Seaborn how to color the bars, but in the coloring process, it groups together relevant data.

Let's take a look at the example we've just discussed:

import matplotlib.pyplot as plt
import seaborn as sns

titanic_dataset = sns.load_dataset("titanic")

sns.barplot(x = "class", y = "survived", hue = "embark_town", data = titanic_dataset)

This results in:

plot grouped bar plot in seaborn

Now, the error bars on the Queenstown data are pretty large. This indicates that the data on passengers who survived and embarked from Queenstown varies a lot for the first and second class.

Ordering Grouped Bars in a Bar Plot with Seaborn

You can change the order of the bars from the default order (whatever Seaborn thinks makes most sense) into something you'd like to highlight or explore.

This is done via the order argument, which accepts a list of the values and the order you'd like to put them in.

For example, so far, it ordered the classes from the first to the third. What if we'd like to do it the other way around?

import matplotlib.pyplot as plt
import seaborn as sns

titanic_dataset = sns.load_dataset("titanic")

sns.barplot(x = "class", y = "survived", hue = "embark_town", order = ["Third", "Second", "First"], data = titanic_dataset)

Running this code results in:

ordering grouped bar plots in seaborn

Change Confidence Interval on Seaborn Bar Plot

You can also easily fiddle around with the confidence interval by setting the ci argument.

For example, you can turn it off by setting it to None, use the standard deviation instead of the mean by setting it to "sd", or even put a cap size on the error bars for aesthetic purposes by setting capsize.

Let's play around with the confidence interval attribute a bit:

import matplotlib.pyplot as plt
import seaborn as sns

titanic_dataset = sns.load_dataset("titanic")

sns.barplot(x = "class", y = "survived", hue = "embark_town", ci = None, data = titanic_dataset)

This now removes our error bars from before:

change confidence interval of error bars in seaborn

Or, we could use standard deviation for the error bars and set a cap size:

import matplotlib.pyplot as plt
import seaborn as sns

titanic_dataset = sns.load_dataset("titanic")

sns.barplot(x = "class", y = "survived", hue = "who", ci = "sd", capsize = 0.1, data = titanic_dataset)

remove error bars from seaborn bar plot


In this tutorial, we've gone over several ways to plot a Bar Plot using Seaborn and Python. We've started with simple plots, and horizontal plots, and then continued to customize them.

We've covered how to change the colors of the bars, group them together, order them and change the confidence interval.

If you're interested in Data Visualization and don't know where to start, make sure to check out our book on Data Visualization in Python.

Data Visualization in Python, a book for beginner to intermediate Python developers, will guide you through simple data manipulation with Pandas, cover core plotting libraries like Matplotlib and Seaborn, and show you how to take advantage of declarative and experimental libraries like Altair.

Data Visualization in Python

Understand your data better with visualizations! With over 275 pages, you'll learn the ins and outs of visualizing data in Python with popular libraries like Matplotlib, Seaborn, Bokeh, and more.

December 01, 2020 07:36 PM UTC

PyCoder’s Weekly

Issue #449 (Dec. 1, 2020)

#449 – DECEMBER 1, 2020
View in Browser »

The PyCoder’s Weekly Logo

Unravelling not in Python

In the next blog post in his series about Python’s syntactic sugar, Brett Cannon tackles what would seem to be a very simple bit of syntax, but which actually requires diving into multiple layers to fully implement: not.
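Brett’s series unravels syntax into the underlying data-model machinery; as a rough, hedged sketch (not his exact derivation), `not x` boils down to inverting the truth value that `bool()` computes via `__bool__` (falling back to `__len__`):

```python
def not_(x):
    # `not x` roughly desugars to inverting x's truth value,
    # which Python obtains via __bool__ (falling back to __len__).
    return False if bool(x) else True

print(not_([]))   # True: an empty list is falsy
print(not_("a"))  # False: a non-empty string is truthy
```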

How Python Manages Memory

Get ready for a deep dive into the internals of Python to understand how it handles memory management. By the end of this course, you’ll know more about low-level computing, understand how Python abstracts lower-level operations, and find out about Python’s internal memory management algorithms.

Python Developers Are in Demand on Vettery


Get discovered by top companies using Vettery to actively grow their tech teams with Python developers (like you). Here’s how it works: create a profile, name your salary, and connect with hiring managers at startups to Fortune 500 companies. Sign up today - it’s completely free for job-seekers →
VETTERY sponsor

PyQt Layouts: Create Professional-Looking GUI Applications

In this step-by-step tutorial, you’ll learn how to use PyQt layouts to arrange and manage the graphical components on your GUI applications. With the help of PyQt’s layout managers, you’ll be able to create polished and professional GUIs with minimal effort.

Python Type Checking

What is type checking? Why do we need it? What’s the difference between static and runtime type checking?

Django Bugfix Release: 3.1.4

The π release!

pip 20.3 Is Out Featuring a New Dependency Resolver

PYTHON SOFTWARE FOUNDATION • Shared by Sumana Harihareswara


Why Does Python Detect the Symbol “²” as a Digit?

TIL that .isdigit() works with superscripts
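The behavior is easy to verify in a REPL: `str.isdigit()` is true for any character with the Unicode digit property, even though `int()` only accepts decimal digits:

```python
import unicodedata

s = "\u00b2"  # "²", SUPERSCRIPT TWO
print(s.isdigit())           # True: it carries the Unicode digit property
print(s.isdecimal())         # False: so int(s) would still raise ValueError
print(unicodedata.digit(s))  # 2: its numeric value per the Unicode database
```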

Weird Scoping Behavior in Python


Python Jobs

Advanced Python Engineer (Newport Beach, CA, USA)

Research Affiliates

Python Developer / Software Engineer (Berlin, Germany)

Thermondo GmbH

Application Support Engineer (Newark, NJ, USA)

Access Staffing

Senior Backend Engineer (Remote)


More Python Jobs >>>

Articles & Tutorials

Building a Numerical Integration Tool in Python From Scratch

What do you do when you come across an integral that you know has a solution but can’t be solved with SciPy? Build your own numerical integration tool, of course!

Teaching Python and Finding Resources for Students

One of the best ways to learn something well is to teach it. Kelly Schuster-Paredes and Sean Tibor teach middle school students how to code. On this episode Kelly and Sean talk about the art and science of teaching Python, as well as the learning resources they use with their students.

Use Distributed Tracing in Python Apps with OpenTelemetry


Join Ted Young in this hands-on code walkthrough covering OpenTelemetry installation and instrumentation in Python, and how to ensure your organization delivers the best value from distributed tracing →

The Unholy Way of Using Virtual Environments

If you’ve used virtual environments before, you may have created a venv/ folder inside the root directory of your project. This is standard, but has some downsides. Have you ever thought about reversing this and putting your project inside your venv/ folder?
BHUPESH VARSHNEY • Shared by Bhupesh Varshney opinion

Writing iTerm2 Python Scripts

iTerm2 has a plugin system that allows you to write Python scripts that terminal programs can take advantage of. Learn how to do this by writing two fun scripts: one that automatically sets iTerm2 to dark mode and one that plays a sound in your terminal.

np.linspace(): Create Evenly or Non-Evenly Spaced Arrays

In this tutorial, you’ll learn how to use NumPy’s np.linspace() effectively to create an evenly or non-evenly spaced range of numbers. You’ll explore several practical examples of the function’s many uses in numerical applications.
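For a quick taste of the function the tutorial covers:

```python
import numpy as np

# Five evenly spaced samples from 0 to 1, endpoints included by default
print(np.linspace(0, 1, num=5))  # [0.   0.25 0.5  0.75 1.  ]

# retstep=True additionally returns the spacing between samples
samples, step = np.linspace(0, 10, num=5, retstep=True)
print(step)  # 2.5
```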

Spend Less Time Debugging, and More Time Building with Scout APM

Scout APM uses tracing logic that ties bottlenecks to source code to give you the performance insights you need in less than 4 minutes! Start your free 14-day trial today and Scout will donate $5 to the OSS of your choice when you deploy.
SCOUT APM sponsor

Ruby on Rails vs. Django in 2020 and Beyond

Ruby on Rails and Django are two of the best and most popular web development frameworks out there. How do you choose the right one for your new project in 2020 and beyond?
CATALIN IONESCU • Shared by Catalin

Introducing FARM Stack: FastAPI, React, and MongoDB

There’s a new webdev stack in town: the FARM stack. It’s sort of like MERN because it uses React and MongoDB, but it replaces Node.js and Express with Python and FastAPI.

Building a Pandoc Filter in Python That Turns CSV Data Into Formatted Tables

Learn how to create nicely formatted Markdown tables from CSV data using the powerful pandoc tool.

Projects & Code

traceback_with_variables: Adds Variables to Python Tracebacks


Packet-Sniffer: A Pure-Python Network Packet Sniffing Tool


vpype: The Swiss-Army-Knife Command-Line Tool for Vector Graphics


Cyberbrain: Python Debugging, Redefined

GITHUB.COM/LAIKE9M • Shared by laike9m

Spylls: Pure Python Spell-Checker

GITHUB.COM/ZVEROK • Shared by Victor Shepelev

Microdic: High Performance Typed Hash Table Library for Python

GITHUB.COM/TOUQIR14 • Shared by Touqir Sajed


Real Python Office Hours (Virtual)

December 2, 2020

Pyjamas Conf 2020 (Online)

December 5 to December 7, 2020

PyCode Conference 2020 (Online)

December 11 to December 13, 2020

Happy Pythoning!
This was PyCoder’s Weekly Issue #449.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

December 01, 2020 07:30 PM UTC


Webinar: “Let’s build a fast, modern Python API with FastAPI” with Michael Kennedy

Want to build rich, modern, Pythonic REST services? Friend-of-the-webinar (1, 2, 3, 4, 5) Michael Kennedy joins us to discuss FastAPI, the "high performance, easy to learn, fast to code, ready for production" web framework. He will be showing examples from his new Modern APIs with FastAPI training course.

Speaking To You

Michael Kennedy is the host of Talk Python to Me and co-host of Python Bytes podcasts. He is also the founder of Talk Python training and a Python Software Foundation fellow. Michael has a PyCharm course and is co-author of the book Effective PyCharm. Michael has been working in the developer field for more than 20 years and has spoken at numerous conferences.

December 01, 2020 05:22 PM UTC

Real Python

How Python Manages Memory

Ever wonder how Python handles your data behind the scenes? How are your variables stored in memory? When do they get deleted?

In this course, we’re going to do a deep dive into the internals of Python to understand how it handles memory management.
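As a small, hedged preview of what “behind the scenes” means in CPython specifically: objects carry a reference count, which you can observe with sys.getrefcount (note it reports one extra reference for its own argument):

```python
import sys

x = []                       # one name referring to a fresh list
before = sys.getrefcount(x)  # typically 2 in CPython: `x` plus the call's argument
y = x                        # bind a second name to the same object
after = sys.getrefcount(x)
print(after - before)        # 1: one additional reference from `y`
del y                        # the object survives; `x` still refers to it
del x                        # last reference dropped; CPython reclaims the list
```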

By the end of this course, you’ll:

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

December 01, 2020 02:00 PM UTC

Stack Abuse

Simple NLP in Python with TextBlob: N-Grams Detection


The constant growth of data on the Internet creates a demand for tools that can process textual information quickly and with minimal effort from the ordinary user.

Moreover, it's important that such a tool can handle both low- and high-level NLP tasks, such as counting word frequencies, calculating the sentiment of texts, or detecting patterns in the relationships between words.

TextBlob is a great lightweight library for a wide variety of NLP tasks.

In this tutorial, we will shed some light on how to perform N-gram detection in Python using TextBlob.

What are N-Grams?

N-grams represent a contiguous sequence of N elements from a given text. In broad terms, such items do not necessarily stand for strings of words; they can also be phonemes, syllables or letters, depending on what you'd like to accomplish.

However, in Natural Language Processing, N-grams most commonly refer to sequences of words, where N stands for the number of words you are looking for.

The following types of N-grams are usually distinguished:

N-grams found their primary application in probabilistic language models, where they estimate the probability of the next item in a word sequence.

This approach to language modeling assumes a tight relationship between the position of each element in a string, predicting the occurrence of the next word based on the N-1 words that precede it.

For instance, a trigram model (with N = 3) will predict the next word in a string based on the preceding N-1 = 2 words.
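Before reaching for a library, the sliding-window idea behind N-grams can be sketched in a few lines of plain Python (a toy helper, not TextBlob's implementation):

```python
def ngrams(words, n):
    # Slide a window of size n across the word list
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

words = "Technology is best when it brings people together".split()
print(ngrams(words, 3)[:2])
# [('Technology', 'is', 'best'), ('is', 'best', 'when')]
```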

Another industrial application of N-gram models is plagiarism detection, where N-grams obtained from two different texts are compared with each other to figure out the degree of similarity between the analysed documents.

N-gram Detection in Python Using TextBlob

Analysis of a Sentence

To start detecting N-grams in Python, you will first have to install the TextBlob package. Note that this library is applicable for both Python 2 and Python 3.

We'll also want to download the required text corpora for it to work with:

$ pip install -U textblob 
$ python -m textblob.download_corpora

Once the environment is set up, you are ready to load the package and compute N-grams in a sample sentence. To begin, we will look at the N-grams in a quote from M. Mullenweg: Technology is best when it brings people together.

Let's get started:

from textblob import TextBlob

# Sample sentence for N-gram detection
sentence = "Technology is best when it brings people together"

We've created a sentence string containing the sentence we want to analyze. Next, we pass that string to the TextBlob constructor, injecting it into the TextBlob instance that we'll run operations on:

ngram_object = TextBlob(sentence)

Now, let's run N-gram detection. For starters, let's do 2-gram detection. This is specified in the argument list of the ngrams() function call:

ngrams = ngram_object.ngrams(n=2) # Computing Bigrams

The ngrams() function returns a list of tuples of n successive words. In our sentence, a bigram model will give us the following set of strings:

[WordList(['Technology', 'is']), 
WordList(['is', 'best']), 
WordList(['best', 'when']), 
WordList(['when', 'it']), 
WordList(['it', 'brings']), 
WordList(['brings', 'people']), 
WordList(['people', 'together'])]

Document Analysis

Despite the simple nature of this Python library, TextBlob also provides a range of advanced features for analysis. More often than not, we aren't working with single sentences for N-gram detection. It's much more common to work with documents, articles or larger corpora.

In our next example, we will use an article from the CNBC news portal regarding Bill Gates.

Let's create a text document and call it something along the lines of Input.txt for the next analysis:

# Opening and reading the `Input.txt` file
with open("Input.txt") as f:
    corpus = f.read()

Then, as usual, we'll instantiate a TextBlob instance, by passing the corpus to the constructor, and run the ngrams() function:

ngram_object = TextBlob(corpus)
trigrams = ngram_object.ngrams(n=3) # Computing Trigrams

This will print out the Trigrams of the content we've provided. However, note that the output can differ depending on the approach you apply to handle punctuation marks:

[WordList(['Bill', 'Gates', 'says']), 
WordList(['Gates', 'says', 'that']),
WordList(['says', 'that', 'antitrust']), 
WordList(['that', 'antitrust', 'regulators']),
WordList(['antitrust', 'regulators', 'should'])]

In comparison, bigram analysis of the given article will provide us with a different list:

ngram_object = TextBlob(corpus)
Bigram = ngram_object.ngrams(n=2) # Computing Bigrams

A snippet from the output:

[WordList(['Bill', 'Gates']),
WordList(['Gates', 'says']),
WordList(['says', 'that']),
WordList(['that', 'antitrust']),
WordList(['antitrust', 'regulators'])]


N-Grams detection is a simple and common task in a lot of NLP projects. In this article, we've gone over how to perform N-Gram detection in Python using TextBlob.

December 01, 2020 01:30 PM UTC


Announcing PyCon US 2021

We are so excited to present PyCon US 2021 as a virtual event you won’t want to miss!

With the number of COVID-19 cases steadily increasing and the uncertainty around large in-person gatherings, we made the decision to go virtual, with the safety of our community being our number one priority. We will all miss meeting in-person and seeing one another in the hallway or in the expo hall, but we’re looking forward to making 2021 a year to remember in a different way.

Planning PyCon US has taken us to a whole new world of virtual options. Our staff and volunteers are working hard to bring you a great event for you to enjoy. As well as multiple days of informative talks and tutorials and exciting news from our sponsors, you’ll have ways to join in on open space conversations or a hallway track, and maybe even a virtual 5K run. This can all be done from the comfort of your favorite chair – with the exception of the 5K!

We can imagine that the anticipation has been increasing for the PyCon US 2021 website to launch. Well, the time has come! Presenting to you: the new look for the new event!

The conference will be held on the same dates originally scheduled, May 12-15, 2021. Sprints will be held May 16-18, 2021.

Be sure to create an account on the website and opt-in for PyCon News to receive email updates and announcements. We’ll also continue updating the website with details, launches, and timelines as we finalize our plans. Watch for Call for Proposals to be launching in the next couple weeks. Even though we are having a virtual event, volunteer support will be needed but in different ways, more details and volunteer sign-up forms will be available soon.

The Python Software Foundation supports the global Python community with revenue from PyCon US every year. The financial impact of cancelling PyCon US 2020 in-person and now the need to hold 2021 virtually will cause a potential revenue loss of about $1,200,000.00 for the Python Software Foundation. As in previous years for in-person events, we will need to charge a nominal fee for tickets to attend PyCon US 2021 to reduce this revenue loss. As always, we’ll have financial assistance available for people who need it to attend. 

The PSF has unveiled a ‘new’ Sponsorship program in order to offer organizations the ability to continue community support across many platforms including PyCon US. The program allows for a variety of customization options. You can find more details by visiting the PSF Sponsorship page.

We look forward to sharing more details for PyCon US 2021 as they become finalized.

December 01, 2020 12:08 PM UTC

Python Pool

Numpy Tile in Python With Examples

Numpy has a function that should remind you of a tile floor. In fact, it’s called Numpy Tile. Np Tile is a pretty significant function that allows you to take a matrix and tile it as many times as you want. So let’s get into this cool Numpy Tile function in Python.

Numpy tile (np.tile) in Python simply repeats the elements of an array. Suppose you have a numpy array [4, 5, 6, 7]; then np.tile([4, 5, 6, 7], 3) will duplicate the elements three times and make a new array. We already know numpy always returns an array, even if you give it a list.
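That description maps directly onto code:

```python
import numpy as np

arr = np.array([4, 5, 6, 7])
print(np.tile(arr, 3))  # the whole array repeated three times
# [4 5 6 7 4 5 6 7 4 5 6 7]
```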

Syntax of Numpy Tile

numpy.tile(A, reps)

Numpy.Tile Parameters


Here, the parameter ‘A’ is of array_like type; it takes the input array that will be tiled.


The reps parameter allows you to indicate how to repeat the input array. That includes the number of times to repeat it, but also the construction of this tiled output.

Return Type

c: ndarray

The tiled output array.

Examples of Numpy Tile Function

NumPy tile is not difficult to comprehend, but it may be a bit confusing at first. That is why we're going to begin with straightforward examples and then gradually increase the sophistication. In the later examples, you will start to get more of a feel for how numpy.tile functions.

Example 1: Basic Numpy Tile Example

In this basic example we will simply multiply an array using the tile() function.

import numpy as np
arr = np.array([0, 1, 2])
c = np.tile(arr, 2)
print(c)

[0 1 2 0 1 2]


In the above straightforward example, we first imported numpy. After that, we initialized and declared a numpy array with the values ‘[0, 1, 2]’. Subsequently, we used the np.tile() function to repeat the array. Here, the parameters are ‘(arr, 2)’, so the array is repeated two times, as we can see in the output.

Example 2: Numpy Tile Vertically

In this case, we are going to take a simple 1-dimensional input and tile it downwards. Basically, we treat the input like a row of data and replicate that row.

import numpy as np
arr = np.array([0, 1, 2])
reps = (3, 1)
c = np.tile(arr, reps)
print(c)

[[0 1 2]
 [0 1 2]
 [0 1 2]]


In the above example, we have first imported the np module. After that, like example 1, we have initialized and assigned an array to the variable ‘arr’. In this example, we have added an extra line, ‘reps = (3,1)’. Here we are creating tiles in the vertical direction, so we set the column count to 1, while the row count can be increased as much as required.

So, after setting the rows and columns in reps, we simply used the function ‘np.tile()’, as we did in example 1. After that, we printed the return value and got our desired result.

Example 3: Tiling Input Array Both Vertically And Horizontally

Let’s move to the third example. In this example, we will tile a 1-D array into a matrix: the tile will repeat the elements of the array in both the horizontal and vertical directions, forming a 2-D matrix.

import numpy as np
arr = np.array([0, 1, 2])
reps = (3, 2)
c = np.tile(arr, reps)
print(c)

[[0 1 2 0 1 2]
 [0 1 2 0 1 2]
 [0 1 2 0 1 2]]


In the above example, we have first imported the required np module. After that, we declared and initialized a numpy array. Following the declaration, we stored the value ‘(3,2)’ in reps. Here we are creating tiles in both the horizontal and vertical directions, so we have taken row = 3 and column = 2.

Subsequently, we passed the arguments array and reps in the np.tile function. At last, we printed the return value and got our result.

Example 4: Numpy Tile Example in 2-D Array

Enough of the 1-D array examples, in the last example we will work with the 2-D array. And how to operate with this type of array.

So, let’s directly move to the example.

import numpy as np
arr = np.array([[0, 1], [2, 3]])
reps = (2, 2)
c = np.tile(arr, reps)
print(c)

[[0 1 0 1]
 [2 3 2 3]
 [0 1 0 1]
 [2 3 2 3]]


In the above example, we have done the initial steps similar to examples 1, 2 and 3. But here, instead of initializing a 1-D array, we initialized a 2-D array with the values [[0, 1], [2, 3]]. Following the initialization, we stored (2, 2) in the ‘reps’ variable, which will be our repetition parameter for the np.tile() function. After that, we used the function and tiled the 2-D array according to our requirement.

Applications on Numpy Tile


In conclusion, we can say the tile() function can be used to build an array by replicating the number of instances given by reps.
And we can use it for multidimensional arrays. In all honesty, it is a bit harder to imagine and reason about higher dimensions, so for the sake of simplicity I am not going to describe them in much detail. Just realize that np.tile does operate in more than two dimensions.
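For instance, passing a 3-tuple as reps produces a 3-D result: NumPy first promotes the 1-D input to shape (1, 1, 2), then tiles each axis.

```python
import numpy as np

arr = np.array([0, 1])
c = np.tile(arr, (2, 2, 2))  # tile along three axes
print(c.shape)  # (2, 2, 4)
```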

However, if you have any doubts or questions do let me know in the comment section below. I will try to help you as soon as possible.

Happy Pythoning!

The post Numpy Tile in Python With Examples appeared first on Python Pool.

December 01, 2020 11:52 AM UTC

Ian Ozsvald

Skinny Pandas Riding on a Rocket at PyDataGlobal 2020

On November 11th we saw the most ambitious ever PyData conference – PyData Global 2020 was a combination of world-wide PyData groups putting on a huge event to both build our international community and to leverage the on-line only conferences that we need to run during Covid 19.

The conference brought together almost 2,000 attendees from 65 countries with 165 speakers over 5 days on a 5-track schedule. All speaker videos had to be uploaded in advance so they could be checked and then provided ahead-of-time to attendees. You can see the full program here, the topic list was very solid since the selection committee had the best of the international community uploading their proposals.

The volunteer organising committee felt that giving attendees a chance to watch all the speakers at their leisure took away constraints of time zones – but we wanted to avoid the common end result of “watching a webinar” that has plagued many other conferences this year. Our solution included timed (and repeated) “watch parties” so you could gather to watch the video simultaneously with others, and then share discussion in chat rooms. The volunteer organising committee also worked hard to build a “virtual 2D world” with Gather – you walk around a virtual conference space (including the speakers’ rooms, an expo hall, parks, a bar, a helpdesk and more). Volunteer Jesper Dramsch made a very cool virtual tour of “how you can attend PyData Global” which has a great demo of how Gather works – it is worth a quick watch. Other conferences should take note.

Through Gather you could “attend” the keynote and speaker rooms during a watch-party and actually see other attendees around you, you could talk to them and you could watch the video being played. You genuinely got a sense that you were attending an event with others, that’s the first time I’ve really felt that in 2020 and I’ve presented at 7 events this year prior to PyDataGlobal (and frankly some of those other events felt pretty lonely – presenting to a blank screen and getting no feedback…that’s not very fulfilling!).

I spoke on “Skinny Pandas Riding on a Rocket” – a culmination of ideas covered in earlier talks with a focus on getting more into Pandas so you don’t have to learn new technologies and see Vaex, Dask and SQLite in action if you do need to scale up your Pythonic data science.

I also organised another “Executives at PyData” session aimed at getting decision makers and team leaders into a (virtual) room for an hour to discuss pressing issues. Given 6 iterations of my “Successful Data Science Projects” training course in London over the last 1.5 years I know of many issues that repeatedly come up that plague decision makers on data science teams. We got to cover a set of issues and talk on solutions that are known to work. I have a fuller write-up to follow.

The conference also enabled a “pay what you can” model for those attending outside of a corporate ticket, this brought in a much wider audience that could normally attend a PyData conference. The goal of the non-profit NumFOCUS (who back the PyData global events) is to fund open source so the goal is always to raise more money and to provide a high quality educational and networking experience. For this on-line global event we figured it made sense to open out the community to even more folk – the “pay what you can” model is regarded as a success (this is the first time we’ve done it!) and has given us some interesting attendee insights to think on.

There are definitely some lessons to learn, notably the on-boarding process was complex (3 systems had to be activated) – the volunteer crew wrote very clear instructions but nonetheless it was a more involved process than we wanted. This will be improved in the future.

I extend my thanks to the wider volunteer organising committee and to NumFOCUS for making this happen!

Ian is a Chief Interim Data Scientist via his Mor Consulting. Sign-up for Data Science tutorials in London and to hear about his data science thoughts and jobs. He lives in London, is walked by his high energy Springer Spaniel and is a consumer of fine coffees.

The post Skinny Pandas Riding on a Rocket at PyDataGlobal 2020 appeared first on Entrepreneurial Geekiness.

December 01, 2020 11:14 AM UTC

Python Software Foundation

Help the PSF raise $60,000 USD by December 31st!

Help us raise $60,000 USD by December 31st!

Python continues to be a popular and accessible language in the education sector. More and more institutions around the world are introducing students to the opportunities that Python presents. Your support can spread that reach even further.

The theme of our fundraiser this year is geared toward education. We're excited to collaborate with authors, trainers, and companies that offer their services all over the world to raise money for the PSF. In addition, it will help grow a diverse and international community of Python programmers.

Pythonistas can get discounted offers on products and services to level up their Python knowledge, and help the PSF raise funds at the same time! Visit the fundraiser home page to see more details or click on the logos below to see individual offers.

Participating Companies

No purchase is necessary to participate in the fundraiser. You can donate directly here so that 100% of your donation goes to the PSF.

Financial Impacts of 2020 and 2021

COVID-19 has changed all aspects of our lives and is reshaping our future. PyCon US typically generates over 65% of the PSF’s revenue. With PyCon US 2020 and 2021 happening virtually, the PSF may lose $1.2 million USD of expected revenue for those two years.

In 2019, the strategic plan for the PSF shifted towards supporting CPython sustainability with plans to hire three full time Pythonistas to address maintenance, R&D, and education. With lower expected income from PyCon US for two years, we need sponsorship support to make these plans a reality. Other 2020 plans put on hold include improving support and continuing more grants program funding.

The PSF is researching ways we can diversify our revenue streams, but cannot replace the near-term loss of $1.2 million USD. Your support is more important than ever! Read more about what the PSF has been up to here.

December 01, 2020 10:46 AM UTC

Tryton News

Newsletter December 2020

Development restarted straight after release 5.8, as you can see with these changes.

Changes for the User

You can now deactivate complaint types. This is useful when the company has stopped using certain complaint types.

The aged balance report now supports more units of time (“week” and “year”) for the terms. Also the terms are updated to sensible standard values when the unit is changed.

The commission date is now based on the invoice date or the payment date.

We have unified the PYSON format in all the clients. They do not necessarily generate exactly the same strings but they can be copied between clients.

On small screens the tabs on the list view could take too much space on the screen. Now they are forced into a single line with a scrollbar.
Also on small screens we no longer display the next/previous buttons so there is now more space available for useful information.

We added a relate from parties to their drop shipments like we have for other shipments.
The effective date of the drop shipment can now be set manually. This is useful if adding them afterwards.

We added a button on the product category that lets you add/remove lots of products from it easily. This is useful when a new category is created and you already have lots of products in the system. Using this you no longer need to edit each product in turn, but can instead add all the selected products in one go.

When invoicing projects based on a time-sheet, it is common that you only want to invoice up to a particular date (for example the end of the month). We’ve added to these projects a date field that limits which time-sheet lines get selected when creating the invoice.

Changes for the Developer

Tryton now makes sure that char fields do not contain white space characters, except for normal spaces. Other white space characters can be confusing for users when they are searching, as they are not visually distinct and web browsers replace them with normal spaces.

The caches for customer payment methods (Stripe and Braintree) no longer depend on the context. This increases the cache hit ratio for these values and so avoids unnecessary network requests for those services.

The server no longer sets the extra_files attribute for the werkzeug server if it is not running in developer mode. This reduces the startup time by a small amount.

1 post - 1 participant

Read full topic

December 01, 2020 09:00 AM UTC

Talk Python to Me

#293 Learning how to learn as a developer

As software developers, we live in a world of uncertainty and flux. Do you need to build a new web app? Well maybe using Django makes the most sense if you've been doing it for a long time. There is Flask, but it's more mix and match, being a microframework. But you've also heard that async and await are game changers and FastAPI might be the right choice.

Whatever it is you're building, there is constant pressure to stay on top of a moving target. Learning is not something you do in school and then get a job as a developer. No, it's a constant and critical part of your career. That's why we all need to be good, very good, at it.

Matt Harrison is back on Talk Python to talk to us about some tips, tricks, and even science about learning as software developers.

Links from the show

Matt on Twitter: @__mharrison__
Matt's Learning Course (use code TALKPYTHON20 for 20% off)

Friends of the show
Streamlit
Jupyter LSP

Sponsors

Brilliant
Linode
Talk Python Training

December 01, 2020 08:00 AM UTC

Python Pool

Numpy Squeeze in Python With Examples

Hello programmers, in this article, we will discuss the Numpy squeeze function in Python. The squeeze() function is used when we want to remove single-dimensional entries from the shape of an array. Whenever we want to change the shape of a three-dimensional array to a two-dimensional array, we make use of the squeeze() function in NumPy. Hence, the squeeze() function returns the input array with the subset of the dimensions having a length equal to one removed from the array. Before we cite examples to show the working of the numpy.squeeze() function, let me just brief you about the syntax, parameters, and return type of the same.

Syntax of Numpy squeeze

numpy.squeeze(a, axis=None)

Parameters of Numpy Squeeze:

Return type of Squeeze

This squeeze() function returns an output array similar to input array input, but with all or a subset of the dimensions of length 1 removed.

Example of Squeeze Function in Python

import numpy as np
a = np.array([[[0], [2], [4]]])
print(a.shape)

b = np.squeeze(a)
print(b.shape)


(1, 3, 1)
(3,)


The above example is a very basic implementation of the squeeze function. An array ‘a’ is created with the shape (1, 3, 1). On passing ‘a’ to the squeeze() function, its shape is returned as (3,), i.e., all the dimensions of length 1 are removed.

Numpy Squeeze for axis = 0

import numpy as np
a = np.arange(9).reshape(1, 3, 3)  
print ("Input array : ", a)   
b = np.squeeze(a , axis = 0)  
print ("output array : ", b)   
print("The shapes of Input and Output array : ")  
print(a.shape, b.shape) 


Input array :  [[[0 1 2]
  [3 4 5]
  [6 7 8]]]
output array :  [[0 1 2]
 [3 4 5]
 [6 7 8]]
The shapes of Input and Output array : 
(1, 3, 3) (3, 3)


In the above example, an array is defined using the numpy.arange() function and reshaped to (1, 3, 3). When this array is passed to the squeeze() function with axis = 0, the first axis (which has length 1) is removed while the number of elements stays the same. Hence the shape is returned as (3, 3). Passing an explicit axis only works when that axis has length 1; otherwise a ValueError is raised.
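To make the last point concrete (this example is not from the original article): NumPy refuses to squeeze an axis whose length is not 1, raising a ValueError rather than silently doing nothing.

```python
import numpy as np

a = np.arange(9).reshape(1, 3, 3)

# Axis 0 has length 1, so it can be squeezed away.
print(np.squeeze(a, axis=0).shape)  # (3, 3)

# Axis 1 has length 3, so squeezing it is an error.
try:
    np.squeeze(a, axis=1)
except ValueError:
    print("cannot squeeze an axis whose length is not 1")
```

This makes an explicit axis a useful safety check: you squeeze exactly the dimension you intend to, or you get an error.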

Matrix squeeze

import numpy as np
# make a matrix with numpy
a = np.matrix('[[4], [8]]')
# applying the matrix.squeeze() method
b = a.squeeze()
print(b)


[[4 8]]


With the help of Numpy matrix.squeeze() method, we are able to squeeze the size of a matrix. In this method, the Nx1 size of the input matrix is given out as a 1xN size of the output matrix. In the above example, a numpy matrix is defined using the np.matrix function. And then the numpy squeeze function is used to squeeze the matrix and give the output as [[ 4 8 ]] from the originally created matrix i.e., [ [4], [8] ].


In this article, we have seen the example and implementation of Numpy Squeeze in Python. We have also seen the use of Squeeze in a matrix as well. Refer to this article for any queries related to the Squeeze function.

The post Numpy Squeeze in Python With Examples appeared first on Python Pool.

December 01, 2020 06:41 AM UTC

Django Weblog

Django bugfix release: 3.1.4

Today we've issued the 3.1.4 bugfix release.

The release package and checksums are available from our downloads page, as well as from the Python Package Index. The PGP key ID used for this release is Mariusz Felisiak: 2EF56372BA48CD1B.

December 01, 2020 06:04 AM UTC


Open Sourcing The Anvil Full Stack Python Web App Platform - Episode 291

Building a complete web application requires expertise in a wide range of disciplines. As a result it is often the work of a whole team of engineers to get a new project from idea to production. Meredydd Luff and his co-founder built the Anvil platform to make it possible to build full stack applications entirely in Python. In this episode he explains why they released the application server as open source, how you can use it to run your own projects for free, and why developer tooling is the sweet spot for an open source business model. He also shares his vision for how the end-to-end experience of building for the web should look, and some of the innovative projects and companies that were made possible by the reduced friction that the Anvil platform provides. Give it a listen today to gain some perspective on what it could be like to build a web app.




  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Do you want to get better at Python? Now is an excellent time to take an online course. Whether you’re just learning Python or you’re looking for deep dives on topics like APIs, memory management, async and await, and more, our friends at Talk Python Training have a top-notch course for you. If you’re just getting started, be sure to check out the Python for Absolute Beginners course. It’s like the first year of computer science that you never took compressed into 10 fun hours of Python coding and problem solving. Go to today and get 10% off the course that will help you find your next level. That’s, and don’t forget to thank them for supporting the show.
  • Python has become the default language for working with data, whether as a data scientist, data engineer, data analyst, or machine learning engineer. Springboard has launched their School of Data to help you get a career in the field through a comprehensive set of programs that are 100% online and tailored to fit your busy schedule. With a network of expert mentors who are available to coach you during weekly 1:1 video calls, a tuition-back guarantee that means you don’t pay until you get a job, resume preparation, and interview assistance there’s no reason to wait. Springboard is offering up to 20 scholarships of $500 towards the tuition cost, exclusively to listeners of this show. Go to today to learn more and give your career a boost to the next level.
  • Your host as usual is Tobias Macey and today I’m interviewing Meredydd Luff about the process and motivations for releasing the Anvil platform as open source


  • Introductions
  • How did you get introduced to Python?
  • Can you start by giving an overview of what Anvil is and some of the story behind it?
    • What is new or different in Anvil since we last spoke in June of 2019?
  • What are the most common or most impressive use cases for Anvil that you have seen?
    • On your website you mention Anvil being used for deploying models and productionizing notebooks. How does Anvil help in those use cases?
  • How much of the adoption of Anvil do you attribute to the use of Skulpt and providing a way to write Python for the browser?
    • What are some of the complications that users might run into when trying to integrate with the broader Javascript ecosystem?
  • How does the release of the Anvil App Server affect your business model?
    • How does the workflow for users of the Anvil platform change if they decide to run their own instance?
    • What is involved in getting it deployed to production?
  • What other tools or companies did you look to for positive and negative examples of how to run a successful business based on open source?
  • What was your motivation for open sourcing the core runtime of Anvil?
    • What was involved in getting the code cleaned up and ready for a public release?
  • What are the other ways that your business relies on or contributes to the open source ecosystem?
  • What do you see as the primary threats to open source business models?
  • What are some of the most interesting, unexpected, or challenging lessons that you have learned while building and growing Anvil?
  • What do you have planned for the future of the platform and business?

Keep In Touch


Closing Announcements

  • Thank you for listening! Don’t forget to check out our other show, the Data Engineering Podcast for the latest on modern data management.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers
  • Join the community in the new Zulip chat workspace at


The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA

December 01, 2020 12:25 AM UTC

Daniel Bader

How to Use Python’s Print() Without Adding an Extra New Line

Here’s how you can avoid superfluous newlines when using the print function or statement in Python 2.x and 3.x

Let’s say you’re working on a Python script that prints out progress updates to the console while it does its work in order to keep the user informed.

Maybe you’ll want the output to look something like this:

Processing files .....

With another . dot printed every time a file has been processed. Now how would you implement this in Python?

If you try the following, you’ll get a botched output:

print('Processing files')
for i in range(5):
    print('.')


Processing files
.
.
.
.
.


As you can see, the . dot characters are each printed on a new line instead of forming one long consecutive line. How do you get Python to print them all on one line without newlines after each character?

Python 2.x and 3.x – sys.stdout.write()

import sys

print('Processing files')
for i in range(5):
    sys.stdout.write('.')


If you want to use sys.stdout.write for printing the other messages as well, you’ll need to make sure to add the newlines manually yourself. You can do this by adding a \n linefeed special character to your message strings, like so:

import sys

sys.stdout.write('Processing files\n')
for i in range(5):
    sys.stdout.write('.')


Depending on your terminal setup you might not see those dots printed immediately if you’re working with a bit of a delay between printing each character. If this happens you can call the sys.stdout.flush() function to ensure any pending characters are written to the console.
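In Python 3 you can sidestep manual flushing altogether by passing flush=True to print() — a small sketch that is not part of the original article:

```python
import time

print('Processing files', flush=True)
for i in range(5):
    print('.', end='', flush=True)  # each dot appears immediately
    time.sleep(0.1)  # simulate work on one file
print()  # finish the line of dots with a newline
```

With flush=True every dot is pushed to the console right away, even on terminals that would otherwise buffer the output.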

Python 3.x – print() function
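Here is the progress example reconstructed for Python 3, where the end keyword argument replaces the default trailing newline with an empty string:

```python
print('Processing files')
for i in range(5):
    print('.', end='')  # suppress the newline after each dot
print()  # finish the line of dots
```

This prints Processing files followed by all five dots on a single line.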

Python 2.x – print statement

Let’s say you wanted to print the following string of numbers in Python:

1, 2, 3, 4, 5
>>> for i in range(1, 6):
...     print(str(i) + ', ')

In Python 3.x, you can use the end argument to the print() function to prevent a newline character from being printed:

print("Nope, that is not a two. That is a", end="")

In Python 2.x, you can use a trailing comma:

print "this should be",
print "on the same line"

You don’t need this to simply print a variable, though:

print "Nope, that is not a two. That is a", x

Note that the trailing comma still results in a space being printed at the end of the line, i.e. it’s equivalent to using end=" " in Python 3. To suppress the space character as well, you can either use

from __future__ import print_function

to get access to the Python 3 print function or use sys.stdout.write().

[ Improve Your Python with Dan's 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

December 01, 2020 12:00 AM UTC

What Can You Do With Python?

According to StackOverflow, Python “has a solid claim to being the fastest-growing major programming language in the world”. It is clearly a good time to be a Python developer, as more and more fields are finding Python more useful than ever before. This is not surprising at all, as Python is simple, clear and easy to learn, making it useful in fields outside of Computer Science. The thriving Python ecosystem and its flexibility have made it an incredibly extensible language which you can apply in any field. If there’s something you need to do in Python, there’s probably a framework for it.

So whether you’re a new learner looking for inspiration or an advanced developer checking Python out, here’s what Python is (mostly) being used for these days.

Apps and Games

While an “app” is a very broad term, for the most part, we’re referring to apps as you would see on a mobile phone. While Python is not the language of choice for mobile programming right now, it’s not impossible to use it to make mobile apps. The main challenge with mobile apps is to create an intuitive and easy to use interface that makes good use of touch. Multi-platform frameworks like Kivy allow you to focus on making your UI and UX design as neat and effective as possible and make the code simpler. Kivy can be used to make Android and iOS apps with Python, and even use those apps on your PC as well.

If you want to make more traditional Windows or Linux applications instead, you can use one of many GUI frameworks such as PyGTK to write cross-platform GUI applications. Calibre, a widely used desktop app for organising, managing and reading ebooks, is coded largely in Python. Python’s multi-platform nature makes it easy to write code that can be used on computers running different operating systems and architectures, and you can spend your time working on program logic instead of porting your program everywhere it needs to be.

The aforementioned Kivy framework can also be used to write video games as well, but the real king in this field (at least for Python) is PyGame. PyGame is a popular framework that can be used to create 2D and 3D games, and there are many, many libraries built to extend PyGame and make complicated games. PyGame is an easy way to understand the basics of game design without being bogged down by the boilerplate code making games is known for. However, for the most part, if you’re looking to write an app or a game, you might be better off not using Python at all.

On the Web

Python has for some time now been a big player in the field of server-side languages. The frameworks Django and Pylons are popular choices to write server programs that generate HTML pages and manage the data models and databases that the website needs. Django, a framework for “perfectionists with deadlines”, is also very fast to learn and deploy.

Both Django and Pylons can be used for full-scale websites and serious workloads. The popular website Reddit (which ranks 8th on Alexa’s worldwide ranking at the time of writing) uses Pylons and serves millions of users every day. Another Python (micro) framework known as Flask is used in popular websites like Pinterest and LinkedIn, and interest in it has been growing over time.

Prominent web companies, foremost among them Google, have been very enthusiastic in using Python. Guido van Rossum, the creator of Python, worked for Google, and the company today lists Python as one of the three official languages they use. Python is seeing greater use in the enterprise scene overall, as its flexibility and ease of use make it easy for rapid prototyping and proof of concepts.

Machine Learning and Scientific Computing

But the scene where Python has made the biggest impact, and in the process grown the most, is that of Machine Learning. Machine Learning is the technique of making computers “learn” to perform complicated tasks such as reading text within images, predicting fluctuations in the economy, driving cars without human intervention or even writing music. It’s a cutting-edge field that has been growing rapidly as it holds the potential to transform many industries entirely.

Machine Learning requires handling large swathes of data and performing many complicated calculations on them, especially matrix multiplications, which are computationally very expensive. While languages like C++ may be faster than Python, Python is incredibly easy to pick up for people who don’t need more than a passing knowledge of programming and want to focus on the mathematics of Machine Learning instead. Python also makes reading and writing data easy to do without any hassle, unlike most other languages, and libraries like numpy have made performing highly optimised numeric calculations in Python simple.
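For instance, the matrix multiplications mentioned above become a single line with numpy’s @ operator (a minimal sketch):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

# '@' dispatches to an optimised, compiled matrix multiply.
c = a @ b
print(c)  # [[19 22]
          #  [43 50]]
```

Under the hood numpy delegates to highly tuned linear-algebra routines, which is why Python code like this can keep up with lower-level languages for numeric work.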

Python’s rising popularity in Machine Learning is also due to the thriving Machine Learning community that uses Python and the availability of many Machine Learning frameworks. Pandas is a data analysis framework for Python and is one of the most used frameworks in Python. With a standard library like numpy that makes it possible to perform MATLAB-like calculations in python and a framework like Pandas that allows you to handle and analyse data quickly, you have all the benefits of domain-specific languages like MATLAB and R without having to give up on the flexibility and interoperability of Python.

Scientific computing is a field that uses the fast computational capabilities of computers to solve complex problems in science. Cutting edge fields like Computational Fluid Dynamics and widely used software in simulation are examples of usage in scientific computing. For a long time, MATLAB was the most popular language in this field, but its proprietary nature and limited scope makes it a less useful language overall. Python, on the other hand, is extensible, easy to learn and use and interoperable with many other frameworks and languages. Python has been used to detect exoplanets in planetary systems far from Earth, and as the need for computing in science grows so will Python.

Machine Learning, Data Science and Scientific Computing are thriving fields, and no one language completely rules the roost here. However, if you don’t want to restrict yourself to these fields and leave the window open to get into other dimensions of computing, Python is the more obvious choice.

Power User

Perhaps the most interesting and fun aspect of Python for day to day use is the ability to quickly write scripts to automate boring, repetitive tasks. The hit book “Automate the Boring Stuff” uses Python to teach people to make the most out of their computers by doing things like renaming batches of files, crawling through the internet and downloading files, resizing images and so on.

The best part about using Python to write such scripts is that it’s easy to read and write. Instead of thinking about how your algorithm will turn into code, Python’s simplicity makes it easy for the focus to be on program logic instead of the arcane specifics of the language. Linux users who are proficient with shell scripting can synergise with Python to make incredibly powerful routines that can make computing a breeze.

In this field, too, there are many libraries and frameworks that make Python the optimum choice. Beautiful Soup can help you parse HTML and get what you need out of a webpage without having to read it, so you can build a script to, say, get your grade from your school’s website automatically without having to open the page. Beautiful Soup and other such frameworks can also be used to write programs as complicated as search engines.
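As a small sketch of the “get your grade from your school’s website” idea (the HTML snippet and class names here are invented for the example), Beautiful Soup lets you pull a value out of a page in a few lines:

```python
from bs4 import BeautifulSoup

# Invented page fragment standing in for a real school website.
html = """
<html><body>
  <table>
    <tr><td class="course">Python 101</td><td class="grade">A</td></tr>
  </table>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
# Find the cell holding the grade and extract its text.
grade = soup.find("td", class_="grade").get_text()
print(grade)  # A
```

In a real script you would fetch the page with a library like urllib or requests first; the parsing step stays just as short.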


As with any other language, Python does what you want it to do. However, the nature of Python makes it uniquely suitable for the things we’ve seen - rapid prototyping, extensibility, and quick automation. While you think about this list and what Python is used for, don’t limit yourself to this - if you think you can make use of Python in an interesting way elsewhere, by all means, do so. Who knows, maybe ten years down the line, we’ll be talking about what you did with Python?

[ Improve Your Python with Dan's 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

December 01, 2020 12:00 AM UTC

LAAC Technology

Float vs Decimal in Python

Table of Contents


Both the float and decimal types store numerical values in Python, and at the beginning, choosing when to use each can be confusing. Python’s decimal documentation is a good starting point to learn when to use decimals. Generally, decimals exist in Python to solve the precision issues of floats.


Use floats when convenience and speed matter. A float gives you an approximation of the number you declare. For example, if I print 0.1 with 18 decimals places, I don’t actually get 0.1 but instead an approximation.

>>> print(f"{0.1:.18f}")
0.100000000000000006

Similarly, when doing operations, such as addition with floats, you get an approximation, which can lead to confusing code, such as the following.

>>> .1 + .1 + .1 == .3
False
>>> .1 + .1 + .1
0.30000000000000004

Intuitively, the addition makes sense, and at a glance, you expect the statement to be true. However, because of the float approximation it turns out to be false. This demonstrates one of the big issues with floats, which is the lack of reliable equality testing. To fix this equality test without the use of decimals we need to use rounding.

>>> round(.1 + .1 + .1, 10) == round(.3, 10)
True
>>> round(.1 + .1 + .1, 10)
0.3

In this case, we round the floats to prevent any precision issues. If you find yourself using floats and rounding frequently in your codebase, this indicates that it’s time to use decimals.


Use decimals when precision matters, such as with financial calculations. But realistically, I try to always use decimals. The performance difference between float and decimal, with Python 3, is not outlandish, and in my experience, the precision benefits of a decimal outweigh the performance benefits of a float.

Let’s look at the previous examples with decimals instead of floats.

>>> from decimal import Decimal
>>> print(f"{Decimal('0.1'):.18f}")
0.100000000000000000
>>> Decimal('.1') + Decimal('.1') + Decimal('.1') == Decimal('.3')
True

Using decimals in these examples prevents the subtle bugs introduced by floats. If you notice, the decimals use strings for initialization. Once again, using floats causes precision issues.

>>> from decimal import Decimal
>>> Decimal(0.01) == Decimal("0.01")
False

In this example, we expect these decimals to be equal, but, because of the precision issues with floats, this decimal equality test returns false. If we look at each of these decimals, we’ll see why.

>>> Decimal(0.01)
>>> Decimal("0.01")

The decimal declared as a float is not technically 0.01, which results in an equality test of false. All decimals should be initialized using strings to prevent precision issues. If decimals aren’t initialized with strings, we lose some of the precision benefits of decimals and create subtle bugs.

Final Thoughts

I just use decimals. Even though decimals aren’t as convenient due to the extra imports, and aren’t as performant, the benefits of preventing subtle bugs introduced by a float’s precision issues outweigh the downsides. Decimals come with their own subtle issues, but initializing them with strings prevents most of those issues.
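One more practical note for the financial use case mentioned earlier (a sketch, not from the original post): decimals also give explicit control over rounding via quantize, where floats can surprise you.

```python
from decimal import Decimal, ROUND_HALF_UP

price = Decimal("1.005")
# Round to cents with conventional "half up" rounding.
rounded = price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(rounded)  # 1.01

# The float version of the same value rounds surprisingly,
# because 1.005 is stored as something slightly below 1.005:
print(round(1.005, 2))  # 1.0
```

Choosing the rounding mode explicitly makes the behaviour predictable and auditable, which is exactly what financial code needs.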

December 01, 2020 12:00 AM UTC

November 30, 2020

Stefan Scherfke

Typed Settings

There are already several settings libraries like Dynaconf, Environ Config, or Pydantic – just to name a few. I have written a new one: Typed Settings.

What makes it different?

Settings are defined as type-hinted, immutable (frozen) attrs classes. Values are automatically converted to the proper type when they are loaded. Apart from simple data types, Typed Settings supports datetimes, enums, nested attrs classes and various container types (like lists). The auto-converter can be extended to handle additional types.
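Typed Settings itself builds on attrs, but the general shape of such a settings definition — a type-hinted, frozen class with defaults — can be sketched with the standard library's dataclasses (the class and field names here are made up):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DbSettings:
    host: str = "localhost"
    port: int = 5432

settings = DbSettings(port=5433)
# Mutating a frozen instance (e.g. settings.port = 80) raises FrozenInstanceError
print(settings)
```

Typed Settings applies the same idea with attrs classes and adds the loading and type-conversion machinery on top.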

Settings can be loaded from multiple config files. Config files can contain settings for multiple apps (like pyproject.toml). Different deployment environments use different config files (this is in contrast to Dynaconf, where a single config file specifies the settings for all environments in different sections). Currently, only TOML is supported. Support for YAML or .env may follow later.

Value interpolation as in Dynaconf is not yet supported, but planned.

Search paths for config files have to be explicitly stated – either directly in the app or via an environment variable.

Environment variables can also be used to override settings. The prefix is customizable and this feature can also be disabled.

Finally, Typed Settings can generate Click options for command line applications. You CLI function will receive all options nicely packed together as a single instance of your settings class.

Specialized secrets stores like HashiCorp Vault are not (yet?) supported.

Invalid values or undefined options in config files raise an error instead of being silently ignored. Config files can optionally be marked as mandatory and an error will be raised if such a file cannot be found.

To aid with debugging, Typed Settings uses Python’s logging module to log config files that are being loaded (or that cannot be found) as well as looked up env vars.

Everything is thoroughly tested, the test coverage is at 100%.

An Example

Here is a very simple example that demonstrates how you can load settings from a statically defined config file and from environment variables:

import typed_settings as ts

@ts.settings
class Settings:
    option_one: str
    option_two: int

settings = ts.load_settings(
    cls=Settings,
    appname="example",
    config_files=["settings.toml"],  # Paths can also be set via env var
)
print(settings)

# settings.toml
[example]
option_one = "value"

$ EXAMPLE_OPTION_TWO=2 python example.py
Settings(option_one="value", option_two=2)

The README and documentation contain more examples.

Project Status

The recently released version 0.9 contains all features that are planed for version 1.0.0. Additional features are already on the roadmap.

What’s missing for the first stable release is mainly documentation as well as more real life testing. I already use Typed Settings for a few projects in our company and will perspectively try to replace our old settings system with it.

November 30, 2020 09:14 PM UTC

Python Morsels

Keyword-Only Function Arguments


Let's define a function that accepts a keyword-only argument.

Accepting Multiple Positional Arguments

This greet function accepts any number of positional arguments:

>>> def greet(*names):
...     for name in names:
...         print("Hello", name)

If we give it some names, it's going to print out Hello, and then the name, for each of those names:

>>> greet("Trey", "Jo", "Ian")
Hello Trey
Hello Jo
Hello Ian

It does this through the * operator, which is capturing all the positional arguments given to this function.

Positional and Keyword-Only Arguments

If we wanted to allow the greeting (Hello) to be customized we could accept a greeting argument:

>>> def greet(*names, greeting):
...     for name in names:
...         print(greeting, name)

We might try to call this new greet function like this:

>>> greet("Trey", "Hi")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: greet() missing 1 required keyword-only argument: 'greeting'

But that gives us an error. The error says that greet is missing one required keyword-only argument greeting.

That error is saying that greeting is a required argument: it doesn't have a default value, and it must be specified as a keyword argument when we call this function.

So if we want to customize greeting, we can pass it in as a keyword argument:

>>> greet("Trey", greeting="Hi")
Hi Trey
>>> greet("Trey", greeting="Hello")
Hello Trey

We probably want greeting to actually have a default value of Hello. We can do that by specifying a default value for the greeting argument:

>>> def greet(*names, greeting="Hello"):
...     for name in names:
...         print(greeting, name)
>>> greet("Trey", "Jo")
Hello Trey
Hello Jo

Because greeting is after that *names in our function definition, Python sees greeting as a keyword-only argument: an argument that can only be provided as a keyword argument when this function is called.

It can only be given by its name like this:

>>> greet("Trey", "Jo", greeting="Hi")
Hi Trey
Hi Jo
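Note that you can't sneak the greeting in positionally: a third positional argument is simply captured by *names. A quick demonstration:

```python
def greet(*names, greeting="Hello"):
    for name in names:
        print(greeting, name)

# "Hi" is captured by *names, not assigned to greeting:
greet("Trey", "Jo", "Hi")
```

This prints Hello Hi as a third greeting line instead of using Hi as the greeting, which is exactly why greeting has to be passed by name.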

Keyword-Only Arguments in Built-in Functions

This is actually something you'll see in some of Python's built-in functions. For example, the print function accepts any number of positional arguments, as well as four optional keyword-only arguments: sep, end, file, and flush:

>>> help(print)
Help on built-in function print in module builtins:

    print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)

Note that the documentation for print doesn't use the * syntax, but that ... is print's way of indicating that it accepts any number of values and then all of the arguments after that must be keyword arguments.
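For example, sep and end can only be given by name when calling print (a small illustration):

```python
# sep joins the positional values; end replaces the default trailing newline
print("2020", "11", "30", sep="-", end="!\n")  # prints 2020-11-30!
```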

If we look at the documentation for greet, you'll see how keyword-only arguments usually show up in documentation:

>>> help(greet)
Help on function greet in module __main__:

greet(*names, greeting='Hello')

Everything after that * (greeting in this case), can only be specified as a keyword argument.

Keyword-Only Arguments without Capturing All Positional Arguments

It is also possible to make a function that doesn't capture any number of positional arguments, but does have some keyword-only arguments. The syntax for this is really weird.

Let's make a multiply function that accepts x and y arguments:

>>> def multiply(*, x, y):
...     return x * y

That lone * before x and y means that they must be specified as keyword arguments.

So, if we were to try to call multiply with two positional arguments, we'll get an error:

>>> multiply(1, 2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: multiply() takes 0 positional arguments but 2 were given

To call this function, we have to specify x and y as keyword arguments:

>>> multiply(x=1, y=2)
2

If we call this function with nothing you'll see an error message similar to what we saw before about required keyword-only arguments:

>>> multiply()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: multiply() missing 2 required keyword-only arguments: 'x' and 'y'

Keyword-Only Arguments in the Standard Library

You'll actually sometimes see this * on its own within the Python standard library. For example, the chown function in the os module (used for changing the ownership of a file) uses a lone * to specify keyword-only arguments:

chown(path, uid, gid, *, dir_fd=None, follow_symlinks=True)
    Change the owner and group id of path to the numeric uid and gid.

The chown function documentation shows path, uid, gid, and then a * (which isn't an argument itself), and then dir_fd and follow_symlinks. That lone * is a way of noting that everything after that point is a keyword-only argument.

The last two arguments, dir_fd and follow_symlinks can only be specified by their name when the chown function is called.


So, whenever you see a function that uses * to capture any number of positional arguments (e.g. *args in the function definition), note that any arguments defined after that * can only be specified as keyword arguments (they're keyword-only arguments).

Also, if you see a function that has a * on its own with a comma after it, that means that every argument after that point is a keyword-only argument: it must be specified by its name when that function is called.

November 30, 2020 04:00 PM UTC

Stack Abuse

Matplotlib Bar Plot - Tutorial and Examples


Matplotlib is one of the most widely used data visualization libraries in Python. From simple to complex visualizations, it's the go-to library for most.

In this tutorial, we'll take a look at how to plot a bar plot in Matplotlib.

Bar graphs display numerical quantities on one axis and categorical variables on the other, letting you see how many occurrences there are for the different categories.

Bar charts can be used for visualizing a time series, as well as just categorical data.

Plot a Bar Plot in Matplotlib

Plotting a Bar Plot in Matplotlib is as easy as calling the bar() function on the PyPlot instance, and passing in the categorical and continuous variables that we'd like to visualize.

import matplotlib.pyplot as plt

x = ['A', 'B', 'C']
y = [1, 5, 3]

plt.bar(x, y)
plt.show()

Here, we've got a few categorical variables in a list - A, B and C. We've also got a couple of continuous variables in another list - 1, 5 and 3. The relationship between these two is then visualized in a Bar Plot by passing these two lists to the plt.bar() function.

This results in a clean and simple bar graph:

basic bar plot in matplotlib

Plot a Horizontal Bar Plot in Matplotlib

Oftentimes, we might want to plot a Bar Plot horizontally, instead of vertically. This is easily achievable by switching the plt.bar() call with the plt.barh() call:

import matplotlib.pyplot as plt

x = ['A', 'B', 'C']
y = [1, 5, 3]

plt.barh(x, y)
plt.show()

This results in a horizontally-oriented Bar Plot:

horizontal bar plot in matplotlib

Change Bar Plot Color in Matplotlib

Changing the color of the bars themselves is as easy as setting the color argument with a list of colors. If you have more bars than colors in the list, they'll start being applied from the first color again:

import matplotlib.pyplot as plt

x = ['A', 'B', 'C']
y = [1, 5, 3]

plt.bar(x, y, color=['red', 'blue', 'green'])
plt.show()

Now, we've got a nicely colored Bar Plot:

change bar plot color in matplotlib

Of course, you can also use the shorthand versions or even HTML codes:

plt.bar(x, y, color=['red', 'blue', 'green'])
plt.bar(x, y, color=['r', 'b', 'g'])
plt.bar(x, y, color=['#ff0000', '#00ff00', '#0000ff'])

Or you can even put a single scalar value, to apply it to all bars:

plt.bar(x, y, color='green')

change bar plot color in matplotlib

Bar Plot with Error Bars in Matplotlib

When you're plotting mean values of lists, which is a common application for Bar Plots, you'll have some error space. It's very useful to plot error bars to let other observers, and yourself, know how truthful these means are and which deviation is expected.

For this, let's make a dataset with some values, calculate their means and standard deviations with Numpy and plot them with error bars:

import matplotlib.pyplot as plt
import numpy as np

x = np.array([4, 5, 6, 3, 6, 5, 7, 3, 4, 5])
y = np.array([3, 4, 1, 3, 2, 3, 3, 1, 2, 3])
z = np.array([6, 9, 8, 7, 9, 8, 9, 6, 8, 7])

x_mean = np.mean(x)
y_mean = np.mean(y)
z_mean = np.mean(z)

x_deviation = np.std(x)
y_deviation = np.std(y)
z_deviation = np.std(z)

bars = [x_mean, y_mean, z_mean]
bar_categories = ['X', 'Y', 'Z']
error_bars = [x_deviation, y_deviation, z_deviation]

plt.bar(bar_categories, bars, yerr=error_bars)
plt.show()

Here, we've created three fake datasets with several values each. We'll visualize the mean values of each of these lists. However, since a mean on its own can give a false sense of accuracy, we'll also calculate the standard deviation of these datasets so that we can add those as error bars.

Using Numpy's mean() and std() functions, this is a breeze. Then, we've packed the bar values into a bars list, the bar names for a nice user experience into bar_categories and finally - the standard deviation values into an error_bars list.

To visualize this, we call the regular bar() function, passing in the bar_categories (categorical values) and bars (continuous values), alongside the yerr argument.

Since we're plotting vertically, we're using the yerr arguement. If we were plotting horizontally, we'd use the xerr argument. Here, we've provided the information about the error bars.

This ultimately results in:

bar plot with error bars in matplotlib
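If you wanted the same error bars horizontally, you'd swap bar() for barh() and yerr for xerr, as mentioned above. A minimal sketch (the values are made up, and the Agg backend is selected so it runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, must be set before importing pyplot
import matplotlib.pyplot as plt

bar_categories = ['X', 'Y', 'Z']
bars = [4.8, 2.5, 7.7]        # hypothetical mean values
error_bars = [1.2, 0.9, 1.1]  # hypothetical standard deviations

# xerr (not yerr) draws the error bars along the horizontal value axis
container = plt.barh(bar_categories, bars, xerr=error_bars)
print(len(container))  # one bar per category, so 3
```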

Plot Stacked Bar Plot in Matplotlib

Finally, let's plot a Stacked Bar Plot. Stacked Bar Plots are really useful if you have groups of variables, but instead of plotting them one next to the other, you'd like to plot them one on top of the other.

For this, we'll again have groups of data. Then, we'll calculate their standard deviation for error bars.

Finally, we'll need an index range to plot these variables on top of each other, while maintaining their relative order. This index will essentially be a range of numbers the length of all the groups we've got.

To stack a bar on another one, you use the bottom argument. You specify what's on the bottom of that bar. To plot x beneath y, you'd set x as the bottom of y.

For more than one group, you'll want to add the values together before plotting, otherwise, the Bar Plot won't add up. We'll use Numpy's np.add().tolist() to add the elements of two lists and produce a list back:

import matplotlib.pyplot as plt
import numpy as np

# Groups of data, first values are plotted on top of each other
# Second values are plotted on top of each other, etc
x = [1, 3, 2]
y = [2, 3, 3]
z = [7, 6, 8]

# Standard deviation rates for error bars
x_deviation = np.std(x)
y_deviation = np.std(y)
z_deviation = np.std(z)

bars = [x, y, z]
ind = np.arange(len(bars))
bar_categories = ['X', 'Y', 'Z']
bar_width = 0.5
bar_padding = np.add(x, y).tolist()

plt.bar(ind, x, yerr=x_deviation, width=bar_width)
plt.bar(ind, y, yerr=y_deviation, bottom=x, width=bar_width)
plt.bar(ind, z, yerr=z_deviation, bottom=bar_padding, width=bar_width)

plt.xticks(ind, bar_categories)
plt.xlabel("Stacked Bar Plot")
plt.show()

Running this code results in:

stacked bar plot in matplotlib
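As a quick sanity check of the np.add(x, y).tolist() padding used above, the element-wise sum of the first two groups is:

```python
import numpy as np

x = [1, 3, 2]
y = [2, 3, 3]

# Element-wise sum, converted back into a plain Python list
bar_padding = np.add(x, y).tolist()
print(bar_padding)  # [3, 6, 5]
```

Each value is the combined height of the x and y bars, which is exactly where the z bars need to start.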


In this tutorial, we've gone over several ways to plot a bar plot using Matplotlib and Python. We've also covered how to calculate and add error bars, as well as stack bars on top of each other.

If you're interested in Data Visualization and don't know where to start, make sure to check out our book on Data Visualization in Python.

Data Visualization in Python, a book for beginner to intermediate Python developers, will guide you through simple data manipulation with Pandas, cover core plotting libraries like Matplotlib and Seaborn, and show you how to take advantage of declarative and experimental libraries like Altair.


November 30, 2020 02:30 PM UTC

Real Python

np.linspace(): Create Evenly or Non-Evenly Spaced Arrays

When you’re working with numerical applications using NumPy, you often need to create an array of numbers. In many cases you want the numbers to be evenly spaced, but there are also times when you may need non-evenly spaced numbers. One of the key tools you can use in both situations is np.linspace().

In its basic form, np.linspace() can seem relatively straightforward to use. However, it’s an essential part of the numerical programming toolkit. It’s both very versatile and powerful. In this tutorial, you’ll find out how to use this function effectively.

In this tutorial, you’ll learn how to:

  • Create an evenly or non-evenly spaced range of numbers
  • Decide when to use np.linspace() instead of alternative tools
  • Use the required and optional input parameters
  • Create arrays with two or more dimensions
  • Represent mathematical functions in discrete form

This tutorial assumes you’re already familiar with the basics of NumPy and the ndarray data type. You’ll start by learning about various ways of creating a range of numbers in Python. Then you’ll take a closer look at all the ways of using np.linspace() and how you can use it effectively in your programs.

Free Bonus: Click here to get access to a free NumPy Resources Guide that points you to the best tutorials, videos, and books for improving your NumPy skills.

Creating Ranges of Numbers With Even Spacing

There are several ways in which you can create a range of evenly spaced numbers in Python. np.linspace() allows you to do this and to customize the range to fit your specific needs, but it’s not the only way to create a range of numbers. In the next section, you’ll learn how to use np.linspace() before comparing it with other ways of creating ranges of evenly spaced numbers.

Using np.linspace()

np.linspace() has two required parameters, start and stop, which you can use to set the beginning and end of the range:

>>> import numpy as np
>>> np.linspace(1, 10)
array([ 1.        ,  1.18367347,  1.36734694,  1.55102041,  1.73469388,
        1.91836735,  2.10204082,  2.28571429,  2.46938776,  2.65306122,
        2.83673469,  3.02040816,  3.20408163,  3.3877551 ,  3.57142857,
        3.75510204,  3.93877551,  4.12244898,  4.30612245,  4.48979592,
        4.67346939,  4.85714286,  5.04081633,  5.2244898 ,  5.40816327,
        5.59183673,  5.7755102 ,  5.95918367,  6.14285714,  6.32653061,
        6.51020408,  6.69387755,  6.87755102,  7.06122449,  7.24489796,
        7.42857143,  7.6122449 ,  7.79591837,  7.97959184,  8.16326531,
        8.34693878,  8.53061224,  8.71428571,  8.89795918,  9.08163265,
        9.26530612,  9.44897959,  9.63265306,  9.81632653, 10.        ])

This code returns an ndarray with equally spaced intervals between the start and stop values. This is a vector space, also called a linear space, which is where the name linspace comes from.

Note that the value 10 is included in the output array. The function returns a closed range, one that includes the endpoint, by default. This is contrary to what you might expect from Python, in which the end of a range usually isn’t included. This break with convention isn’t an oversight. You’ll see later on that this is usually what you want when using this function.
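If you do want the familiar half-open behavior, np.linspace() accepts an endpoint parameter for exactly that:

```python
import numpy as np

# endpoint=False excludes the stop value, like Python's range() does,
# so the step becomes (10 - 1) / 10 = 0.9 and the last value is 9.1
arr = np.linspace(1, 10, num=10, endpoint=False)
print(arr)
```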

The array in the example above is of length 50, which is the default number. In most cases, you’ll want to set your own number of values in the array. You can do so with the optional parameter num:

>>> np.linspace(1, 10, num=10)
array([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.])

The output array in this instance contains 10 equally spaced values between 1 and 10, which is just the numbers from 1 to 10. Here’s another example:

>>> np.linspace(-10, 10, 25)
array([-10.        ,  -9.16666667,  -8.33333333,  -7.5       ,
        -6.66666667,  -5.83333333,  -5.        ,  -4.16666667,
        -3.33333333,  -2.5       ,  -1.66666667,  -0.83333333,
         0.        ,   0.83333333,   1.66666667,   2.5       ,
         3.33333333,   4.16666667,   5.        ,   5.83333333,
         6.66666667,   7.5       ,   8.33333333,   9.16666667,
        10.        ])

In the example above, you create a linear space with 25 values between -10 and 10. You use the num parameter as a positional argument, without explicitly mentioning its name in the function call. This is the form you’re likely to use most often.

Using range() and List Comprehensions

Let’s take a step back and look at what other tools you could use to create an evenly spaced range of numbers. The most straightforward option that Python offers is the built-in range(). The function call range(10) returns an object that produces the sequence from 0 to 9, which is an evenly spaced range of numbers.

For many numerical applications, the fact that range() is limited to integers is too restrictive. Of the examples shown above, only np.linspace(1, 10, 10) can be accomplished with range():

>>> list(range(1, 11))
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

The values returned by range(), when converted explicitly into a list, are the same as those returned by the NumPy version, except that they’re integers instead of floats.
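This integer restriction is enforced strictly: range() raises an error if you try a fractional step:

```python
# range() only accepts integers, so a non-integer step is a TypeError
try:
    range(1, 10, 0.5)
except TypeError as err:
    print(err)  # 'float' object cannot be interpreted as an integer
```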

You can still use range() with list comprehensions to create non-integer ranges:

>>> step = 20 / 24  # Divide the range into 24 intervals
>>> [-10 + step*interval for interval in range(25)]
[-10.0, -9.166666666666666, -8.333333333333334, -7.5,
 -6.666666666666666, -5.833333333333333, -5.0, -4.166666666666666,
 -3.333333333333333, -2.5, -1.666666666666666, -0.8333333333333321,
 0.0, 0.8333333333333339, 1.6666666666666679, 2.5,
 3.333333333333334, 4.166666666666668, 5.0, 5.833333333333334,
 6.666666666666668, 7.5, 8.333333333333336, 9.166666666666668, 10.0]

Read the full article at Real Python »


November 30, 2020 02:00 PM UTC

Python Software Foundation

Releasing pip 20.3, featuring new dependency resolver

On behalf of the Python Packaging Authority and the pip team, I am pleased to announce that we have just released pip 20.3, a new version of pip. You can install it by running python -m pip install --upgrade pip.

This is an important and disruptive release -- we explained why in a blog post last year. We've even made a video about it.


  • DISRUPTION: Switch to the new dependency resolver by default. Watch out for changes in handling editable installs, constraints files, and more.

  • DEPRECATION: Deprecate support for Python 3.5 (to be removed in pip 21.0).

  • DEPRECATION: pip freeze will stop filtering the pip, setuptools, distribute and wheel packages from pip freeze output in a future version. To keep the previous behavior, users should use the new --exclude option.

  • Substantial improvements in new resolver for performance, output and error messages, avoiding infinite loops, and support for constraints files.

  • Support for PEP 600: Future manylinux Platform Tags for Portable Linux Built Distributions.

  • Documentation improvements: Resolver migration guide, quickstart guide, and new documentation theme.

  • Add support for MacOS Big Sur compatibility tags.

The new resolver is now on by default. It is significantly stricter and more consistent when it receives incompatible instructions, and reduces support for certain kinds of constraints files, so some workarounds and workflows may break. Please see our guide on how to test and migrate, and how to report issues. You can use the deprecated (old) resolver, using the flag --use-deprecated=legacy-resolver, until we remove it in the pip 21.0 release in January 2021.

You can find more details (including deprecations and removals) in the changelog.

Coming soon: end of Python 2.7 support

We aim to release pip 21.0 in January 2021, per our release cadence. At that time, pip will stop supporting Python 2.7 and will therefore stop supporting Python 2 entirely.

For more info or to contribute:

We run this project as transparently as possible, so you can:

Thank you

Thanks to our contractors on this project: Simply Secure (specifically Georgia Bullen, Bernard Tyers, Nicole Harris, Ngọc Triệu, and Karissa McKelvey), Changeset Consulting (Sumana Harihareswara), Atos (Paul F. Moore), Tzu-ping Chung, Pradyun Gedam, and Ilan Schnell. Thanks also to Ernest W. Durbin III at the Python Software Foundation for liaising with the project.
This award continues our relationship with Mozilla, which supported Python packaging tools with a Mozilla Open Source Support Award in 2017 for Warehouse. Thank you, Mozilla! (MOSS has a number of types of awards, which are open to different sorts of open source/free software projects. If your project will seek financial support in 2021, do check the MOSS website to see if you qualify.)

This is new funding from the Chan Zuckerberg Initiative. This project is being made possible in part by a grant from the Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation. Thank you, CZI! (If your free software/open source project is seeking funding and is used by researchers, check the Joint Roadmap for Open Science Tools Rapid Response Fund and consider applying.)
The funding for pip's overhaul will end at the end of 2020; if your organization wants to help continue improvements in Python packaging, please join the sponsorship program.

As with all pip releases, a significant amount of the work was contributed by pip's user community. Huge thanks to all who have contributed, whether through code, documentation, issue reports and/or discussion. Your help keeps pip improving, and is hugely appreciated. Thank you to the pip and PyPA maintainers, to the PSF and the Packaging WG, and to all the contributors and volunteers who work on or use Python packaging tools.
-Sumana Harihareswara, pip project manager

November 30, 2020 12:55 PM UTC