Open Source Posts

Shaun M. Thomas: PG Phriday: Bodacious Benchmarks

From Planet PostgreSQL. Published on Sep 23, 2016.

When it comes to putting Postgres through its paces, we often turn to benchmarks to absolutely bury it under a torrent of oppressive activity. It’s a great way to obtain maximum performance metrics and also observe how Postgres reacts and breaks down under such pressure. But these kinds of tests aren’t really practical, are they? After all, many such simulated workloads are nothing but bragging rights measured against previous Postgres releases, or for hardware comparisons. But while functionality beyond defaults is often overlooked, tools like pgbench are actually critical to the development process.

I’ve used pgbench frequently in the past for examples in these articles. It’s much like Apache’s own ab for bombarding a server with various workloads, except pgbench must bootstrap by creating a set of test tables. Or does it? What if we have our own existing tables from a project that we want to measure, either for a hardware migration, or for scalability concerns?

Let’s see how we can utilize pgbench with our trusty sensor_log table. It’s not really designed to experience any updates, so we need something else as well. Let’s say the sensor log is an endpoint capture for a very busy sensor table that is only updated when sensors detect a change in the environment. That should provide us with plenty of opportunities for contention!

CREATE TABLE sensor (
  sensor_id      SERIAL PRIMARY KEY NOT NULL,
  location       VARCHAR NOT NULL,
  reading        BIGINT NOT NULL,
  modified_date  TIMESTAMP NOT NULL
);

INSERT INTO sensor (location, reading, modified_date)
SELECT s.id, s.id % 100,
       now() - (s.id % 60 || 's')::INTERVAL
  FROM generate_series(1, 5000000) s(id);

CREATE INDEX idx_sensor_location
    ON sensor (location);

ANALYZE sensor;

CREATE TABLE sensor_log (
  sensor_id     BIGINT NOT NULL,
  location      VARCHAR NOT NULL,
  reading       BIGINT NOT NULL,
  reading_date  TIMESTAMP NOT NULL
);

CREATE UNIQUE INDEX udx_sensor_log_sensor_id_reading_date
    ON sensor_log (reading_date, sensor_id);

CREATE OR REPLACE FUNCTION f_enforce_modified()
RETURNS TRIGGER AS $$
BEGIN
  NEW.modified_date = now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_sensor_timestamp_b_iu
BEFORE INSERT OR UPDATE ON sensor
   FOR EACH ROW EXECUTE PROCEDURE f_enforce_modified();

CREATE OR REPLACE FUNCTION f_capture_sensor_changes()
RETURNS TRIGGER AS $$
BEGIN
  INSERT INTO sensor_log (sensor_id, location, reading, reading_date)
  VALUES (NEW.sensor_id, NEW.location, NEW.reading, now());
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_sensor_changes_a_iu
AFTER INSERT OR UPDATE ON sensor
   FOR EACH ROW EXECUTE PROCEDURE f_capture_sensor_changes();

We’ve bootstrapped the sensor table with a relatively substantial amount of data. Five million rows isn’t really much these days, but it’s more than enough for demonstration purposes on a laptop or VM. With that said, there are a few things we can test with this basic table structure. But what to analyze?

Let’s begin with a basic custom script that just emulates a simple update schedule with sensor readings coming in constantly from any random location. It’s a good place to start and allows us to tweak further. Note that this script only works with 9.6 and above, but previous iterations have an equivalent.

\set sid random(1, 100000 * :scale)
\set val random(1, 100)
SELECT * FROM sensor
 WHERE sensor_id = :sid;
UPDATE sensor SET reading = :val
 WHERE sensor_id = :sid;
SELECT MAX(reading) FROM sensor_log
 WHERE sensor_id = :sid
   AND reading_date > CURRENT_DATE - INTERVAL '5 minutes';

This script does a few things for us. Taken as a whole, we just grab a random entry from the sensor table, update its current reading, and see whether that affected the aggregate high over the last five minutes. This is something an application might do as part of an analysis cycle. The real magic starts when we actually execute the script with pgbench.

This is actually pretty cool, and full of valuable information. We’re also going to utilize a trick so we don’t taint our existing data. It’s not something recommended for production systems, but it’s perfect for QA or development environments.

createdb sensor_test -T sensors
pgbench --scale 50 --report-latencies --no-vacuum \
        --file sensor-bench-1.sql --time 30 sensor_test
transaction type: sensor-bench-1.sql
scaling factor: 50
query mode: simple
number of clients: 1
number of threads: 1
duration: 30 s
number of transactions actually processed: 9200
latency average: 3.261 ms
tps = 306.636514 (including connections establishing)
tps = 306.672074 (excluding connections establishing)
script statistics:
 - statement latencies in milliseconds:
         0.003  \set sid random(1, 100000 * :scale)
         0.001  \set val random(1, 100)
         0.291  SELECT * FROM sensor
         1.245  UPDATE sensor SET reading = :val
         1.713  SELECT max(reading) FROM sensor_log

The first command created a new database named sensor_test based on the sensors database that contained both of our sample tables, and includes any tables and data that already existed. This means we can run as many tests as we want, throw everything away, and start fresh as many times as we need. It takes a bit of time to copy data, so there is probably a limit to where this becomes difficult to justify, but small and medium data sets fit this model just fine.

The pgbench command itself deserves some explanation, though we did defer to the long-format parameters to help convey what we’re actually doing.

Let’s start with the scale. By default, pgbench uses 100,000 rows as the base, and our script retains that because why not? With a scale of 50, that’s our five-million rows. What’s interesting about this is that we can actually increase possible contention by reducing the scale so it focuses modifications on a smaller portion of the data. We’ll probably find this useful later.
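As a rough illustration of why shrinking the scale increases contention, here is a hypothetical back-of-the-envelope calculation (the 100,000-row base and the client counts are the ones used in this article; the formula is just the classic birthday-problem estimate, not anything pgbench computes):

```python
from functools import reduce

def collision_probability(clients, scale, base_rows=100000):
    """Probability that at least two concurrent clients pick the
    same random row, given pgbench's rows = base_rows * scale."""
    rows = base_rows * scale
    # Multiply the odds that each successive client picks a row
    # nobody else has picked yet, then invert.
    p_all_distinct = reduce(lambda p, i: p * (1 - i / rows),
                            range(clients), 1.0)
    return 1 - p_all_distinct

# A smaller scale concentrates the same clients on fewer rows,
# so the chance of two clients fighting over one row goes up ~50x:
print(collision_probability(4, 50))  # five million rows
print(collision_probability(4, 1))   # one hundred thousand rows
```

The absolute numbers are tiny either way, but the ratio between them is exactly the scale ratio, which is why a "hot spot" benchmark deliberately runs at a low scale.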

We disable vacuuming because pgbench only knows how to vacuum its own bootstrapped tables, so that step is pointless when using custom scripts. The script statistics at the end of the report appear because we requested the latency report. With this information, we can focus on whichever part of the script is causing a slowdown. Assuming we base our scripts on real application samples, this is an invaluable source of performance metrics.

Aside from the custom file we’re using for the test itself, we also set the test duration to thirty seconds. For any real kind of analysis, we’d want to use a much larger value, on the order of tens of minutes or even hours. That would give us a much better idea of average latencies after accounting for other system activity, including write throughput to the base data files during checkpoints.

Let’s see what happens if we crank down the scale to create a “hot spot” in our table, and increase the client concurrency to force more contention:

dropdb sensor_test
createdb sensor_test -T sensors
pgbench --client 4 --report-latencies --no-vacuum \
        --log --aggregate-interval 10 \
        --file sensor-bench-1.sql --time 30 sensor_test
transaction type: sensor-bench-1.sql
scaling factor: 1
query mode: simple
number of clients: 4
number of threads: 1
duration: 30 s
number of transactions actually processed: 32425
latency average: 3.701 ms
tps = 1080.246903 (including connections establishing)
tps = 1080.360307 (excluding connections establishing)
script statistics:
 - statement latencies in milliseconds:
         0.002  \set sid random(1, 100000 * :scale)
         0.001  \set val random(1, 100)
         0.654  SELECT * FROM sensor
         1.702  UPDATE sensor SET reading = :val
         1.332  SELECT max(reading) FROM sensor_log
cat $(ls -t pgbench_log.* | head -n 1)
1474658287 13068 36026916 122319506636 1060 17761
1474658297 10252 39970824 203678243092 1568 32993
1474658307 8371 39968669 270707139663 1910 82191

There isn’t a whole lot that has changed here, but we can see a couple important bits of information. First, the update statement is notably slower than before. This is a 2-CPU VM, so that’s expected, though incoming activity eclipsing CPU count on a system is somewhat common. Despite this, the aggregate maximum did not get slower, and in fact is actually slightly faster. Clearly the update is the issue, which is expected since we saddled it with two triggers.

That last bit of output is the result of including output logging and tweaking our aggregate interval to summarize instead of reporting the results of every transaction. This data gets logged based on the current PID of the pgbench command, so we cheated a bit and used cat to print the file for the most recent benchmark. Each column has a meaning, but the last two represent minimum and maximum latency in microseconds. Based on this, we can see that latency has some pretty wild swings from a mere 1ms to over 80ms.
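Since those aggregate columns follow a fixed order (interval start, transaction count, latency sum, latency sum of squares, minimum, and maximum, with latencies in microseconds), post-processing the log is straightforward. A minimal sketch, using the first log line shown above:

```python
import math

def summarize(line):
    """Turn one pgbench --aggregate-interval log line into summary stats."""
    start, n, lat_sum, lat_sum2, lat_min, lat_max = map(int, line.split())
    mean = lat_sum / n
    # Variance recovered from the sum and sum-of-squares pgbench tracks.
    variance = lat_sum2 / n - mean ** 2
    return {
        'interval_start': start,
        'transactions': n,
        'mean_us': mean,
        'stddev_us': math.sqrt(variance),
        'min_us': lat_min,
        'max_us': lat_max,
    }

stats = summarize('1474658287 13068 36026916 122319506636 1060 17761')
print('%(transactions)d xacts, mean %(mean_us).0f us, '
      'stddev %(stddev_us).0f us, max %(max_us)d us' % stats)
```

Running this over every line of the log gives a quick picture of how latency variance evolves over the course of a long benchmark, which the headline tps number hides.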

This tells us a couple things, actually. Our previous models never included the sensor table, just sensor_log. We can always pull the most recent reading from a sensor log, so having a lot of updates and two triggers on the base table is a lot of unnecessary overhead. What if we flipped the model slightly and emulated the current state with a view instead?

CREATE TABLE sensor_log (
  sensor_id     SERIAL NOT NULL,
  location      VARCHAR NOT NULL,
  reading       BIGINT NOT NULL,
  reading_date  TIMESTAMP NOT NULL DEFAULT now()
);

INSERT INTO sensor_log (location, reading, reading_date)
SELECT s.id, s.id % 100,
       now() - (s.id % 60 || 's')::INTERVAL
  FROM generate_series(1, 5000000) s(id);

CREATE UNIQUE INDEX udx_sensor_log_sensor_id_reading_date
    ON sensor_log (sensor_id, reading_date DESC);

ANALYZE sensor_log;

CREATE VIEW v_sensor AS
SELECT DISTINCT ON (sensor_id) *
  FROM sensor_log
 ORDER BY sensor_id, reading_date DESC;

This comes with a corresponding update to our script:

\set sid random(1, 100000 * :scale)
\set val random(1, 100)
SELECT * FROM v_sensor
 WHERE sensor_id = :sid;
INSERT INTO sensor_log (sensor_id, location, reading)
VALUES (:sid, :sid, :val);
SELECT MAX(reading) FROM sensor_log
 WHERE sensor_id = :sid
   AND reading_date > CURRENT_DATE - INTERVAL '5 minutes';

And finally, we run the altered benchmark:

dropdb sensor_test
createdb sensor_test -T sensors
pgbench --client 4 --report-latencies --no-vacuum \
        --file sensor-bench-2.sql --time 30 sensor_test
transaction type: sensor-bench-2.sql
scaling factor: 1
query mode: simple
number of clients: 4
number of threads: 1
duration: 30 s
number of transactions actually processed: 52091
latency average: 2.304 ms
tps = 1736.124362 (including connections establishing)
tps = 1736.281647 (excluding connections establishing)
script statistics:
 - statement latencies in milliseconds:
         0.002  \set sid random(1, 100000 * :scale)
         0.001  \set val random(1, 100)
         0.463  SELECT * FROM v_sensor
         1.347  INSERT INTO sensor_log (sensor_id, location, reading)
         0.480  SELECT max(reading) FROM sensor_log

Maybe it comes as no surprise, but these modifications improved throughput by over 60%! A good DBA probably would have said something about the sensor table for the same reasons we listed, but it’s better to have numbers to prove it. It’s certainly logical enough to keep a table with current values and a separate audit log for changes, but we found a better model if we need to increase throughput.

Keep in mind that this is a very simple example. In a real cluster, our benchmark script would likely be much more involved, and we’d probably have several more of them for various read/write percentage scenarios. This is the kind of tooling necessary to really squeeze the database for every last drop of performance, or otherwise reveal points of contention or excessive latency swings.

If you’re not already using it, pgbench is a powerful addition to any database developer or DBA arsenal. It’s much more than its defaults might suggest!

Affan Salman: Integrating Cassandra, Spark, PostgreSQL and Hadoop as a Hybrid Data Platform – Part 2

From Planet PostgreSQL. Published on Sep 23, 2016.

In the previous post, we provided a business and architectural
background for the Postgres FDWs that we are developing for Spark,
Hadoop and Cassandra. In particular, we highlighted the key benefits of
bringing Cassandra and PostgreSQL together.

With this post, we will start taking a more technical look at the
Cassandra FDW.

The C* FDW speaks natively with Cassandra on two levels; it:

  • uses the binary CQL protocol instead of the legacy Thrift protocol.
  • directly relies on the DataStax Native C++ driver for Cassandra.

The DataStax C++ driver is performant and feature-rich; various load
balancing and routing options are available and configurable. We are
already making use of some of these features and plan to provide more of
these to our users.

While exploring Cassandra as a Postgres user, defaults such as the
automatic inclusion of the ALLOW FILTERING clause are useful as they
allow gradual familiarity, and are especially useful in small
development environments. Our intent is to support tuning for large
environments but to default to a configuration geared toward existing
PostgreSQL users.

At this point, let us consider whether we are introducing a new SPOF by
using PostgreSQL with a Cassandra system. We believe not; a PostgreSQL
node at the edge of a Cassandra cluster – as a transactional or open-SQL
end point – is not at all the same as a central master node critical to
the operation of an entire cluster. We see some trade-offs but mostly
we see benefits of bringing PostgreSQL to Cassandra in this way as we
intend to elucidate through this series.

In the next post, we will show you how to get started with the Cassandra
FDW.

Ernst-Georg Schmid: pgchem::tigress vs. OpenBabel 2.4.0

From Planet PostgreSQL. Published on Sep 22, 2016.

OpenBabel 2.4.0 is released, the build process worked flawlessly, and 100% of tests passed.

Now I have to see if the Tigress still likes it...

Well, pgchem::tigress builds against OpenBabel 2.4.0 without errors, but will it work?

Yes, pgchem::tigress works with OpenBabel 2.4.0 without any notable alterations, except changing:


in the Makefile to:


As always with .0 releases, subtle issues might be lurking below the surface, though.

Bruce Momjian: Postgres 9.6 Features

From Planet PostgreSQL. Published on Sep 21, 2016.

I have written a presentation covering the important features in Postgres 9.6 and some of the features we hope for in Postgres 10.

Devrim GÜNDÜZ: Installing pgadmin4 to Red Hat, CentOS, and Fedora

From Planet PostgreSQL. Published on Sep 21, 2016.

After about 100 hours of packaging and testing work, pgadmin4 RPMs hit PostgreSQL 9.4, 9.5 and 9.6 repos, for RHEL/CentOS 7 and Fedora 23, 24!

First of all, I'd like to write down the list of packages that entered the git repo as pgadmin4 dependencies.

Craig Ringer: BDR is coming to PostgreSQL 9.6

From Planet PostgreSQL. Published on Sep 20, 2016.

I’m pleased to say that Postgres-BDR is on its way to PostgreSQL 9.6, and even better, it works without a patched PostgreSQL.

BDR has always been an extension, but on 9.4 it required a heavily patched PostgreSQL, one that isn’t fully on-disk-format compatible with stock community PostgreSQL 9.4. The goal all along has been to allow it to run as an extension on an unmodified PostgreSQL … and now we’re there.

The years of effort we at 2ndQuadrant have put into getting the series of patches from BDR into PostgreSQL core have paid off. As of PostgreSQL 9.6, the only major patch that Postgres-BDR on 9.4 carries that PostgreSQL core doesn’t is the sequence access method patch that powers global sequences.

This means that Postgres-BDR on 9.6 will not support global sequences, at least not the same way they exist in 9.4. The 9.6 version will incorporate a different approach to handling sequences on distributed systems, and in the process address some issues that arose when using global sequences in production.

Since Postgres-BDR on 9.6 is now passing regression tests, I expect to have an alpha release out not long after the release of PostgreSQL 9.6 itself. Keep an eye out!

There’s also a pglogical 1.2.0 update coming to coincide with the release of PostgreSQL 9.6.

At the same time, Petr has submitted another revision of logical replication for core PostgreSQL to the 10.0 development series.

Along with updating Postgres-BDR for 9.6, work is in progress to enhance Postgres-BDR’s HA capabilities. I’ll have more to say on that in an upcoming post.

We’ve been busy here at 2ndQuadrant.

Fail Open and Layer Policy

By Will Larson from Django community aggregator: Community blog posts. Published on Sep 20, 2016.

Around 2009, the Dynamo paper materialized into Cassandra. Cassandra escaped Facebook in the fashion of the time: an abrupt code bomb erupting into existence with little ongoing maintenance. Maintaining early versions of Cassandra was itself an explosive experience, and those who ran early versions developed a shared joke that Cassandra was a trojan horse released to blot out progress by an entire generation of Silicon Valley startups.

Working at Digg and running Cassandra 0.6, we didn’t laugh much.

While operationally it was a bit of a challenge, it introduced me to what felt like a very novel idea at the time: Cassandra and similarly designed NoSQL databases were bringing scalability to the masses by only providing operations that worked well at scale. Sure, later they released CQL and that rigid enforcement faded, but the idea that you can drive correct user behavior by radically restricting choice was a very powerful one.

A couple jobs later, I joined a team which was exploring the logical extremes of this concept; we had extremely rigid service provisioning and configuration tools, which only allowed a very specific shape of service run in a very specific way (colloquially, The Right Way). Our customers loved that our tooling always ensured they did things in a maintainable, easy to scale way, and that it didn’t require them to waste time trying a variety of tools, they could just get to work!

Just kidding. They revolted.

There was an initial response to dig in and explain why our customers should do it our way and not their way, but that was ~surprisingly ineffective, even in cases where the new way felt demonstrably worse. Sensing the tides had turned against us, we eventually relented and built a very flexible and buzzword-compliant solution to service provisioning, allowing teams to provision whatever kind of service with whatever kind of programming language they wanted.

All was blissful for some time, with teams making their own decisions about technology. In particular this approach did a great job of aligning choice and responsibility, where teams who chose to use a non-standard technology ended up absorbing most of the additional complexity from doing so.

Well-aligned and well-designed, we were riding high, but oddly enough our bliss didn’t last forever. Pretty soon another right way graduated to The Right Way (one which had relatively little in common with its nominal predecessor), and our tooling suddenly had a new bug: it was too flexible.

It took me a while to figure out what to take away from those experiences, but these days I summarize my takeaway as:

Design systems which fail open and layer policy on top.

In this case, failing open means to default to allowing any behavior, even if you find it undesirable. This might be allowing a user to use unsupported programming languages, store too much data, or perform unindexed queries.

Then layering policies on top means adding filters which enforce designed behavior. Following the above example, that would be rejecting programming languages or libraries you find undesirable, users storing too much data, or queries without proper indexes.

The key insight for me is that a sufficiently generic implementation can last forever, but intentional restrictions tend to evolve rapidly over time; if infrastructure maintainers want to avoid rewriting their systems every year or two, then we need to be able to tweak policies to enforce restrictions while independently maintaining and improving the underlying capabilities. (I sometimes also describe this concept as “self-service with guard-rails”, for cases when these layers are more about providing informational norms than about enforcing restrictions.)
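To make the pattern concrete, here is a minimal sketch in Python; every name in it is hypothetical, invented purely for illustration of "fail open, layer policy on top":

```python
def provision(request, policies=()):
    """The capability layer: accepts any request unless a policy rejects it."""
    for policy in policies:
        error = policy(request)
        if error:
            return 'rejected: ' + error
    return 'provisioned %s service' % request['language']

# Policies are small, independent filters that can evolve (or be removed)
# without touching the underlying provisioning capability.
def supported_language(request):
    if request['language'] not in {'python', 'go', 'java'}:
        return 'unsupported language %s' % request['language']

def quota(request):
    if request.get('disk_gb', 0) > 500:
        return 'requested too much disk'

policies = [supported_language, quota]
print(provision({'language': 'python', 'disk_gb': 10}, policies))
print(provision({'language': 'cobol'}, policies))
# With no policies layered on, the system fails open and allows anything:
print(provision({'language': 'cobol'}))
```

The point of the split is that when The Right Way changes again, only the small policy functions get rewritten; the generic capability underneath stays stable.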

Like most rules unencumbered by nuance, I haven’t found this to be universally applicable, but I have found it useful in reducing the rate at which tools transition into technical debt. The next time you’re iterating on your developer tooling, give it a whirl and see how it feels.

Martín Marqués: Those darn Large Objects

From Planet PostgreSQL. Published on Sep 20, 2016.


PostgreSQL gives developers the chance of choosing between two possible storage facilities for large binary data: Bytea and LargeObjects.

Large Objects have been around for a long time, and PostgreSQL has a smart way of storing large binary data. It does so by splitting it into chunks of LOBLKSIZE (a fourth of BLCKSZ). That way the tuples from pg_largeobject don’t spill onto the TOAST table.
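To put numbers on that, assuming the default compile-time settings (BLCKSZ = 8192 bytes, hence LOBLKSIZE = 2048 bytes), here is a quick sketch of how many pg_largeobject rows a single object occupies:

```python
import math

BLCKSZ = 8192            # default PostgreSQL page size
LOBLKSIZE = BLCKSZ // 4  # large object chunk size: 2048 bytes

def pg_largeobject_rows(object_size_bytes):
    """Rows a single large object needs in pg_largeobject."""
    return math.ceil(object_size_bytes / LOBLKSIZE)

# A 10 MB file becomes 5,120 chunk rows:
print(pg_largeobject_rows(10 * 1024 * 1024))
```

This also explains the scale of the problem described below: tens of millions of even modest large objects translate into a very large pg_largeobject table.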

On the other hand bytea stores the binary data directly in the tuple, which may lead to poor performance depending on how your schema looks.

This sounds great if you have an intelligent interface for manipulating these binary files, especially if updates modify just a small portion of the whole binary file.

But normally we don’t bother writing code that takes advantage of this, and instead we write the whole binary data over again.

One of the things that I believe makes people adopt large objects is the functions available for importing and exporting files directly between the database server and its filesystem. There’s a con to this: if the application is on a different server, you’ll need more code to move the file to the location where it’s needed.

A problem you might face

Over the past few days I had to examine a database used to store information about user sessions from a Java CAS system. I found there were almost 100 million large objects in the database, and not very big ones.

I went over the user tables checking the fields that had an oid type, and then cross-referenced the values in those fields with the pg_largeobject_metadata table. I found that 96% of those large objects were orphans: large objects which weren’t referenced by any tuple from the user tables.

Further investigation concluded that Hibernate didn’t take care of purging the large objects it created when deleting or updating tuples with oid fields. So it was generating a great amount of bloat which could not be cleaned up by vacuuming, but had to be purged from the pg_largeobject table manually.

In the particular case of the CAS database, this query served to identify the largeobjects still in use:

SELECT unnest(array[expiration_policy,
       FROM public.ticketgrantingticket
SELECT unnest(array[expiration_policy, 
       FROM public.serviceticket

That query can be used to exclude the large objects still in use from the list of those to remove. Something like this:

SELECT lo_unlink(pg_largeobject_metadata.oid)
       FROM pg_largeobject_metadata
       WHERE pg_largeobject_metadata.oid NOT IN (
             SELECT unnest(array[expiration_policy,
             FROM public.ticketgrantingticket
             SELECT unnest(array[expiration_policy, 
             FROM public.serviceticket


Large objects have their issues, just like other ways of storing large binary data. It’s up to the developers and database administrators to take advantage of the pros and mitigate the cons.

We gave a possible query to perform the clean-up, but there’s also a nice extension which cleans up the orphaned large objects with triggers: Large Object Manager

Some people might prefer running a purge query during quiet hours instead of executing a trigger on every UPDATE and DELETE. On systems with very, very low UPDATE and/or DELETE rates, a trigger over each table that has an oid field seems a more elegant solution, and any performance loss from having to execute the trigger function would be negligible.

In any case, large objects still have great fans, most likely because of the internal functions provided to import and export the binary data directly to the local filesystem. With bytea you’d normally use more memory at the application tier. It’s a very common procedure to read the binary field completely into a variable and then process it.

I might write a future blog post about using bytea, drawing on one of my past projects.

Umair Shahid: My experience with the Indonesia PUG meetup

From Planet PostgreSQL. Published on Sep 20, 2016.

I was in Jakarta a couple of weeks ago, and there happened to be a meetup of the Indonesia PUG in Bandung while I was there. Because it is just a two-hour, rather picturesque drive from Jakarta, I thought it was too good an opportunity to miss. So I showed up.



The meetup was hosted by Julyanto Sutandang of Equnix Business Solutions and the conversation was mostly centered around convincing the local technology industry about PostgreSQL in comparison to Oracle. We got into fairly detailed discussions on the typical challenges of moving an Oracle production database to PostgreSQL. I especially love talking about hierarchical queries – Oracle’s CONNECT BY PRIOR and PostgreSQL’s WITH RECURSIVE.

It was very interesting to find out how popular BDR – the Multi Master Replication solution from 2ndQuadrant – was in Indonesia. I got a lot of questions about the technology (which, admittedly, I am not an expert at) and its roadmap (which I was able to answer more confidently). Not only is BDR already being used in production at multiple locations but many corporates, including large banks, are actively doing POCs using the technology.

We also got talking about ACID compliance, and how that figures into the whole NoSQL equation. We talked about the major NoSQL-like features of PostgreSQL: JSON/JSONB, HSTORE, XML, etc. and the fact that they are all ACID compliant, unlike exclusively NoSQL data stores. The main takeaway was that NoSQL Databases and RDBMS complement each other, they can’t replace each other. NoSQL is good for POCs and massive data writes at high speeds, but if you hold your data valuable and want to derive analytical value from it, you have to eventually move it to an RDBMS. Of course, our database of choice is PostgreSQL :-)



Oh, and did I mention that Julyanto insisted on stuffing me full of traditional Indonesian food? ;-)


David Rader: Improve PostgreSQL on Windows performance by 100%

From Planet PostgreSQL. Published on Sep 20, 2016.

It sounds like click-bait, or one of those late night TV promotions – “Improve your database performance by 100% – by changing just this one setting!” But in this case, it’s true – you can drastically improve PostgreSQL on Windows performance by changing one configuration setting – and we made this the default in our Postgres by BigSQL distribution for 9.2 thru 9.6.

tl;dr – if you have high query load, change “update_process_title” to ‘off’ on Windows, and get 100% more throughput.

Improve PostgreSQL performance by turning off update_process_title

Most Postgres DBAs already know that they need to tune settings for shared buffers, WAL segments, checkpoints, etc., to get the best performance from their database. If you are running PostgreSQL on Windows, there’s another setting that you need to look at, specifically “update_process_title”. Changing this setting from “on” to “off” can improve throughput on a heavy query load by close to 100%.

We ran a series of benchmark tests in our performance lab and you can see the dramatic improvement in the graphs displayed. We tested PostgreSQL 9.5 on a 16-core Windows server with fast SSD drives using a standard pgbench run in both read-only and read-write modes. Scaling from 4 to 40 clients shows a plateau in throughput (measured by TPS) after 8 clients when the setting is set to “on”. Changing the update_process_title setting to “off” allows PostgreSQL to continue to scale throughput, showing increasing TPS up to 40 clients. The throughput at 32 read-only clients increases from 20K TPS to 58K TPS (180% higher) and at 40 clients continues to climb to 76K TPS (270% higher).

Improvement in read-write transactions turning off update_process_title

This performance gain is seen for both read-only and read-write workloads. With 32 clients, the write throughput increases from 2,700 TPS to 7,700 TPS (180% higher) and at 40 clients continues to climb to 8,200 (200% higher).

The update_process_title setting controls whether or not Postgres will update the process description that you see when querying the system list of running commands based on the current SQL statement being processed. On Linux this is done using ps, on Windows it requires the ProcessExplorer tool. Updating the process description becomes a bottleneck on Windows, and limits the throughput even on a high-end server. Not many Windows admins actually use this information on a regular basis, so unless you are actively debugging a slow or long running process using this process information, you should leave this turned off.

Takayuki Tsunakawa originally tracked down this bottleneck and created a patch for PostgreSQL 9.6 that has changed the default to be ‘off’ on Windows. We have made the same setting change in BigSQL distributions of version 9.2 thru 9.5 as well as 9.6. So even if you’re not ready to move to the new 9.6 version, when you install Postgres by BigSQL on Windows you are getting the best performance out-of-the-box.


Federico Campoli: Mighty morphin power elephant

From Planet PostgreSQL. Published on Sep 19, 2016.

Back in 2013 I started playing with SQLAlchemy to create a simple extractor from heterogeneous systems to be pushed into PostgreSQL.
I decided to give the project a name which recalled the transformation, so I called it pg_chameleon.

To be honest, I didn't like SQLAlchemy. Like any other ORM, it adds an interface over the data layer that carries its own mental model of the data. I lost interest in developing a migrator very soon; after all, there are thousands of similar tools, many of them far better than mine (e.g. the awesome pgloader).

However, I recently revamped the project after discovering a Python library capable of reading the MySQL replication protocol. In a few weeks I cleaned out all the SQLAlchemy stuff, rebuilt the metadata extraction using the information_schema, and finally had a usable tool to replicate the data across the two systems.

I've also changed the license from GPL to the 2 clause BSD.

The tool requires testing. I'm absolutely sure it's full of bugs and issues, but it seems to work quite nicely.

Some key aspects:

  • It is developed in Python 2.7. Bear with me; I'll build a port to Python 3.x when/if the project gets to an initial release.
  • I use tabs (4-space tabs). Bear with me again. I tried to use spaces and I almost threw my laptop out of the window.
  • is not working. I'll fix this as soon as I do a release.
  • Yes, the SQL part uses "hungarian notation" and the keywords are uppercase, with strange indentation on the statements.
  • The DDL is not yet replicated. I'm thinking about a clever approach to the problem.

That's it. If you want to test it please do and try to break the tool :)

The tool is on github here:

How to Create a Password Reset View

By Simple is Better Than Complex from Django community aggregator: Community blog posts. Published on Sep 19, 2016.

For this short tutorial we will be using the django.contrib.auth views to add password reset functionality to your Django application. The process of resetting passwords involves sending emails, so we will be using the console email backend to debug and check that everything is working. At the end of this tutorial I will also provide resources to properly configure a production-quality email server.


Basically all you need is to have django.contrib.auth in your INSTALLED_APPS and an email service properly configured (for production). During development we can use the file/console email backend.


EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'  # During development only


We need 4 different views:

  1. password_reset: Form where the user submits their email address
  2. password_reset_done: Page displayed to the user after submitting the email form. Usually with instructions to open the email account, look in the spam folder etc., and asking the user to click on the link they will receive.
  3. password_reset_confirm: The link that was emailed to the user. This view will validate the token and display a password form if the token is valid, or an error message if the token is invalid (e.g. it was already used or has expired).
  4. password_reset_complete: Page displayed to the user after the password was successfully changed.

from django.conf.urls import url
from django.contrib.auth import views as auth_views

urlpatterns = [
    url(r'^password_reset/$', auth_views.password_reset, name='password_reset'),
    url(r'^password_reset/done/$', auth_views.password_reset_done, name='password_reset_done'),
    url(r'^reset/(?P<uidb64>[0-9A-Za-z_\-]+)/(?P<token>[0-9A-Za-z]{1,13}-[0-9A-Za-z]{1,20})/$',
        auth_views.password_reset_confirm, name='password_reset_confirm'),
    url(r'^reset/done/$', auth_views.password_reset_complete, name='password_reset_complete'),
]

Or you can simply include all auth views:

from django.conf.urls import include, url

urlpatterns = [
    url('^', include('django.contrib.auth.urls')),
]

After including the routes in the project’s URL conf, it is now just a matter of creating the templates. You won’t need to mess with the views.

For convenience, and to avoid passing extra parameters, create a folder named registration inside your templates folder.

List of required templates:

  • registration/password_reset_form.html
  • registration/password_reset_subject.txt
  • registration/password_reset_email.html
  • registration/password_reset_done.html
  • registration/password_reset_confirm.html
  • registration/password_reset_complete.html


  • registration/password_reset_form.html
{% extends 'base.html' %}

{% block content %}
  <h3>Forgot password</h3>
  <form method="post">
    {% csrf_token %}
    {{ form.as_p }}
    <button type="submit">Submit</button>
  </form>
{% endblock %}


  • registration/password_reset_subject.txt
TestSite password reset

(It’s just a one line file with the subject of the email that will be sent to the user).

  • registration/password_reset_email.html
{% autoescape off %}
To initiate the password reset process for your {{ user.get_username }} TestSite Account,
click the link below:

{{ protocol }}://{{ domain }}{% url 'password_reset_confirm' uidb64=uid token=token %}

If clicking the link above doesn't work, please copy and paste the URL in a new browser
window instead.

The TestSite Team
{% endautoescape %}
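The uidb64 placeholder in the link above is just the user's primary key encoded as URL-safe base64 with the padding stripped. A standalone sketch of what django.utils.http's helpers do (simplified; the exact return types differ across Django versions):

```python
import base64

def urlsafe_base64_encode(data: bytes) -> str:
    # Encode bytes to a URL-safe base64 string, stripping '=' padding,
    # so the value can live safely inside a URL path segment.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def urlsafe_base64_decode(encoded: str) -> bytes:
    # Re-add the stripped padding before decoding.
    padding = "=" * (-len(encoded) % 4)
    return base64.urlsafe_b64decode(encoded + padding)

# A user with pk=42 yields a short opaque component for the reset URL:
uid = urlsafe_base64_encode(b"42")
print(uid)                                # NDI
assert urlsafe_base64_decode(uid) == b"42"
```

The token component, by contrast, is a signed, time-limited value produced by Django's token generator, which is what lets password_reset_confirm reject reused or expired links.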


  • registration/password_reset_done.html
{% extends 'base.html' %}

{% block content %}
    We've emailed you instructions for setting your password, if an account exists with the email you entered.
    You should receive them shortly.
    If you don't receive an email, please make sure you've entered the address you registered with,
    and check your spam folder.
{% endblock %}


  • registration/password_reset_confirm.html
{% extends 'base.html' %}

{% block content %}
  {% if validlink %}
    <h3>Change password</h3>
    <form method="post">
      {% csrf_token %}
      {{ form.as_p }}
      <button type="submit">Change password</button>
    </form>
  {% else %}
      The password reset link was invalid, possibly because it has already been used.
      Please request a new password reset.
  {% endif %}
{% endblock %}


  • registration/password_reset_complete.html
{% extends 'base.html' %}

{% block content %}
    Your password has been set. You may go ahead and <a href="{% url 'signin' %}">sign in</a> now.
{% endblock %}

Testing the Views


(Screenshots of the password reset views.)

The email is rendered using registration/password_reset_subject.txt and registration/password_reset_email.html.

Configuring a SMTP Email Service

First remove the EMAIL_BACKEND setting, since Django defaults to the SMTP email backend.

Now add the information from your email provider:

EMAIL_HOST_USER = 'testsite_app'
EMAIL_HOST_PASSWORD = 'mys3cr3tp4ssw0rd'
DEFAULT_FROM_EMAIL = 'TestSite Team <>'

There are many transactional email services out there, such as SendGrid, MailGun and Mandrill. If you want to learn more about how to configure a production-quality email service, I wrote a very detailed post about how to configure SendGrid using Django:

How to Send Email in a Django App

Greg Sabino Mullane: pg_healer: repairing Postgres problems automatically

From Planet PostgreSQL. Published on Sep 19, 2016.

Sometimes, the elephant gets hurt - inducing database errors! Data corruption is a fact of life in working with computers, and Postgres is not immune. With the addition of the "data checksums" feature, detecting such corruption is now much easier. But detection is not enough - what happens after the corruption is detected? What if Postgres could fix the problem all by itself - what if we could give the elephant a mutant healing power?!?

Now we can. I wrote an extension named pg_healer that does just that - detects corruption issues, and automatically repairs them. Let's see how it works with a demonstration. For this, we will be purposefully corrupting the "pgbench_branches" table, part of the venerable pgbench utility.

For the initial setup, we will create a new Postgres cluster and install the pgbench schema. The all-important checksum feature needs to be enabled when we initdb, and we will use a non-standard port for testing:

$ initdb --data-checksums dojo
The files belonging to this database system will be owned by user "greg".
Data page checksums are enabled.

creating directory dojo ... ok
creating subdirectories ... ok
$ echo port=9999 >> dojo/postgresql.conf
$ pg_ctl start -D dojo -l log.dojo.txt
server starting
$ createdb -p 9999 $USER
$ pgbench -p 9999 -i
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
creating tables...
100000 of 100000 tuples (100%) done (elapsed 0.35 s, remaining 0.00 s)
set primary keys...

Next, we install the pg_healer extension. As it needs to access some low-level hooks, we need to load it on startup, by adding a line to the postgresql.conf file:

$ git clone git://
Cloning into 'pg_healer'...
$ cd pg_healer
$ make install
gcc -Wall -Wmissing-prototypes ... -c -o pg_healer.o pg_healer.c
gcc -Wall -Wmissing-prototypes ... -shared -o pg_healer.o
$ echo "shared_preload_libraries = 'pg_healer'" >> dojo/postgresql.conf
$ pg_ctl restart -D dojo -l log.dojo.txt
waiting for server to shut down.... done
server stopped
server starting
## Make sure the extension has loaded cleanly.
## If it did not, the log file would complain
$ tail -2 log.dojo.txt
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

Now for the fun part. We want to purposefully corrupt the file containing the data for the pgbench_branches table, in simulation of a failing hard drive or other really serious problem. The type of problem that normally causes the DBA to get paged in the middle of the night. Before we do that, we want to take a peek at the contents of that table, and then find out which actual disk files contain the table:

$ psql -p 9999 -c "select * from pgbench_branches"
 bid | bbalance | filler 
   1 |        0 | 
(1 row)

$ psql -p 9999 -Atc "select format('%s/%s',
    current_setting('data_directory'),
    pg_relation_filepath('pgbench_branches'))"
/home/greg/pg_healer/dojo/base/16384/198461

## That file is too cumbersome to keep typing out, so:
$ ln -s /home/greg/pg_healer/dojo/base/16384/198461 myrelfile

Let's throw a deadly shuriken right into the middle of it!

## Here is what the file looks like in its original uncorrupted form
## (checksum is in red):
$ xxd -a -g1 -u myrelfile
00000000: 00 00 00 00 30 69 BC 37 74 66 04 00 1C 00 E0 1F  ....0i.7tf......
00000010: 00 20 04 20 00 00 00 00 E0 9F 40 00 00 00 00 00  . . ......@.....
00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00001fe0: F7 0B 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00001ff0: 01 00 03 00 01 09 18 03 01 00 00 00 00 00 00 00  ................

## Good ol' dd is the right tool for the job here:
$ echo -n "Shuriken!" | dd conv=notrunc oflag=seek_bytes seek=4000 bs=9 count=1 of=myrelfile
1+0 records in
1+0 records out
9 bytes (9 B) copied, 0.000156565 s, 57.5 kB/s

## Take a peek inside the file to make sure the shuriken got embedded deeply:
$ xxd -a -g1 -u myrelfile
00000000: 00 00 00 00 30 69 BC 37 74 66 04 00 1C 00 E0 1F  ....0i.7tf......
00000010: 00 20 04 20 00 00 00 00 E0 9F 40 00 00 00 00 00  . . ......@.....
00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00000fa0: 53 68 75 72 69 6B 65 6E 21 00 00 00 00 00 00 00  Shuriken!.......
00000fb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00001fe0: F7 0B 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00001ff0: 01 00 03 00 01 09 18 03 01 00 00 00 00 00 00 00  ................

These shurikens are not so deadly, but quite yummy!
(photograph by kahvikisu)

Now that we've messed up the file, watch closely at what happens when we try to read from it. We are going to do this three times. The first time, the table will still be in the shared buffer cache, and thus will show no error. The second time, the table will be read from the disk and throw an error. At this point, pg_healer will see the error and repair it. The final read will pull from the completely healed table:

$ psql -p 9999 -c "select * from pgbench_branches"
 bid | bbalance | filler 
   1 |        0 | 
(1 row)

## This will force the table out of shared_buffers, so that the next
## time it is accessed, Postgres must read from the disk:
$ psql -p 9999 -qtc "select pg_healer_remove_from_buffer('pgbench_branches')"

$ psql -p 9999 -c "select * from pgbench_branches"
WARNING:  page verification failed, calculated checksum 9478 but expected 26228
INFO:  File has been healed: base/16384/198461 (intrinsic healing)
ERROR:  invalid page in block 0 of relation base/16384/198461

## Mutant healing power was activated. Observe:
$ psql -p 9999 -c "select * from pgbench_branches"
 bid | bbalance | filler 
   1 |        0 | 
(1 row)

(The pg_healer extension catches the error from the WARNING, but the ERROR is already queued up to be displayed, which is why it appears after the INFO.)

The corruption we created before changed the "free space" of the Postgres "page" structure. There are multiple ways pg_healer can fix things: this demonstrates one of the "intrinsic" fixes, which require no external knowledge to fix. Corruption can occur anywhere on the page, of course, including inside your data (as opposed to the meta-data or free space). One of the methods of fixing this is for pg_healer to use another copy of the table to try and repair the original table.

While eventually pg_healer will be able to reach out to replicas for a copy of the (non-corrupted) table data it needs, a simpler method is to simply create a good copy inside the data directory. There is a helper function that does just that, by copying the important files to a new directory. Details on how this is kept refreshed will be covered later; for now, let's see it in action and observe how it can help Postgres heal itself from more serious corruption problems:

$ psql -p 9999 -c 'create extension pg_healer'
$ psql -p 9999 -qc 'checkpoint'
$ psql -p 9999 -c 'select pg_healer_cauldron()'

Rather than free space, let's corrupt something a little more important: the line pointers, which indicate where, inside the page, each tuple (aka table row) is located. Extremely critical information, which is about to get blown away with another deadly shuriken!

$ echo -n "Shuriken!" | dd conv=notrunc oflag=seek_bytes seek=20 bs=9 count=1 of=myrelfile
1+0 records in
1+0 records out
9 bytes (9 B) copied, 9.3577e-05 s, 96.2 kB/s
$ xxd -a -g1 -u myrelfile
00000000: 00 00 00 00 30 69 BC 37 74 66 04 00 1C 00 E0 1F  ....0i.7tf......
00000010: 00 20 04 20 53 68 75 72 69 6B 65 6E 21 00 00 00  . . Shuriken!...
00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00001fe0: F7 0B 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00001ff0: 01 00 03 00 01 09 18 03 01 00 00 00 00 00 00 00  ................

$ psql -p 9999 -qtc "select pg_healer_remove_from_buffer('pgbench_branches')"

$ psql -p 9999 -c "select * from pgbench_branches"
WARNING:  page verification failed, calculated checksum 8393 but expected 26228
INFO:  File has been healed: base/16384/198461 (external checksum match)
ERROR:  invalid page in block 0 of relation base/16384/198461

$ psql -p 9999 -c "select * from pgbench_branches"
 bid | bbalance | filler 
   1 |        0 | 
(1 row)

## Has the shuriken really been removed?
$ xxd -a -g1 -u myrelfile
00000000: 00 00 00 00 30 69 BC 37 74 66 04 00 1C 00 E0 1F  ....0i.7tf......
00000010: 00 20 04 20 00 00 00 00 E0 9F 40 00 00 00 00 00  . . ......@.....
00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00001fe0: F7 0B 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00001ff0: 01 00 03 00 01 09 18 03 01 00 00 00 00 00 00 00  ................

Once again, pg_healer has repaired the file. This time, however, it reached out to a version of the file outside the data directory, copied the old page data to the new page data, and then used the checksum to confirm that the changes were correct. This method only works, however, if the original file and the copy have the same checksum - which means that no changes have been made since the copy was made via pg_healer_cauldron(). As this is not always possible, there is a third method pg_healer can use, which is to examine things row by row and to try and repair the damage.
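Stripped of the Postgres page format details, that checksum-gated, copy-based repair amounts to only a few lines. Here is a toy sketch of the idea, not pg_healer's actual code: it uses CRC32 in place of Postgres's real page checksum algorithm, and the ToyHealer class is purely illustrative:

```python
import zlib

PAGE_SIZE = 8192

class ToyHealer:
    """Illustrative only: CRC32 stands in for Postgres page checksums."""

    def __init__(self, page: bytes):
        # The pg_healer_cauldron() analogue: snapshot a known-good copy
        # of the page along with its checksum.
        self.good_copy = bytes(page)
        self.good_sum = zlib.crc32(page)

    def read(self, page: bytes) -> bytes:
        if zlib.crc32(page) == self.good_sum:
            return page                  # verification passed
        # "External checksum match" healing: the copy is only usable
        # because the relation has not changed since the snapshot.
        print("WARNING: page verification failed; healing from copy")
        return self.good_copy

page = bytes(PAGE_SIZE)                  # a pristine all-zero "page"
healer = ToyHealer(page)

corrupted = bytearray(page)
corrupted[4000:4009] = b"Shuriken!"      # same corruption as the demo
healed = healer.read(bytes(corrupted))
assert healed == page                    # fully restored
```

As in the real extension, this strategy fails as soon as the page legitimately changes after the snapshot, which is why pg_healer also needs the row-by-row method shown next.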

For this final demo, we are going to change the table by adding a new row, which ensures that the checksums against the copy will no longer match. After that, we are going to add some corruption to one of the table rows (aka tuples), and see if pg_healer is able to repair the table:

$ psql -p 9999 -qtc 'insert into pgbench_branches values (2,12345)'
$ psql -p 9999 -qc 'checkpoint'

## Throw a shuriken right into an active row!
$ echo -n "Shuriken!" | dd conv=notrunc oflag=seek_bytes seek=8180 bs=9 count=1 of=myrelfile
1+0 records in
1+0 records out
9 bytes (9 B) copied, 0.000110317 s, 81.6 kB/s

## If you look close, you will notice the checksum (in red) has also changed:
$ xxd -a -g1 -u myrelfile
00000000: 00 00 00 00 70 B0 8E 38 A4 8E 00 00 20 00 C0 1F  ....p..8.... ...
00000010: 00 20 04 20 00 00 00 00 E0 9F 40 00 C0 9F 40 00  . . ......@...@.
00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00001fc0: 05 0C 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00001fd0: 02 00 03 00 01 08 18 03 02 00 00 00 39 30 00 00  ............90..
00001fe0: F7 0B 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00001ff0: 01 00 03 00 53 68 75 72 69 6B 65 6E 21 00 00 00  ....Shuriken!...

$ psql -p 9999 -qtc "select pg_healer_remove_from_buffer('pgbench_branches')"

$ psql -p 9999 -c "select * from pgbench_branches"
WARNING:  page verification failed, calculated checksum 56115 but expected 36516
INFO:  File has been healed: base/16384/198461 (external tuple healing)
ERROR:  invalid page in block 0 of relation base/16384/198461

$ psql -p 9999 -c "select * from pgbench_branches"
 bid | bbalance | filler 
   1 |        0 | 
   2 |    12345 | 
(2 rows)

There are still some rough edges, but for a proof of concept it works quite nicely. While reacting to corruption errors as they appear is nice, in the future I would like it to be more proactive, and run as a background process that scans the database for any problems and fixes them. Ideally, it should be able to handle a wider class of table corruption problems, as well as problems in indexes, free space maps, system catalogs, etc. Please jump in and lend a hand - the project is on github as pg_healer.

Data corruption is a fact of life DBAs must confront, be it from failing hard drives, cosmic rays, or other reasons. While the detection of such errors was greatly improved in Postgres 9.3 with the --data-checksums argument to initdb (which ought to be the default!), it's time to not just detect, but heal!

Thoughts on Envoy's Design

By Will Larson from Django community aggregator: Community blog posts. Published on Sep 18, 2016.

A couple days ago Lyft released Envoy, which is a pretty exciting take on a layer 7 proxy. This is an area that I’ve been thinking about a fair amount, both in terms of rolling out widespread quality of service, as well as request routing in heavily polyglot environments.

Before getting into my thoughts, let’s take a quick recap of common load balancing strategies.

Load balancing through an AWS ELB.

Historically, the most common approach to load balancing was to use a centralized load balancer of some kind and to route all traffic to it. These LBs can be done at a per-service level like ELBs, or can be very sophisticated hardware like F5s.

Increasingly these seem to be going out of style.

This is in part due to their cost and complexity, but my take is that their lack of popularity is more driven by a lack of flexibility and control. As a programmer, it’s relatively intuitive to programmatically manage HAProxy, in a way that most centralized load balancers don’t offer. (And it’s not even that easy to manage HAProxy programmatically, although it’s making steady progress.)

Load balancing through a per-process mechanism like Finagle.

On the other end of the continuum, there is per-process load balancing like Finagle, where each one of your processes maintains its own health-checking and routing tables. This tends to allow a truly epic level of customization and control, but requires that you solve the fairly intimidating domain of library management.

For companies who are operating in one language or runtime, typically the JVM, and who are operating from a monorepo, the challenges of library management are pretty tractable. For most everyone else–those with three or more languages or with service per repo–these problems require very significant ecosystem investment.

Some companies who are devoted to supporting many languages and many repositories (like Netflix, Uber) are slowly building the infrastructure to make it possible, but in general this isn’t a problem you want to take on, which has been the prime motivation for developing the sidecar approach.

Load balancing through a sidecar.

The sidecar model is to have a process running per server, and to have all processes on a server communicate through that local process. Many companies–including any company that ever hired a Digg systems engineer–have been running HAProxy in this mode for quite a while, and it gives a good combination of allowing flexibility without requiring that you solve the library management problem.

One of the downsides of the sidecar router has traditionally been poor support for anything other than HTTP/1.0 and HTTP/1.1, and very few options which are both high performance and highly customizable (HAProxy has amazing performance but becomes fairly brittle as you try to do more sophisticated things).

That is where Envoy fits in.

Ok! So, let’s start with the stuff that Envoy does really well.

First, the docs are simply excellent. They do a wonderful job in particular of explaining the high-level architecture and giving a sense of the design constraints which steered them towards that architecture. I imagine it was a hard discussion to decide to delay releasing the project until the documentation was polished to this state, but I’m really glad they did.

The approach to supporting ratelimiting is also great. So many systems require adopting their ecosystem of supporting tools (Kafka, requiring that you also roll out Zookeeper, is a good example of this), but Envoy defines a GRPC interface that a ratelimiting server needs to provide, and thus abstracts itself from the details of how ratelimiting is implemented. I’d love to see more systems take this approach to allow greater interoperability. (Very long term, I look forward to a future where we have shared standard interfaces for common functionality like ratelimiting, updating routing tables, login, tracing, and so on.)

Using asynchronous DNS resolution is a very clever design decision as well. Synchronous DNS resolution can be slow, and asynchronous resolution also means that if your DNS resolution becomes unavailable (which has absolutely happened to me a number of times, mostly with bind saturating its CPU core), you’ll continue routing to the last known response. This approach also allows you to avoid learning the intricacies of your various languages’ DNS resolvers, which turn out to be incorrect or unavailable in excitingly unique ways.
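The "continue routing to the last known response" behavior can be sketched as a small cache in front of a resolver. This is an illustrative sketch only; the resolve_fn callback is a stand-in for a real DNS lookup, not Envoy's actual implementation:

```python
import time

class CachingResolver:
    """Serve the last known answer when the upstream resolver fails."""

    def __init__(self, resolve_fn, ttl=30.0):
        self._resolve = resolve_fn   # stand-in for the real DNS lookup
        self._ttl = ttl
        self._cache = {}             # name -> (addresses, fetched_at)

    def lookup(self, name):
        entry = self._cache.get(name)
        if entry and time.monotonic() - entry[1] < self._ttl:
            return entry[0]          # fresh cached answer
        try:
            addrs = self._resolve(name)
        except OSError:
            if entry:                # resolver down: reuse stale answer
                return entry[0]
            raise                    # no previous answer to fall back on
        self._cache[name] = (addrs, time.monotonic())
        return addrs

# Simulate the upstream resolver becoming unavailable:
answers = {"svc.internal": ["10.0.0.1", "10.0.0.2"]}
def fake_resolve(name):
    if answers is None:
        raise OSError("resolver unavailable")
    return answers[name]

r = CachingResolver(fake_resolve, ttl=0.0)   # ttl=0 forces re-resolution
assert r.lookup("svc.internal") == ["10.0.0.1", "10.0.0.2"]
answers = None                               # resolver goes down
assert r.lookup("svc.internal") == ["10.0.0.1", "10.0.0.2"]  # stale answer served
```

The design choice is the interesting part: a resolver outage degrades into serving stale routing data instead of failing requests outright.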

Hot restart using a unix domain socket is interesting as well, in particular how containers have become a first-class design constraint. Instead of something hacked on later, “How will this work with container deployment?” is asked at the start of new system design!

For a long time it felt like Thrift was the front-running IDL, but lately it’s hard to argue with GRPC: it has good language support, and the decision to build on HTTP/2 means it has been able to introduce streaming bidirectional communication as a foundational building block for services. It’s very exciting to see Envoy betting on GRPC in a way which will reduce the overhead of adopting GRPC and HTTP/2 for companies which are already running sidecar routers.

All of those excellent things aside, I think what’s most interesting is the possibility of a broader ecosystem developing on top of the HTTP and TCP filter mechanisms. The approach seems sufficiently generalized that someone will be able to add more protocols and to customize behavior to their needs (to trade out the centralized ratelimiting for a local ratelimiter or the local circuit breaker for a centralized circuit breaker, etc).

You can imagine an ecosystem with MySQL and PostgreSQL protocol aware TCP filters, a more comprehensive suite of back pressure tooling for latency injection, global circuit breaking along the lines of Hystrix (would be pretty amazing to integrate with Hystrix’s dashboard versus needing to recreate something similar), and even the ability to manually block specific IPs to provide easier protection from badly behaving clients (certainly not at the level of a DDoS, rather a runaway script or some such).

In terms of a specific example of a gap in the initial offering, my experience is that distributed healthchecking–even with local caching of health check results–is too expensive at some scale (as a shot in the dark, let’s say somewhere around the scale of 10,000 servers with 10 services per server). Providing a centralized healthcheck cache would be an extremely valuable extension to Envoy for very large-scale deployments. (Many people will never run into this issue, either due to not needing that many servers and services, or because you scope your failure zones to something smaller. It’s probably true that a proactive engineering team with enough resources would never need this functionality. If someone does want to take a stab at this, the RAFT protocol has some pretty nice thoughts around decentralized health checking.)

One other quick thought is that I don’t see any coverage of the concept of checking identity in addition to checking health. Bad deploys and bad configuration can sometimes end up with a service running in an unexpected place, and in the worst case that unexpected place is the expected place for some unrelated process. Ideally your load balancer can ask your service to identify itself (both some kind of unique identifier along with its version), helping avoid some very specific and very confusing to debug scenarios.

Altogether, it feels like discovery and routing is becoming a very busy space as the most interesting ideas from Twitter and Google become mainstream, and I’m very excited to see Envoy make its mark.

As always, I’m very interested to hear your thoughts about load balancing, proxying and reliability! Send me an email (email is in the top-right column), or tweet me @lethain.

Regina Obe: PostGIS 2.3.0rc1 Released

From Planet PostgreSQL. Published on Sep 18, 2016.

PostGIS 2.3.0rc1 is feature complete, so we’re looking for testing and feedback! Best served with PostgreSQL 9.6rc1 and pgRouting 2.3.0-rc1

Please give this release candidate a try and report back any issues you encounter. New things since the 2.3.0beta1 release:

Please report bugs that you find in this release.

Important / Breaking Changes

  • 3466, Casting from box3d to geometry now returns a 3D geometry (Julien Rouhaud of Dalibo)

  • 3604, pgcommon/ orders CFLAGS incorrectly leading to wrong liblwgeom.h (Greg Troxel)

  • 3396, ST_EstimatedExtent, now throws WARNING instead of ERROR (Regina Obe)

New Features and Performance Enhancements

  • Add support for custom TOC in (Christoph Moench-Tegeder)

  • Add support for negative indexing in ST_PointN and ST_SetPoint (Rémi Cura)
  • Numerous new function additions and enhancements: New Functions and Enhancements

  • 3549, Support PgSQL 9.6 parallel query mode, as far as possible (Paul Ramsey, Regina Obe)

  • 3557, Geometry function costs based on query stats (Paul Norman)
  • 3591, Add support for BRIN indexes (Giuseppe Broccolo of 2nd Quadrant, Julien Rouhaud and Ronan Dunklau of Dalibo)
  • 3496, Make postgis non-relocateable (for extension install), schema qualify calls in functions (Regina Obe) Should resolve once and for all for extensions #3494, #3486, #3076

  • 3547, Update tiger geocoder to support TIGER 2016 and use http or ftp (Regina Obe)

See the full list of changes in the news file and please report bugs that you find in the release. Binary packages will appear in repositories over the coming weeks as packagers roll out builds.

View all closed tickets for 2.3.0.

Vladimir Borodin: One more time about collation in PostgreSQL

From Planet PostgreSQL. Published on Sep 18, 2016.

It’s been a long time since my last post. It’s time to write something useful :)

When people start working with PostgreSQL, they sometimes make mistakes which are really difficult to fix later. For example, during initdb of your first DB you don’t really understand whether you need checksums for your data or not. Especially since by default they are turned off and the documentation says that they “may incur a noticeable performance penalty”.

And when you already have several hundred databases with a few hundred terabytes of data on different hardware or (even worse) in different virtualization systems, you do understand that you are ready to pay some performance for the identification of silent data corruption. But the problem is that you can’t easily turn checksums on. It is one of the things that is set only once, while invoking the initdb command. In the bright future we hope for logical replication, but until that moment the only way is pg_dump, initdb, pg_restore, that is, with downtime.

And even if checksums may not be useful for you (e.g. you have perfect hardware and an OS without bugs), lc_collate is important for everyone. And now I will prove it.

Sort order

Suppose you have installed PostgreSQL from packages or built it from sources and initialized DB by yourself. Most probably, in the modern world of victorious UTF-8 you would see something like that:

d0uble ~ $ psql -l
                               List of databases
   Name    | Owner  | Encoding |   Collate   |    Ctype    | Access privileges
 postgres  | d0uble | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | d0uble | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/d0uble        +
           |        |          |             |             | d0uble=CTc/d0uble
 template1 | d0uble | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/d0uble        +
           |        |          |             |             | d0uble=CTc/d0uble
(3 rows)

d0uble ~ $

If you don’t specify explicitly, initdb will take settings for columns 3-5 from operating system. And most likely you would think that everything is fine if you see UTF-8 there. However, in some cases you may be surprised. Look at the following query result on linux box:

linux> SELECT name FROM unnest(ARRAY[
    'MYNAME', ' my_name', 'my-image.jpg', 'my-third-image.jpg'
]) name ORDER BY name;
(4 rows)


Such a sort order seems really weird. And this despite the fact that the client connected to the DB with quite adequate settings:

linux> SELECT name, setting FROM pg_settings WHERE category ~ 'Locale';
            name            |      setting
 client_encoding            | UTF8
 DateStyle                  | ISO, MDY
 default_text_search_config | pg_catalog.english
 extra_float_digits         | 0
 IntervalStyle              | postgres
 lc_collate                 | en_US.UTF-8
 lc_ctype                   | en_US.UTF-8
 lc_messages                | en_US.UTF-8
 lc_monetary                | en_US.UTF-8
 lc_numeric                 | en_US.UTF-8
 lc_time                    | en_US.UTF-8
 server_encoding            | UTF8
 TimeZone                   | Europe/Moscow
 timezone_abbreviations     | Default
(14 rows)


The result doesn’t depend on the distro — at least it is the same on RHEL 6 and Ubuntu 14.04. Even more strange is the fact that the same query with the same server and client settings on Mac OS X gives a different result:

macos> SELECT name FROM unnest(ARRAY[
    'MYNAME', ' my_name', 'my-image.jpg', 'my-third-image.jpg'
]) name ORDER BY name;
(4 rows)


At first glance, linux is seriously broken here. But the real problem is that a result which depends on the OS is a very bad result. Fortunately, we discovered it during testing — tests on the developer’s macbook were fine, but on the testing linux server they were not.

The reason is that PostgreSQL takes the collation from the OS and, surprisingly, UTF-8 may be different ¯\_(ツ)_/¯ While searching you can find a lot of threads about different sort order in Linux and Mac OS X (1, 2, 3, 4, 5, 6).

Opinions differ on the question “who is to blame?”, but we can confidently say that Mac OS X definitely doesn’t account for all regional specifics. It can be seen from the links above or, for example, in the following query for the Russian language:

macos> SELECT name FROM unnest(ARRAY[
    'а', 'д', 'е', 'ё', 'ж', 'я'
]) name ORDER BY name;
(6 rows)


Meanwhile, Linux handles this request reasonably from my point of view. And even the previous query result can be explained — linux ignores whitespace and the symbols -, _ while sorting. That is, thinking about it a little, the broken OS is Mac OS X.

After all this we moved our tests to docker to be independent from OS characteristics, but there are other ways to get the same results in different operating systems. The easiest one is to use LC_COLLATE = C, because it is the only collation which is distributed with PostgreSQL and doesn’t depend on the OS (see the documentation).

linux> SELECT name FROM unnest(ARRAY[
    'MYNAME', ' my_name', 'my-image.jpg', 'my-third-image.jpg'
]) name ORDER BY name COLLATE "C";
(4 rows)


You can see that in such a case the results are the same for both OSes. But it is also easy to see that they are the same as in Mac OS X, so they come with the same problems for multibyte encodings, e.g.:

linux> SELECT name FROM unnest(ARRAY[
    'а', 'д', 'е', 'ё', 'ж', 'я'
]) name ORDER BY name COLLATE "C";
(6 rows)


It is not worth thinking that the sort result with LC_COLLATE=en_US.UTF-8 in Mac OS X will always be the same as with LC_COLLATE=C in any OS. You can only be certain of the fact that collation C guarantees the same result everywhere, because it is provided with PostgreSQL and doesn’t depend on the OS.

Meanwhile, from the purely narrow-minded point of view of an ordinary user, it seems odd not to account for whitespace and other non-alphanumeric characters while sorting, but these rules have been invented and standardized, and it is not for me to change them. However, in our original problem these rules were wrong for us, so we moved to the C collation.
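Python makes it easy to see the C-collation behavior outside of Postgres. sorted() on strings compares Unicode code points, which for UTF-8 matches the byte order the C collation uses; a locale-aware sort would go through locale.strxfrm and inherit exactly the OS-dependence described above:

```python
names = ['MYNAME', ' my_name', 'my-image.jpg', 'my-third-image.jpg']

# Plain sorted() compares code points, i.e. the same order LC_COLLATE=C
# produces: ' ' (0x20) < 'M' (0x4D) < 'm' (0x6D).
print(sorted(names))
# [' my_name', 'MYNAME', 'my-image.jpg', 'my-third-image.jpg']

# A locale-aware sort, by contrast, depends on the OS locale tables:
# import locale
# locale.setlocale(locale.LC_COLLATE, 'en_US.UTF-8')
# sorted(names, key=locale.strxfrm)
```

Note how the leading space and the case of 'MYNAME' dominate the bytewise order, which is precisely what makes C collation predictable but sometimes surprising to users.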

Prefix queries

The fact that Postgres relies on glibc for sorting has a few more nuances worth mentioning. For example, let's create the following table with two text fields and insert a million rows into it:

linux> CREATE TABLE sort_test (
    a text,
    b text COLLATE "C");
linux> INSERT INTO sort_test SELECT md5(n::text), md5(n::text)
    FROM generate_series(1, 1000000) n;
INSERT 0 1000000
linux> CREATE INDEX ON sort_test USING btree (a);
linux> CREATE INDEX ON sort_test USING btree (b);
linux> ANALYZE sort_test;
linux> SELECT * FROM sort_test LIMIT 2;
                a                 |                b
 c4ca4238a0b923820dcc509a6f75849b | c4ca4238a0b923820dcc509a6f75849b
 c81e728d9d4c2f636f067f89cc14862c | c81e728d9d4c2f636f067f89cc14862c
(2 rows)


The first field uses the default collation (en_US.UTF-8 in my example) while the second uses the C collation; the values are identical in both columns. Let's look at the plans for prefix queries on each field:

linux> explain SELECT * FROM sort_test WHERE a LIKE 'c4ca4238a0%';
                           QUERY PLAN
 Seq Scan on sort_test  (cost=0.00..24846.00 rows=100 width=66)
   Filter: (a ~~ 'c4ca4238a0%'::text)
(2 rows)

linux> explain SELECT * FROM sort_test WHERE b LIKE 'c4ca4238a0%';
                                     QUERY PLAN
 Index Scan using sort_test_b_idx on sort_test  (cost=0.42..8.45 rows=100 width=66)
   Index Cond: ((b >= 'c4ca4238a0'::text) AND (b < 'c4ca4238a1'::text))
   Filter: (b ~~ 'c4ca4238a0%'::text)
(3 rows)


It's easy to see that PostgreSQL uses the index only for the second query. The reason can be seen in the EXPLAIN output (see Index Cond): in the second case PostgreSQL knows the order of characters, so it converts the index search condition from b LIKE 'c4ca4238a0%' into b >= 'c4ca4238a0' AND b < 'c4ca4238a1' (and only afterwards filters the results by the original condition), and these two operations are well served by a B-Tree.

With the C collation, the estimated cost of this query is approximately 2,500 times lower.
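The rewrite PostgreSQL applies here can be sketched as follows. This is a simplification (the real implementation must handle carry/overflow on the last character and multibyte encodings), but it shows the idea:

```python
def like_prefix_bounds(prefix: str):
    """Turn b LIKE '<prefix>%' into a (lower, upper) range a btree can use.

    Simplified sketch: the exclusive upper bound is the prefix with its
    last character incremented by one codepoint.
    """
    lower = prefix
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return lower, upper

print(like_prefix_bounds('c4ca4238a0'))  # → ('c4ca4238a0', 'c4ca4238a1')
```

Both bounds are plain comparisons, so a single btree range scan answers the query; with a locale-dependent collation the planner cannot prove this rewrite is safe, hence the sequential scan.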

Abbreviated keys

One of the really good optimizations that appeared in PostgreSQL 9.5 is the so-called abbreviated keys. The best thing to read about it is the post by the optimization's author, Peter Geoghegan. In short, it greatly accelerates sorting of text fields and creating indexes on them. Some examples may be seen here.

Unfortunately, in 9.5.2 this optimization was turned off for all collations except C. The reason was a glibc bug (recall that PostgreSQL relies on glibc for all collations except C) that could leave the resulting indexes inconsistent.
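The core trick behind abbreviated keys can be sketched like this (an illustration of the idea, not PostgreSQL's actual code): pack a prefix of each key into a machine word, let cheap integer comparisons decide most of the ordering, and fall back to a full comparison only on ties:

```python
def abbreviated(key: bytes) -> int:
    # Pack the first 8 bytes into an integer; comparing two of these
    # integers gives the same order as comparing the byte prefixes.
    return int.from_bytes(key[:8].ljust(8, b'\x00'), 'big')

def sort_keys(keys):
    # The abbreviated key decides most comparisons cheaply; the full
    # key breaks ties, keeping the order correct.
    return sorted(keys, key=lambda k: (abbreviated(k), k))

data = [b'postgresql', b'postgres', b'pg', b'postgrey']
print(sort_keys(data))  # → [b'pg', b'postgres', b'postgresql', b'postgrey']
```

This only works when the byte order of the prefix matches the collation order, which is trivially true for C; for glibc collations the abbreviation must go through strxfrm, and the glibc bug mentioned above is what forced the feature off there.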

Instead of a conclusion

In the original issue we ultimately settled on lc_collate = C, because the data may be in different languages and this collation seems the best choice for that. Yes, it misses some corner cases in each individual language, but it is good enough overall.

Still, it is a pity there is no silver bullet: when all your data is in, say, Russian, you have to choose between performance and a sort order that correctly accounts for Russian language specifics.

Luca Ferrari: PGDay.IT 2016: it's time for you to speak!

From Planet PostgreSQL. Published on Sep 18, 2016.

As you probably already know, the Call For Papers for PGDay.IT 2016 is now open. Please see the details here and send your contribution following the instructions. The organizing committee will review each proposal in order to deliver a great program for the tenth edition of the Italian PostgreSQL conference.

Szymon Lipiński: Executing Custom SQL in Django Migrations

From Planet PostgreSQL. Published on Sep 17, 2016.

Since version 1.7, Django has natively supported database migrations similar to Rails migrations. The biggest difference fundamentally between the two is the way the migrations are created: Rails migrations are written by hand, specifying changes you want made to the database, while Django migrations are usually automatically generated to mirror the database schema in its current state.

Usually, Django’s automatic schema detection works quite nicely, but occasionally you will have to write some custom migration that Django can’t properly generate, such as a functional index in PostgreSQL.

Creating an empty migration

To create a custom migration, it’s easiest to start by generating an empty migration. In this example, it’ll be for an application called blog:

$ ./manage.py makemigrations blog --empty -n create_custom_index
Migrations for 'blog':

This generates a file at blog/migrations/ that will look something like this:

# -*- coding: utf-8 -*-
# Generated by Django 1.9.4 on 2016-09-17 17:35
from __future__ import unicode_literals

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('blog', '0001_initial'),
    ]

    operations = [
    ]

Adding Custom SQL to a Migration

The best way to run custom SQL in a migration is through the migrations.RunSQL operation. RunSQL allows you to write code for migrating forwards and backwards—that is, applying migrations and unapplying them. In this example, the first string passed to RunSQL is the forward SQL, the second is the reverse SQL.

# -*- coding: utf-8 -*-
# Generated by Django 1.9.4 on 2016-09-17 17:35
from __future__ import unicode_literals

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('blog', '0001_initial'),
    ]

    operations = [
        migrations.RunSQL(
            "CREATE INDEX i_active_posts ON posts(id) WHERE active",
            "DROP INDEX i_active_posts"
        ),
    ]

Unless you’re using Postgres for your database, you’ll need to install the sqlparse library, which allows Django to break the SQL strings into individual statements.
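The splitting sqlparse performs is roughly this: break a script on semicolons while respecting quoted strings. Here is a naive sketch of that idea (illustrative only; a real SQL parser also has to understand comments, dollar-quoting, and more):

```python
def split_statements(script: str):
    """Naive SQL statement splitter: splits on ';' outside single quotes."""
    statements, current, in_string = [], [], False
    for ch in script:
        if ch == "'":
            in_string = not in_string
        if ch == ';' and not in_string:
            stmt = ''.join(current).strip()
            if stmt:
                statements.append(stmt)
            current = []
        else:
            current.append(ch)
    tail = ''.join(current).strip()
    if tail:
        statements.append(tail)
    return statements

sql = "CREATE INDEX i ON posts(id); INSERT INTO posts VALUES ('a;b')"
print(split_statements(sql))
```

Note how the semicolon inside the string literal 'a;b' does not split the script, which is why a simple str.split(';') is not enough.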

Running the Migrations

Running your migrations is easy:

$ ./manage.py migrate
Operations to perform:
  Apply all migrations: blog, sessions, auth, contenttypes, admin
Running migrations:
  Rendering model states... DONE
  Applying blog.0002_create_custom_index... OK

Unapplying migrations is also simple. Just provide the name of the app to migrate and the id of the migration you want to go to, or “zero” to reverse all migrations on that app:

$ ./manage.py migrate blog 0001
Operations to perform:
  Target specific migration: 0001_initial, from blog
Running migrations:
  Rendering model states... DONE
  Unapplying blog.0002_create_custom_index... OK

Hand-written migrations can be used for many other operations, including data migrations. Full documentation for migrations can be found in the Django documentation.

(This post originally covered South migrations and was updated by Phin Jensen to illustrate the now-native Django migrations.)

Shaun M. Thomas: PG Phriday: Working Together

From Planet PostgreSQL. Published on Sep 16, 2016.

There seem to be quite a few popular Postgres conferences peppering the globe these days. This year, Simon Riggs of 2ndQuadrant gave the sponsored keynote at Postgres Open. I’m not entirely sure it was intentional since it wasn’t the title of his presentation, but he uttered the words “working together to make Postgres better for everyone” at one point. The phrase “Working Together” really stood out, because that’s a significant part of what makes Postgres so great. It resonated acutely with the impetus behind the Unconference track that remains a regular fixture at PGCon.

Then Simon dropped another bomb that shocked everyone in attendance. Everywhere I looked were visions of absolute disbelief and awe. He suggested that somewhere in the twisting catacombs of 2ndQuadrant was a beta version of WordPress running on Postgres instead of MySQL. Considering its roots in PHP before there were readily available database abstraction layers, and the fact many extensions call the MySQL functions outright, this is clearly an impossibility. This is clearly a cruel hoax, and I demand proof that this particular unicorn exists for reasons completely unrelated to exorcising MySQL from my own website.

Perhaps primarily accidental, but I seem to be a regular fixture at Postgres Open since it began in 2011. Most of this is because it started in Chicago, which made it extremely convenient to attend for the first four years of its existence. Now I’m just operating on pure momentum. So what did I learn this time around? I’ve been doing this too long to learn much regarding Postgres, and short of diving into writing extensions in C, that isn’t going to change. But maybe I can offer a useful perspective on the conference itself and the talks I visited.

PGLogical – the logical replication for PostgreSQL

First on the agenda was a talk on PGLogical by Petr Jelinek, yet another 2ndQuadrant contribution. It works by interpreting the Postgres WAL stream and decoding activity into generic SQL statements that can be replayed on a remote database. Because of this decoupling from the binary contents of the transaction log, there are numerous advantages and newly available replication options:

  • Multiple data sources can transmit content to a single aggregate table. This is great for data warehouses and centralized reporting systems.
  • Stripped of version-specific binary details, replication can span Postgres versions, and thus be used to link disparate instances, or facilitate upgrades.
  • Replication is normally an all-or-nothing exercise. In parsed form, replication can be filtered by intent or target.
  • Interpreting WAL contents removes trigger overhead common to logical replication solutions.

Unfortunately, PGLogical does retain some of the limitations of being content-driven. DDL is not included, for example, so table modifications must either be explicitly duplicated on the targets or passed through a companion function that forwards the changes to replicas. This particular limitation is something I don't quite understand: Postgres is ACID compliant, so table modifications must be reflected in the transaction log somewhere. I wonder what makes them resistant to automatic decoding.

Either way, extensions like this help deprecate existing solutions like Slony, Londiste, or Bucardo. They had their time, but it’s time to move on.

A look at the Elephants trunk – PostgreSQL 9.6

In this talk, Magnus Hagander spent some time ruminating on the upcoming 9.6 features he thought were particularly noteworthy. And of course, I acted as a secondary filter. Many of these deserve much closer attention, but that’s for a future article. So what’s new in 9.6?

  • Tired of “idle in transaction” connections blocking things up? Well there’s finally a timeout to kill them. Huzzah!
  • VACUUM now skips already “frozen” pages. This makes it much faster on primarily read-based tables, because unchanged data isn’t checked again. Data warehouses will rejoice.
  • Database replicas will now actually show which primary they’re following. This used to only be available by manually checking recovery.conf.
  • Synchronous replication will now allow multiple synchronous standby servers. This means it’s possible to create a pool of five replicas, and allow any two of them to acknowledge writes, for example. This is not only safer in the long run, but much less likely to disrupt the primary unless several replicas become unavailable. I might actually start using the feature now!
  • There are a lot of lock-based, date/time, allocation, and memory management performance improvements. This includes tweaks to kernel page flushing that should dramatically reduce overwhelming storage with write spikes.
  • The Postgres foreign data wrapper can now push JOIN and ORDER BY to the remote database. Previously, data would be fetched locally and then joined and sorted. This should make an incredible difference to foreign query performance.
  • And of course, who could forget parallelism of various operations? It’s still pretty new and somewhat experimental, but now that 9.6 is a release candidate, it should see a lot more field testing.

The Elephant Meets the Whale – Bringing PostgreSQL to Production on Docker

Phil Vacca may not be a Docker expert, but he was certainly thorough in examining the “can we” versus “should we” debate regarding deploying Postgres on Docker containers. Docker is one of those things I didn’t know much about aside from its existence and general usage patterns. But once Phil related a Docker container to a glorified chroot environment, everything clicked.

It turns out that Postgres being a data-driven service makes it much harder to run in a container. There’s another session that goes into more depth in this regard, but short of overloading the Postgres container itself, there must be an entire constellation of related containers churning along with it. Each one of these will need a slew of configuration file overlays, environment variables, extension layers, permissions, and volume mappings.

I get really uncomfortable around so many moving parts, or otherwise fragile implementations. It would seem that, short of including a heavy stack management infrastructure, Postgres and Docker are not a good production environment match. This is a quickly evolving area though, so that statement is likely to expire soon.

Perhaps unrelated, but can anyone tell me why everyone is chasing Docker instead of LXC? A versioned, privately directed, arbitrarily controlled, quickly deprecated moving target seems like a bad thing to depend on. Well, unless you’re the developers of Docker. I must be missing something, here.

All the Big Data Sciency stuff you never knew Postgres could do

If there's ever a notorious speaker, it's Jim Nasby. This talk actually seemed tame compared to his previous forays. Though he did write a data type designed to handle Python NumPy ndarray data structures, so it's definitely topical! In other news, Postgres now supports a Python NumPy data type.

This talk actually featured a pretty good summary of all the different ways to use Postgres to manage scientific or complex analysis:

  • JSON & associated functionality
  • FDWs
  • Extensible types (such as numpy ndarray)
  • Various replication features, including logical replication and BDR
  • Parallel queries
  • Distributed approaches like citusdata and Postgres-XL
  • Column store approach (cstore_fdw)
  • GPU-driven support (PGStrom)
  • A REST API in PostgREST
  • ToroDB, a MongoDB API on top of Postgres. It basically transforms JSON into multiple Postgres tables. Theoretically this means access is available via JSON or SQL. Though MongoDB sharding isn’t supported yet, so scalability is limited.
  • MADlib, a machine learning algo system via SQL. This one is especially intriguing and I need to run it past our business intelligence team. If our data can support this kind of analysis, we have several TB of data to dump on it.

Just more evidence that Postgres is becoming more of a data middleware than simply a database engine.

Locked Up: Advances in Postgres Data Encryption

Vibhor Kumar brings us pgcrypto, the presentation. Beyond just covering the basics of using the encryption extension, he provided some encouraging performance results of various approaches such as:

  • Encrypting the data itself with pgcrypto functions.
  • Using disk-based encryption.
  • Both of these with and without SSL.

What’s particularly interesting here, and something I never considered, is that column-based encryption completely breaks indexes. In retrospect, this should have been fairly obvious. Once data is transformed, indexes only operate on the resulting encoded information, making them effectively meaningless. Short of intricate and magical mathematical analysis of data leakage through the encryption model, this is unfixable. Which suggests that the real conclusion of this talk is: use disk-based encryption and force SSL when data must be protected. It’s hard to argue with that.

Bitemporal Data Model: making it happen in Postgres

This is one of the only sessions with two presenters. Chad Slaughter and Henrietta Dombrovskaya's use of the Postgres interval type to track historical events is compelling and insightful. It turns out that simply keeping an event log for bookkeeping is problematic, because it makes obtaining the current summary convoluted. So they combined an event log with intervals to track modification history for a rolling summary. It basically boils down to this:

  • Start with an effective interval to reflect when data should be included in reports.
  • Add an assertive interval to track when the modifications were asserted, or applied.

This effectively decouples modifications from presentation, yet retains both. Through the use of intervals, there are definite boundaries reports can target, and through the assertive interval, we can immediately tell how long the modification was valid through the single data point.

Maintaining this does require a lot of custom enforcement mechanics, though. Postgres constraints can’t incorporate that kind of logic automatically, so it falls to triggers and helper functions to step in. Still, the concept of a composite current and historical relation is an interesting theory, and definitely worthy of further exploration.

Administering Postgres using Docker and Openshift

Jeff McCormick over at Crunchy Data gave the second Docker talk I alluded to earlier. A pretty good summary of the presentation boils down to their Docker stack. While Phil presented the why, here we got to see how. Given the rather unwieldy nature of the suite itself, there really needs to be a management element. Apparently either OpenShift or another Kubernetes-based system can fill that role, but they're hardly free in themselves.

That’s a ton of overhead to “just” run Postgres. Of course, that kind of setup is more of a dynamic allocation model. Given a reliable backing storage source, Postgres could reliably churn along in such an abstraction layer. I like that it prevents relying on administration of the underlying OS. Platform as a Service is something I’d like to see more in practice. It’s just not something I’m going to touch with a ten foot pole; I’ll let our infrastructure department worry about coordinating such a monstrosity.

Amazon RDS for PostgreSQL: New Features and Lessons Learned

I hate to say Grant McAlister presented an hour-long commercial for Amazon RDS, but I’d be lying if I didn’t. Though as one of the primary architects, he’d probably be remiss if he didn’t.

What I came away with, is that RDS has come a long way recently. There are a lot more available metrics and tracking, performance levels, resource credit balancing, and Postgres extensions. Filesystem and snapshot encryption are standard, and encrypted snapshots can even be shared, provided they have an associated custom key. Heck, they’ve even integrated logical replication already.

For organizations that don’t need custom everything and can budget potentially paying per-minute pricing, it seems like a pretty good approach.

Elephant Herd as a Service: Managing Hundreds of Postgres Instances

My own talk was on ElepHaaS, a Postgres instance wrangler if there ever was one. I designed it to manage over 100 instances spread across multiple data centers, environments, and servers. At one point, it served a crucial role in testing our disaster recovery readiness. All I had to do was click a checkbox, hit “go”, wait a couple minutes, and about a dozen herds had failed over to the most up-to-date replica. The tool allowed me to repurpose the old primary instances as new replicas, and repeat the process in reverse. All without SSHing to each individual server and carefully invoking the process.

After the presentation, Sean Chittenden (of Groupon fame) brought up Consul as a way to make the ele_tools service companion better at Postgres instance auto-discovery. Or maybe replace it entirely. I won’t know until I spend some time looking at what it does and how it works. Either way, it seems to be a nice way of making ElepHaaS better.

ElepHaaS could scale to thousands of instances, but we don’t have that many. Someone does, though. Pull requests welcome!

Big Data with PostgreSQL

Again with Simon Riggs, eh? Sure! This time around, the focus was on handling very large data sources. During this, he brought up a major shortcoming of sharding data on a primary key: secondary keys. In essence, related data in separate tables is likely to be sharded on different columns, forcing non-local data federation that adversely affects performance due to cross-communication requirements. This is why so many existing solutions require N-N node communication topographies.

There are basically two ways to circumvent this:

  1. Find a way to keep associated data together on the secondary key by following the primary as a principal hash. This doesn’t always work however, as the principal shard element may require further subdivision for larger secondary distributions.
  2. Implement a feedback system that pre-fetches applicable data into a front-end layer for local aggregation. There were a few slides showing pgpredict and 2UDA partially filling these roles.

And then of course, there was the requisite hawking of Postgres-XL. We’re still trying to get this one to work, but just recently ran into some kind of bug that corrupted a couple of system catalog tables in our development environment. That’s going to be fun to fix! But I’m still pushing it, because this approach really is the way to go in the long run for extremely large warehouses. Simon emphasized that they want to move as much of the Postgres-XL patches to Postgres core as possible, and then make the remaining functionality available as an extension. Maintaining such a divergence long-term isn’t really viable.

Oh, and he also mentioned that Postgres has a 32TB per table size limit. I immediately ran to our 50TB system and frantically checked the biggest tables before remembering we implemented partitioning specifically to avoid maintenance issues caused by vast monolithic tables. That of course, solved this problem as well. Otherwise, our largest table would have been 8TB. Not bad! These limitations are obviously documented, but it’s easy to forget.

Non-Relational Postgres

As the last talk for the conference, Bruce Momjian spent some time covering non-relational data types supported by Postgres. Starting with arrays, he continued through all of the greats like ranges, geometric types, XML, JSON and JSONB, composite row types, and even characters, since they can be interpreted any number of ways depending on tokenization contexts.

One particularly interesting point he raised was that range types index better for data that often follows a start/end dating model. This is because Postgres only really uses one index at a time per sub-clause. In addition, independent columns also have separately tabulated statistics. When these statistics are combined, they can lead to underestimations of row counts, which can lead to poor performance due to inefficient query plans.

That’s an important point that applies to many of the composite non-relational types. Keeping related data together in many cases is an optimization, even when a single column contains multiple data points. So it’s not just to reduce query complexity!

Bruce always keeps his presentations (including this one) on his website. I highly recommend perusing any that look interesting.

Closing Notes

As always, most of the presenters eventually link their slide decks on the Postgres Wiki. Though I personally wish the Postgres Open site itself eventually published them on the abstract pages for each talk. In either case, videos of all of these will eventually be on YouTube, so for any I didn’t cover, or sounded good enough to see first-hand, keep an eye out for them.

Of the big names that normally attend, Robert Haas and Josh Berkus were conspicuously absent. For now, I’ll just blame the newly formed Postgres Vision for scheduling too closely to Postgres Open. It’s not the first conference to cannibalize another for speakers, and it won’t be the last.

In the end, I think this was a pretty good summary of Postgres past, present, and future. As always, I lament the general lack of end-to-end solution demonstrations like some of those we’ve seen in the past. That’s the risk of being a Postgres advocate: everyone wants to talk about some cool new method they’ve developed, but there isn’t a lot of real-world application.

Either way, there was definitely enough material to inspire a bit of intellectual exploration of database techniques. If you couldn’t attend, there are a couple more Postgres conferences this year, and a slew for 2017. Don’t be a stranger!

Julien Rouhaud: Minimizing tuple overhead

From Planet PostgreSQL. Published on Sep 16, 2016.

I hear quite often people being disappointed on how much space PostgreSQL is wasting for each row it stores. I’ll try to show here some tricks to minimize this effect, to allow more efficient storage.

What overhead?

If you don't have tables with more than a few hundred million rows, it's likely you've never had an issue with this.

For each row stored, Postgres stores additional data for its own needs. This is documented here. The documentation says:

Field Type Length Description
t_xmin TransactionId 4 bytes insert XID stamp
t_xmax TransactionId 4 bytes delete XID stamp
t_cid CommandId 4 bytes insert and/or delete CID stamp (overlays with t_xvac)
t_xvac TransactionId 4 bytes XID for VACUUM operation moving a row version
t_ctid ItemPointerData 6 bytes current TID of this or newer row version
t_infomask2 uint16 2 bytes number of attributes, plus various flag bits
t_infomask uint16 2 bytes various flag bits
t_hoff uint8 1 byte offset to user data

Which is 23 bytes on most architectures (you have either t_cid or t_xvac).

You can see some of these fields as hidden columns present on any table by adding them to the SELECT list of a query, or by looking for negative attribute numbers in the pg_attribute catalog:

# \d test
     Table "public.test"
 Column |  Type   | Modifiers
 id     | integer |

# SELECT xmin, xmax, id FROM test LIMIT 1;
 xmin | xmax | id
 1361 |    0 |  1

# SELECT attname, attnum, atttypid::regtype, attlen
FROM pg_class c
JOIN pg_attribute a ON a.attrelid = c.oid
WHERE relname = 'test'
ORDER BY attnum;
 attname  | attnum | atttypid | attlen
 tableoid |     -7 | oid      |      4
 cmax     |     -6 | cid      |      4
 xmax     |     -5 | xid      |      4
 cmin     |     -4 | cid      |      4
 xmin     |     -3 | xid      |      4
 ctid     |     -1 | tid      |      6
 id       |      1 | integer  |      4

If you compare this to the previous table, you can see that not all of these columns are stored on disk. Obviously PostgreSQL doesn't store the table's oid in each row; it's added afterwards, while constructing the tuple.

If you want more technical details, take a look at htup_details.h, starting with the HeapTupleHeaderData struct.

How costly is it?

As the overhead is fixed, it becomes more and more negligible as the row size grows. If you only store a single int column (4 bytes), each row will need:

23B + 4B = 27B

So, it’s 85% overhead, pretty horrible.

On the other hand, if you store 5 integer, 3 bigint and 2 text columns (say ~80B each on average), you'll have:

23B + 5*4B + 3*8B + 2*80B = 227B

That’s “only” 10% overhead.

So, how to minimize this overhead

The idea is to store the same data in fewer records. How? By aggregating the data into arrays. The more values you pack into a single array, the less overhead you pay. And if you aggregate enough data, you can benefit from transparent compression thanks to the TOAST mechanism.
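To see why this works, compare the per-element cost as the array grows. The sketch below assumes a 23-byte tuple header plus roughly 24 bytes of array overhead per row (varlena header plus 1-D array metadata for an int4[] with no NULLs — an approximation, not an exact figure):

```python
TUPLE_HEADER = 23    # per-row tuple header
ARRAY_OVERHEAD = 24  # assumed: varlena header + 1-D array metadata
INT_SIZE = 4

def bytes_per_element(elements_per_row: int) -> float:
    """Approximate storage cost per integer when packed into arrays."""
    row = TUPLE_HEADER + ARRAY_OVERHEAD + INT_SIZE * elements_per_row
    return row / elements_per_row

for n in (1, 5, 20, 100, 1000):
    print(n, round(bytes_per_element(n), 2))
```

With 5 elements per row the cost is already down to ~13 bytes per integer, and with 1000 elements it approaches the raw 4 bytes of the data itself.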

Let's try with a table containing a single integer column and 10M rows:

# CREATE TABLE raw_1 (id integer);

# INSERT INTO raw_1 SELECT generate_series(1,10000000);

# CREATE INDEX ON raw_1 (id);

The user data should need 10M * 4B, i.e. around 38MB, while this table consumes 348MB. Inserting the data takes around 23 seconds.

NOTE: If you do the maths, you'll find that the overhead is slightly more than 32B, not 23B. This is because each block also has some overhead, plus NULL handling and alignment issues. If you want more information on this, I recommend this presentation.
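The NOTE above can be made concrete with a simplified page-level model (ignoring fill factor and TOAST): each 8KB page has a 24-byte header, and every tuple costs a 4-byte line pointer plus its MAXALIGN-padded header and data:

```python
import math

PAGE_SIZE, PAGE_HEADER = 8192, 24
LINE_POINTER, TUPLE_HEADER, MAXALIGN = 4, 23, 8

def per_row_bytes(data_bytes: int) -> float:
    # Each tuple: line pointer + (header + data) padded to 8-byte alignment
    tuple_size = LINE_POINTER + math.ceil((TUPLE_HEADER + data_bytes) / MAXALIGN) * MAXALIGN
    rows_per_page = (PAGE_SIZE - PAGE_HEADER) // tuple_size
    return PAGE_SIZE / rows_per_page

# A single int4 column: ~36 bytes per row, i.e. ~32 bytes of overhead
print(round(per_row_bytes(4), 2))
```

This predicts 226 rows per page, so 10M rows need about 44,250 pages, i.e. roughly 346MB — close to the 348MB observed above, and it reproduces the "slightly more than 32B" overhead figure.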

Let’s compare with aggregated versions of the same data:

# CREATE TABLE agg_1 (id integer[]);

# INSERT INTO agg_1 SELECT array_agg(i)
FROM generate_series(1,10000000) i
GROUP BY i % 2000000;

# CREATE INDEX ON agg_1 (id);

This will insert 5 elements per row. I’ve done the same test with 20, 100, 200 and 1000 elements per row. Results are below:

NOTE: The size for 1000 elements per row is a little higher than for the lower values. This is because it's the only one big enough to be TOAST-ed, but not big enough to be compressed, so we see a little TOAST overhead here.

So far so good; we can see quite good improvements, both in size and INSERT time, even for very small arrays. Let’s see the impact on retrieving rows. I’ll try to retrieve all the rows, then only one row with an index scan (for these tests I’ve used EXPLAIN ANALYZE to minimize the time spent representing the data in psql):

# SELECT id FROM raw_1;

# SELECT * FROM raw_1 WHERE id = 500;

To properly index this array, we need a GIN index. To get the values from the aggregated data, we need to unnest() the arrays, and to be a little more creative to get a single record:

# SELECT unnest(id) AS id FROM agg_1;

# CREATE INDEX ON agg_1 USING gin (id);

# WITH s(id) AS (
    SELECT unnest(id)
    FROM agg_1
    WHERE id && array[500]
)
SELECT id FROM s WHERE id = 500;

Here’s the chart comparing index creation time and index size:

The GIN index is a little more than twice the size of the btree index; if I add the table size, the total size is almost the same as without aggregation. That’s not a big issue since this example is naive; we’ll see later how to avoid using a GIN index to keep the total size low. The index is also way slower to build, meaning that INSERTs will also be slower.

Here’s the chart comparing the time to retrieve all rows and a single row:

Getting all the rows is probably not an interesting example, but as soon as the array contains enough elements, it starts to be faster than the original table. We see that getting only one element is much faster than with the btree index, thanks to GIN efficiency. It’s not tested here, but since only btree indexes are sorted, if you need to get a lot of data sorted, using a GIN index will require an extra sort, which will be way slower than a btree index scan.

A more realistic example

Now that we’ve seen the basics, let’s see how to go further: aggregating more than one column, and avoiding too much disk space usage from a GIN index. For this, I’ll present how PoWA stores its data.

For each datasource collected, two tables are used: the historic and aggregated one, and the current one. These tables store data in a custom type instead of plain columns. Let’s see the tables related to pg_stat_statements:

The custom type contains basically all the counters present in pg_stat_statements, plus the timestamp associated with this record:

powa=# \d powa_statements_history_record
   Composite type "public.powa_statements_history_record"
       Column        |           Type           | Modifiers
 ts                  | timestamp with time zone |
 calls               | bigint                   |
 total_time          | double precision         |
 rows                | bigint                   |
 shared_blks_hit     | bigint                   |
 shared_blks_read    | bigint                   |
 shared_blks_dirtied | bigint                   |
 shared_blks_written | bigint                   |
 local_blks_hit      | bigint                   |
 local_blks_read     | bigint                   |
 local_blks_dirtied  | bigint                   |
 local_blks_written  | bigint                   |
 temp_blks_read      | bigint                   |
 temp_blks_written   | bigint                   |
 blk_read_time       | double precision         |
 blk_write_time      | double precision         |

The current table stores the pg_stat_statements unique identifier (queryid, dbid, userid), and a single record of counters:

powa=# \d powa_statements_history_current
    Table "public.powa_statements_history_current"
 Column  |              Type              | Modifiers
 queryid | bigint                         | not null
 dbid    | oid                            | not null
 userid  | oid                            | not null
 record  | powa_statements_history_record | not null

The aggregated table contains the same unique identifier, an array of records and some special fields:

powa=# \d powa_statements_history
            Table "public.powa_statements_history"
     Column     |               Type               | Modifiers
 queryid        | bigint                           | not null
 dbid           | oid                              | not null
 userid         | oid                              | not null
 coalesce_range | tstzrange                        | not null
 records        | powa_statements_history_record[] | not null
 mins_in_range  | powa_statements_history_record   | not null
 maxs_in_range  | powa_statements_history_record   | not null
    "powa_statements_history_query_ts" gist (queryid, coalesce_range)

We also store the timestamp range (coalesce_range) for all aggregated counters in a row, and the minimum and maximum values of each counter in two dedicated records. These extra fields don’t consume too much space, and allow very efficient indexing and computation, based on the access pattern of the related application.

This table is used to know how many resources a query consumed on a given time range. The GiST index won’t be too big, since it only indexes one row per X aggregated counters, and will efficiently find the rows matching a given queryid and time range.

Then, computing the resources consumed can be done efficiently, since the pg_stat_statements counters are strictly monotonic. The algorithm would be:

  • if the row’s time range is entirely contained in the asked time range, we only need to compute the delta of the summary records: maxs_in_range.counter - mins_in_range.counter
  • if not (which concerns at most two rows per queryid), we unnest the array, filter out the records that aren’t in the asked time range, keep the first and last values, and compute for each counter the maximum minus the minimum. The unnest is therefore only needed for these few boundary rows.
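As a hedged sketch of the two cases above (plain Python dicts stand in for the record type and its array; all field names here are illustrative, not PoWA’s actual schema):

```python
def delta(row, asked_lo, asked_hi):
    """Resource consumed by one aggregated row over the asked time range,
    assuming a strictly monotonic counter."""
    if asked_lo <= row["range_lo"] and row["range_hi"] <= asked_hi:
        # Row range entirely contained: use the pre-computed summaries.
        return row["maxs_in_range"]["counter"] - row["mins_in_range"]["counter"]
    # Boundary row: unnest, filter on ts, keep first and last values.
    inside = [r for r in row["records"] if asked_lo <= r["ts"] <= asked_hi]
    return inside[-1]["counter"] - inside[0]["counter"]

row = {
    "range_lo": 10, "range_hi": 20,
    "mins_in_range": {"counter": 100}, "maxs_in_range": {"counter": 400},
    "records": [{"ts": t, "counter": 100 + 30 * (t - 10)} for t in range(10, 21)],
}
print(delta(row, 0, 30))   # 300 -- fully contained: 400 - 100
print(delta(row, 12, 18))  # 180 -- boundary case: counter at ts 18 minus ts 12
```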

NOTE: Actually, the PoWA interface always unnests all the records overlapping the asked time interval, since the interface is designed to show the evolution of these counters on a relatively small time range, but with great precision. Fortunately, unnesting the records is not that expensive, especially compared to the disk space saved.

And here’s the size needed for the aggregated and non-aggregated values. For this I let PoWA generate 12,331,366 records (configuring a snapshot every 5 seconds for some hours, with the default aggregation of 100 records per row), and used a btree index on (queryid, ((record).ts)) to simulate the index present on the aggregated table:

Pretty efficient, right?


There are some limitations with aggregating records. If you do this, you can’t enforce constraints such as foreign keys or unique constraints. This approach is therefore suited to non-relational data, such as counters or metadata.


Using custom types also allows some nice things, like defining custom operators. For instance, release 3.1.0 of PoWA will provide two operators for each custom type defined:

  • the - operator, to get the difference between two records
  • the / operator, to get the difference per second

You’ll therefore be able to run this kind of query:

# SELECT (record - lag(record) over()).*
FROM powa_statements_history_current
WHERE queryid = 3589441560 AND dbid = 16384;
      intvl      | calls  |    total_time    |  rows  | ...
-----------------+--------+------------------+--------+ ...
 <NULL>          | <NULL> |           <NULL> | <NULL> | ...
 00:00:05.004611 |   5753 | 20.5570000000005 |   5753 | ...
 00:00:05.004569 |   1879 | 6.40500000000047 |   1879 | ...
 00:00:05.00477  |  14369 | 48.9060000000006 |  14369 | ...
 00:00:05.00418  |      0 |                0 |      0 | ...

# SELECT (record / lag(record) over()).*
FROM powa_statements_history_current
WHERE queryid = 3589441560 AND dbid = 16384;

  sec   | calls_per_sec | runtime_per_sec  | rows_per_sec | ...
--------+---------------+------------------+--------------+ ...
 <NULL> |        <NULL> |           <NULL> |       <NULL> | ...
      5 |        1150.6 |  4.1114000000001 |       1150.6 | ...
      5 |         375.8 | 1.28100000000009 |        375.8 | ...
      5 |        2873.8 | 9.78120000000011 |       2873.8 | ...

If you’re interested on how to implement such operators, you can look at PoWA implementation.


You now know the basics of working around the per-tuple overhead. Depending on your needs and your data specificities, you should find a way to aggregate your data and add some extra columns to keep nice performance.

Minimizing tuple overhead was originally published by Julien Rouhaud at rjuju's home on September 16, 2016.

Ernst-Georg Schmid: Blending gene sequence variation data into PostgreSQL

From Planet PostgreSQL. Published on Sep 15, 2016.

VCF, the Variant Call Format, is a clever idea. Instead of storing all genetic information of a sampled genome, it only stores the delta against some reference genome. This squeezes a lot of redundancy out of the data and thus occupies a lot less storage space.

Unfortunately, VCF is also an unwieldy format. Only a part of it is fixed, with metadata in the header describing fields in the body, which in turn describe the actual data format of the samples.

This makes VCF especially hard to grasp for systems like Hadoop that work on chunked files spread over many compute nodes. With VCF, every chunk has to carry a copy of the header to make sense of the data it carries.

Formats like ADAM are under development to tackle these (and many more) problems, but when I was given the task of making VCF files accessible to a database last year, I took the direct route:

Since there have been already quite powerful tools and libraries to work with VCF files around for some time, I used Multicorn and PyVCF and wrote a foreign data wrapper for PostgreSQL that understands the VCF format.

After it was finished, I realized that it had been done before...

However, my implementation is different in some parts. Most notably it abstracts more from the actual storage of the VCF files and it works with vanilla PyVCF and does not need any special modifications.

A few days ago, I was granted permission to release the code into the wild by the people who paid for it in the first place, so now you have the freedom of choice. ;-)

Ladies and Gentlemen, please take a look at just another multicorn based foreign data wrapper for VCF files for PostgreSQL, the one and only truly integrarelational DBMS.

Denish Patel: Advanced Postgres Monitoring!

From Planet PostgreSQL. Published on Sep 15, 2016.

Today, I presented at PgOpen 2016 (Postgres conference) in Dallas, TX.

Here is the slide deck:

Caktus Group @ PyData Carolinas 2016

By Caktus Consulting Group from Django community aggregator: Community blog posts. Published on Sep 14, 2016.

Tomorrow marks the official beginning of PyData Carolinas 2016 (though technically, the tutorials started today). This is the first time PyData has hosted a conference in our area. We’re especially proud of the way local leaders and members of meetups like TriPython, TechGirlz, Girl Develop It RDU, and PyLadies have worked in tandem to put this event together for the Python community.

Caktus will be at PyData tomorrow and Friday as a Silver sponsor. We’re glad to be in the company of esteemed sponsoring organizations like IBM, RENCI, Continuum Analytics, and the Python Software Foundation.

Come see us at the following PyData events and talks!

Wednesday, September 14th

7:00PM - 8:00PM
Evening Social at the 21c Museum Hotel
Join us after the tutorials with a social hosted by the Durham Convention Center. More details here.

Thursday, September 15th

8:30AM - 5:30PM
Caktus Booth
We’ll have a booth with giveaways for everyone plus a raffle. We’ll also have a display of OpenDataPolicingNC, a project Caktus CTO Colin Copeland helped lead; it received a White House nod for Code for Durham.

11:30AM - 12:10PM
Reach More People: SMS Data Collection with RapidPro (Room 1)
Erin Mullaney (Caktus) and Rebecca Muraya (TransLoc) will share how to use RapidPro, an open source SMS survey data collection app developed by UNICEF, to collect data. They’ll also show you how to use RapidPro’s API to create your own data visualizations.

11:30AM - 12:10PM
Python, C, C++, and Fortran Relationship Status: It’s Not That Complicated (Room 2)
Philip Semanchuk, a Caktus contractor, gives an overview of your many options for getting Python to call and exchange data with code written in a compiled language. The goal is to make attendees aware of choices they may not know they have, and when to prefer one over another.

6:30PM - 8:30PM
Drinks & Data (The Rickhouse, Durham)
We're looking forward to this event, hosted by MaxPoint. It overlooks the park where the Durham Bulls play.

Friday, September 16th

8:30AM - 5:30PM
Caktus Booth
Do stop on by to say hello! We’d love to learn more about the different projects you’re working on.

10:40AM - 11:20AM
Identifying Racial Bias in Policing Practices: Open Data Policing (Room 2)
Colin Copeland, Caktus Co-founder and CTO, Co-chief of Code for Durham, and a 2015 Triangle Business Journal 40 Under 40 awardee, will give a talk on Open Data Policing. North Carolina developers and civil rights advocates used demographic data from nearly 20,000,000 unique NC traffic stops to create a digital tool for identifying race-based policing practices. His efforts were recognized via an invitation to the White House during Obama’s Police Data Initiative celebration.

11:30AM - 12:10 PM
You Belong with Me: Scraping Taylor Swift Lyrics with Python and Celery (Room 1)
Mark Lavin, Caktus Technical Director, and Rebecca Conley, Caktus developer, will demonstrate the use of Celery in an application to extract all of the lyrics of Taylor Swift from the internet. Expect laughter and fun gifs.

Raffle drawing for a copy of Lightweight Django (O’Reilly)
We’ll contact the winner just in time for the book signing. Lightweight Django is co-authored by Caktus Technical Director Mark Lavin.

12:45 - 1:10PM
Book signing of Lightweight Django (O’Reilly) with Mark Lavin
Line up early! We only have limited copies to give away. Each time we’ve done a book signing, the line has been far longer than copies available. For those who aren’t able to get a copy of the book, we’ll have coupon cards for a discount from O’Reilly.

Can’t join us?

If you can’t join us at PyData Carolinas and there’s a talk of ours you want to see, we’ll have the slides available after the conference. You can also follow us on Twitter during PyData itself: @caktusgroup.

US PostgreSQL Association: PgConf US 2017 CFP now open!

From Planet PostgreSQL. Published on Sep 14, 2016.

The CFP for PGConf US 2017 is now open. You can read all about it and submit here. This coming conference is set to be the largest we have had. As we continue to grow we continue to add options for our attendees including a Job Fair and an additional day.

Affan Salman: Integrating Cassandra, Spark, PostgreSQL and Hadoop as a Hybrid Data Platform

From Planet PostgreSQL. Published on Sep 14, 2016.

Today many organizations struggle to keep up with their database
requirements, for example, to:

  • store and analyze high-velocity and rapidly-growing data such as logs,
    package tracking events, sensor readings and machine-generated data;
  • ensure 24/7 availability of customer-facing websites, services and
    apps even when a subset of their data centers, servers or data are
    unavailable;
  • support fast-growing internet-scale businesses by adding relatively
    inexpensive data servers rather than requiring million-dollar
    investments in high-end servers and storage.

Our industry is increasingly producing and exploring various Open Source
systems to provide solutions for requirements like these. However, many
such systems intending to offer degrees of Scalability and
Availability choose architectures that impose inherent limitations.

Many of these architectures have a node or a collection of nodes that
are treated as special. Think Master-Slave, NameNode-DataNode and so
forth. While each of these models serves a different set of use cases,
a common attribute across them is that they have a SPOF (Single Point
of Failure). Even when they offer some level of multiplicity to deal
with the SPOF issue, the problems continue: these special nodes can
become bottlenecks for the operations that only they are allowed to
carry out. Capacity Planning, Backup and Recovery, Fault
, Disaster Recovery and similar areas of operation all
become more complex. Moreover, the non-special nodes are typically
underutilized or entirely passive. Many of these architectures make it
virtually impossible to achieve peta-scale, multi-thousand-node clusters
with linear growth and failure tolerance atop today’s
dynamically-orchestrated infrastructure.

Enter Cassandra – A peer-to-peer, multi-datacenter active-active,
peta-scale, fault-tolerant distributed database system. Nowadays, it is
hard not to have heard of this excellent system as its user-base
continues to grow. The key to realize is that its peer-to-peer
architecture is the basis for its SPOF-free operation with the
understanding that failures are the norm in clustered environments.
Cassandra is also well known for lowering the latency relative to many
other big data systems. It is in use by over 1500 organizations
including Netflix, eBay, Instagram and CERN. To get an idea of the
scale, Apple’s production deployment has been well known in the
Cassandra community to comprise 75,000 nodes storing over 10 PB but in
September last year at the Cassandra Summit, their deployment was
reported to have exceeded 100,000 nodes.

We are great believers in Cassandra and Spark and are building a hybrid
data platform bringing the benefits of these systems to PostgreSQL. We
also hope that the benefits of the PostgreSQL platform will have a wider
reach through this. Our distribution, Postgres by BigSQL, provides easy
access to these two systems through our FDW extensions CassandraFDW and
HadoopFDW. The HadoopFDW extension provides not just access to Hadoop
but also to Spark which uses the same underlying network protocol and
SQL parser.

The combined array of advanced features that these two FDWs support is
impressive: write support (INSERT/UPDATE/DELETE), predicate pushdown,
IMPORT FOREIGN SCHEMA, and JOIN pushdown. We believe that of all the
externally-maintained FDWs, these two FDW extensions represent the
cutting-edge in terms of the PostgreSQL FDW technology as an
implementation of SQL/MED for big data systems.

With that context, we will focus on the CassandraFDW in the next blog
post in this series.

Michael Paquier: Postgres 10 highlight - recovery_target_lsn

From Planet PostgreSQL. Published on Sep 12, 2016.

When performing point-in-time recovery, Postgres offers a variety of ways to stop recovery, or WAL replay at a given point using different ways of estimating the stop point:

  • Timestamp, with recovery_target_time.
  • Name, with recovery_target_name, which is a recovery target defined by a user with pg_create_restore_point().
  • XID, with recovery_target_xid, a transaction ID that will make recovery go up to the point where the transaction where this ID was assigned has been committed.
  • ‘immediate’, which is a special case using recovery_target = ‘immediate’. Using that the recovery will stop when a consistent state has been reached by the server.

The replay position can as well be influenced by recovery_target_inclusive, which is true by default (list of recovery parameters is here).

Today’s post is about a new recovery target type, that has been added in Postgres 10 by this commit:

commit: 35250b6ad7a8ece5cfe54c0316c180df19f36c13
author: Simon Riggs <>
date: Sat, 3 Sep 2016 17:48:01 +0100
New recovery target recovery_target_lsn

Michael Paquier

An LSN (log sequence number) is a position in the WAL stream; in short, a pointer to the location where a record has been inserted, like ‘0/7000290’. So with this new parameter, one is able to set, at record-level precision, up to where recovery has to run. This is really helpful in many cases, but the most common one is when WAL has been corrupted up to a given record and a user would like to replay as much data as possible. With this parameter there is no need to do a deep analysis of the WAL segments to find out which transaction ID or time the target needs to be set to: just setting it to a record is fine. And one can even look at such an LSN position via the SQL interface, for example with pg_current_xlog_location(), which gives the current LSN position that a server is using.
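As a side illustration of how such positions compare, here is a minimal sketch assuming the “hi/lo” hexadecimal reading of an LSN (two 32-bit halves of a 64-bit WAL position):

```python
def lsn_to_int(lsn):
    """Convert an LSN like '0/152F080' into a comparable 64-bit integer."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

# Replay has passed '0/152F080' once the current position sorts after it.
print(lsn_to_int("0/3019838") > lsn_to_int("0/152F080"))  # True
```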

Let’s take a small example with this cluster from which a base backup has already been taken (important to be able to replay forward):

=# CREATE TABLE data_to_recover(id int);
CREATE TABLE
=# INSERT INTO data_to_recover VALUES (generate_series(1, 100));
INSERT 0 100
=# SELECT pg_current_xlog_location();
 pg_current_xlog_location
--------------------------
 0/152F080
(1 row)

In this case the data inserted into the cluster has used WAL up to the LSN position ‘0/152F080’. And now let’s insert a bit more data:

=# INSERT INTO data_to_recover VALUES (generate_series(101, 200));
INSERT 0 100
=# SELECT pg_current_xlog_location();
(1 row)

And this adds a bit more data, consuming a couple of extra records. Now let’s do recovery up to where the first 100 tuples have been inserted, with a recovery.conf containing the following (be sure that the last WAL segment has been archived):

recovery_target_lsn = '0/3019838'
restore_command = 'cp /path/to/archive/%f %p'

After PITR completes, the logs will then show something like the following entry (and then recovery pauses):

LOG:  recovery stopping after WAL position (LSN) "0/3019838"

And by logging into this node, there are indeed only 100 tuples:

=# SELECT count(*) FROM data_to_recover;
 count
-------
   100
(1 row)

Hopefully this will find its set of users; personally, I find it a powerful tool.

Django Shortcuts

By Simple is Better Than Complex from Django community aggregator: Community blog posts. Published on Sep 12, 2016.

This module is a collection of helper classes generally used in view functions/classes. All the shortcuts are available in the module django.shortcuts.



def render(request, template_name, context=None, content_type=None, status=None, using=None)

Shortcut for:

content = loader.render_to_string(template_name, context, request, using=using)
return HttpResponse(content, content_type, status)

There is also render_to_response; the only difference is that it does not pass the request to the context.



def redirect(to, *args, **kwargs):

Returns an HttpResponseRedirect (or HttpResponsePermanentRedirect) to the appropriate URL for the arguments passed.

The arguments could be:

  • A model: the model’s get_absolute_url() function will be called.
  • A view name, possibly with arguments: urls.reverse() will be used to reverse-resolve the name.
  • A URL, which will be used as-is for the redirect location.

Shortcut for:

def post_view(request, post_id):
    post = Post.objects.get(pk=post_id)
    return redirect(post)
    # equivalent to: return HttpResponseRedirect(post.get_absolute_url())

def post_view(request, post_id):
    return redirect('post_details', id=post_id)
    # equivalent to: return HttpResponseRedirect(reverse('post_details', args=(post_id, )))

def relative_url_view(request):
    return redirect('/posts/archive/')
    # equivalent to: return HttpResponseRedirect('/posts/archive/')

def absolute_url_view(request):
    return redirect('')
    # equivalent to: return HttpResponseRedirect('')

See more in this post about the redirect function.



def get_object_or_404(klass, *args, **kwargs):

Shortcut for:

try:
    return Model.objects.get(pk=1)
except Model.DoesNotExist:
    raise Http404()



def get_list_or_404(klass, *args, **kwargs):

Shortcut for:

obj_list = list(Model.objects.filter(title='test'))
if not obj_list:
    raise Http404()
return obj_list


This one is actually used by the redirect shortcut. It does basically the same thing, except it does not perform the actual redirect.


def resolve_url(to, *args, **kwargs):

The arguments could be:

  • A model: the model’s get_absolute_url() function will be called.
  • A view name, possibly with arguments: urls.reverse() will be used to reverse-resolve the name.
  • A URL, which will be returned as-is.

Giuseppe Broccolo: Back to the Future Pt. 1: Introduction to pg_rewind

From Planet PostgreSQL. Published on Sep 12, 2016.


Since PostgreSQL 9.5, pg_rewind has been able to make a former master follow up a promoted standby although, in the meantime, it proceeded with its own timeline. Consider, for instance, the case of a switchover that didn’t work properly.

Have you ever experienced a “split brain” during a switchover operation? You know, when the goal is to switch the roles of the master and the standby, but instead you end up with two independent masters, each one with its own timeline? For PostgreSQL DBAs in HA contexts, this is where pg_rewind comes in handy!

Until PostgreSQL 9.5, there was only one solution to this problem: re-synchronise the PGDATA of the downgraded master with a new base backup and add it to the HA cluster as a new standby node. Generally, this is not a problem, unless your database is relatively big. For instance, if there are hundreds of GBs, it is not easy to do this type of operation while trying to keep downtime as low as possible.

Restoring a database to a previous point in time can create some complexities that can be handled in different ways. For an in depth explanation of the evolution and components of fault tolerance in PostgreSQL I suggest you check out Gulcin’s series Evolution of Fault Tolerance in PostgreSQL, including a Time Travel feature that mentions the use of pg_rewind.

how pg_rewind works

pg_rewind scans the “old” master’s PGDATA folder, identifies the data blocks changed during the switch to the new timeline, and then copies only those blocks from the promoted standby, replacing the local changes. As a “collateral” effect, the configuration files are also copied from the promoted standby (so the DBA has to be careful to adapt them to the new role of the node in the HA cluster). However, this avoids having to re-sync PGDATA completely.

To do this, it is necessary to have all the WALs produced in the final moments before the switchover from the old master. Changes are identified by comparing the status of the data blocks present in the PGDATA with the changes logged in the WALs. Once the changed blocks are identified, the WALs are replayed, mimicking a sort of ‘rewind’ of the timelines.
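A toy illustration of the block-level idea (not pg_rewind’s actual implementation, just the principle of overwriting only the blocks that diverged, using dicts as stand-ins for data files):

```python
def rewind(target_blocks, source_blocks):
    """Overwrite only the changed blocks in target with the source's version."""
    changed = [bno for bno, data in source_blocks.items()
               if target_blocks.get(bno) != data]
    for bno in changed:
        target_blocks[bno] = source_blocks[bno]
    return changed

old_master = {0: b"heap-0", 1: b"heap-1-diverged", 2: b"heap-2"}
promoted   = {0: b"heap-0", 1: b"heap-1", 2: b"heap-2"}
print(rewind(old_master, promoted))  # [1] -- only one block is copied
```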


To use pg_rewind, at least one of the following is required:

  • the instances have been initialised with the “-k” (or --data-checksums) parameter
  • the wal_log_hints parameter is enabled

In PostgreSQL 9.5, the necessary WALs are the ones starting from the last checkpoint, since pg_rewind cannot go back behind this point in the timeline.

To better understand how it works, consider this simple example with a master:

# Set PATH variable
export PATH=/usr/pgsql-9.5/bin:${PATH}

# This is the directory where we will be working on
# Feel free to change it and the rest of the script
# will adapt itself

# Environment variables for PGDATA and archive directories

# Initialise the cluster
initdb --data-checksums -D ${MASTER_PGDATA}

# Basic configuration of PostgreSQL
cat >> ${MASTER_PGDATA}/postgresql.conf <<EOF
archive_command = 'cp %p ${ARCHIVE_DIR}/%f'
archive_mode = on
wal_level = hot_standby
max_wal_senders = 10
min_wal_size = '32MB'
max_wal_size = '32MB'
hot_standby = on
wal_log_hints = on
EOF

cat >> ${MASTER_PGDATA}/pg_hba.conf <<EOF
# Trust local access for replication
local replication replication trust
EOF

# Create the archive directory
mkdir -p ${ARCHIVE_DIR}

# Start the master
pg_ctl -D ${MASTER_PGDATA} -l ${WORKDIR}/master.log start

# Create the replication user
psql -c "CREATE USER replication WITH replication"

(note the small amount of WAL kept in the master), and then a standby:

# Create the first standby
pg_basebackup -D ${STANDBY1_PGDATA} -R -c fast -U replication -x

cat >> ${STANDBY1_PGDATA}/postgresql.conf <<EOF
port = 5433
EOF

# Start the first standby
pg_ctl -D ${STANDBY1_PGDATA} -l ${WORKDIR}/standby1.log start

Let’s insert some data on the master. You will see it also from the (hot) standby.

Now promote the standby, leaving the master as it is:

pg_ctl -D ${STANDBY1_PGDATA} promote

Now if you update the master, no changes will be visible from the standby. Moreover, in the archive/ directory it is possible to see the file 00000002.history, that shows there has been a change in the timeline during the promotion.

Now let’s “rewind” the master, and make it to follow up the promoted standby:

~$ pg_ctl -D ${MASTER_PGDATA} stop
waiting for server to shut down.... done
server stopped
~$ pg_rewind --target-pgdata=${MASTER_PGDATA} \
    --source-server="port=5433 user=postgres dbname=postgres"

Note here that for the connection to the source server (the promoted standby) we used the postgres user, since a superuser is needed by pg_rewind to inspect the data blocks.

If the max_wal_size parameter is not large enough to keep the needed WALs in the pg_xlog/ directory of the standby, as I deliberately arranged before, an error similar to the following one can be obtained:

The servers diverged at WAL position 0/3015938 on timeline 1.
could not open file "/var/lib/pgsql/9.5/master/pg_xlog/000000010000000000000002": No such file or directory

could not find previous WAL record at 0/3015938
Failure, exiting

Now, there are two possible ways to solve this:

  • manually check the missing WALs in the archive, starting from the one listed in the error message, then copy them to the pg_xlog/ directory of the master
  • add a proper restore_command in the recovery.conf and place in the PGDATA of the master, so pg_rewind will automatically find the missing WALs

The second option is probably the most suitable method. Think, for instance, if you have the WAL archive managed by Barman: you could base the restore_command on the get-wal feature of Barman, as explained in this interesting article by Gabriele. In doing so, Barman will be used as a WAL hub, providing all the necessary WAL files to pg_rewind.

Once the missing WALs are retrieved, you can run the pg_rewind command again and the following message should ensure that everything worked properly:

~$ pg_rewind --target-pgdata=${MASTER_PGDATA} \
    --source-server="port=5433 user=postgres dbname=postgres"
servers diverged at WAL position 0/3015938 on timeline 1
rewinding from last common checkpoint at 0/3000140 on timeline 1

Keep in mind that just a few blocks were copied (those changed during the split-brain), even if your database size is hundreds of GBs! Remember that the configuration files are also copied, and the recovery.conf already present in the “downgraded” master may have been overwritten. So, remember to:

  • change the port where the instance listens in the postgresql.conf (set it as 5432 in our example)
  • change the primary_conninfo in the recovery.conf in order to make the downgraded master connect to the promoted master

Once this has been done, start the downgraded master: it will start to follow the promoted standby and will then be its standby in turn.

Do you have a more complex HA cluster? Don’t worry! Part 2 will explain this more in depth and talk about pg_rewind in PostgreSQL 9.6!

QoS, Cost & Quotas

By Will Larson from Django community aggregator: Community blog posts. Published on Sep 11, 2016.

One of the most exciting inversions that comes with scaling a company is the day when someone repeats the beloved refrain that engineers are more expensive than servers, followed by a pregnant pause where you collectively realize that the lines have finally crossed: servers have become more expensive than engineers.

But what do you do with this newfound knowledge? Why, you kick off a Cost Accounting or Efficiency project to start optimizing your spend! These projects tend to take a two-pronged approach:

  1. identify some low-hanging fruit to show immediate wins, and
  2. begin to instrument and tag your costs to create transparency into how your spends actually work.

The latter gets interesting quickly.

First, let’s start with the fundamental unit of infrastructure cost: a server. For each server you probably have a role (your mysql01-west is probably in a mysql role), and for each role you can probably assign an owner to it (perhaps your database team in this case). Now you write a quick script which queries your server metadata store every hour and emits the current servers and owners into Graphite or some such.
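The hourly emitter described above can be sketched in a few lines. This is a hypothetical sketch, not the script from the post: the metric naming scheme, the `(host, role, owner)` tuple shape, and the function name are all assumptions; only Graphite's plaintext line format (`path value timestamp`) is taken from Graphite itself.

```python
import time

def graphite_lines(servers, now=None):
    """Format per-owner server counts as Graphite plaintext lines.

    `servers` is a list of (hostname, role, owner) tuples -- the shape
    we assume the server metadata store returns."""
    now = int(now if now is not None else time.time())
    counts = {}
    for _host, _role, owner in servers:
        counts[owner] = counts.get(owner, 0) + 1
    # Graphite's plaintext protocol is "<path> <value> <timestamp>".
    return [
        "infra.servers.by_owner.{} {} {}".format(owner, n, now)
        for owner, n in sorted(counts.items())
    ]
```

In a real deployment you would push these lines to the Graphite port over a socket from a cron job or scheduler; the formatting step above is the whole trick.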

A job well done! Your work here is over… until later that week when you chat with the team who just finished migrating all your processes from dedicated hosts to Mesos or Kubernetes, who present you with an interesting question: “Sure, we glorious engineers on the Scheduling Team run the Kubernetes cluster, but we don’t write any of the apps. Why are we responsible for their costs?”

So you head back to your desk and write down the resources provided by each machine:

  1. cpu,
  2. memory,
  3. network,
  4. disk space,
  5. disk IOPs.

Thinking about those resources and your existing per-host costs, you’re able to establish a more granular pricing model for each of those resources. Then you take that model and add it as a layer on top of your per-host costs, where the Scheduling team is able to attribute their server costs downstream to the processes which run on their servers, as long as they’re able to start capturing high-fidelity per-process utilization metrics and maintaining a process-to-team mapping (that was on their roadmap anyway).
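One way to derive such per-resource prices is to weight the host's cost across its resources and divide by capacity. A minimal sketch; the weights, dict shapes, and function names are illustrative assumptions, not a standard accounting model:

```python
def resource_prices(host_cost, capacity, weights):
    """Derive a per-unit price for each resource on a host.

    `weights` says what fraction of the host's cost we attribute to each
    resource (cpu, memory, ...); `capacity` is how many units of each the
    host provides. The weights must sum to 1 so costs are fully allocated."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return {r: host_cost * w / capacity[r] for r, w in weights.items()}

def process_cost(prices, usage):
    """Cost attributed to one process, given its measured resource usage."""
    return sum(prices[r] * usage.get(r, 0) for r in prices)
```

For example, a $100/month host with 10 cores and 50 GB of memory, weighted 60/40 between cpu and memory, prices a core at $6 and a GB at $0.80; a process using 2 cores and 10 GB is attributed $20.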

Doodling on a paper pad, you realize that things have gotten a bit more complex. Now you have:

  1. server to team mappings,
  2. server allocation metrics,
  3. team server costs,
  4. process to server mappings,
  5. process utilization metrics,
  6. team process costs.

Figuring out the total cost per team is pretty easy though: you just take the team’s server costs, plus the process costs attributed to them by other teams, minus the process costs they attribute to other teams.
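That arithmetic is simple enough to write down directly. A sketch with hypothetical team names and dict shapes:

```python
def team_total_cost(server_costs, attributed_in, attributed_out):
    """Total per team = own server costs
                      + process costs other teams attribute to them
                      - process costs they attribute to other teams."""
    teams = set(server_costs) | set(attributed_in) | set(attributed_out)
    return {
        t: server_costs.get(t, 0)
           + attributed_in.get(t, 0)
           - attributed_out.get(t, 0)
        for t in teams
    }
```

So if the Scheduling team pays $100 for the cluster's servers but attributes $60 of process costs to an app team, Scheduling nets $40 and the app team $60.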

Over the following weeks, you’re surprised as every infrastructure team pings you to chat. The team running Kafka wants to track traffic per topic and to attribute cost back to publishers and consumers by utilization; that’s fine with you, it fits into your existing attribution model. The Database team wants to do the same with their MySQL database which is a little bit more confusing because they want to build a model which attributes disk space, IOPs and CPU, but you’re eventually able to figure out some heuristics that are passable enough to create visibility. (What are query comments meant for if not injecting increasingly complex structured data into every query?)

The new SRE manager started scheduling weekly incident review meetings, and you listen absentmindedly while the Kafka team talks about an outage caused by a publisher which started generating far more load than usual. It’s a bummer that Kafka keeps going down, but at least their spend is going down, nothing to do with you. Later, you awake in a panic when someone suggests that we just massively overprovision the Kafka cluster to avoid these problems. You sputter out an incoherent squawk of rage at this suggestion (we’ve made too much progress on reducing costs to regress now!) and leave the meeting shaken.

Next week, the MySQL team is in the incident review meeting because they ran out of disk space and had a catastrophic outage. A sense of indigestion starts to creep into your gut as the same person as last week gears up to speak, and then she says it, she says it again: “Shouldn’t we spend our way to reliability here?”

Demoralized on the way to get your fifth LaCroix for the day, you see the CFO walking your way. He’s been one of your biggest supporters on the cost initiative, and you perk up anticipating a compliment. Unfortunately, he starts to drill into why infrastructure costs have returned to the same growth curve and almost the same levels they were at when you started your project.

Maybe, you ponder to yourself on the commute home, no one is even looking at their cost data. Could a couple of thoughtful nudges fix this?

You program up automated monthly cost reports for each team, showing how much they are spending, and also where their costs fit into the overall team spend distribution. Teams with low spends start asking you if they can trade their savings in for a fancy offsite, and teams with high spends start trying to reduce their spend again. (You even roll out a daily report that only fires if it detects anomalously high spend, after a new EMR job spends a couple million dollars analyzing Nginx logs.)

This helps, but the incident meetings keep ending with the suggestion to spend more for reliability, and you’ve started to casually avoid the CFO. So, you know, it’s going great… but…

While you’re typing up notes from your latest technical phone screen, you hear an argument through the thin, hastily constructed conference room walls. The product manager is saying that the best user experience is to track and retain all incoming user messages in their mailbox forever, and the engineer is yelling that it isn’t possible: only bounded queues have predictable behavior, an unbounded queue is just a queue with undefined failure conditions!

Suddenly, it all comes together: your costs and your infrastructure are both missing back pressure! As a bonus, the metrics you’ve put together for cost accounting are exactly the right inputs to start rolling out back pressure across your infrastructure. As the old adage goes, the first back pressure is the hardest, and as you brainstorm with teams you come up with a variety of techniques.

For the Kafka team, constrained on throughput, you decide on strict per-application per-minute ratelimits. The MySQL team, where bad queries are saturating IOPs, starts circuit breaking applications generating poor queries. To work around Splunk’s strict enforcement of daily indexing quotas, you roll out a simple quality of service strategy: applications specify log priority in the structured data segments of their syslogs, and you shed as many lower priority logs as necessary to avoid overages. For users pegging your external API, you start injecting latency, which causes them to behave like kinder clients with backoffs, even though they haven’t changed a thing. (Assuming you can handle a large number of concurrent, idle connections. Otherwise you’re just DoSing yourself, you realize later to great chagrin.)
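The first of those techniques, a strict per-application per-minute ratelimit, can be sketched as a fixed-window counter. This is an illustrative toy under assumed names, not the implementation the teams above would run (a real limiter would live in Redis or the broker itself and handle clock skew and cleanup):

```python
import time
from collections import defaultdict

class PerMinuteRateLimiter:
    """Strict fixed-window per-application per-minute ratelimit.

    Counters are keyed by (app, minute window); a request is shed once
    the current window's count reaches the limit."""

    def __init__(self, limit):
        self.limit = limit
        self.counts = defaultdict(int)

    def allow(self, app, now=None):
        now = now if now is not None else time.time()
        window = (app, int(now // 60))
        if self.counts[window] >= self.limit:
            return False  # shed: caller should back off
        self.counts[window] += 1
        return True
```

Once the window rolls over to the next minute, the counter starts fresh, which is exactly the "strict" behavior: a noisy publisher gets hard-capped rather than degrading everyone else.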

All of a sudden you’re spending a lot of time discussing something you’ve never discussed before: how should our systems fail?

Degraded service isn’t a sign of failure, it’s normal and expected behavior. Some of your key systems are regularly degraded during peak hours, gracefully shedding load of their least important traffic. Best of all, the incident review meetings have stopped.

During a chat with your CFO, while lamenting Splunk’s pricing and quota model, you realize that you can apply back pressure to spend by assigning a cost quota for each team. Six months of angsty conversations later, each team has a quarterly cost quota derived from their historical usage and guidance from Finance, your site never goes down due to insufficient resources, and based on your excellent cost accounting efforts you are promoted infinity times.

Thanks to the many, many people I’ve chatted with about this topic over the last year or so. In particular, thanks to Cory who is probably looking for a way to stop talking about this with me every week.

Craig Kerstiens: Fun with SQL: Computing run rate and month over month growth

From Planet PostgreSQL. Published on Sep 11, 2016.

In any as-a-service business which bills monthly, a key metric you track is MRR, or monthly recurring revenue. It’s good practice to have this on a dashboard and check it on a monthly, weekly, or even daily basis. If you have a simple pricing model with set monthly plans, say like Netflix, this is pretty easy to calculate:

SELECT sum(user_subscriptions.price)
FROM user_subscriptions
WHERE user_subscriptions.ended_at IS null;

The above will give you the run rate as of this exact moment in time. It gets a little more complicated to do this in a single query that gives it to you over time. First, you’ll want to check the run rate for each day, then you’ll want to check what plan each customer was on at that time. Whether you’re Netflix or a smaller site, users will upgrade/downgrade, especially in the case of a usage based service such as Citus Cloud.

We track run rate for ourselves as well as month over month growth all in the same query. While we’ve written more complex SQL before we thought this might be an interesting example of a number of things you can do with SQL and helpful to publicly share for others to benefit from.

First the schema. We have a users and orgs table which overall are pretty un-interesting. The reason they’re in the query is we want to filter out internal usage, as we heavily use Citus Cloud to run Citus Cloud and don’t want that skewing numbers. The interesting table is billings:

# \d billings
                      Table "public.billings"
     Column      |   Type    |              Modifiers
 id              | uuid      | not null default uuid_generate_v4()
 formation_id    | uuid      | not null
 org_id          | uuid      | not null
 validity_period | tstzrange | not null
 price_per_month | integer   | not null

You’ll notice the validity_period has a tstzrange type. This is a range type which allows you to have a from and a to range value of timestamps directly in a single column.

So on to the query. First I’m going to share the full query, then we can try to break it down further piece by piece.

WITH dates AS (
  SELECT generate_series('07/01/2016'::timestamptz, now(), '1 day'::interval) date
),

billing_valid AS (
  SELECT price_per_month, validity_period, formation_id
  FROM billings, orgs
  WHERE = billings.org_id
    AND org_id IN (
      SELECT id
      FROM orgs
      WHERE billing_email NOT LIKE '')
),

billings_by_day AS (
  SELECT CASE
           WHEN lower(validity_period) <
            AND (   upper(validity_period) >
                 OR upper(validity_period) IS NULL
                ) THEN price_per_month
           ELSE 0
         END amount,
         formation_id
  FROM billing_valid, dates
),

billings_daily_totals AS (
    SELECT date,
           SUM(amount) AS billings
    FROM billings_by_day
    WHERE amount > 0
    GROUP BY date
)

SELECT date,
       round((1.0 - (lag(billings, 28) OVER w / cast(billings AS numeric))) * 100, 1)
FROM billings_daily_totals
WINDOW w AS (ORDER BY date)
ORDER BY date;

Breaking it down further: first, we’re making heavy usage of CTEs here to give us some nice composable blocks. This helps in two ways: one, making sure we’re not making any errors as we construct the SQL; and two, allowing it to be more readable as we come back to it later. Within the first CTE we’re simply generating a list of dates. This is going to allow us to have a condition (the start of each day) to evaluate whether a plan is provisioned at that time or not. You can try running it yourself to see what it generates:

 SELECT generate_series('07/01/2016'::timestamptz, now(), '1 day'::interval) date;

Next is a bit of house-keeping. We’re going to find only the users whose billings we want to track, or as we described earlier, filter out all Citus employees’ usage.

SELECT price_per_month, validity_period, formation_id
  FROM billings, orgs
  WHERE = billings.org_id
    AND org_id IN (
      SELECT id
      FROM orgs
      WHERE billing_email NOT LIKE '')

Next is one of those building blocks that makes things easier than doing it in multiple steps. For every billing record we’re going to multiply out the number of times that record appears. But we’re only going to project out the cost if the Citus cluster was provisioned before that day and had not been deprovisioned before it; in other words, was it running at that time:

SELECT CASE
         WHEN lower(validity_period) <
          AND (   upper(validity_period) >
               OR upper(validity_period) IS NULL
              ) THEN price_per_month
         ELSE 0
       END amount,
       formation_id
FROM billing_valid, dates

An example with data might help. If we had a record that looks like:

# SELECT * from billings limit 1;
-[ RECORD 1 ]---+-------------------------------------------------------------------------------------------------------------------
id              | b1e597f8-0dd5-4da7-b63d-b7cc4f39c6f4
formation_id    | 571492e5-051b-4da7-9331-d38ced16c3f3
org_id          | 9a9dffca-93cb-43c7-8772-924198ede0d4
validity_period | ["2016-07-01 16:32:32.645821+00","2016-07-03 20:25:54.921086+00")
price_per_month | 1800    

If we were to run the above query and that was our only record we’d get results similar to:

amount   |    date    |             formation_id
0        | 2016-07-01 | 571492e5-051b-4da7-9331-d38ced16c3f3
1800     | 2016-07-02 | 571492e5-051b-4da7-9331-d38ced16c3f3
1800     | 2016-07-03 | 571492e5-051b-4da7-9331-d38ced16c3f3
0        | 2016-07-04 | 571492e5-051b-4da7-9331-d38ced16c3f3

From here you can see how our next CTE makes it simple to do our aggregations:

SELECT date, 
           SUM(amount) as billings
    FROM billings_by_day
    WHERE amount > 0 
    GROUP BY date

And finally, we take all of those daily run rates and compute the month over month change. Or in this case we use a 28 day change, since that falls on the same day of the week and doesn’t give us weird jumps across different days of the week. Within that query there are two key things. First we define a named window: WINDOW w as (order by date). Then we have the line that computes the percentage change: round((1.0 - (lag(billings, 28) over w / cast(billings as numeric))) * 100, 1). The key part in here is lag(billings, 28) over w. This uses one of the built in window functions to say: give me the value from 28 rows before.
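The lag computation is easy to mirror procedurally if you want to sanity check the window function. A sketch (function name and list shape are assumptions), shown with a smaller lag so the example stays short:

```python
def month_over_month(billings, lag=28):
    """Equivalent of round((1.0 - lag(billings, 28) over w / billings) * 100, 1)
    applied to an ordered list of daily totals. Rows with no value `lag`
    days back (or a zero denominator) yield None, like SQL's NULL."""
    out = []
    for i, b in enumerate(billings):
        if i < lag or b == 0:
            out.append(None)
        else:
            out.append(round((1.0 - billings[i - lag] / b) * 100, 1))
    return out
```

For instance, `month_over_month([100, 100, 125, 150], lag=2)` gives `[None, None, 20.0, 33.3]`: each value compares today's total against the total two rows earlier, just as `lag(billings, 28) over w` does per day in the query.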

Hopefully the full query now makes sense and serves as a good guide for constructing more complex SQL queries that are both useful and easy to understand and reason about. If you found this post interesting, I’ll be speaking and live querying similar concepts at Keep Ruby Weird at the end of October, hope to see you there.

Craig Kerstiens: A tour of Postgres' Foreign Data Wrappers

From Planet PostgreSQL. Published on Sep 11, 2016.

SQL can be a powerful language for reporting. Whether you’re just exploring some data or generating reports that show month over month revenue growth, it’s the lingua franca for data analysis. But your data isn’t always in a SQL database; even then, if you’re using Postgres you can still likely use SQL to analyze, query, and even join against that data. Foreign data wrappers have been around for years in Postgres, but are continuing to mature and be a great option for joining disparate systems.

Overview of foreign data wrappers

If you’re unfamiliar, foreign data wrappers, or FDWs, allow you to connect from within Postgres to a remote system. Then you can query it directly from within Postgres. While there is an official Postgres FDW that ships with Postgres itself and allows you to connect from one Postgres DB to another, there’s also a broad community of others.

At the core of it Postgres provides certain APIs under the covers which each FDW extension can implement. This can include the ability to map SQL to whatever makes sense for a given system, push down various operators like where clauses, and as of Postgres 9.3 can even write data.

To set up an FDW you first install the extension, then provide the connection to the remote system, set up your schema/tables, and then you’re off to the races, or well, ready to query. If you’ve got more than 2-3 databases or systems in your infrastructure, you’ll often benefit from FDWs as opposed to introducing a heavyweight ETL pipeline. Don’t mistake FDWs as the most performant method for joining data, but they are often the most developer-time-efficient means of joining these data sets.

Let’s look at just a few of the more popular and interesting ones.

Postgres FDW

The Postgres one is the easiest to get started with. First you’ll just enable it with CREATE EXTENSION, then you’ll set up your remote server:

CREATE EXTENSION postgres_fdw;

CREATE SERVER core
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'foo', dbname 'core_db', port '5432');

Then you’ll create the user that has access to that database:

CREATE USER MAPPING FOR bi SERVER core OPTIONS (user 'bi', password 'secret');

Finally, create your foreign table:

CREATE FOREIGN TABLE core_users (
  id          integer NOT NULL,
  username    varchar(255),
  password    varchar(255),
  last_login  timestamptz
)
SERVER core;

Now you’ll see a new table called core_users in the database where you created it. You can query this table just like you’d expect:

SELECT *
FROM core_users
WHERE last_login >= now() - '1 day'::interval;

You can also join against local tables, such as getting all the invoices for users that have logged in within the last month:

SELECT *
FROM invoices, core_users
WHERE core_users.last_login >= now() - '1 month'::interval
  AND invoices.user_id =;

Hopefully this is all straightforward enough, but let’s also take a quick look at some of the other interesting ones:

MySQL FDW
For MySQL you’ll have to download and install the FDW yourself, since it doesn’t ship directly with Postgres. This should be fairly straightforward:

$ export PATH=/usr/local/pgsql/bin/:$PATH
$ export PATH=/usr/local/mysql/bin/:$PATH
$ make USE_PGXS=1
$ make USE_PGXS=1 install

Now that you’ve built it you’d follow a very similar path to setting it up as we did for Postgres:


CREATE EXTENSION mysql_fdw;

CREATE SERVER mysql_server
 FOREIGN DATA WRAPPER mysql_fdw
 OPTIONS (host '', port '3306');

CREATE USER MAPPING FOR postgres
 SERVER mysql_server
 OPTIONS (username 'foo', password 'bar');

CREATE FOREIGN TABLE core_users (
  id          integer NOT NULL,
  username    varchar(255),
  password    varchar(255),
  last_login  timestamptz
)
 SERVER mysql_server;

But MySQL, while different from Postgres, is much closer in SQL support than, say, a more exotic NoSQL store. How well do those work as foreign data wrappers? Let’s look at our next one:

Mongo FDW
First you’ll go through much of the same setup as you did for MySQL. The one major difference, though, is in the final step to set up the table. Since a table doesn’t quite map the same way with Mongo, you have the ability to set two items: 1. the database and 2. the collection name.

CREATE FOREIGN TABLE core_users (
     _id NAME,
     user_id int,
     user_username text,
     user_last_login timestamptz
)
SERVER mongo_server
     OPTIONS (database 'db', collection 'users');

With this you can do some basic level of filtering as well:

SELECT *
FROM core_users
WHERE user_last_login >= now() - '1 day'::interval;

You can also write and delete data as well now just using SQL:

DELETE FROM core_users 
WHERE user_id = 100;

Of course, just putting SQL on top of Mongo doesn’t mean you get all the flexibility of analysis that you’d have directly within Postgres, but this does go a long way towards allowing you to analyze data that lives across two different systems.

Many more

A few years ago there were some key ones which already made FDWs useful. Now there’s a rich list covering probably every system you could want. Whether it’s Redis, a simple CSV one, or something newer like MonetDB chances are you can find an FDW for the system you need that makes your life easier.

A Brief Introduction to Django Channels

By Abu Ashraf Masnun from Django community aggregator: Community blog posts. Published on Sep 10, 2016.

Django has long been an excellent web framework. It has helped many developers and numerous businesses succeed over the years. But before the introduction of Channels, Django only supported the http protocol well. With the gradual evolution of web technologies, standing here in 2016, supporting http only is simply not enough. Today, we are using websockets for real time communications, WebRTC is getting popular for real time collaboration or video calling, and HTTP/2 is also being adopted by many. In the current state of the web, any modern web framework needs to be able to support more and more protocols. This is where Django Channels comes into play. Channels aims to add new capabilities to Django, including support for modern web technologies like websockets or HTTP/2.

How does “Channels” work?

The idea behind Channels is quite simple. To understand the concept, let’s first walk through an example scenario, let’s see how Channels would process a request.

A http/websocket request hits the reverse proxy (i.e., nginx). This step is not compulsory, but we’re conscious developers and always make sure our requests first go through a hardened, battle-proven reverse proxy before they hit our application server.

Nginx passes the request to an application server. Since we’re dealing with multiple protocols now, instead of “application server”, let’s call it an “Interface Server”. This interface server knows how to handle requests using different protocols. It accepts the request, transforms it into a message, and then passes the message on to a channel.

We have to write consumers which listen on specific channels. When new messages arrive on those channels, the consumers process them and, if needed, send a response back to a reply/response channel. The interface server listens on these response channels, and when we write back to them, it reads the message and transmits it to the outside world (in this case, our user). The consumers run in background worker processes. We can spawn as many workers as we like to scale up.

So as you can see, the concept is really simple – an interface server accepts requests and queues them as messages on channels. Consumers process these queues and write back responses on response channels. The interface server sends back the responses. Plain, simple yet effective!

There are channels which are already available for us. For example – http.request channel can be listened on if we want to handle incoming http messages. Or websocket.receive can be used to process incoming websocket messages. In reality, we would probably be less interested in handling http.request ourselves and rather let Django handle it. We would be more interested in adding our custom logic for websocket connections or other protocols. Besides the channels which are already available, we can also create our own custom channels for different purposes. Since the project works by passing messages to channels and handling them with background workers, we can actually use it for managing our background tasks too. For example, instead of generating thumbnails on the fly, we can pass the image information as a message to a channel and the worker does the thumbnailing in the background. By default Channels ship with a management command – runworker which can run background workers to listen to the channels. However, till now, there is no retry mechanism if the message delivery somehow fails. In this regard, Celery can be an excellent choice for writing / running / managing the background workers which would process these channels.

Daphne is now the de-facto interface server that works well with Channels. The channels and message passing work through a “channel layer” which supports multiple backends. The popular ones are In Memory, Redis, and IPC. As you can guess, these backends and the channel layer are used to abstract away the process of maintaining different channels/queues and allowing workers to listen on them. The In Memory backend maintains the channels in memory and is a good fit for local development, while a Redis cluster would be more suitable in a production environment for scaling up.

Let’s Build a WebSocket Echo Server

Enough talk. Let’s build a simple echo server. But before we can do that, we first have to install the package.

pip install channels

That should install Django (as it’s a dependency of channels) and channels along with the necessary packages. Start a Django project with django-admin and create an app.

Now add channels to the INSTALLED_APPS list in your For local development, we are fine with the in-memory channel layer, so we need to put these lines in to define the default channel layer:

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgiref.inmemory.ChannelLayer",
        "ROUTING": "realtime.routing.channel_routing",
    },
}

In the above code, please note the ROUTING key. As the value of this key, we have to pass the path to our channel routing. In my case, I have an app named realtime and there’s a module named in it which has the channel routing.

from channels.routing import route
from .consumers import websocket_receive

channel_routing = [
    route("websocket.receive", websocket_receive, path=r"^/chat/"),
]
In the channel routing list, we define our routes, which look very similar to Django’s url patterns. When we receive a message through a websocket connection, the message is passed on to the websocket.receive channel. So we defined a consumer to consume messages from that channel. We also defined a path to indicate that websocket connections to /chat/ should be handled by this particular route. If we omit the path, the clients can connect to any url on the host and we can catch them all! But if we define a path, it helps us namespace things and also helps in another case, which we will see later in this article.

And here’s the

def websocket_receive(message):
    text = message.content.get('text')
    if text:
        message.reply_channel.send({"text": "You said: {}".format(text)})

The consumer is very basic. It retrieves the text we received via websocket and replies back. Note that the websocket content is available on the content attribute of the message. And the reply_channel is the response channel here (the interface server is listening on to this channel). Whatever we send to this channel is passed back to the websocket connection.

We have defined our channel layer, created our consumer and mapped a route to it. Now we just need to launch the interface server and the background workers (which run the consumers). In a local environment, we can just run python runserver as usual. Channels will make sure the interface server and the workers are running in the background. (But this should not be used in production; in production we must run Daphne separately and launch the workers individually. See the Channels deployment docs.)

Once our dev server starts up, let’s open up the web app. If you haven’t added any django views, no worries, you should still see the “It Worked!” welcome page of Django and that should be fine for now. We need to test our websocket and we are smart enough to do that from the dev console. Open up your Chrome Devtools (or Firefox | Safari | any other browser’s dev tools) and navigate to the JS console. Paste the following JS code:

socket = new WebSocket("ws://" + + "/chat/");
socket.onmessage = function(e) {
    alert(;
}
socket.onopen = function() {
    socket.send("hello world");
}
If everything worked, you should get an alert with the message we sent. Since we defined a path, the websocket connection works only on /chat/. Try modifying the JS code and send a message to some other url to see how they don’t work. Also remove the path from our route and see how you can catch all websocket messages from all the websocket connections regardless of which url they were connected to. Cool, no?

Our websocket example was very short and we just tried to demonstrate how things work in general. But Django Channels provides some really cool features to work with websockets. It integrates with the Django auth system and authenticates the websocket users for you. Using the Group concept, it is very easy to create group chats, live blogs, or any sort of real time communication in groups. Love Django’s generic views? There are generic consumers to help you get started fast. The channels docs are quite nice; I suggest you read through the docs and try the concepts.

Using our own channels

We can create our own channels and add consumers to them. Then we can simply add some messages to those channels by using the channel name. Like this:

from channels import Channel

Channel("thumbnailer").send({"image_id": 123})
Since Daphne and ASGI are still new, some people prefer to handle their http requests via WSGI. In such cases, we can configure nginx to route the requests to different servers (wsgi / asgi) based on url, domain or upgrade header. Having the real time end points under a particular namespace can then help us easily configure nginx to send the requests under that namespace to Daphne while sending all others to WSGI.

Testing dates in Django

By Eat Some Code from Django community aggregator: Community blog posts. Published on Sep 10, 2016.

Django makes unit & functional testing easy (especially with WebTest). Tests on routing, permissions, database updates and emails are all straightforward to implement but how do you test dates & time? You might for example want to test regular email notifications.

Denish Patel: Recovering corrupted Postgres database!!

From Planet PostgreSQL. Published on Sep 09, 2016.

A couple of weeks ago, I had to deal with a corrupted Postgres database cluster. In the end, we weren’t able to recover some of the data but managed to recover most of it. Having experience working with dozens of database systems, I’m pleasantly surprised by the resiliency of the Postgres database.

Kudos to Postgres Development team for building the most resilience database in the world :)

Here is my Postgres database recovery story :)


I'm posting the steps carried out during the recovery process for informational purposes only. This post doesn't provide any guarantee that it will work for your use-case and/or environment.

Note: The actual database name has been replaced with “dbname”  and actual table names with “tablename”.

One of the databases in a Postgres DB cluster experienced disk-level corruption, and thus we were hitting this error:

postgres=# \c dbname
FATAL: could not read block 0 in file "base/16389/11930": Input/output error

Uh oh?? Really bad, isn’t it? Fortunately, it wasn’t a mission critical system, so we managed to take an extended outage and work on a partial recovery process, because we didn’t want to lose all the data!!

Once we received the complaint, we immediately backed up the corrupted database and created a recovery cluster to bring up on a different server so we could go through the recovery efforts!!

Trial 1:

As many of you know, the first option is to bring up the recovery database cluster with zero_damaged_pages=on. You can set the value in the Postgres config file and try to reindex the system catalog:

reindexdb -p 5433 --system dbname
reindexdb: could not connect to database dbname: FATAL: index "pg_index_indexrelid_index" contains unexpected zero page at block 0 
HINT:  Please REINDEX it.

Doh! We still couldn’t connect to the database!!

Trial 2:

If you aren’t aware, you should note that there is a way to ignore indexes at the system level. We started up the recovery cluster with the ignore_system_indexes=true setting:

pg_ctl start -D /data -o '-c ignore_system_indexes=true'
dbname=# \c dbname

Yay! I could connect to the DB now!

Trial 3:

Let’s try to reindex the database…

dbname=# reindex database "dbname";
NOTICE: table "pg_catalog.pg_class" was reindexed
2016-08-22 15:53:14.179 PDT rhost=[local] app=psql:user=postgres:db=dbname:ERROR: could not create unique index "pg_statistic_relid_att_inh_index"
2016-08-22 15:53:14.179 PDT rhost=[local] app=psql:user=postgres:db=dbname:DETAIL: Key (starelid, staattnum, stainherit)=(2608, 5, f) is duplicated.
2016-08-22 15:53:14.179 PDT rhost=[local] app=psql:user=postgres:db=dbname:STATEMENT: reindex database "dbname";
ERROR: could not create unique index "pg_statistic_relid_att_inh_index"
DETAIL: Key (starelid, staattnum, stainherit)=(2608, 5, f) is duplicated.

As the table is corrupted with duplicate entries, let’s find and fix them.

dbname=# select starelid, staattnum, stainherit from pg_catalog.pg_statistic where starelid=2608 order by 2;
starelid | staattnum | stainherit
2608 | 1 | f
2608 | 2 | f
2608 | 3 | f
2608 | 4 | f
2608 | 5 | f
2608 | 5 | f
2608 | 6 | f
2608 | 7 | f
(8 rows)

Let’s remove one of the entries based on its xmin:

dbname=# delete from pg_catalog.pg_statistic where starelid=2608 and staattnum=5 and xmin=1228447;
Trial 4:

We restarted the REINDEX, but it failed again!

2016-08-22 16:01:29.698 PDT rhost=[local] app=psql:user=postgres:db=dbname:ERROR: 1 constraint record(s) missing for rel tablename
2016-08-22 16:01:29.698 PDT rhost=[local] app=psql:user=postgres:db=dbname:STATEMENT: reindex database "dbname";
ERROR: 1 constraint record(s) missing for rel tablename
Trial 5:

Let’s try to VACUUM ANALYZE the table:

dbname=# vacuum analyze tablename;
2016-08-22 16:04:01.282 PDT rhost=[local] app=psql:user=postgres:db=dbname: 1 constraint record(s) missing for rel tablename
2016-08-22 16:04:01.282 PDT rhost=[local] app=psql:user=postgres:db=dbname:STATEMENT: vacuum analyze tablename;
ERROR: 1 constraint record(s) missing for rel tablename

Hrm… it’s still complaining about the constraint.

Trial 6:

Let’s disable the constraint check…

dbname=# update pg_class set relchecks=0 where relname='tablename';
The above update fixed the constraint error.
Trial 7:

Let’s reindex the database again!

dbname =# reindex database "dbname";

Yay, Reindex is successful.

Once the reindex completed successfully, we restarted the recovery cluster without the zero_damaged_pages and ignore_system_indexes settings.

Partial table recovery through the pg_dump process:

As the database was corrupted, it made sense to kick off a pg_dump of the database. We did, but it still reported errors on some of the sequences!

/usr/lib/postgresql/9.4/bin/pg_dump dbname -p 5433 -Fc >recovery_dbname.dmp
2016-08-22 16:22:09.517 PDT rhost=[local] app=pg_dump:user=postgres:db=dbname:ERROR: invalid page in block 0 of relation base/16389/2825248
2016-08-22 16:22:09.517 PDT rhost=[local] app=pg_dump:user=postgres:db=dbname:STATEMENT: SELECT sequence_name, start_value, increment_by, CASE WHEN increment_by > 0 AND max_value = 9223372036854775807 THEN NULL WHEN increment_by < 0 AND max_value = -1 THEN NULL ELSE max_value END AS max_value, CASE WHEN increment_by > 0 AND min_value = 1 THEN NULL WHEN increment_by < 0 AND min_value = -9223372036854775807 THEN NULL ELSE min_value END AS min_value, cache_value, is_cycled FROM XX_id_seq
pg_dump: [archiver (db)] query failed: ERROR: invalid page in block 0 of relation base/16389/2825248
pg_dump: [archiver (db)] query was: SELECT sequence_name, start_value, increment_by, CASE WHEN increment_by > 0 AND max_value = 9223372036854775807 THEN NULL WHEN increment_by < 0 AND max_value = -1 THEN NULL ELSE max_value END AS max_value, CASE WHEN increment_by > 0 AND min_value = 1 THEN NULL WHEN increment_by < 0 AND min_value = -9223372036854775807 THEN NULL ELSE min_value END AS min_value, cache_value, is_cycled FROM XX_id_seq

We had issues recovering a couple of tables, but we managed to recover most of the tables in the database!

This was our Postgres database recovery story.

Hopefully it will help someone who falls into a corruption situation. Any feedback or other ideas are welcome :)


David Rader: Read environment variables from PostgreSQL using plpython

From Planet PostgreSQL. Published on Sep 09, 2016.

Sometimes in your PostgreSQL code you want to read an environment variable – whether to get the input file location for a data load, or check that the postgres user has the right path set, or verify that the TDSDUMP environment variable is set when configuring the tds_fdw to connect to SQL Server. Here’s a plpython based function that will do just that (or the gist):

create extension plpythonu;
create type py_environ_type as (name text, value text);

create or replace function py_environ(name varchar DEFAULT NULL)
  returns setof py_environ_type
as $$
  import os
  aev = []
  if name is None:
    for k, v in os.environ.items():
      aev.append((k, v))
  else:
    v = os.getenv(name)
    if v is not None:
      aev.append((name, v))
  return aev
$$ language plpythonu;
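Since the function body is plain Python, its logic can be sanity-checked outside Postgres. A sketch (a stand-alone re-implementation for illustration, not the pl/python function itself):

```python
import os

def py_environ(name=None):
    """Return (name, value) pairs: all environment variables when
    name is None, otherwise just the named variable (if set)."""
    aev = []
    if name is None:
        for k, v in os.environ.items():
            aev.append((k, v))
    else:
        v = os.getenv(name)
        if v is not None:
            aev.append((name, v))
    return aev

os.environ['DEMO_VAR'] = 'hello'
print(py_environ('DEMO_VAR'))  # [('DEMO_VAR', 'hello')]
```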

You can use it to get a list of all environment variables:

select * from py_environ();

Or to get just a single variable:

select value from py_environ('PATH');

If you want other filtering, just add a where clause:

select name, value from py_environ() where name like 'TDS%';

Inspired by an idea from Scott Mead and the pal_environ() function; tested to work on Linux, OS X, and Windows.

Luca Ferrari: PGday.IT 2016: tenth edition of the italian PostgreSQL conference

From Planet PostgreSQL. Published on Sep 09, 2016.

ITPUG is proud to announce the tenth edition of the Italian PostgreSQL conference, PGDay.IT 2016, which will take place in Prato, Tuscany, on December 13th.
The organizing committee is working to provide another great edition of the famous Italian day dedicated to PostgreSQL.
Very soon the Call for Papers will start.
In the meantime...stay tuned!


Shaun M. Thomas: PG Phriday: Irrelevant Inclinations

From Planet PostgreSQL. Published on Sep 09, 2016.

Say hi to Princess Kittybutt. She’ll be our mascot (and subject) for today. We’ll get to her in a minute.

Princess Kittybutt

When the only tool you have is a hammer, everything becomes a nail, right? With Postgres becoming more of an environment than simply a database engine, this colloquialism is starting to resemble reality. Of course, that’s not necessarily a bad thing! As Postgres accumulates copious and varied extensions, its role as an adaptive middleware solidifies. When Postgres can do something for itself, the need for sprawling harnesses of support scripts steadily decreases.

Yet there’s also the accusation that’s often levied at Emacs regarding its role as a UNIX kitchen sink. Extensions aid in preventing Postgres from descending into bloated madness. So for now at least, we’re safe to contemplate the tools available before they gain sentience and destroy humanity. For instance, can we build a game using only Postgres?

Not some contrived simulation where every interactive function calls a maintenance routine to advance the game world and respond to input. Can Postgres be the game engine, with an independently operating universe that churns along regardless of a player’s presence? Can we do that without a mature LISTEN / NOTIFY external resource that does all of the heavy lifting in some other language? Do we have all of the parts?

A new extension named pg_cron recently hit the web which makes this much easier. Postgres has had job scheduling for a while thanks to pgAgent, but it always ran as an external daemon which just happened to use Postgres tables to store schedules. Now we have something a bit more native that leverages the Postgres 9.3+ background worker functionality to integrate scheduling more directly. True to its name, Postgres now has a cron service.

Does it work?

CREATE TABLE cron_test (message TEXT);

SELECT cron.schedule(
  '* * * * *',
  $$INSERT INTO cron_test VALUES ('Hello World')$$
);

SELECT * FROM cron.job;
-[ RECORD 1 ]------------------------------------------
jobid    | 1
schedule | * * * * *
command  | INSERT INTO cron_test VALUES ('Hello World')
nodename | localhost
nodeport | 5432
database | postgres
username | postgres
SELECT pg_sleep(60);
SELECT * FROM cron_test;
 Hello World

Success! That job will insert “Hello World” every minute until we remove it. We won’t go through the installation process, as the README covers that well enough. However, we’d be extremely remiss if we neglected to thoroughly abuse it.
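Since the job would otherwise run forever, pg_cron also provides cron.unschedule() to remove it, using the jobid shown in cron.job:

```sql
-- Stop the hello-world job created above (jobid 1 in our example).
SELECT cron.unschedule(1);
```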

We came here to make a game, and that’s what we’re going to do! Well, a “game” anyway. We’re well aware that it’s possible to build a universe using mostly Postgres tools. We highly recommend checking out Schemaverse, by the way; it’s a great way to learn Postgres features by example and practice.

Unfortunately, it would be impossible to cram all of that into a reasonable length article or ten. No, we’re going to give birth to a centralized virtual pet system. Pets exist when we aren’t around, need to be fed and played with, and might die if we neglect either for too long. Most importantly, it’s brain-numbingly easy to code. Perfect!

Let’s start with a schema.

CREATE TABLE db_pet (
  pet_id     SERIAL   PRIMARY KEY,
  player     VARCHAR  NOT NULL UNIQUE,
  pet_name   VARCHAR  NOT NULL,
  mood       INT      NOT NULL DEFAULT 24,
  food       INT      NOT NULL DEFAULT 24,
  is_dead    BOOLEAN  NOT NULL DEFAULT FALSE
);

CREATE VIEW my_pet AS
SELECT pet_name,
       CASE WHEN food < 8 THEN 'STARVING'
            WHEN food < 16 THEN 'HUNGRY'
            ELSE 'FULL'
       END AS appetite,
       CASE WHEN mood < 8 THEN 'DEPRESSED'
            WHEN mood < 16 THEN 'BORED'
            ELSE 'HAPPY'
       END AS spirits,
       CASE WHEN is_dead THEN 'DEAD (you monster!)'
            WHEN food < 5 OR mood < 5 THEN 'SICK'
            WHEN food < 13 OR mood < 13 THEN 'OK'
            ELSE 'GREAT'
       END AS health
  FROM db_pet
 WHERE player = SESSION_USER;

Remember when we said this would be stupidly easy to code? We have one table to store the pets, and one view to display their current status to the owner. In this world, each clock tick is worth one hour. Our view suggests pets must be fed and trained three times a day or they’ll starve to death or die from boredom. Further, we slapped a unique index on the player so we can truly punish neglectful owners. One pet is all you get, and if it dies, too bad! Maybe a less cruel future version will allow players to have multiple pets and move the unique key to player and pet name. For now, let’s be awful sadists.

We spoke a bit about functions recently, and we’ll need a few to facilitate gameplay. In particular, we’ll need only three user functions:

  • One to create and personalize the pet. We want some emotional investment, after all!
  • One to feed the pet. This function should stave off starvation for 8 hours. That’ll give us roughly three feedings per day to keep it healthy.
  • One to train the pet. Let’s assume the pet is more resilient to boredom than starvation and each training is worth 12 hours of entertainment.
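Those numbers can be sanity-checked with a quick back-of-the-envelope simulation (plain Python, not part of the game itself; the decay and cap rules mirror the SQL described in this article):

```python
def simulate(hours, feed_every=8, play_every=12):
    """Simulate the pet's hourly tick: -1 food and -1 mood per hour;
    a feeding adds 8 hours of food (capped at 24), a play session
    adds 12 hours of mood (capped at 24)."""
    food = mood = 24
    for hour in range(1, hours + 1):
        food -= 1
        mood -= 1
        if food < 1 or mood < 1:
            return 'dead'
        if hour % feed_every == 0:
            food = 24 if food >= 16 else food + 8
        if hour % play_every == 0:
            mood = 24 if mood >= 12 else mood + 12
    return 'alive'

print(simulate(24 * 7))  # a week of on-schedule care
```

An on-schedule pet survives indefinitely, while skipping either feeding or play kills it within a day, exactly as the article's thresholds intend.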

Here’s how they might look:

CREATE FUNCTION new_pet(new_name VARCHAR)
RETURNS SETOF my_pet AS $$
DECLARE
  pet_state my_pet;
BEGIN
  SELECT * INTO pet_state FROM my_pet;
  -- If the user tries to create another pet, tell them no.
  -- If their pet died, that's too darn bad!
  BEGIN
    INSERT INTO db_pet (player, pet_name)
    VALUES (SESSION_USER, new_name);
  EXCEPTION
    WHEN unique_violation THEN
      IF pet_state.health ~ 'DEAD' THEN
        RAISE NOTICE 'Your pet is dead forever. Murderer.';
      ELSE
        RAISE NOTICE 'You can only ever have one pet!';
      END IF;
  END;
  SELECT * INTO pet_state FROM my_pet;
  RETURN NEXT pet_state;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE FUNCTION feed_pet()
RETURNS SETOF my_pet AS $$
DECLARE
  pet_state my_pet;
BEGIN
  -- Every feeding is good for 8 hours. Don't let the player
  -- gorge their pet beyond one day of food.
  UPDATE db_pet
     SET food = CASE WHEN food >= 16 THEN 24 ELSE food + 8 END
   WHERE player = SESSION_USER
     AND NOT is_dead;
  SELECT * INTO pet_state FROM my_pet;
  RETURN NEXT pet_state;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE FUNCTION play_with_pet()
RETURNS SETOF my_pet AS $$
DECLARE
  pet_state my_pet;
BEGIN
  -- Playing with a pet will keep it entertained for another
  -- 12 hours, up to a 24-hour limit.
  UPDATE db_pet
     SET mood = CASE WHEN mood >= 12 THEN 24 ELSE mood + 12 END
   WHERE player = SESSION_USER
     AND NOT is_dead;
  SELECT * INTO pet_state FROM my_pet;
  RETURN NEXT pet_state;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

We created each function to always return the current status of the pet. Returning row sets is just too convenient not to invoke, especially when interaction is a key part of game feedback. We also had to declare each of the functions as security definers so players could modify the state of their own pet without having access to the underlying table.

Beyond letting the user do things to their pet, we also need a game loop to push all the pets toward an untimely demise unless their owner intervenes. Then we need to actually schedule it with pg_cron to activate everything.

CREATE FUNCTION pet_game_loop()
RETURNS VOID AS $$
  UPDATE db_pet
     SET food = food - 1,
         mood = mood - 1
   WHERE NOT is_dead;

  UPDATE db_pet
     SET is_dead = TRUE
   WHERE (mood < 1 OR food < 1)
     AND NOT is_dead;
$$ LANGUAGE SQL;

SELECT cron.schedule(
  '* * * * *', 'SELECT pet_game_loop()'
);

Eagle-eyed cron users will immediately notice we set the game loop to run once per minute instead of once per hour. Hey, this is an article and we have a hard deadline, here. We can’t just wait several hours for Princess Kittybutt to demonstrate the game loop!

With that out of the way, let’s all welcome Princess Kittybutt!

SELECT * FROM new_pet('Princess Kittybutt');
      pet_name      | appetite | spirits | health 
--------------------+----------+---------+--------
 Princess Kittybutt | FULL     | HAPPY   | GREAT

Ah, the addition of a new pet to the family! What more could anyone ask for? Princess Kittybutt is happy, healthy, and content with us as owners. Truly, it’s stupendous to be Princess Kittybutt on this fine evening. So long as we play with her at least twice a day, and feed her three times, she will continue to live on in our database, successfully occupying many bytes and not doing much else.

Despite the uh… enhancement to the game timing, let’s check back on her in about ten minutes and see how things are going.

SELECT * FROM my_pet;
      pet_name      | appetite | spirits | health 
--------------------+----------+---------+--------
 Princess Kittybutt | HUNGRY   | BORED   | GREAT
SELECT * FROM feed_pet();
      pet_name      | appetite | spirits | health 
--------------------+----------+---------+--------
 Princess Kittybutt | FULL     | BORED   | GREAT

We noticed Princess was feeling a tad peckish, so we fed her a bit and everything is right as rain. She’s a little bored, but that’s hardly the end of the world, is it? Let’s check back in another ten minutes, shall we?

SELECT * FROM my_pet;
      pet_name      | appetite |  spirits  | health 
--------------------+----------+-----------+--------
 Princess Kittybutt | HUNGRY   | DEPRESSED | SICK

Well, Princess Kittybutt is slightly under the weather, but we need to run a quick errand. That’s OK, right? We’re busy professionals after all, and to be honest, the life of a cat is pretty cushy. It’s not like she only has about five minutes to live or anything.

Back in five minutes! Promise!

SELECT * FROM my_pet;
      pet_name      | appetite |  spirits  |       health        
--------------------+----------+-----------+---------------------
 Princess Kittybutt | STARVING | DEPRESSED | DEAD (you monster!)

Dead Princess Kittybutt

W-what?! Princess Kittybutt! No! What have we done!? When the kids get home from school, they’re going to be utterly devastated! There has to be something we can do… Wait! We can just get another cat that looks just like her, right? Right? They’ll never know the difference.

Just a little bit of SQL and…

SELECT * FROM new_pet('Princess Kittybutt 2');
NOTICE:  Your pet is dead forever. Murderer.
      pet_name      | appetite |  spirits  |       health        
--------------------+----------+-----------+---------------------
 Princess Kittybutt | HUNGRY   | DEPRESSED | DEAD (you monster!)

Why!? Poor Princess Kittybutt, you hardly had a chance to live. Oh, what cruel fate!

Anyway, what kind of practical use is an integral job scheduling system in Postgres? For one, few production databases have a 1-1 ratio between database users and OS users. As a consequence, it’s not always easy or convenient to schedule recurring events without using pgAgent or overloading OS user crontabs. Beyond that, do we really want decentralized scheduled events spread across multiple OS users anyway?

Of particular interest are data loading and manipulation operations. While most ETL systems have scheduling and transformation engines or suggest robust examples of such, sometimes it’s easier to frequently process new data internally. An active data logger or order processing system might have an army of fact tables and dimensions summarizing details at multiple levels of granularity. It’s not uncommon to keep these topped-up with functions that regularly aggregate everything to accelerate report generation.

Or of course, we could write a real game. Like Schemaverse, we might want multiple events to occur per clock tick. We can even create multiple game loop functions so each category of event follows a distinct schedule. It’s common for a game engine to assign independent jobs to available CPUs, giving the illusion of parallel activity. Well, that’s certainly a potential avenue here as well. As long as we don’t mind inhabiting a universe that only advances at a maximum rate of once per minute, that is.

That’s the only real downfall of pg_cron. Instead of being a generic scheduler, it married itself to cron’s limited syntax and granularity; it’s in the name, after all. For events that must occur more frequently than once per minute, we still need to rely on an external invocation routine of some description.

Considering how long cron has survived with that particular shortcoming, I doubt that will hold pg_cron back. It’s certainly a welcome addition to the Postgres extension family. Even though it murdered poor Princess Kittybutt.

gabrielle roth: PDXPUG: September meeting

From Planet PostgreSQL. Published on Sep 08, 2016.

When: 6-8pm Thursday September 15, 2016
Where: iovation
Who: Toby Fee
What: Connecting Your Pg DB to Node Apps

Toby will tell us about his experiences connecting Postgres DBs to Node apps using the Node libraries Bookshelf and Knex. He ran into some limitations on the Node side, and wrote some methods for allowing full query flexibility while still remaining injection-secure.

Toby is currently a developer at NWEA.

If you have a job posting or event you would like me to announce at the meeting, please send it along. The deadline for inclusion is 5pm the day before the meeting.

Our meeting will be held at iovation, on the 32nd floor of the US Bancorp Tower at 111 SW 5th (5th & Oak). It’s right on the Green & Yellow Max lines. Underground bike parking is available in the parking garage; outdoors all around the block in the usual spots. No bikes in the office, sorry!

iovation provides us a light dinner (usually sandwiches or pizza).

Elevators open at 5:45 and building security closes access to the floor at 6:30.

See you there!

Fixing "Symbol not found: _PyUnicodeUCS2_Compare and _PyUnicodeUCS2_AsASCIIString"

By Michael Samoylov from Django community aggregator: Community blog posts. Published on Sep 08, 2016.

I've got a new client with a legacy Django project running on Python 2.7 with Pandas.

When I was configuring my local OS X environment I got a bunch of nasty errors:

ImportError: dlopen(/Users/mike/.pyenv/versions/project/lib/python2.7/site-packages/pandas/, 2): Symbol not found: _PyUnicodeUCS2_Compare  
  Referenced from: /Users/mike/.pyenv/versions/project/lib/python2.7/site-packages/pandas/
  Expected in: flat namespace
 in /Users/mike/.pyenv/versions/project/lib/python2.7/site-packages/pandas/

My first assumption was to upgrade Pandas from 0.15.1 to the most recent 0.18.1, but that didn't help. I rolled back to Pandas 0.15.1 to avoid any possible regressions.

After ten minutes of reading traceback messages and googling, I managed to fix that error by running $ pip install -U cython.

After that I got:

  ImportError: dlopen(/Users/mike/.pyenv/versions/2.7.12/envs/project/lib/python2.7/site-packages/numpy/core/, 2): Symbol not found: _PyUnicodeUCS2_AsASCIIString
    Referenced from: /Users/mike/.pyenv/versions/2.7.12/envs/project/lib/python2.7/site-packages/numpy/core/
    Expected in: flat namespace
   in /Users/mike/.pyenv/versions/2.7.12/envs/project/lib/python2.7/site-packages/numpy/core/

It seemed to be fixed in the most recent Numpy version, so I ran $ pip install -U numpy pandas, and the error disappeared.
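For background: the `_PyUnicodeUCS2_*` symbols come from Python 2's compile-time choice between narrow (UCS-2) and wide (UCS-4) unicode builds. A C extension compiled against one build cannot load into the other, which is why rebuilding the extensions against the current interpreter (effectively what the pip upgrades did) makes the errors disappear. On Python 3.3+ the distinction no longer exists (PEP 393). A quick way to check which build you are on:

```python
import sys

# sys.maxunicode is 0xFFFF on narrow (UCS-2) Python 2 builds and
# 0x10FFFF on wide (UCS-4) builds and on every Python 3.3+.
if sys.maxunicode == 0xFFFF:
    print("narrow (UCS-2) build")
else:
    print("wide (UCS-4) build or Python 3")
```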

I hope this short blog post saves you some time. You’re welcome to share your experience fixing similar non-obvious issues.

Django Tips #14 Using the Messages Framework

By Simple is Better Than Complex from Django community aggregator: Community blog posts. Published on Sep 06, 2016.

Keeping the users of your application aware of what is going on makes a huge difference in the user experience. If there is something users hate more than slow applications, it is applications that do not communicate with them.

– The user clicks in a save button.
– Nothing happens.
– So, did it save the data or not?
– User reaction after a couple of (mili)seconds: *Click!* *Click!* *Click!* *Click!*

Let’s make our users more confident and comfortable, shall we?


By default, a brand new Django project already comes with the messages framework installed. If you did not change anything in relation to the messages framework, just skip to the next section. Otherwise, set it up:

  • INSTALLED_APPS:
    • django.contrib.messages
  • MIDDLEWARE or MIDDLEWARE_CLASSES in older versions:
    • django.contrib.sessions.middleware.SessionMiddleware
    • django.contrib.messages.middleware.MessageMiddleware
  • TEMPLATES, inside OPTIONS:
    • context_processors
      • django.contrib.messages.context_processors.messages

Message Levels and Tags

Constant | Level | Tag (for CSS) | Purpose
---------+-------+---------------+-------------------------------------------------------------
DEBUG    | 10    | debug         | Development-related messages that will be ignored (or removed) in a production deployment
INFO     | 20    | info          | Informational messages for the user
SUCCESS  | 25    | success       | An action was successful
WARNING  | 30    | warning       | A failure did not occur but may be imminent
ERROR    | 40    | error         | An action was not successful or some other failure occurred

By default, Django will only display messages with level 20 (INFO) or higher. If you want to display DEBUG messages:

from django.contrib.messages import constants as message_constants
MESSAGE_LEVEL = message_constants.DEBUG

Or, if you are running into circular imports, you can add the constant value directly:

MESSAGE_LEVEL = 10  # DEBUG

You have two ways to use it. If you are using the built-in message levels (which in most cases are more than enough):

from django.contrib import messages

def password(request):
    if request.method == 'POST':
        form = PasswordChangeForm(request.user, request.POST)
        if form.is_valid():
            form.save()
            update_session_auth_hash(request, form.user)
            messages.success(request, 'Your password was updated successfully!')  # <-
            return redirect('settings:password')
        else:
            messages.warning(request, 'Please correct the error below.')  # <-
    else:
        form = PasswordChangeForm(request.user)
    return render(request, 'profiles/change_password.html', {'form': form})

And then in the template:

{% if messages %}
  <ul class="messages">
    {% for message in messages %}
      <li class="{{ message.tags }}">{{ message }}</li>
    {% endfor %}
  </ul>
{% endif %}

If the success message was added, the output would be something like that:

<ul class="messages">
  <li class="success">Your password was updated successfully!</li>
</ul>

You can also pass extra tags to the message:

messages.success(request, 'Your password was updated successfully!', extra_tags='alert')


<ul class="messages">
  <li class="success alert">Your password was updated successfully!</li>
</ul>

Built-in methods:

messages.debug(request, 'Total records updated {0}'.format(count))
messages.info(request, 'Improve your profile today!')
messages.success(request, 'Your password was updated successfully!')
messages.warning(request, 'Please correct the error below.')
messages.error(request, 'An unexpected error occured.')

# Or...

messages.add_message(request, messages.DEBUG, 'Total records updated {0}'.format(count))
messages.add_message(request, messages.INFO, 'Improve your profile today!')

# Useful to define custom levels:
CRITICAL = 50
messages.add_message(request, CRITICAL, 'A very serious error occurred.')

Extra: Bootstrap Snippet


{% for message in messages %}
  <div class="alert {{ message.tags }} alert-dismissible" role="alert">
    <button type="button" class="close" data-dismiss="alert" aria-label="Close">
      <span aria-hidden="true">&times;</span>
    </button>
    {{ message }}
  </div>
{% endfor %}

from django.contrib.messages import constants as messages

MESSAGE_TAGS = {
    messages.DEBUG: 'alert-info',
    messages.INFO: 'alert-info',
    messages.SUCCESS: 'alert-success',
    messages.WARNING: 'alert-warning',
    messages.ERROR: 'alert-danger',
}

And then to use it, add messages.html to your base.html template:


<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Simple is Better Than Complex</title>
  </head>
  <body>
    {% include 'partials/header.html' %}
    <div class="container">
      {% include 'partials/messages.html' %}
      {% block content %}
      {% endblock %}
    </div>
    {% include 'partials/footer.html' %}
  </body>
</html>

Deploying Django with Gunicorn and Supervisor

By Michael Samoylov from Django community aggregator: Community blog posts. Published on Aug 31, 2016.

Here in Monmar we deploy all Django applications with Gunicorn and Supervisor. I personally prefer Gunicorn to uWSGI because it has better configuration options and more predictable performance.

In this article we will be deploying a typical Django application. We won't be using async workers because we're just serving HTML and there are no heavy-lifting tasks in the background.

I'm assuming that our Django project was created with Cookiecutter Django, an awesome boilerplate developed by Daniel Roy Greenfeld. I strongly recommend using it for every new Django project.

Our requirements/production.txt file should already contain these two packages:

# WSGI Handler
# ------------------------------------------------

So, we only need to SSH to the server, create a virtualenv and populate it with packages:

$ virtualenv --python=python3.5 .venv
$ source .venv/bin/activate
$ pip3 install -r requirements.txt

Next, we're going to add missing configs to our working copy:

$ mkdir -p deploy/production && cd deploy/production
$ touch nginx.conf supervisor.conf

I'm providing the real configuration files we're using for our company website, so please update them for your project's needs.

bind = 'unix:/tmp/gunicorn-monmar.sock'  
workers = 2  
timeout = 30  


upstream monmar {  
    server unix:/tmp/gunicorn-monmar.sock fail_timeout=0;
    keepalive 60;
}

server {  
    listen 80;
    listen [::]:80;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {  
    listen 443;
    listen [::]:443;

    access_log /var/log/nginx/monmar.access.log;
    error_log /var/log/nginx/monmar.error.log;

    ssl on;
    ssl_certificate /etc/ssl/private/monmar-chain.pem;
    ssl_certificate_key /etc/ssl/private/monmar-private.key;

    ssl_dhparam /etc/ssl/private/dhparams.pem;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;

    resolver valid=300s;
    resolver_timeout 5s;

    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    add_header X-Content-Type-Options nosniff;

    client_max_body_size 200M;

    location /admin {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;

        add_header Cache-Control "no-cache, private, no-store, must-revalidate, max-stale=0, post-check=0, pre-check=0";

        proxy_pass http://monmar;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;

        proxy_cache STATIC;
        proxy_cache_valid 200 10m;
        proxy_cache_valid 404 1m;

        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
        proxy_ignore_headers Set-Cookie;
        proxy_hide_header Set-Cookie;

        proxy_pass http://monmar;
    }

    location /media {
        alias   /var/www/monmar/media;
        expires max;
        access_log off;
    }

    location /static {
        alias   /var/www/monmar/staticfiles;
        expires max;
        access_log off;
    }
}
Please note that we're using a nginx cache zone for the best performance. You can learn more about this technique in my previous article.

#!/usr/bin/env bash

cd /var/www/monmar

export $(cat .env) && exec .venv/bin/gunicorn config.wsgi -c deploy/  

Please pay attention that we're not calling .venv/bin/gunicorn directly, but wrapping the call with exec. If you don't do that, your Gunicorn process won't be properly supervised, and you won't be able to stop and restart it cleanly.
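The effect of exec can be demonstrated in isolation: instead of forking a child, exec replaces the wrapper shell with the command, so the command inherits the shell's PID, which is exactly what lets supervisord track and signal Gunicorn directly. A quick sketch:

```shell
# Print the wrapper shell's PID, then exec a child that prints its
# own PID: because exec replaces the shell process, the two match.
pids=$(bash -c 'echo $$; exec sh -c "echo \$\$"')
echo "$pids"
```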



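The supervisor.conf contents are not reproduced above; a minimal sketch is below (the program name, paths, wrapper-script name, and user are assumptions pieced together from the rest of the article, not the author's actual file):

```ini
[program:monmar]
command=/var/www/monmar/deploy/production/gunicorn_start.sh
directory=/var/www/monmar
user=www-data
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/supervisor/monmar.log
```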
It's time to commit your changes and deploy them on your server. In the next article I will explain how we automate deployment of our Django projects with Fabric. But, now we will just deploy manually.

Let's install Supervisor on our server. I'm assuming that we're using Ubuntu.

$ sudo apt-get update && sudo apt-get install supervisor

Now let's create symlinks and run our Django application. A friendly reminder: I'm referring to an existing website as an example, so please update everything according to your Django project.

$ sudo ln -sf /var/www/monmar/deploy/production/supervisor.conf /etc/supervisor/conf.d/monmar.conf
$ sudo ln -sf /var/www/monmar/deploy/production/nginx.conf /etc/nginx/sites-enabled/monmar
$ sudo chown -R www-data:www-data /var/www/monmar
$ sudo supervisorctl reload  # Load a new config
$ sudo supervisorctl start monmar
$ sudo service nginx restart

Your application should be accessible now. You're welcome to ask questions.

How to Work With AJAX Request With Django

By Simple is Better Than Complex from Django community aggregator: Community blog posts. Published on Aug 29, 2016.

There are many scenarios where you may want to use AJAX requests in your web application. It is a great resource that enables web applications to be faster and more dynamic. For this short tutorial I will be using jQuery to ease the implementation.

You can choose a different framework, or even implement it using JavaScript only. But the concept will remain the same.

Initial Setup

Here is what my base.html template looks like:

{% load static %}<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <title>{% block title %}Default Title{% endblock %}</title>
    <link rel="stylesheet" type="text/css" href="{% static 'css/app.css' %}">
    {% block stylesheet %}{% endblock %}
  </head>
  <body>
    {% block content %}
    {% endblock %}
    <script src=""></script>
    <script src="{% static 'js/app.js' %}"></script>
    {% block javascript %}{% endblock %}
  </body>
</html>

The jQuery library and all the JavaScript resources stay at the end of the HTML page for two reasons: to guarantee the DOM will be loaded when the script is executed, and to avoid inline scripts (at least scripts that use jQuery).

All the extra or page specific JavaScript goes inside the {% block javascript %}{% endblock %} block.

Sample Scenario

Let’s say you want to validate the username field in a sign up view as soon as the user finishes typing the desired username. You just want to do a simple check: whether the username is already taken or not.

from django.contrib.auth.forms import UserCreationForm
from django.views.generic.edit import CreateView

class SignUpView(CreateView):
    template_name = 'core/index.html'
    form_class = UserCreationForm

from django.conf.urls import url
from core import views

urlpatterns = [
    url(r'^signup/$', views.SignUpView.as_view(), name='signup'),
]


{% extends 'base.html' %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    {{ form.as_p }}
    <button type="submit">Sign up</button>
  </form>
{% endblock %}

The view looks like that:

Sign Up View

Ajax Request

Let’s implement an asynchronous request to validate if the username is already taken or not.

First we gotta have a look on the HTML generated by the {{ form.as_p }}. We want to inspect the username field, which looks like that:

<input type="text" required="" name="username" maxlength="150" id="id_username" autofocus="">

What we need here is its ID, which is id_username. Let’s create a listener for the username’s field change event:


{% extends 'base.html' %}

{% block javascript %}
  <script>
    $("#id_username").change(function () {
      console.log( $(this).val() );
    });
  </script>
{% endblock %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    {{ form.as_p }}
    <button type="submit">Sign up</button>
  </form>
{% endblock %}

This is something I still like to do nowadays, even working with jQuery every single day. Before moving forward, make sure you got the listener right. Make sure the event is firing correctly. In this case, as the name suggests, the change event will occur every time the value of the username field changes.

You can check the output of the console.log() function in your web browser console, generally by right-clicking on the page and choosing a menu item that says Inspect or something similar.

Let’s create a view that checks if a given username is taken, and returns the result as JSON.

from django.contrib.auth.models import User
from django.http import JsonResponse

def validate_username(request):
    username = request.GET.get('username', None)
    data = {
        'is_taken': User.objects.filter(username__iexact=username).exists()
    }
    return JsonResponse(data)
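The only subtlety in the lookup above is username__iexact, which makes the check case-insensitive. Outside the ORM, the equivalent logic is just a lowercased comparison; a quick sketch (the username list here is made up):

```python
# Case-insensitive membership test, mirroring what
# User.objects.filter(username__iexact=username).exists() checks in SQL.
existing_usernames = ['John', 'jane', 'ADMIN']  # hypothetical data

def is_taken(username):
    wanted = username.lower()
    return any(u.lower() == wanted for u in existing_usernames)

print(is_taken('john'))    # True: matches 'John' regardless of case
print(is_taken('joanna'))  # False
```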

Add a route to this view:

from django.conf.urls import url
from core import views

urlpatterns = [
    url(r'^signup/$', views.SignUpView.as_view(), name='signup'),
    url(r'^ajax/validate_username/$', views.validate_username, name='validate_username'),
]

And then the simplest implementation would be:


{% extends 'base.html' %}

{% block javascript %}
  <script>
    $("#id_username").change(function () {
      var username = $(this).val();
      $.ajax({
        url: '/ajax/validate_username/',
        data: {
          'username': username
        },
        dataType: 'json',
        success: function (data) {
          if (data.is_taken) {
            alert("A user with this username already exists.");
          }
        }
      });
    });
  </script>
{% endblock %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    {{ form.as_p }}
    <button type="submit">Sign up</button>
  </form>
{% endblock %}

AJAX Example

Improving the Example

The example is working, and that’s great. But there are a few details we can improve:

  • Hardcoded URL inside the Ajax function
  • User message embedded in the JavaScript

Since we are inside a Django template, we could do something like that:

  url: '{% url "validate_username" %}',

It is already better, but I don’t really like this strategy, for two reasons: you won’t be able to extract this JavaScript code to an external file, and we have the template engine modifying our JavaScript. If you find yourself writing code to write JavaScript code, give it another thought; there might be a better solution.

Now, what I usually like to do:

from django.contrib.auth.models import User
from django.http import JsonResponse

def validate_username(request):
    username = request.GET.get('username', None)
    data = {
        'is_taken': User.objects.filter(username__iexact=username).exists()
    }
    if data['is_taken']:
        data['error_message'] = 'A user with this username already exists.'
    return JsonResponse(data)

Extracting the user message to the Python code makes it easier to work with translations.


{% extends 'base.html' %}

{% block javascript %}
  <script>
    $("#id_username").change(function () {
      var form = $(this).closest("form");
      $.ajax({
        url: form.attr("data-validate-username-url"),
        data: form.serialize(),
        dataType: 'json',
        success: function (data) {
          if (data.is_taken) {
            alert(data.error_message);
          }
        }
      });
    });
  </script>
{% endblock %}

{% block content %}
  <form method="post" data-validate-username-url="{% url 'validate_username' %}">
    {% csrf_token %}
    {{ form.as_p }}
    <button type="submit">Sign up</button>
  </form>
{% endblock %}

Ideally I would put the data-validate-username-url attribute directly on the username field, but here we would need to expand all the fields manually instead of relying on {{ form.as_p }}. So in this case I can live with the attribute in the form tag.

Generally speaking: try to avoid modifying JavaScript code with Python code, keep the URL references in the HTML and manage user messages in the Python code.

Another tip: if you have direct access to the HTML field, prefer adding a class name like this:

<input type="text" name="username" id="id_username" class="js-validate-username">

And then you hook the change event to the class js-validate-username instead. The js- prefix suggests that there is JavaScript code that interacts with this element. Use this prefix for JavaScript only; never use it for styling the component via CSS as well.

Django to Ember #1

By Eat Some Code from Django community aggregator: Community blog posts. Published on Aug 29, 2016.

If you know Django and want to learn Ember (or vice-versa), this guide is for you. To get started, let's see what happens when a user opens the `/contact` page in both frameworks.

Simple qgis plugin repo

By Reinout van Rees' weblog from Django community aggregator: Community blog posts. Published on Aug 28, 2016.

At my company, we sometimes build QGIS plugins. You can install those by hand by unzipping a zipfile in the correct directory, but there's a nicer way.

You can add custom plugin "registries" to QGIS and QGIS will then treat your plugins just like regular ones. Here's an example registry: . Simple, right? Just a directory on our webserver. The URL you have to configure inside QGIS as registry is that of the plugins.xml file: .

The plugins.xml has a specific format:

<?xml version="1.0"?>
<plugins>
  <pyqgis_plugin name="GGMN lizard integration" version="1.6">
    <description>Download GGMN data from lizard, interpolate and add new points</description>
    <author_name>Reinout van Rees, Nelen &amp; Schuurmans</author_name>
    ...
  </pyqgis_plugin>

  .... more plugins ...

</plugins>
As you see, the format is reasonably simple. There's one directory on the webserver that I "scp" the zipfiles with the plugins to. I then run this script on the directory. That script extracts the (mandatory) metadata.txt from all zipfiles and creates a plugins.xml file out of it.

A gotcha regarding the zipfiles: they should contain the version number, but, in contrast to python packages, the version should be prefixed by a dot instead of a dash. It took me a while before I figured that one out!

About the metadata.txt: QGIS has a "plugin builder" plugin that generates a basic plugin structure for you. This structure includes a metadata.txt, so that's easy.

(In case you want to use zest.releaser to release your plugin, you can extend zest.releaser to understand the metadata.txt format. It also generates a correctly-named zipfile.)

How to Create a One Time Link

By Simple is Better Than Complex from Django community aggregator: Community blog posts. Published on Aug 24, 2016.

Django uses a very interesting approach to generate its password reset tokens. I’m not really a security expert, nor am I very familiar with cryptography algorithms, but it is very safe and reliable.

Before I elaborate a little bit more on the one-time-link generation, I want to discuss Django’s PasswordResetTokenGenerator implementation, because what we will be doing is actually extending this particular class to fit our needs.

Generally speaking, Django generates a token without persisting it in the database. Yet, it still has the capability of determining whether a given token is valid or not. Also, the token is only valid for a defined number of days.

The default timeout for the password reset token is 3 days, and it can be changed in the settings by changing the value of PASSWORD_RESET_TIMEOUT_DAYS.

The class has two public methods:

  • make_token(user)
  • check_token(user, token)

The make_token method will generate a hash value with user-related data that will change after the password reset. Meaning, after the user clicks on the link with the hash and proceeds to reset the password, the link (or the hash) will no longer be valid:

def _make_hash_value(self, user, timestamp):
    # Ensure results are consistent across DB backends
    login_timestamp = '' if user.last_login is None else user.last_login.replace(microsecond=0, tzinfo=None)
    return (
        six.text_type(user.pk) + user.password +
        six.text_type(login_timestamp) + six.text_type(timestamp)
    )

And then this hash value is used to create a hash that will be mailed to the user:

    def _make_token_with_timestamp(self, user, timestamp):
        # timestamp is number of days since 2001-1-1.  Converted to
        # base 36, this gives us a 3 digit string until about 2121
        ts_b36 = int_to_base36(timestamp)

        hash = salted_hmac(
            self.key_salt,
            self._make_hash_value(user, timestamp),
        ).hexdigest()[::2]
        return "%s-%s" % (ts_b36, hash)

So, two things: it uses the user.password hash and the user.last_login timestamp. Both will change after a successful reset, so the link will no longer be valid. Also, the SECRET_KEY is used in the salted_hmac function, so unless your SECRET_KEY was compromised, it would be impossible to reproduce the hash value.

Creating your own token

So basically you will need some piece of information that will change after the link is used. The simplest approach would be:

from django.contrib.auth.tokens import PasswordResetTokenGenerator
from django.utils import six

class AccountActivationTokenGenerator(PasswordResetTokenGenerator):
    def _make_hash_value(self, user, timestamp):
        return (
            six.text_type(user.pk) + six.text_type(timestamp) +
            six.text_type(user.profile.email_confirmed)
        )

account_activation_token = AccountActivationTokenGenerator()

I’m pretending we have a User model with a Profile model attached through a one-to-one relationship, and in this Profile model we have a boolean flag named email_confirmed.

In order to use it, we could use the same approach as the password reset:

urlpatterns = [
    url(r'^activate/(?P<uidb64>[0-9A-Za-z_\-]+)/(?P<token>[0-9A-Za-z]{1,13}-[0-9A-Za-z]{1,20})/$',
        views.ActivateAccountView.as_view(), name='activate_account'),
]

from django.contrib.auth import login
from django.contrib.auth.models import User
from django.shortcuts import redirect, render
from django.utils.encoding import force_text
from django.utils.http import urlsafe_base64_decode
from django.views.generic import View

class ActivateAccountView(View):
    def get(self, request, uidb64, token):
        try:
            uid = force_text(urlsafe_base64_decode(uidb64))
            user = User.objects.get(pk=uid)
        except (TypeError, ValueError, OverflowError, User.DoesNotExist):
            user = None

        if user is not None and account_activation_token.check_token(user, token):
            user.profile.email_confirmed = True
            login(request, user)
            return redirect('profile')
        else:
            # invalid link
            return render(request, 'registration/invalid.html')

Of course there are cases and cases. Sometimes it will be way easier to generate a random token, save it in the database, and simply check and invalidate it after it is “used”. But if that’s not the case, you can draw inspiration from how Django implements the password reset token.

Django Tips #13 Using F() Expressions

By Simple is Better Than Complex from Django community aggregator: Community blog posts. Published on Aug 23, 2016.

In the Django QuerySet API, F() expressions are used to refer to model field values directly in the database. Let’s say you have a Product class with a price field, and you want to increase the price of all products by 20%.

A possible solution would be:

products = Product.objects.all()
for product in products:
    product.price *= 1.2

Instead you could use an F() expression to update it in a single query:

from django.db.models import F

Product.objects.update(price=F('price') * 1.2)

You can also do it for a single object:

product = Product.objects.get(pk=5009)
product.price = F('price') * 1.2

But take care with this kind of assignment: the F() object persists after saving the model.

product.price                   # price = Decimal('10.00')
product.price = F('price') + 1                  # price = Decimal('11.00') = 'What the F()'                  # price = Decimal('12.00')

So, basically after updating a field like that, product.price will hold an instance of django.db.models.expressions.CombinedExpression, instead of the actual result. If you want to access the result immediately:

product.price = F('price') + 1
print(product.price)            # <CombinedExpression: F(price) + Value(1)>
print(product.price)            # Decimal('13.00')

You can also use it to annotate data:

from django.db.models import ExpressionWrapper, DecimalField

    value_in_stock=ExpressionWrapper(
        F('price') * F('stock'), output_field=DecimalField()
    )
)

Since price is a DecimalField and stock is an IntegerField, we need to wrap the expression inside an ExpressionWrapper object.

It can be used to filter data as well, comparing two fields of the same model directly in the database; for instance (using a hypothetical ordered field):

Product.objects.filter(stock__lt=F('ordered'))

Dealing With QueryString Parameters

By Simple is Better Than Complex from Django community aggregator: Community blog posts. Published on Aug 22, 2016.

It is kinda tough to describe what the problem really is. But, you know when you are creating an interface where you provide pagination, filters and ordering, and you are controlling it all via URL GET parameters?

For instance, if you have different options for ordering, you might think of something like this:

<div class="dropdown">
  <button class="btn btn-default dropdown-toggle" type="button" data-toggle="dropdown">
    Order by
  </button>
  <ul class="dropdown-menu">
    <li><a href="?order=name">Name (a-z)</a></li>
    <li><a href="?order=-name">Name (z-a)</a></li>
    <li><a href="?order=price">Price</a></li>
    <li><a href="?order=date">Date</a></li>
  </ul>
</div>

Basically you would be sending the user to the very same page, but passing a GET parameter named order, which you could handle like this:

def products_list(request):
    products = Product.objects.all()
    order = request.GET.get('order', 'name')  # Set 'name' as a default value
    products = products.order_by(order)
    return render(request, 'products_list.html', {
        'products': products
    })

PS: This is a minimalist example; if you pass an invalid parameter directly in the querystring, the queryset will break. I will avoid adding extra validations so we can focus on the objective of this article.

So far so good. But the problem starts to appear when you add a new control, also via GET parameter. Let’s say a pagination:

<ul class="pagination">
  {% for i in page_obj.paginator.page_range %}
    <li>
      <a href="?page={{ i }}">{{ i }}</a>
    </li>
  {% endfor %}
</ul>

What would happen here: if you are ordering the results by date and then you move to the next page, you will lose the ordering preference.

The easiest solution would be something like that:

def products_list(request):
    # ... same ordering and pagination logic as before ...
    return render(request, 'products_list.html', {
        'products': products,
        'order': order,
        'page': page
    })

And then:

<div class="dropdown">
  <button class="btn btn-default dropdown-toggle" type="button" data-toggle="dropdown">
    Order by
  </button>
  <ul class="dropdown-menu">
    <li><a href="?order=name&page={{ page }}">Name (a-z)</a></li>
    <li><a href="?order=-name&page={{ page }}">Name (z-a)</a></li>
    <li><a href="?order=price&page={{ page }}">Price</a></li>
    <li><a href="?order=date&page={{ page }}">Date</a></li>
  </ul>
</div>


<ul class="pagination">
  {% for i in page_obj.paginator.page_range %}
    <li>
      <a href="?page={{ i }}&order={{ order }}">{{ i }}</a>
    </li>
  {% endfor %}
</ul>

The bigger the number of parameters, the bigger the mess in the template.

The Solution

Last week while working on a project I faced this problem again, and I put some time to think of a better/reusable solution.

So I came up with this template tag, and I thought about sharing it. Maybe it can be useful for you as well. Basically you will need django.template.context_processors.request in your project’s context_processors.


from django import template

register = template.Library()

def relative_url(value, field_name, urlencode=None):
    url = '?{}={}'.format(field_name, value)
    if urlencode:
        querystring = urlencode.split('&')
        filtered_querystring = filter(lambda p: p.split('=')[0] != field_name, querystring)
        encoded_querystring = '&'.join(filtered_querystring)
        url = '{}&{}'.format(url, encoded_querystring)
    return url
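Since the tag body is plain string manipulation, you can see what it produces without Django at all; for example, switching the order parameter while a page parameter is present:

```python
def relative_url(value, field_name, urlencode=None):
    # Same logic as the template tag: start with the new parameter...
    url = '?{}={}'.format(field_name, value)
    if urlencode:
        # ...then re-append every existing parameter except the one replaced.
        querystring = urlencode.split('&')
        filtered_querystring = filter(lambda p: p.split('=')[0] != field_name, querystring)
        encoded_querystring = '&'.join(filtered_querystring)
        url = '{}&{}'.format(url, encoded_querystring)
    return url

print(relative_url('price', 'order', 'order=name&page=2'))  # ?order=price&page=2
print(relative_url(3, 'page', 'order=-name&page=2'))        # ?page=3&order=-name
```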

And then you use it this way:

{% load templatehelpers %}

<div class="dropdown">
  <button class="btn btn-default dropdown-toggle" type="button" data-toggle="dropdown">
    Order by
  </button>
  <ul class="dropdown-menu">
    {% with params=request.GET.urlencode %}
      <li><a href="{% relative_url 'name' 'order' params %}">Name (a-z)</a></li>
      <li><a href="{% relative_url '-name' 'order' params %}">Name (z-a)</a></li>
      <li><a href="{% relative_url 'price' 'order' params %}">Price</a></li>
      <li><a href="{% relative_url 'date' 'order' params %}">Date</a></li>
    {% endwith %}
  </ul>
</div>


<ul class="pagination">
  {% for i in page_obj.paginator.page_range %}
    <li>
      <a href="{% relative_url i 'page' request.GET.urlencode %}">{{ i }}</a>
    </li>
  {% endfor %}
</ul>

Now the template tag will keep the current state of your filters/ordering/pages.

Better Python Object Serialization

By Blog of Hynek Schlawack from Django community aggregator: Community blog posts. Published on Aug 22, 2016.

Serialization is everywhere. Notably for me: JSON web APIs and structured logs. And yet, general approaches to serialize arbitrary objects to JSON are rather clunky. Fortunately there’s a gem in the Python standard library that makes it easy and elegant.

Django Tips #12 Disabling Migrations to Speed Up Unit Tests

By Simple is Better Than Complex from Django community aggregator: Community blog posts. Published on Aug 19, 2016.

The model migrations are certainly a great feature of the Django framework. But when it comes down to running tests, they really slow down the process, especially if your migration history is big. This is a simple tip to speed up your tests.

I like to create a separate settings file for these tweaks.

from settings import *

# Custom settings goes here

And then to run the tests:

python test --settings=myproject.tests_settings --verbosity=1

Django >= 1.9

One option is using the MIGRATION_MODULES setting, which is intended to define a custom name for an app’s migration module. If you set None instead, Django will ignore the migration module.

from settings import *

MIGRATION_MODULES = {
    'auth': None,
    'contenttypes': None,
    'default': None,
    'sessions': None,

    'core': None,
    'profiles': None,
    'snippets': None,
    'scaffold_templates': None,
}

Django < 1.9

This is a possible solution if you are using a version prior to 1.9. Actually, I still prefer to use it nowadays, because I don’t need to list each app.

from settings import *

class DisableMigrations(object):
    def __contains__(self, item):
        return True

    def __getitem__(self, item):
        return 'notmigrations'

MIGRATION_MODULES = DisableMigrations()
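The trick works because Django only ever performs membership tests and item lookups on MIGRATION_MODULES. The class answers both for any app label, which you can verify in isolation:

```python
class DisableMigrations(object):
    # Pretend every app has a custom migrations module configured...
    def __contains__(self, item):
        return True

    # ...and point all of them at a module name that does not exist,
    # so Django finds no migrations to apply.
    def __getitem__(self, item):
        return 'notmigrations'

MIGRATION_MODULES = DisableMigrations()

print('auth' in MIGRATION_MODULES)        # True
print(MIGRATION_MODULES['any_app_name'])  # notmigrations
```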

Older Django Versions (using South)

Hold tight:

SOUTH_TESTS_MIGRATE = False

Damn! It could even live inside the production settings.

Package of the Week: Pendulum

By Simple is Better Than Complex from Django community aggregator: Community blog posts. Published on Aug 18, 2016.

Pendulum is a Python library to make your life easier when it comes down to working with date/time.


pip install pendulum

A few dependencies will be installed: pytz, tzlocal and python-dateutil. python-dateutil by itself is already great, but pendulum offers a few more tweaks.


import pendulum

now =  # '2016-08-18 20:24:52'

Let’s keep the now variable for the next examples.

now.add(days=1)  # '2016-08-19 20:24:52'

Accepts days, months, years, hours, minutes, seconds, microseconds, weeks.

now.add(weeks=1, hours=2)  # '2016-08-25 22:24:52'
now.subtract(weeks=1)  # '2016-08-11 20:24:52'
birthday = pendulum.create(1988, 10, 22)
birthday.age  # 27
now ='Europe/Helsinki')
future = now.add(hours=12)
past = now.subtract(years=50, days=12)

future.diff()             # '<Period [11 hours 59 minutes 41 seconds]>'
future.diff_for_humans()  # '11 hours from now'
past.diff_for_humans()    # '50 years ago'
last_week = now.subtract(weeks=1)
delta = now - last_week

delta.start  # <Pendulum [2016-08-05T00:28:26.207225+03:00]>
delta.end    # <Pendulum [2016-08-12T00:28:26.207225+03:00]>

delta.in_days()      # 7
delta.in_hours()     # 168
delta.in_weekdays()  # 6
delta.in_words()     # '1 week'
it = pendulum.interval(days=15)

it.weeks       # 2
it.days        # 15
it.in_hours()  # 360
it.in_words()  # '2 weeks 1 day'
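For context, the standard library’s datetime.timedelta only knows about days, seconds and weeks; there is no month or year arithmetic, which is part of what pendulum’s add() and subtract() bring:

```python
import datetime

now = datetime.datetime(2016, 8, 18, 20, 24, 52)

# Days and weeks are easy with the stdlib alone...
print(now + datetime.timedelta(days=1))   # 2016-08-19 20:24:52
print(now - datetime.timedelta(weeks=1))  # 2016-08-11 20:24:52

# ...but timedelta(months=1) raises TypeError; month arithmetic
# needs dateutil.relativedelta or pendulum's add(months=1).
try:
    datetime.timedelta(months=1)
except TypeError as error:
    print('unsupported:', error)
```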




now =          # 2016-08-18 20:24:52
next_year = now.add(years=1)  # 2017-08-18 20:24:52

now.is_birthday(next_year)    # True
now.is_same_day(next_year)    # False
Class methods:

import datetime

date = datetime.datetime(2016, 1, 1)  # datetime.datetime(2016, 1, 1, 0, 0)
pendulum.instance(date)               # <Pendulum [2016-01-01T00:00:00+00:00]>                        # <Pendulum [2016-08-18T21:12:27.684083+03:00]>
pendulum.utcnow()                     # <Pendulum [2016-08-18T21:12:56.711154+00:00]>                      # <Pendulum [2016-08-18T00:00:00+03:00]>
pendulum.tomorrow()                   # <Pendulum [2016-08-19T00:00:00+03:00]>
pendulum.yesterday()                  # <Pendulum [2016-08-17T00:00:00+03:00]>

pendulum.create(2017, 6, 1)          # <Pendulum [2017-06-01T21:17:11.868599+00:00]>

pendulum.parse('2016-08-18')         # <Pendulum [2016-08-18T00:00:00+00:00]>

There is much more to explore. See the official documentation.

Exploring Django Utils #2

By Simple is Better Than Complex from Django community aggregator: Community blog posts. Published on Aug 17, 2016.

Last week I started a post series exploring the django.utils module. In this second part I will be focusing more on the html module.


Module: django.utils.html


escape(text): returns the given text with ampersands, quotes and angle brackets encoded for use in HTML.

from django.utils.html import escape

escape("<strong style='font-size: 12px'>escaped html</strong>")
'&lt;strong style=&#39;font-size: 12px&#39;&gt;escaped html&lt;/strong&gt;'

It will cause already escaped strings to be escaped again:

escaped_html = escape("<strong>escaped html</strong>")
# '&lt;strong&gt;escaped html&lt;/strong&gt;'

# '&amp;lt;strong&amp;gt;escaped html&amp;lt;/strong&amp;gt;'

If this is a concern, use conditional_escape() instead, which does not escape strings that are already marked as safe:

from django.utils.html import conditional_escape

escaped_html = conditional_escape("<strong>escaped html</strong>")
# '&lt;strong&gt;escaped html&lt;/strong&gt;'

# '&lt;strong&gt;escaped html&lt;/strong&gt;'

format_html(format_string, *args): this function is similar to str.format, but it will conditionally escape all the arguments. Prefer it to str.format or string interpolation when building small HTML fragments, as it is safer.

from django.utils.html import format_html

format_html('<div class="alert {}">{}</>', 'warning', 'Watch out!')
'<div class="alert warning">Watch out!</>'

Safely format HTML fragments:

format_html('<div class="alert {}">{}</>', '<script>alert(1);</script>', 'Watch out!')
'<div class="alert &lt;script&gt;alert(1);&lt;/script&gt;">Watch out!</>'
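What format_html does can be approximated with the standard library’s html.escape; a rough sketch (Django’s real implementation also handles lazy strings and marks the result safe):

```python
import html

def format_html_sketch(format_string, *args):
    # Escape every argument before interpolating it, so markup
    # smuggled in through the arguments is neutralized.
    return format_string.format(*(html.escape(str(arg), quote=False) for arg in args))

print(format_html_sketch('<div class="alert {}">{}</div>', 'warning', 'Watch out!'))
# <div class="alert warning">Watch out!</div>

print(format_html_sketch('<div class="alert {}">{}</div>', '<script>alert(1);</script>', 'Watch out!'))
# <div class="alert &lt;script&gt;alert(1);&lt;/script&gt;">Watch out!</div>
```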

format_html_join(sep, format_string, args_generator): a wrapper of format_html, for the common case of a group of arguments that need to be formatted using the same format string.

format_html_join('\n', '<p>{}</p>', ['a', 'b', 'c'])
'<p>a</p>\n<p>b</p>\n<p>c</p>'

Another example:

data = [
    ['success', 'Success message'],
    ['warning', 'Watch out!'],
    ['danger', 'Danger!!'],
]

format_html_join('\n', '<div class="alert {0}">{1}</div>', data)
<div class="alert success">Success message</div>\n
<div class="alert warning">Watch out!</div>\n
<div class="alert danger">Danger!!</div>

Yet another example:

format_html_join('\n', '<tr><td>{0}</td><td>{1}</td></tr>', ((u.first_name, u.last_name)
                                                            for u in users))
linebreaks(text): converts newlines in plain text into HTML line breaks and paragraphs.

from django.utils.html import linebreaks

linebreaks('convert\ninto html paragraphs\ntest')
'<p>convert<br />into html paragraphs<br />test</p>'

Django Tips #11 Custom Manager With Chainable QuerySets

By Simple is Better Than Complex from Django community aggregator: Community blog posts. Published on Aug 16, 2016.

In a Django model, the Manager is the interface that interacts with the database. By default the manager is available through the Model.objects property. The default manager every Django model gets out of the box is the django.db.models.Manager. It is very straightforward to extend it and change the default manager.

from django.db import models

class DocumentManager(models.Manager):
    def pdfs(self):
        return self.filter(file_type='pdf')

    def smaller_than(self, size):
        return self.filter(size__lt=size)

class Document(models.Model):
    name = models.CharField(max_length=30)
    size = models.PositiveIntegerField(default=0)
    file_type = models.CharField(max_length=10, blank=True)

    objects = DocumentManager()

With that you will be able to retrieve all pdf files like this:

Document.objects.pdfs()

The thing is, this method is not chainable. I mean, you can still use order_by or filter on the result:

Document.objects.pdfs().order_by('name')

But if you try to chain the methods it will break:

Document.objects.pdfs().smaller_than(1000)
AttributeError: 'QuerySet' object has no attribute 'smaller_than'

To make it work you must create custom QuerySet methods:

class DocumentQuerySet(models.QuerySet):
    def pdfs(self):
        return self.filter(file_type='pdf')

    def smaller_than(self, size):
        return self.filter(size__lt=size)

class DocumentManager(models.Manager):
    def get_queryset(self):
        return DocumentQuerySet(self.model, using=self._db)  # Important!

    def pdfs(self):
        return self.get_queryset().pdfs()

    def smaller_than(self, size):
        return self.get_queryset().smaller_than(size)

class Document(models.Model):
    name = models.CharField(max_length=30)
    size = models.PositiveIntegerField(default=0)
    file_type = models.CharField(max_length=10, blank=True)

    objects = DocumentManager()

Now you can use it just like any other QuerySet method:

Document.objects.pdfs().smaller_than(1000)

You can keep this code alongside your models. But as the code base grows, I prefer to keep the Managers and QuerySets in a separate module.
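The chaining mechanics themselves have nothing Django-specific about them: each method returns a new object of the same type, so calls can be stacked in any order. A minimal stand-alone sketch (plain Python, not Django’s actual QuerySet):

```python
class DocumentQuerySetSketch:
    def __init__(self, items):
        self.items = list(items)

    def filter(self, predicate):
        # Returning a new instance of the same class is what makes chaining work.
        return DocumentQuerySetSketch(d for d in self.items if predicate(d))

    def pdfs(self):
        return self.filter(lambda d: d['file_type'] == 'pdf')

    def smaller_than(self, size):
        return self.filter(lambda d: d['size'] < size)

docs = DocumentQuerySetSketch([
    {'name': 'report', 'file_type': 'pdf', 'size': 500},
    {'name': 'notes', 'file_type': 'txt', 'size': 100},
    {'name': 'manual', 'file_type': 'pdf', 'size': 2000},
])

result = docs.pdfs().smaller_than(1000)
print([d['name'] for d in result.items])  # ['report']
```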

How to Create a Password Confirmation View

By Simple is Better Than Complex from Django community aggregator: Community blog posts. Published on Aug 15, 2016.

It is a very common practice nowadays to keep signed-in user sessions alive for a very long time. But some pages deal with sensitive information (such as billing information or a change-email form), and it is a good idea to ask the user to confirm his/her credentials. To achieve that, we will be using the built-in check_password function and the user’s last_login field.

To illustrate a little bit more what we want to achieve, take the picture below as an example:

Github Example

No redirects; we want to keep the current URL. And make it last for a few hours.

Building the Form

I’m using a User ModelForm to hold an instance of the logged-in user. Basically I’m overriding the clean() method to validate the informed password using the built-in function check_password, which lets you test a plain-text password against its hash.

And in the same method, I’m updating the last_login with the current time. This field will be used to control when we should ask for the user’s password again.

from django import forms
from django.contrib.auth.models import User
from django.contrib.auth.hashers import check_password
from django.utils import timezone

class ConfirmPasswordForm(forms.ModelForm):
    confirm_password = forms.CharField(widget=forms.PasswordInput())

    class Meta:
        model = User
        fields = ('confirm_password', )

    def clean(self):
        cleaned_data = super(ConfirmPasswordForm, self).clean()
        confirm_password = cleaned_data.get('confirm_password')
        if not check_password(confirm_password, self.instance.password):
            self.add_error('confirm_password', 'Password does not match.')

    def save(self, commit=True):
        user = super(ConfirmPasswordForm, self).save(commit)
        user.last_login =
        if commit:
        return user

The View Function

We will only need a view function (or a class-based view), there is no need for a url route.

from django.views.generic.edit import UpdateView
from .forms import ConfirmPasswordForm

class ConfirmPasswordView(UpdateView):
    form_class = ConfirmPasswordForm
    template_name = 'core/confirm_password.html'

    def get_object(self):
        return self.request.user

    def get_success_url(self):
        return self.request.get_full_path()


{% extends 'base.html' %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    {{ form.as_p }}
    <button type="submit">Confirm credentials</button>
{% endblock %}

The Decorator

The decorator will be responsible for checking if the application needs to ask the user to confirm his/her password. It will also make it easy to reuse across multiple views.

import datetime
from functools import wraps
from django.utils import timezone

def confirm_password(view_func):
    @wraps(view_func)
    def _wrapped_view(request, *args, **kwargs):
        last_login = request.user.last_login
        timespan = last_login + datetime.timedelta(hours=6)
        if > timespan:
            from uploads.core.views import ConfirmPasswordView
            return ConfirmPasswordView.as_view()(request, *args, **kwargs)
        return view_func(request, *args, **kwargs)
    return _wrapped_view

A few things here: I’m importing the ConfirmPasswordView inside the wrapper function to avoid a circular import. With datetime.timedelta(hours=6) I’m defining that the application will ask for the user’s password if it has been more than 6 hours since his/her last login.

We can extract the hardcoded 6 hours later on.
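The time check in the decorator is plain datetime arithmetic; extracting it into a small helper (the names here are my own, not from the post) also makes the six-hour window configurable:

```python
import datetime

def needs_confirmation(last_login, now, max_age=datetime.timedelta(hours=6)):
    # Ask for the password again once the last login is older than max_age.
    return now > last_login + max_age

last_login = datetime.datetime(2016, 8, 15, 9, 0)

print(needs_confirmation(last_login, datetime.datetime(2016, 8, 15, 14, 0)))  # False: 5 hours
print(needs_confirmation(last_login, datetime.datetime(2016, 8, 15, 16, 0)))  # True: 7 hours
```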


With the form, the view and the decorator we should be good to go.

from myapp.core.decorators import confirm_password

@confirm_password
def settings(request):
    return render(request, 'core/settings.html')

If it has been more than six hours since the authentication, when the user tries to access the settings page the ConfirmPasswordView will be rendered instead, using the settings URL.


Submitting the form processes it and updates user.last_login.


That’s it. Now it is just a matter of decorating the views with @confirm_password.

Django Tips #10 AuthenticationForm Custom Login Policy

By Simple is Better Than Complex from Django community aggregator: Community blog posts. Published on Aug 12, 2016.

Since I started working with Django, I never had to spend time implementing authentication-related stuff. The built-in authentication system is great and very easy to plug in and get started with. And even if you need to customize it, Django makes it easy. That’s what this tip is about.

For the built-in login view, Django makes use of django.contrib.auth.forms.AuthenticationForm form to handle the authentication process. Basically it checks username, password and the is_active flag.

Django makes it easy to add custom verifications, as the AuthenticationForm has a method named confirm_login_allowed(user).

For example, if you are handling double opt-in email confirmation and don’t want to let users without a confirmed email log in to the application, you can do something like this:

from django import forms
from django.contrib.auth.forms import AuthenticationForm

class CustomAuthenticationForm(AuthenticationForm):
    def confirm_login_allowed(self, user):
        if not user.is_active or not user.is_validated:
            raise forms.ValidationError('There was a problem with your login.', code='invalid_login')

from django.conf.urls import url
from django.contrib.auth import views as auth_views

from .forms import CustomAuthenticationForm

urlpatterns = [
    url(r'^login/$', auth_views.login, {'template_name': 'core/login.html',
        'authentication_form': CustomAuthenticationForm}, name='login'),
    url(r'^logout/$', auth_views.logout, name='logout'),
]

Basically it is just a matter of overriding the confirm_login_allowed method and substituting the authentication_form parameter with the new form in the urlconf. You can add any login policy, and to invalidate the authentication simply raise a ValidationError.

django-html-validator - now locally, fast!

By Peter Bengtsson from Django community aggregator: Community blog posts. Published on Aug 12, 2016.

A couple of years ago I released a project called django-html-validator (GitHub link) and it's basically a Django library that takes the HTML generated inside Django and sends it in for HTML validation.

The first option is to send the HTML payload, over HTTPS, to an external validation service. Not only is this slow, but it also means sending potentially revealing HTML. Ideally you don't have any passwords in your HTML, and if you're doing HTML validation you're probably testing against some test data. But... it sucked.

The other alternative was to download a vnu.jar file from the project and execute it in a subprocess with java -jar vnu.jar /tmp/file.html. The problem with this is that it's really slow, because Java programs take such a long time to boot up.

But then, at the beginning of the year, some contributors breathed fresh life into the project: Python 3 support and, best of all, the ability to start the vnu.jar as a local server on http://localhost:8888 and HTTP POST HTML over to that. Now you don't have to pay the high cost of booting up a Java program, and you don't have to rely on a remote HTTP call.

Now it becomes possible to have HTML validation checked on every rendered HTML response in the Django unit tests.
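A minimal sketch of that local workflow: POST the rendered HTML to the locally running checker and parse its JSON report. This assumes the server was started with something like `java -cp vnu.jar nu.validator.servlet.Main 8888`, and that the `?out=json` query parameter selects JSON output, as described in the checker's documentation; adjust both for your vnu.jar version.

```python
import json
import urllib.request


def validate_html(html, host='localhost', port=8888):
    """POST HTML to a local vnu.jar server; return its list of messages."""
    req = urllib.request.Request(
        'http://%s:%d/?out=json' % (host, port),
        data=html.encode('utf-8'),
        headers={'Content-Type': 'text/html; charset=utf-8'},
        method='POST',
    )
    with urllib.request.urlopen(req) as resp:
        report = json.loads(resp.read().decode('utf-8'))
    # Each message has at least a "type" ("error" or "info") and "message".
    return report.get('messages', [])
```

In a Django test you would feed `response.content` from the test client into a helper like this and fail the test if any message has type "error".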

To try it, check out the new instructions on "Setting the vnu.jar path".

This was made possible by Ville "scop" Skyttä, along with other contributors. Thanks!

I'm Gonna Regret This

By chrism from . Published on Jun 14, 2016.

A plea for liberals to fight for individual rights.

Is Open Source Consulting Dead?

By chrism from . Published on Sep 10, 2013.

Has Elvis left the building? Will we be able to sustain ourselves as open source consultants?

Consulting and Patent Indemnification

By chrism from . Published on Aug 09, 2013.

Article about consulting and patent indemnification

Python Advent Calendar 2012 Topic

By chrism from . Published on Dec 24, 2012.

An entry for the 2012 Japanese advent calendar at

Why I Like ZODB

By chrism from . Published on May 15, 2012.

Why I like ZODB better than other persistence systems for writing real-world web applications.

A str.__iter__ Gotcha in Cross-Compatible Py2/Py3 Code

By chrism from . Published on Mar 03, 2012.

A bug caused by a minor incompatibility can remain latent for long periods of time in a cross-compatible Python 2 / Python 3 codebase.

In Praise of Complaining

By chrism from . Published on Jan 01, 2012.

In praise of complaining, even when the complaints are absurd.

2012 Python Meme

By chrism from . Published on Dec 24, 2011.

My "Python meme" replies.

In Defense of Zope Libraries

By chrism from . Published on Dec 19, 2011.

A much too long defense of Pyramid's use of Zope libraries.

Plone Conference 2011 Pyramid Sprint

By chrism from . Published on Nov 10, 2011.

An update about the happenings at the recent 2011 Plone Conference Pyramid sprint.

Jobs-Ification of Software Development

By chrism from . Published on Oct 17, 2011.

Try not to Jobs-ify the task of software development.

WebOb Now on Python 3

By chrism from . Published on Oct 15, 2011.

Report about porting to Python 3.

Open Source Project Maintainer Sarcastic Response Cheat Sheet

By chrism from . Published on Jun 12, 2011.

Need a sarcastic response to a support interaction as an open source project maintainer? Look no further!

Pylons Miniconference #0 Wrapup

By chrism from . Published on May 04, 2011.

Last week, I visited the lovely Bay Area to attend the 0th Pylons Miniconference in San Francisco.

Pylons Project Meetup / Minicon

By chrism from . Published on Apr 14, 2011.

In the SF Bay Area on the 28th, 29th, and 30th of this month (April), 3 separate Pylons Project events.