
Open Source Posts

Marko Tiikkaja: UPSERTisms in Postgres, part 2: congestion

From Planet PostgreSQL. Published on Apr 22, 2014.

In my previous post I benchmarked a number of different UPSERTy implementations.  However, a number of people (rightfully) asked me how the results would change under concurrency.  So I had to come up with a way of testing that. I wanted reproducible numbers, so I figured that just throwing a lot of UPSERTs at a database was not acceptable, not to mention the problems with keeping that up while

David Fetter: 9.4: Quantiles!

From Planet PostgreSQL. Published on Apr 22, 2014.

PostgreSQL 9.4 has an amazing feature hidden underneath a complicated syntax. Here's a neat way to use it.
Continue reading "9.4: Quantiles!"
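
The excerpt does not name the feature, but the 9.4 quantile support presumably refers to the new ordered-set ("WITHIN GROUP") aggregates such as percentile_cont and percentile_disc. A minimal sketch, assuming a hypothetical measurements(duration) table:

SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY duration) AS median,
       percentile_cont(ARRAY[0.25, 0.5, 0.75]) WITHIN GROUP (ORDER BY duration) AS quartiles
FROM measurements;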

Raghavendra Rao: Faster statistics update after upgrade using "vacuumdb --analyze-in-stages" in PostgreSQL 9.4

From Planet PostgreSQL. Published on Apr 22, 2014.

As you all know, after upgrading a database server from one major version to another, the ANALYZE command should be executed to update the pg_catalog statistics for the newly populated data. On a huge upgraded database, it is a challenge for the application to regain its performance without updated statistics. In PostgreSQL 9.4, the option "vacuumdb --analyze-in-stages" makes this work faster by quickly producing usable statistics for the optimizer. It runs in three stages with different configuration settings (default_statistics_target/vacuum_cost_delay) to analyze the database.

If the database was upgraded using a procedure like pg_dump/pg_restore or pg_upgrade, it is recommended to use "vacuumdb --analyze-in-stages".

Sample output:
bash-4.1$ /usr/local/pgpatch/pg/bin/vacuumdb -p 9999 --analyze-in-stages -d tester
Generating minimal optimizer statistics (1 target)
Generating medium optimizer statistics (10 targets)
Generating default (full) optimizer statistics

Marko Tiikkaja: UPSERTisms in Postgres

From Planet PostgreSQL. Published on Apr 20, 2014.

As I'm sure a lot of people know already, PostgreSQL 9.4 will eventually be released without "native" support for UPSERT.  That said, it's still one of the most common questions on the excellent #postgresql IRC channel, and people are sometimes tempted to try and solve it via TRIGGERs (hell, even RULEs), unsuccessfully. Even though people are often correctly told to "use a function" and are

Baji Shaik: Difference between Warm, hot standby and Streaming Replication:

From Planet PostgreSQL. Published on Apr 20, 2014.


We all know about replication and its types in PostgreSQL. There are three basic types of replication in PostgreSQL, i.e. warm standby, hot standby and streaming replication. I used to get confused between these three and was unable to tell the difference when I was in the initial stages of learning Postgres. However, I understood the difference later by reading the docs. So I would like to post the key differences between these 3 types of replication in short (it also helps if anyone wants to find out which one is configured in his/her environment).

Warm Standby:
==========
It was introduced in PostgreSQL 8.3 (IIRC).

1. It is based on WAL log shipping, which typically means WAL archives generated on the Master will be transferred to and applied at the Standby side. So a Warm Standby always waits for the WAL archive to which the Master is currently writing and keeps throwing messages like "cp: cannot stat <archive>: No such file or directory". So it is always one archive behind the Master, and data loss will be at most 16MB (assuming a healthy warm standby :-) )

2. In the postgresql.conf file, you need to change just three parameters on the master: wal_level to archive, archive_mode and archive_command; nothing needs to change in the postgresql.conf file on the standby side.
On Master:
wal_level = archive
archive_mode    = on
archive_command = 'cp %p /path_to/archive/%f'

3. In the recovery.conf file, three parameters are needed: standby_mode, restore_command and trigger_file (a minimal example is sketched just below).
4. You cannot connect to the Standby, so the database is not open even for read operations (read operations are not permitted on the db).
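
A minimal sketch of such a recovery.conf for a warm standby, reusing the archive and trigger-file paths shown in the streaming replication example later in this post:
standby_mode    = 'on'
restore_command = 'cp /path_to/archive/%f "%p"'
trigger_file    = '/path_to/trigger'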

Detailed explanation and related docs are here
Hot Standby:
========
It was introduced in PostgreSQL 9.0.

1. It is also based on WAL log shipping (same as warm standby). And of course, WALs will be transferred to and applied at the Standby, so it is one WAL behind and always waits for the WAL archive to which the Master is currently writing.
2. In the postgresql.conf file, you need to change wal_level to hot_standby, plus archive_mode and archive_command. Since you'll likely want to use pg_basebackup, you should also set max_wal_senders to at least 2 or 3. And set hot_standby = on in the standby's conf file.
On Master:
wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 32
archive_mode    = on
archive_command = 'cp %p /path_to/archive/%f'
On Slave:
hot_standby = on
3. In the recovery.conf file, three parameters: standby_mode, restore_command and trigger_file.
4. You can connect to the Standby for read queries (you should set hot_standby to ON in the standby's postgresql.conf file).

Detailed explanation and related docs are here.
Streaming Replication:
==============
It was introduced in PostgreSQL 9.0.
1. XLOG records generated on the primary are periodically shipped to the standby via the network. XLOG records are shipped and replayed as soon as possible, without waiting until the XLOG file has been filled. The combination of Hot Standby and SR makes the latest data inserted into the primary visible on the standby almost immediately. So there is minimal data loss (almost only open transactions will be lost with async replication, and zero loss with sync replication).
2. In the postgresql.conf file, this time 5 parameters, the streaming-related params below:
On Master:
wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 32
archive_mode    = on
archive_command = 'cp %p /path_to/archive/%f'
On Slave:
hot_standby=on
3. In the recovery.conf file, you need one extra parameter in addition to the three you add for hot/warm standby, i.e. primary_conninfo, so below are the four parameters:
standby_mode          = 'on'
primary_conninfo      = 'host=192.168.0.10 port=5432 user=postgres'
trigger_file = '/path_to/trigger'
restore_command = 'cp /path_to/archive/%f "%p"'
4. You can connect to the Standby for read queries (you should set hot_standby to ON in the standby's postgresql.conf file).
Detailed explanation and related docs are here: http://bajis-postgres.blogspot.in/2013/12/step-by-step-guide-to-setup-steaming.html

Kindly let me know if I missed anything.

gabrielle roth: What’s a Chelnik, you ask?

From Planet PostgreSQL. Published on Apr 18, 2014.

PDXPUG’s official logo is a mask-wearing Slonik:
[image: PDXPUG logo]

Hir name is Chelnik. Sort of like Slonik, but different.*

A couple years ago, my co-leader Mark Wong started making amigurumi, little crocheted stuffed animals. I think he started with Angry Birds, but pretty soon after, he made a Chelnik (the one in the banner on this blog, in fact). And then another…

He took one to PgCon 2012, and donated it to the charity auction, where it netted $500 for The Ottawa Mission.

To date, Mark’s made a dozen of these elephants for sale or auction at various Pg conferences, raising money for donations to the following organizations:
PGUS ($350), The Ottawa Mission ($680), Chicago Hopes ($640), and most recently at PGConf NYC, $1200 for Heartsong.

Thanks to Mark and everyone who’s participated in these auctions.

[image: stowaway_cropped]

* No, it doesn’t really mean anything. Beer was involved in the naming.
There’s a tumblr with some photos of the various Chels he’s distributed.


Jeff Frost: Grabbing the statement durations out of a log file using gnu grep

From Planet PostgreSQL. Published on Apr 18, 2014.

Sometimes I find myself wanting to grep all the query durations out of a log file for sorting. I recently discovered that I can use GNU grep's lookbehind syntax like so:

grep -oP '(?<=duration: )[0-9\.]+' postgresql-Mon.csv

and then sort them numerically like so:

grep -oP '(?<=duration: )[0-9\.]+' postgresql-Mon.csv | sort -n | tail

Dave Cramer: PostgreSQL JDBC example with Spring Transactions

From Planet PostgreSQL. Published on Apr 17, 2014.

Using PostgreSQL JDBC with Spring.

This article came about after a user filed an issue on the JDBC list.

The specific issue is that I’m using SimpleJdbcCall.execute() to call the database and getting back a Jdbc4Array. When I then try to do something like Jdbc4Array.getArray() I get a SQL error that can be tracked down to the Jdbc driver trying to use a connection object which has already been closed by the Spring Framework.

The problem is that once you get the array back, more work has to be done to get the values out of it. The following code is an example of how to use transactions with Spring and PostgreSQL JDBC.

A simple interface to get a value out of an array, and an implementation which does the actual work. Note the @Transactional annotation on line 22; this is required to ensure that the connection is not closed after the first call on line 42. Without this annotation, Spring would close the connection and the next line, 43, would throw an exception. The constructor on line 27 is required for Spring to create the implementation and inject the datasource defined in the context. Also included: the Spring context, and SQL to create the function and data. Code for this can be found here: SpringTransactionExample. Thanks to Michael Miller for the initial code used in this example.

Dimitri Fontaine: New York!

From Planet PostgreSQL. Published on Apr 17, 2014.

A couple of weeks ago I had the chance to participate in the PGConf NYC 2014 Conference, one of the biggest conferences about PostgreSQL worldwide.

I presented one of my favourite talks over there, where the whole goal is to blow the mind of innocent developers and show them how much they can do in just SQL.

PostgreSQL for developers, window functions galore!

The basis for the talk is this detailed blog entry about the Reset Counter application and how to leverage SQL to write the code for it.

Michael Paquier: Postgres 9.4 feature highlight: Basics about logical decoding

From Planet PostgreSQL. Published on Apr 17, 2014.

The second huge feature coming in PostgreSQL 9.4 with jsonb is called logical decoding. In short, it is a new plugin facility that can be used to decode changes that happen on a database and stream them to external sources. It can be used for many things like replication, auditing or even online upgrade solutions.

Logical decoding has been introduced in the core of PostgreSQL incrementally with a set of features that could roughly be listed as follows:

  • Logical replication slots, similar to physical slots except that they are attached to a single database.
  • A new WAL level "logical" for wal_level, the level of WAL generated by the server that is needed to decode changes to the database into a coherent format.
  • Creation of a SQL interface to view the changes of a replication slot.
  • Extension of the replication protocol to support logical replication (notably the possibility to provide a database name in the "replication" parameter of a connection string).
  • Addition of REPLICA IDENTITY, a table parameter to modify how updated and deleted tuple data is written to WAL.

Then, two new utilities are present to help users get an understanding of how things work:

  • test_decoding, which provides an example output plugin for decoding.
  • pg_recvlogical, an example utility that can be used to receive changes from a logical replication slot.

Logical decoding introduces a lot of new concepts and features, making it impossible to cover everything in a single post. Remember however that it is possible to customize the decoding plugin (test_decoding in this post) and the remote source receiving the changes (pg_recvlogical in this post). So for now, using what Postgres core offers, let's see how to simply set up logical replication. First, be sure that the following parameters are set in postgresql.conf:

wal_level = logical
max_replication_slots = 1

max_replication_slots needs to be at least 1, and test_decoding needs to be installed on your server. In order to work, logical replication first needs a logical replication slot, which can be created using pg_create_logical_replication_slot as follows (providing a plugin name is mandatory):

=# SELECT * FROM pg_create_logical_replication_slot('my_slot', 'test_decoding');
 slotname | xlog_position
----------+---------------
 my_slot  | 0/16CB0C0
(1 row)

xlog_position corresponds to the XLOG position where logical decoding starts. After its creation, the slot is listed in the system view pg_replication_slots; it will be marked as active once a remote source using a replication connection starts logical decoding with this slot:

=# SELECT * FROM pg_replication_slots;
-[ RECORD 1 ]+--------------
slot_name    | my_slot
plugin       | test_decoding
slot_type    | logical
datoid       | 12393
database     | postgres
active       | f
xmin         | null
catalog_xmin | 1001
restart_lsn  | 0/16CB088

The next step is to start the utility consuming the decoded changes, pg_recvlogical, for example like this:

pg_recvlogical -d postgres --slot my_slot --start -f -

Note that you need to connect to the database where the replication slot has been created, in my case "postgres". This command will also print all the decoded changes to stdout.

pg_recvlogical also provides options to create and drop slots; they are not mandatory, but they are rather handy when testing logical decoding on multiple slots.

OK, now you will be able to see the logical changes received by pg_recvlogical. Logical decoding cannot replicate DDL changes, so a simple DDL like this:

=# CREATE TABLE ab (a int);
CREATE TABLE

Results in the following at the logical data receiver level (in the case of test_decoding, of course):

BEGIN 1002
COMMIT 1002

Now here is how an insertion is decoded:

# SQL query
=# INSERT INTO ab VALUES (1);
INSERT 0 1
# Logical data receiver
BEGIN 1003
table public.ab: INSERT: a[integer]:1
COMMIT 1003

Similarly to physical slots, as long as a receiver has not consumed the changes of a slot, WAL files will be retained in pg_xlog, so be careful that your pg_xlog partition or disk does not get completely filled up! This post shows only the tip of the iceberg of this feature, and there are really a lot of things to tell about it. So stay tuned! For the time being, feel free to have a look at this very promising infrastructure.

Robert Haas: Why The Clock is Ticking for MongoDB

From Planet PostgreSQL. Published on Apr 16, 2014.

Last month, ZDNet published an interview with MongoDB CEO Max Schireson which took the position that the document databases, such as MongoDB, are better-suited to today's applications than traditional relational databases; the title of the article implies that the days of relational databases are numbered. But it is not, as Schireson would have us believe, that the relational database community is ignorant of or has not tried the design paradigms which he advocates, but that they have been tried and found, in many cases, to be anti-patterns. Certainly, there are some cases in which the schemaless design pattern that is perhaps MongoDB's most distinctive feature is just the right tool for the job, but it is also misleading to think that such designs must use a document store. Relational databases can also handle such workloads, and their capabilities in this area are improving rapidly.

Read more »

Jim Mlodgenski: PostgreSQL Videos

From Planet PostgreSQL. Published on Apr 16, 2014.

With all of the new users of PostgreSQL, one of the things we wanted to do for PGConf NYC this year was to have a video showing the history of PostgreSQL. We ended up with a very professional video showing how the features have grown over the years and, judging by the applause of the attendees, it delivered on our goals. You can see it at:
https://www.youtube.com/watch?v=2RSkI9dxdbo

However, our proof-of-concept video was very different. Since it was done by me and my video editing skills are very limited, it's not nearly as clean and professional. I also have the feeling that Jonathon Katz was afraid we'd freak out some people with the Five Finger Death Punch background music, so we ended up going the more professional route. I still think it had some potential, so I posted it up to YouTube:
http://youtu.be/AYn2ukNEmQM

Also, thanks to Kirk Roybal for putting together the animation of PostgreSQL source code changes over the years.
https://www.youtube.com/watch?v=HsxwNvlKZRU

 

Dan Langille: Have you never been to PGCon?

From Planet PostgreSQL. Published on Apr 15, 2014.

I remember a time when I’d never been to a conference related to my passions. Once I went, things changed. I realized that making strong working relationships with others who share my passion is important. Not only does this solidify the community of which you are a member, it also helps you personally. Every conference [...]

Hans-Juergen Schoenig: Regular expressions unleashed

From Planet PostgreSQL. Published on Apr 15, 2014.

When cleaning up some old paperwork this weekend I stumbled over a very old tutorial. In fact, I have received this little handout during a UNIX course I attended voluntarily during my first year at university. It seems that those two days have really changed my life – the price tag: 100 Austrian Schillings which […]

gabrielle roth: PDXPUG: April meeting this week

From Planet PostgreSQL. Published on Apr 14, 2014.

When: 7-9pm Thu April 17, 2014
Where: Iovation
Who: Brian Panulla
What: Intro to Graph Databases

Brian Panulla is a Business Intelligence Developer at Acureo. He’s been fulfilling PDXPUG’s rhombic triacontahedron needs for over a year. Next week, he’ll be giving us an Intro to Graph Databases!

Our meeting will be held at Iovation, on the 32nd floor of the US Bancorp Tower at 111 SW 5th (5th & Oak). It’s right on the Green & Yellow Max lines. Underground bike parking is available in the parking garage; outdoors all around the block in the usual spots. No bikes in the office, sorry!

Building security will close access to the floor at 7:30.

See you there!


Hubert 'depesz' Lubaczewski: Waiting for 9.4 – Provide moving-aggregate support for a bunch of aggregates.

From Planet PostgreSQL. Published on Apr 13, 2014.

On 13th of April, Tom Lane committed patch: Provide moving-aggregate support for a bunch of numerical aggregates.   First installment of the promised moving-aggregate support in built-in aggregates: count(), sum(), avg(), stddev() and variance() for assorted datatypes, though not for float4/float8.   In passing, remove a 2001-vintage kluge in interval_accum(): interval array elements have been […]
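
The commit concerns the internals of window aggregates: with moving-aggregate support, when a window frame slides forward the aggregate state can be updated incrementally by removing old rows and adding new ones, instead of being recomputed from scratch for every row. A hedged illustration of the kind of query that benefits, using a hypothetical sales(day, amount) table:

SELECT day,
       sum(amount) OVER w AS weekly_sum,
       avg(amount) OVER w AS weekly_avg
FROM sales
WINDOW w AS (ORDER BY day ROWS BETWEEN 6 PRECEDING AND CURRENT ROW);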

Michael Paquier: Testing heartbleed on Postgres

From Planet PostgreSQL. Published on Apr 11, 2014.

Unless you have been cut off from the Internet for the last week, you have already heard of Heartbleed. This good candidate for the "Bug of the Year 2014" prize is already costing a better-not-to-count amount of money in maintenance and development for many companies around the world.

It has already been mentioned in which cases a PostgreSQL server would be vulnerable, but you might want to run some tests to check that the update of openssl on your server is effective. After this bug went public on 2014/04/07, many scripts popped up around the net to help you check whether a server is vulnerable to this bug or not. However, you need to know that you may not be able to run them directly against a Postgres server, as Postgres uses a custom protocol before handing the connection over to openssl. A connection first needs to send a message called SSLRequest, described here, consisting of two 32-bit integers, 8 and 80877103. The server then answers with a single byte, either 'S' if SSL connections are supported, or 'N' if not. Once 'S' is received, the SSL startup handshake message can be sent for further processing.

Taking that into account, an example of a script usable to test the Heartbleed vulnerability on a Postgres server can be found here, extended by my colleague Heikki Linnakangas and forked from here. In particular, note this portion of the code handling the PostgreSQL custom protocol:

sslrequest = h2bin('''
00 00 00 08 04 D2 16 2F
''')

[...]

print 'Sending PostgreSQL SSLRequest...'
sys.stdout.flush()
s.send(sslrequest)
print 'Waiting for server response...'
sys.stdout.flush()

sslresponse = recvall(s, 1)
if sslresponse == None:
  print 'Server closed connection without responding to SSLRequest.'
  return
# Server responds 'S' if it accepts SSL, or 'N' if SSL is not supported.
pay = struct.unpack('>B', sslresponse)
if pay[0] == 0x4E: # 'N'
  print 'PostgreSQL server does not accept SSL connections.'
  return
if pay[0] != 0x53: # 'S'
  print 'Unexpected response to SSLRequest: %d.', pay
  return

# Continue with SSL start handshake...

The variable "sslrequest" is the hexadecimal conversion of SSLRequest explained above. Now let's test that on a Linux box, Archlinux more precisely. In order to reproduce the vulnerability openssl has been temporarily downgraded to 1.0.1f as it was still available in the pacman cache. A PostgreSQL server with ssl enabled has been deployed as well:

$ pacman -Ss openssl | grep 1.0.1
core/openssl 1.0.1.g-1 [installed: 1.0.1.f-2]
$ psql -c "show ssl"
 ssl
-----
 on
(1 row)

Note the part "installed: 1.0.1.f" corresponding to the installed version of openssl, which is not the one available through the packager. And here is the result obtained when checking the vulnerability with the script:

$ ./ssltest.py 127.0.0.1 -p 5432 | tail -n 1
WARNING: server returned more data than it should - server is vulnerable!

Yes, the server is vulnerable. Now let's see after an upgrade to 1.0.1g.

$ pacman -Ss openssl | grep 1.0.1
core/openssl 1.0.1.g-1 [installed]
$ ./ssltest.py 127.0.0.1 -p 5432 | tail -n 1
No heartbeat response received, server likely not vulnerable

Feeling better... Feel free to have a look at the script itself for more details.

Szymon Guz: Speeding Up Saving Millions of ORM Objects in PostgreSQL

From Planet PostgreSQL. Published on Apr 11, 2014.

The Problem

Sometimes you need to generate sample data, like random data for tests. Sometimes you need to generate it with the huge amount of code you have in your ORM mappings, just because an architect decided that all the logic needs to be stored in the ORM and the database should be just a dummy data container. The real reason is not important; the problem is: let's generate lots of rows, millions of them, for a sample table from ORM mappings.

Sometimes the data is read from a file, but due to the business logic kept in the ORM, you need to load the data from the file into the ORM and then save the millions of ORM objects to the database.

This can be done in many different ways, but here I will concentrate on making that as fast as possible.

I will use PostgreSQL and SQLAlchemy (with psycopg2) for ORM, so all the code will be implemented in Python. I will create a couple of functions, each implementing another solution for saving the data to the database, and I will test them using 10k and 100k of generated ORM objects.

Sample Table

The table I used is quite simple, just a simplified blog post:

CREATE TABLE posts (
  id SERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  body TEXT NOT NULL,
  payload TEXT NOT NULL
);

SQLAlchemy Mapping

I'm using SQLAlchemy for ORM, so I need a mapping, I will use this simple one:
class BlogPost(Base):
    __tablename__ = "posts"

    id = Column(Integer, primary_key=True)
    title = Column(Text)
    body = Column(Text)
    payload = Column(Text)

The payload field is just to make the object bigger, to simulate real life where objects can be much more complicated, and thus slower to save to the database.

Generating Random Object

The main idea of this test is to have a randomly generated object; however, what I really check is the database speed, and all the randomness happens at the client side, so having a randomly generated object doesn't really matter here. The overhead of a fully random function is the same regardless of the method of saving the data to the database. So instead of randomly generating the object, I will use a static one, with static data, and I will use the function below:

TITLE   = "title"      * 1764
BODY    = "body"       * 1764
PAYLOAD = "dummy data" * 1764

def generate_random_post():
    "Generates a kind of random blog post"
    return BlogPost(title=TITLE, body=BODY, payload=PAYLOAD)

Solution Ideas

Generally there are two main ideas for such a bulk inserting of multiple ORM objects:

  • Insert them one-by-one with autocommit
  • Insert them one-by-one in one transaction

Save One By One

This is the simplest way. Usually we don't save just one object, but instead we save many different objects in one transaction, and making a couple of related changes in multiple transactions is a great way to end up with a database full of bad data.

For generating millions of unrelated objects this shouldn't cause data inconsistency, but it is highly inefficient. I've seen this multiple times in code: create an object, save it to the database, commit, create another object and so on. It works, but it is quite slow. Sometimes it is fast enough, but at the cost of a very simple change in this algorithm we can make it 10 times faster.

I’ve implemented this algorithm in the function below:

def save_objects_one_by_one(count=MAX_COUNT):
    for i in xrange(1, count + 1):
        post = generate_random_post()
        session.add(post)
        session.commit()

Save All in One Transaction

This solution is as simple as: create objects, save them to the database, commit the transaction at the end, so do everything in one huge transaction.

The implementation differs only by four spaces from the previous one, just run commit() once, after adding all objects:

def save_objects_one_transaction(count=MAX_COUNT):
    for i in xrange(1, count + 1):
        post = generate_random_post()
        session.add(post)
    session.commit()

Time difference

I ran the tests multiple times, truncating the table each time. The average results of saving 10k objects were quite predictable:

  • Multiple transactions - 268 seconds
  • One transaction - 25 seconds

The difference is not surprising, the whole table size is 4.8MB, but after each transaction the database needs to write the changes on disk, which slows the procedure a lot.

Copy

So far, I’ve described the most common methods of generating and storing many ORM objects. I was wondering about another one, which may seem a little bit surprising at first.

PostgreSQL has a great COPY command which can copy data between a table and a file. The file format is simple: one table row per one file row, fields delimited with a defined delimiter etc. It can be a normal csv or tsv file.

My crazy idea was: how about using COPY for loading all the generated ORM objects? To do that, I need to serialize them to a text representation, i.e. create a text file with all of them. So I created a simple function which does that. This function is defined outside the BlogPost class, so I don't need to change the data model.

def serialize_post_to_out_stream(post, out):
    import csv
    writer = csv.writer(out, delimiter="\t", quoting=csv.QUOTE_MINIMAL)
    writer.writerow([post.title, post.body, post.payload])

The function above gets two parameters:

  • post - the object to be serialized
  • out - the output stream where the row with the post object will be saved, in Python it is a file-like object, so an object with all the functions a file object has

Here I use a standard csv module, which supports reading and writing csv files. I really don’t want to write my own function for escaping all the possible forms of data I could have - this usually leads to many tricky bugs.

The only thing left is to use the COPY command. I don’t want to create a file with data and load that later; the generated data can be really huge, and creating temporary files can just slow things down. I want to keep the whole procedure in Python, and use pipes for data loading.

I will use the psql program for accessing the PostgreSQL database. Psql has a different command called \COPY, which can read the csv file from psql's standard input. This can be done using e.g.: cat file.csv | psql database.

To use it in Python, I’m going to use the subprocess module, and create a psql process with stdin=subprocess.PIPE which will give me write access to the pipe psql reads from. The function I’ve implemented is:

def save_objects_using_copy(count=MAX_COUNT):
    import subprocess
    p = subprocess.Popen([
        'psql', 'pgtest', '-U', 'pgtest',
        '-c', '\COPY posts(title, body, payload) FROM STDIN',
        '--set=ON_ERROR_STOP=true'
        ], stdin=subprocess.PIPE
    )
    for i in xrange(1, count + 1):
        post = generate_random_post()
        serialize_post_to_out_stream(post, p.stdin)
    p.stdin.close()

Results

I’ve also tested that on the same database table, truncating the table before running it. After that I’ve also checked this function, and the previous one (with one transaction) on a bigger sample - 100k of BlogPost objects.

The results are:

Sample size | Multiple Transactions | One Transaction | COPY
10k         | 268 s                 | 25 s            | 5 s
100k        | (not tested)          | 262 s           | 51 s

I haven’t tested the multiple transactions version for the 100k sample, as I just didn’t want to wait multiple hours for it to finish (since I ran each of the tests multiple times to get more reliable results).

As you can see, the COPY version is the fastest, even 5 times faster than the full ORM version with one huge transaction. This version is also memory friendly, as no matter how many objects you want to generate, it only needs to keep one ORM object in memory at a time, and you can destroy it after saving.

The Drawbacks

Of course using psql poses a couple of problems:

  • you need to have psql available; sometimes that’s not an option
  • calling psql creates another connection to the database; sometimes that could be a problem
  • you need to set up a password in the ~/.pgpass file; you cannot provide it on the command line

You could also get the psycopg2 cursor directly from the SQLAlchemy connection and then use the copy_from() function, but this method needs to have all the data already prepared in memory, as it reads from a file-like object, e.g. StringIO. This is not a good solution for inserting millions of objects, as they can be quite huge - streaming is much better in this case.

Another solution is to write a generator which is a file-like object, so that the copy_from() method can read from it directly. This function calls the file's read() method, trying to read 8192 bytes per call. This can be a good idea when you don't have access to psql; however, due to the overhead of generating the 8192-byte strings, it will likely be slower than the psql version.

Francisco Figueiredo Jr: Npgsql 2.1.3 released!

From Planet PostgreSQL. Published on Apr 09, 2014.


This version restores the @@ operator support removed in the previous version. Thanks to Glen Parker, who provided the fix.

Downloads can be found at the usual locations:

github download page
and
the nuget repository.

Pgfoundry will be updated soon.

Oleg Bartunov: Jsonb: "Wildcard" query

From Planet PostgreSQL. Published on Apr 08, 2014.

Just to let people know about our experiments.


Consider this top-level query, which is well supported by GIN in 9.4.

postgres=# select count(*) from jb where jb @> '{"tags":[{"term":"NYC"}]}'::jsonb;
count
-------
285
(1 row)

What if I want to find just {"term":"NYC"}, or even "NYC"? Now, with some improvements, jsonb_hash_ops supports such queries with a less than 5% bigger index size and without sacrificing performance!

postgres=# select count(*) from jb where jb @>> '{"term":"NYC"}'::jsonb;
count
-------
285
(1 row)
postgres=# select count(*) from jb where jb @>> '"NYC"'::jsonb;
count
-------
285
(1 row)


I can see the use case for wildcard queries in shop aggregators, which combine different hierarchies, so it's difficult to say whether some specific key is on the same level in different sources.

Szymon Guz: Filling Gaps in Cumulative Sum in Postgres

From Planet PostgreSQL. Published on Apr 08, 2014.

I found an interesting problem. There was a table with some data, among which there was a date and an integer value. The problem was to get cumulative sum for all the dates, however including dates for which we don't have entries. In case of such dates we should use the last calculated sum.

Example Data

I will use an example table:

# CREATE TABLE test (d DATE, v INTEGER);

with sample data:

# INSERT INTO test(d,v)
  VALUES('2014-02-01', 10),
        ('2014-02-02', 30),
        ('2014-02-05', 10),
        ('2014-02-10', 3);

Then the data in the table looks like:

# SELECT * FROM test;
     d      |  v
------------+----
 2014-02-01 | 10
 2014-02-02 | 30
 2014-02-05 | 10
 2014-02-10 |  3
(4 rows)

What I want is to have a cumulative sum for each day. The cumulative sum is the sum of all the earlier values, so for the above data I want to get:

     d      |  v
------------+----
 2014-02-01 | 10
 2014-02-02 | 40
 2014-02-05 | 50
 2014-02-10 | 53
(4 rows)

The simple query for getting the data set like shown above is:

SELECT DISTINCT d, SUM(v) OVER (ORDER BY d) v
FROM test
ORDER BY d ASC;

Filling The Gaps

The query calculates the cumulative sum for each row. Unfortunately this way there are gaps between dates, and the request was to fill those in using the values from previous days.

What I want to get is:

     d      |  v
------------+----
 2014-02-01 | 10
 2014-02-02 | 40
 2014-02-03 | 40
 2014-02-04 | 40
 2014-02-05 | 50
 2014-02-06 | 50
 2014-02-07 | 50
 2014-02-08 | 50
 2014-02-09 | 50
 2014-02-10 | 53

My first idea was to use the generate_series() function, which can generate a series of data. What I need is a series of all dates between min and max dates. This can be done using:

# SELECT generate_series(
    '2014-02-01'::timestamp,
    '2014-02-05'::timestamp,
    '1 day')::date;
 generate_series 
-----------------
 2014-02-01
 2014-02-02
 2014-02-03
 2014-02-04
 2014-02-05

The generate_series() function arguments are (begin, end, interval). The function returns all timestamps from begin to end at the given interval. The return value is a timestamp, so I had to cast it to date with '::date', which is a nice PostgreSQL shortcut for the standard syntax, CAST(generate_series(...) AS DATE).

I also want to reuse the first query, with the cumulative sum I calculated before. This can be easily achieved using the great WITH command, which creates something like a temporary table that can be queried:

# WITH temp AS 
(
  SELECT generate_series(1, 1000) d
) 
SELECT d
FROM temp
WHERE d < 4
ORDER BY d DESC;

 d
---
 3
 2
 1

Combining all the above queries resulted in the below one:

WITH y AS 
( 
  SELECT DISTINCT d, SUM(v) OVER (ORDER BY d) v
  FROM test
)
SELECT g.d,
  (SELECT v 
   FROM y 
   WHERE y.d <= g.d
   ORDER BY d DESC
   LIMIT 1)
FROM
  (SELECT generate_series(min(d), max(d), '1 day')::date d 
   FROM y) g
ORDER BY d ASC

After the earlier explanations understanding this one should be easy.

  • I placed the original query calculating the cumulative sum in the WITH block.
  • SELECT creates a row set with two columns
    • The first column is the date returned from the subselect, just before ORDER BY. It contains all dates between the min and max dates from the original data.
    • The second column is a subquery getting the calculated cumulative sum. It gets the sum for the current date (from the first column), or the last previously calculated one.
  • And of course we need ordering at the end. The database can reorder the data as it wants while executing the query, so we always need to declare the ordering at the end. Otherwise strange things can happen (like having the same ordering of rows for years, and suddenly a totally different one, just because someone added a new row, deleted some other one, or just restarted the application).

Magnus Hagander: PostgreSQL and the OpenSSL Heartbleed vulnerability

From Planet PostgreSQL. Published on Apr 08, 2014.

Is your PostgreSQL installation vulnerable to the Heartbleed bug in OpenSSL? The TL;DR; version is "maybe, it depends, you should read this whole thing to find out". If you are vulnerable, it is a high risk vulnerability!

The slightly longer version is that it will be vulnerable if you are using SSL, and not vulnerable if you are not. But the situation is not quite that simple, as you may be using SSL without even planning to. PostgreSQL also does not provide any extra protection against the bug - if you are using SSL, you are vulnerable to the bug just as with any other service.

As the bug is in OpenSSL, however, what you need to patch is your OpenSSL installation and not PostgreSQL itself. And of course, remember to restart your services (this includes both PostgreSQL and any other services using SSL on your system). You will then have to consider whether, in your scenario, you have to replace your SSL keys or not - the same rules apply as to any other service.

It depends on if SSL is enabled

PostgreSQL by default ships with SSL turned off on most platforms. The most notable exception is Debian and derivatives (such as Ubuntu), which enable SSL by default.

If SSL is disabled globally, your installation is not vulnerable.

The easiest way to check this is to just use a simple SQL query:


postgres=# show ssl;
 ssl
-----
 off
(1 row)
 

If this parameter returns off, you are not vulnerable. If it returns on, you are.

If you do not need SSL, the easiest fix is to turn this off and restart PostgreSQL. This also brings additional benefits of not paying the overhead of encryption if you don't need it. If you actually use SSL, this is of course not an option.
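
A hedged sketch of that change: set the ssl parameter in postgresql.conf and restart the server.

ssl = off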

It depends on your installation

If you have installed PostgreSQL using a package based system, such as yum (from redhat/fedora or from yum.postgresql.org), apt (from debian/ubuntu/etc or from apt.postgresql.org), FreeBSD ports etc, it is up to your operating system to provide a patch. Most major distributions have already done this - you just need to install it (and restart your services!). If your distribution has not yet updated, you need to convince them to do so ASAP.

If you are using a PostgreSQL installation package that bundles OpenSSL, you need an updated version of this package. The most common example of this is the EnterpriseDB Graphical Installers primarily used on Windows and Mac. We expect a new version of these installers to be released within a day or a few.

Postgres.app is also vulnerable and needs an update, but is normally not used for servers.

The OpenSCG separate download packages are also vulnerable.

For each of these you will have to wait for an updated package to show up in the next couple of days. All package maintainers have been notified, so it's only a matter of time.

Per the www.postgresql.org download pages we do recommend that you always use the "package manager" system for any platform where this is supported, which means most modern Linux or BSD distributions. If you are currently using one of the above installers on these platforms, a quick fix before the packages are out would be to switch to one of the "package manager" platforms that rely on the operating system update process. This may or may not be an option of course, depending on the complexity of the installation.

If you are using a platform where this is not available (such as Windows), your only option is to wait.

pg_hba does not protect you

In PostgreSQL, the SSL negotiation happens before pg_hba.conf is matched. And the vulnerability in OpenSSL is in the negotiation phase. For this reason, even if you have restricted access to your server using pg_hba.conf IP filter rules, or your pg_hba.conf specifies only hostnossl records, this does not protect you.

Obviously, if you have an IP level firewall, either at the host or on the network, that will protect you. But pg_hba does not.

Usage in pgcrypto

The pgcrypto module in PostgreSQL uses OpenSSL to provide encryption functions when available. Since the vulnerability is specifically in the protocol negotiation, use in pgcrypto is not vulnerable to this issue.

Joel Jacobson: SET search_path for all SECURITY DEFINER functions

From Planet PostgreSQL. Published on Apr 07, 2014.

As pointed out by Peter Eisentraut in a blog post named
Schema Search Paths Considered Pain in the Butt, you need to make sure the search_path is explicitly set for all SECURITY DEFINER functions in PostgreSQL.

Fixing this manually for, in my case, 2106 functions is indeed a "pain in the butt", so I crafted a little query to automate the job:

\t
\pset format unaligned
\o /tmp/fix_search_path_for_security_definer_functions.sql
select
    array_to_string(
        array_agg(
            -- inject SET search_path in-between LANGUAGE and SECURITY DEFINER in the declaration
            regexp_replace(
                pg_get_functiondef(oid),
                E'(LANGUAGE [a-z]+)\\s+(SECURITY DEFINER)',
                E'\\1\n SET search_path TO public, pg_temp\n \\2'
            )
        ),
        ';'
    )
from pg_proc
where prosecdef is true -- SECURITY DEFINER functions
-- don't include functions for which we have already specified a search_path
and not (coalesce(array_to_string(proconfig,''),'') like '%search_path%')
-- public schema
and pronamespace = 2200
;
\t
\o
\i /tmp/fix_search_path_for_security_definer_functions.sql
-- If all goes well you should see a lot of CREATE FUNCTION being spammed on the screen
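
For just a handful of functions, an alternative worth noting (the function name below is hypothetical) is ALTER FUNCTION ... SET, which records the same per-function search_path setting in proconfig without re-creating the function:

ALTER FUNCTION my_secdef_func(integer) SET search_path = public, pg_temp;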

Bruce Momjian: Postgres Gets the Business

From Planet PostgreSQL. Published on Apr 06, 2014.

I just returned from attending PGConf NYC. They had 259 participants, more than double last year's total. The conference was in a hotel near Wall Street.

While I have been to many Postgres user conferences, this felt like my first corporate Postgres conference. Presenters from multinational banks Goldman Sachs and Morgan Stanley explained how and why they use Postgres in their organizations. These are trend-setting organizations, and their public embrace of Postgres will have lasting benefits.

In fact, one hallway discussion was how to enable large organizations like these, particularly those with significant legal and regulatory requirements, to work with the Postgres community. Some companies have employees post from non-company email accounts like Gmail, while others contract with consulting companies to work with the Postgres community on their behalf. Unfortunately, neither of these approaches have the companies working with the community openly.

Continue Reading »

Michael Paquier: Postgres 9.4 feature highlight: Indexing JSON data with jsonb data type

From Planet PostgreSQL. Published on Apr 06, 2014.

PostgreSQL 9.4 is shipping with a new feature called jsonb, a new data type able to store JSON data with GIN indexing support (!). In short, this feature, one of the most important of the upcoming release, if not the most important, puts Postgres directly in a good position in the field of document-oriented database systems.

Since 9.2, an integrated JSON datatype has already existed, completed with a set of functions (data generation and parsing functions) as well as operators added in 9.3. When using the "json" data type, data is stored as an exact copy of the input text, which functions working on it need to reparse, causing some processing overhead.

The new jsonb data type stores data in a decomposed binary format, so inserting it is less performant than json because of the overhead necessary to put it in shape, but it is faster to process as it does not need reparsing, and it has the advantage of supporting GIN indexing. For this last reason it is actually recommended to use jsonb for your applications instead of json (you might need only json depending on your needs though). Note as well that jsonb has the same operators and functions as json; you can refer to my previous posts on the matter to get some insight on them, or directly to the documentation of Postgres.

Now let's see how jsonb works and compare it with json, using as a data sample a dump of geobase, worth 8.6 million tuples and 1.1GB, with many fields like the city name, country code (you can refer to a complete list of the fields here). After storing the data into a new table with a raw COPY, let's transform it into json/jsonb in a set of tables with a fillfactor at 100 to see how much space they use:

=# COPY geodata FROM '$HOME/Downloads/allCountries.txt';
COPY 8647839
=# CREATE TABLE geodata_jsonb (data jsonb) with (fillfactor=100);
CREATE TABLE
=# CREATE TABLE geodata_json (data json) with (fillfactor=100);
CREATE TABLE
=# \timing
Timing is on.
=# INSERT INTO geodata_json SELECT row_to_json(geodata) FROM geodata;
INSERT 0 8647839
Time: 287158.457 ms
=# INSERT INTO geodata_jsonb SELECT row_to_json(geodata)::jsonb FROM geodata;
INSERT 0 8647839
Time: 425825.967 ms

Inserting jsonb data took a little bit more time. And what is the difference in size?

=# SELECT pg_size_pretty(pg_relation_size('geodata_json'::regclass)) AS json,
          pg_size_pretty(pg_relation_size('geodata_jsonb'::regclass)) AS jsonb;
  json   |  jsonb  
---------+---------
 3274 MB | 3816 MB
(1 row)

Creating indexes on json data is possible even with 9.3, for example by indexing some given keys using the operators present (note that '->>' is used as it returns text, and that the set of keys in the index is chosen depending on the queries):

=# CREATE INDEX geodata_index ON
    geodata_json ((data->>'country_code'), (data->>'asciiname'));
CREATE INDEX
=# SELECT pg_size_pretty(pg_relation_size('geodata_index'::regclass))
    AS json_index;
 json_index 
------------
 310 MB
(1 row)
=# SELECT (data->>'population')::int as population,
          data->'latitude' as latitude,
          data->'longitude' as longitude
   FROM geodata_json WHERE data->>'country_code' = 'JP' AND
        data->>'asciiname' = 'Tokyo' AND
        (data->>'population')::int != 0;
 population | latitude | longitude 
------------+----------+-----------
    8336599 | 35.6895  | 139.69171
(1 row)
=# -- Explain of previous query
                                                       QUERY PLAN                                                        
-------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on geodata_json  (cost=6.78..865.24 rows=215 width=32)
   Recheck Cond: (((data ->> 'country_code'::text) = 'JP'::text) AND ((data ->> 'asciiname'::text) = 'Tokyo'::text))
   Filter: (((data ->> 'population'::text))::integer <> 0)
   ->  Bitmap Index Scan on geodata_index  (cost=0.00..6.72 rows=216 width=0)
         Index Cond: (((data ->> 'country_code'::text) = 'JP'::text) AND ((data ->> 'asciiname'::text) = 'Tokyo'::text))
 Planning time: 0.172 ms
(6 rows)

In this case the planner is able to use a bitmap index scan and uses the index created previously.

Now, one of the new things that jsonb has and json does not is the possibility to check containment within the data with the operator @>, which is indexable using GIN, as well as the existence operators ?, ?| and ?& (to check if given key(s) exist). GIN indexing is possible with two operator classes:

  • The default operator class, which supports all four operators listed previously.
  • jsonb_hash_ops, supporting only @> but performing better when searching data and having a smaller on-disk size.

Here is how it works:

=# CREATE INDEX geodata_gin ON geodata_jsonb
      USING GIN (data jsonb_hash_ops);
CREATE INDEX
=# SELECT (data->>'population')::int as population,
      data->'latitude' as latitude,
      data->'longitude' as longitude
   FROM geodata_jsonb WHERE data @> '{"country_code": "JP", "asciiname": "Tokyo"}' AND
       (data->>'population')::int != 0;
 population | latitude | longitude 
------------+----------+-----------
    8336599 | 35.6895  | 139.69171
(1 row)
 =# SELECT pg_size_pretty(pg_relation_size('geodata_gin'::regclass)) AS jsonb_gin;
 jsonb_gin
-----------
 1519 MB
(1 row)
=# -- EXPLAIN of previous query
                                     QUERY PLAN                                      
-------------------------------------------------------------------------------------
 Bitmap Heap Scan on geodata_jsonb  (cost=131.01..31317.76 rows=8605 width=418)
   Recheck Cond: (data @> '{"asciiname": "Tokyo", "country_code": "JP"}'::jsonb)
   Filter: (((data ->> 'population'::text))::integer <> 0)
   ->  Bitmap Index Scan on geodata_gin  (cost=0.00..128.86 rows=8648 width=0)
         Index Cond: (data @> '{"asciiname": "Tokyo", "country_code": "JP"}'::jsonb)
  Planning time: 0.134 ms

Depending on the application needs, you might prefer a less space-consuming index like a btree on the field names of the JSON data as shown previously, the GIN indexing here having the advantage of being more generic as it covers all the fields of the JSON and checks their containment.
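
As a side note, here is a hedged example of the existence operators mentioned earlier; they check for a top-level key and are indexable only with the default GIN operator class, not with jsonb_hash_ops as used above:

=# SELECT count(*) FROM geodata_jsonb WHERE data ? 'asciiname';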

Raghavendra Rao: While performing PITR, would it be possible to Pause/Resume in PostgreSQL ?

From Planet PostgreSQL. Published on Apr 06, 2014.

Yes, it is truly possible and handled smartly by PostgreSQL. To demo this, I first need to follow the standard technique of Point in Time Recovery in PostgreSQL. Various books/articles/blogs have demoed it extremely well by extraordinary authors, hence I am not going into the details of how to do it, but heading directly to the subject, i.e., how to pause while recovering with the same technique. Arguably, I would put PITR into a mathematical expression as "PITR = (Last Filesystem Backup (LFB) + WAL archives generated after LFB + un-archived WALs in the current $PGDATA/pg_xlog)". For better understanding, I have put this into a graph, since it makes the idea clearer. (Sorry, this blog is a bit long; it happened unknowingly while going into the details of the concept.)


The PITR steps, which I am going to follow with slight changes that I will talk about soon:

Step 1. Restore the most recent File System-level backup(FSB) to any location where recovery is planned to perform.
Step 2. If FSB is tar,then untar it, and clean the pg_xlog directory leaving archive_status. If backup has excluded this directory, then create the empty pg_xlog directory in FSB.
Step 3. Copy un-archived WAL's from crashed cluster $PGDATA/pg_xlog into $FSB/pg_xlog (Step 2)
Step 4. Delete the postmaster.pid from FSB directory.
Step 5. Create recovery.conf file in FSB directory.
Step 6. Start the cluster (FSB). 

We should ask the question: when is pausing the recovery required? Maybe to prevent multiple base restorations, or to check in between roll-forward recovery, or to roll back a particular table's data, or simply out of interest to see how far it has recovered :). Remember, a pause in recovery means it allows connections while recovering. To outline this, I have reproduced in a chart a particular table's row growth up until a mishap.


From the above diagram, it is clear the DEMO table had 10,00,000 rows when the file system-level backup ($PGDATA) was taken and 40,00,000 rows before the crash. In my local VM, I have set up the situation on the basis of TIME instead of date.

Pre-Requisite:
1. File System-Level Backup when DEMO tables having 10,00,000 rows.
2. From that point forward, WAL Archives before crash where DEMO table having 40,00,000 rows.
3. WAL Archives Location: /opt/PostgreSQL/9.3/archives.
4. Data Directory : /opt/PostgreSQL/9.3/data (PGDATA)
5. Backup Location : /opt/PostgreSQL/9.3/backups

Keep in mind, working with pause recovery needs compulsory changes on the main cluster ($PGDATA): "wal_level" set to "hot_standby", and on the recovery cluster (file system-level backup) "hot_standby" set to "on". I have made these changes to the main cluster, restarted the cluster for them to take effect and initiated the backup. Please note this is simply a demo, so my WAL archives may not be a gigantic number, as there are only a few. I have also listed the WAL archives here which were generated from the time of the backup to the crash.
-bash-4.1$ psql -c "select count(*), now() from demo;"
count | now
---------+-------------------------------
1000000 | 2014-04-04 15:06:04.036928-07
(1 row)

-bash-4.1$ pg_basebackup -D /opt/PostgreSQL/9.3/backup/data_pitr -- I have my $PGDATA, $PGUSER, $PGPORT set, so its a straight command in my case
NOTICE: pg_stop_backup complete, all required WAL segments have been archived
Current state of WAL archives and $PGDATA/pg_xlog
-bash-4.1$ ls -lrth /opt/PostgreSQL/9.3/archives
-rw------- 1 postgres postgres 16M Apr 4 16:01 00000001000000000000001C
-rw------- 1 postgres postgres 16M Apr 4 16:01 00000001000000000000001D
-rw------- 1 postgres postgres 289 Apr 4 16:06 00000001000000000000001E.000000C8.backup
-rw------- 1 postgres postgres 16M Apr 4 16:06 00000001000000000000001E

-bash-4.1$ ls -lrth /opt/PostgreSQL/9.3/data/pg_xlog | tail -4
-rw------- 1 postgres postgres 289 Apr 4 16:06 00000001000000000000001E.000000C8.backup
-rw------- 1 postgres postgres 16M Apr 4 16:06 00000001000000000000001E
-rw------- 1 postgres postgres 16M Apr 4 16:06 00000001000000000000001F
drwx------ 2 postgres postgres 4.0K Apr 4 16:13 archive_status
Fine, now we have the backup copy; let's INSERT a few records in three parts while noting the time, so it will help to pause the recovery and additionally see the WALs produced from the time of the FSB.
-bash-4.1$ psql -c "insert into demo values (generate_series(1,1000000));"
INSERT 0 1000000
-bash-4.1$ psql -c "select count(*),now() from demo;"
count | now
---------+-------------------------------
2000000 | 2014-04-04 16:06:34.941615-07
(1 row)
-bash-4.1$ psql -c "insert into demo values (generate_series(1,1000000));"
INSERT 0 1000000
-bash-4.1$ psql -c "select count(*),now() from demo;"
count | now
---------+-------------------------------
3000000 | 2014-04-04 16:10:31.136725-07
(1 row)
-bash-4.1$ psql -c "insert into demo values (generate_series(1,1000000));"
INSERT 0 1000000
-bash-4.1$ psql -c "select count(*),now() from demo;"
count | now
---------+-------------------------------
4000000 | 2014-04-04 16:13:00.136725-07
(1 row)
Check the number of WAL's produced during the INSERT.
-bash-4.1$ ls -lrth /opt/PostgreSQL/9.3/archives
-rw------- 1 postgres postgres 289 Apr 4 16:06 00000001000000000000001E.000000C8.backup
-rw------- 1 postgres postgres 16M Apr 4 16:06 00000001000000000000001E
-rw------- 1 postgres postgres 16M Apr 4 16:06 00000001000000000000001F
-rw------- 1 postgres postgres 16M Apr 4 16:06 000000010000000000000020
-rw------- 1 postgres postgres 16M Apr 4 16:06 000000010000000000000021
-rw------- 1 postgres postgres 16M Apr 4 16:06 000000010000000000000022
-rw------- 1 postgres postgres 16M Apr 4 16:06 000000010000000000000023
-rw------- 1 postgres postgres 16M Apr 4 16:06 000000010000000000000024
-rw------- 1 postgres postgres 16M Apr 4 16:06 000000010000000000000025
-rw------- 1 postgres postgres 16M Apr 4 16:06 000000010000000000000026
-rw------- 1 postgres postgres 16M Apr 4 16:10 000000010000000000000027
-rw------- 1 postgres postgres 16M Apr 4 16:10 000000010000000000000028
-rw------- 1 postgres postgres 16M Apr 4 16:10 000000010000000000000029
-rw------- 1 postgres postgres 16M Apr 4 16:10 00000001000000000000002A
-rw------- 1 postgres postgres 16M Apr 4 16:13 00000001000000000000002B
Assume at this point a mishap happened and you have to do recovery using FSB + WAL archives + un-archived WALs (if any). During recovery, I want to pause three times to see each recovery of 20,00,000, 30,00,000 and 40,00,000 rows of the DEMO table by connecting to the database in READ-ONLY mode. For each resume of recovery, the recovery cluster needs a restart, bumping to a new timeline in recovery.conf/recovery_target_time. Also, in $FSB/postgresql.conf, we have to set hot_standby=on. Here's my recovery.conf file:
-bash-4.1$ more recovery.conf
pause_at_recovery_target = true
#recovery_target_time = '2014-04-04 16:06:34' # For 2 lakh records
#recovery_target_time = '2014-04-04 16:10:31' # For 3 lakh records
#recovery_target_time = '2014-04-04 16:13:00' # For 4 lakh records
restore_command = 'cp /opt/PostgreSQL/9.3/archives/%f %p'
Let's start recovery for 20,00,000 records:
-bash-4.1$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_pitr/ start
server starting

Now in logs:

-bash-4.1$ more postgresql-2014-04-04_162524.log
2014-04-04 16:25:24 PDT-24187---[] LOG: starting point-in-time recovery to 2014-02-06 18:48:56-08
2014-04-04 16:25:24 PDT-24187---[] LOG: restored log file "00000001000000000000001E" from archive
2014-04-04 16:25:24 PDT-24187---[] LOG: redo starts at 0/1E0000C8
2014-04-04 16:25:24 PDT-24187---[] LOG: consistent recovery state reached at 0/1E000190
2014-04-04 16:25:24 PDT-24185---[] LOG: database system is ready to accept read only connections
2014-04-04 16:25:24 PDT-24187---[] LOG: restored log file "00000001000000000000001F" from archive
2014-04-04 16:25:24 PDT-24187---[] LOG: restored log file "000000010000000000000020" from archive
2014-04-04 16:25:25 PDT-24187---[] LOG: restored log file "000000010000000000000021" from archive
2014-04-04 16:25:25 PDT-24187---[] LOG: restored log file "000000010000000000000022" from archive
2014-04-04 16:25:25 PDT-24187---[] LOG: recovery stopping before commit of transaction 1833, time 2014-04-04 16:06:23.893487-07
2014-04-04 16:25:25 PDT-24187---[] LOG: recovery has paused
2014-04-04 16:25:25 PDT-24187---[] HINT: Execute pg_xlog_replay_resume() to continue
Cool, the logs show recovery has paused, along with a helpful HINT on how to resume. If the recovered state looks satisfactory, you can resume replay by calling "select pg_xlog_replay_resume();" (try it out). Let's not resume yet; instead, check the number of rows recovered by connecting to the server.
-bash-4.1$ psql -c "select count(*),pg_is_in_recovery() from demo;"
count | pg_is_in_recovery
---------+-------------------
2000000 | t
(1 row)
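For reference, resuming at this point (which we deliberately skip for now) would be a single call on the recovering server, roughly:

-bash-4.1$ psql -c "select pg_xlog_replay_resume();"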
Good, it has paused exactly at the point I requested. Let's move one step further and recover 3,000,000 rows: set the next target time in recovery.conf (recovery_target_time) and restart the cluster, as sketched below.
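A rough sketch of that step, assuming the same recovery.conf and data directory as above; uncomment the second target time and restart in fast mode so any open read-only sessions are disconnected:

pause_at_recovery_target = true
recovery_target_time = '2014-04-04 16:10:31' # For 3,000,000 rows
restore_command = 'cp /opt/PostgreSQL/9.3/archives/%f %p'

-bash-4.1$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_pitr/ -m fast restart

After the restart, the log again shows recovery replaying and pausing, this time at the new target: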
2014-04-04 16:28:40 PDT-24409---[] LOG:  restored log file "00000001000000000000002A" from archive
2014-04-04 16:28:40 PDT-24409---[] LOG: recovery stopping before commit of transaction 1836, time 2014-04-04 16:10:40.141175-07
2014-04-04 16:28:40 PDT-24409---[] LOG: recovery has paused
2014-04-04 16:28:40 PDT-24409---[] HINT: Execute pg_xlog_replay_resume() to continue.

-bash-4.1$ psql -c "select count(*),pg_is_in_recovery() from demo;"
count | pg_is_in_recovery
---------+-------------------
3000000 | t
(1 row)
Nice. Let's make the last attempt, pausing at 4,000,000 rows.
2014-04-04 20:09:07 PDT-4723---[] LOG:  restored log file "00000001000000000000002B" from archive
cp: cannot stat `/opt/PostgreSQL/9.3/archives/00000001000000000000002C': No such file or directory
2014-04-04 20:09:07 PDT-4723---[] LOG: redo done at 0/2B0059A0
2014-04-04 20:09:07 PDT-4723---[] LOG: last completed transaction was at log time 2014-04-04 16:11:12.264512-07
2014-04-04 20:09:07 PDT-4723---[] LOG: restored log file "00000001000000000000002B" from archive
2014-04-04 20:09:07 PDT-4723---[] LOG: restored log file "00000002.history" from archive
2014-04-04 20:09:07 PDT-4723---[] LOG: restored log file "00000003.history" from archive
2014-04-04 20:09:07 PDT-4723---[] LOG: restored log file "00000004.history" from archive
cp: cannot stat `/opt/PostgreSQL/9.3/archives/00000005.history': No such file or directory
2014-04-04 20:09:07 PDT-4723---[] LOG: selected new timeline ID: 5
cp: cannot stat `/opt/PostgreSQL/9.3/archives/00000001.history': No such file or directory
2014-04-04 20:09:07 PDT-4723---[] LOG: archive recovery complete
2014-04-04 20:09:08 PDT-4721---[] LOG: database system is ready to accept connections
2014-04-04 20:09:08 PDT-4764---[] LOG: autovacuum launcher started

-bash-4.1$ psql -c "select count(*),pg_is_in_recovery() from demo;"
count | pg_is_in_recovery
---------+-------------------
4000000 | f
(1 row)
Oops, what happened? Why has it not paused, and what is it complaining about? Keep in mind that if no WAL archive covering the recovery_target_time is present, recovery will not pause: it assumes it has reached the last available point and opens the database for READ/WRITE. From the logs it is easy to make out that it was hunting for the file "00000001000000000000002C", which is not available because the cluster crashed at that point. Some may not like this behaviour, but it is a fact, and it makes sense: when no further WAL archives are present there is nothing left to replay, so there is no reason to keep recovery paused. If you need recovery to keep waiting even after the WAL archives run out, use standby_mode = 'on' (i.e. a hot standby); in that mode the server does not come out of recovery but waits for further WAL archives.
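A minimal recovery.conf sketch for that variant, reusing the archive path from this example; with standby_mode the server stays in recovery and keeps polling the archive for new segments:

standby_mode = 'on'
restore_command = 'cp /opt/PostgreSQL/9.3/archives/%f %p'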

Hope it was useful.

--Raghav

Andrew Dunstan: Version 4.12 of the PostgreSQL Buildfarm client released.

From Planet PostgreSQL. Published on Apr 05, 2014.

I have released version 4.12 of the buildfarm client.

In addition to numerous bug fixes, it has the following:

  • the global option branches_to_build can now be 'HEADPLUSLATESTn' for any single digit n
  • there is a new module TestCollateLinuxUTF8
  • there is a new module TestDecoding, which is enabled by default (but does nothing on MSVC systems, where we can't yet run these tests). It runs the new contrib test_decoding module, which can't run under "make installcheck".
  • running "perl -cw" on the scripts will now give you failures for missing perl modules on almost every platform. The only exception should now be on older Msys systems.
  • improvements in the sample config file to make it better organized and a better reflection of best practice.
  • find_typedefs is now supported on OSX

In addition, I recently enhanced the HOWTO at http://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto, covering in particular current best git practice.

Thanks to Tom Lane for suggestions and complaints which are behind a number of the improvements and fixes, and for some code for OSX find_typedefs.

The release is available at http://www.pgbuildfarm.org/downloads/releases/build-farm-4_12.tgz

Baji Shaik: Anyone wants to change the tablespaces locations while running pg_upgrade ?? isn't it easy !!

From Planet PostgreSQL. Published on Apr 05, 2014.

As we all know, pg_upgrade handles tablespaces in a smart way; however, it creates the new cluster's tablespaces in the same locations as the old ones. If you want to change a location after the upgrade, that takes some work, including updating the catalog with the new location (IIRC, in PG9.4 you can do it with the ALTER TABLESPACE command). So, for current versions, here is a workaround to change tablespace locations while pg_upgrade is running. The same trick is also applicable if you have contrib modules installed in databases of the old cluster: they need to be installed in the new cluster as well, but we cannot pre-create those databases in the new cluster, since it has to be empty for pg_upgrade to run, and pg_upgrade creates the databases itself from the "template0" template.

You can follow below steps to tweak the running pg_upgrade process.

1. I've upgraded the PG 9.1 cluster "Im_old_one" to the PG 9.3 cluster "Hey_Im_New"; as you can see below, the new tablespace was created in the same directory.

-bash-4.1$ /opt/PostgreSQL/9.1/bin/psql -p 5666 -U postgres postgres
Timing is on.
psql.bin (9.1.7)
Type "help" for help.


postgres=# CREATE TABLESPACE tablsc_91 LOCATION '/tmp/tablsc_91';
CREATE TABLESPACE
Time: 1.663 ms
-bash-4.1$ /opt/PostgreSQL/9.3/bin/pg_upgrade -b /opt/PostgreSQL/9.1/bin/ -B /opt/PostgreSQL/9.3/bin/ -d /tmp/Im_old_one/ -D /tmp/Hey_Im_New/ -p 5666 -P 5667
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is a superuser                       ok

.
.
Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    delete_old_cluster.sh
-bash-4.1$
-bash-4.1$
-bash-4.1$ ls -ltrh /tmp/tablsc_91/
total 8.0K
drwx------. 2 postgres postgres 4.0K Apr  5 14:25 PG_9.1_201105231
drwx------. 2 postgres postgres 4.0K Apr  5 14:29 PG_9.3_201306121
2. Now delete the new cluster, run the pg_upgrade command again, and keep monitoring its output.
3. Press Ctrl+Z (to pause the job) just after the schema dump is created; that is, pause it when you see the output below:
-bash-4.1$ /opt/PostgreSQL/9.3/bin/pg_upgrade -b /opt/PostgreSQL/9.1/bin/ -B /opt/PostgreSQL/9.3/bin/ -d /tmp/Im_old_one/ -D /tmp/Hey_Im_New/ -p 5666 -P 5667
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is a superuser                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok ---> press control + z just after this.
^Z
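At this point the shell reports the pg_upgrade job as stopped; you can confirm it with the jobs builtin (output formatting varies slightly between shells):

-bash-4.1$ jobs
[1]+  Stopped                 /opt/PostgreSQL/9.3/bin/pg_upgrade -b /opt/PostgreSQL/9.1/bin/ -B /opt/PostgreSQL/9.3/bin/ -d /tmp/Im_old_one/ -D /tmp/Hey_Im_New/ -p 5666 -P 5667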


**Note: If we fail to pause the job right after the "Creating dump of database schemas" step, kill the job and start a new pg_upgrade run. Re-running pg_upgrade at this stage does not require creating the new cluster again or any other changes.

4. Open the global-object dump and modify the CREATE TABLESPACE commands to point at your new location:

-bash-4.1$ vi pg_upgrade_dump_globals.sql
This file is created in the directory from which you ran the pg_upgrade command; open it and change the CREATE TABLESPACE commands from the old location to the new one.
For example, change all the CREATE TABLESPACE statements in the file "pg_upgrade_dump_globals.sql"

From
CREATE TABLESPACE "tablsc_91" OWNER "postgres" LOCATION '/tmp/tablsc_91';
To
CREATE TABLESPACE "tablsc_91" OWNER "postgres" LOCATION '/tmp/tablsc_93';
5. Once the modifications are complete, resume the job with the "fg" command; pg_upgrade should then continue with the restore process.
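Assuming pg_upgrade is the only suspended job in this shell, resuming it looks roughly like this (bash echoes the command line as it brings the job back to the foreground):

-bash-4.1$ fg
/opt/PostgreSQL/9.3/bin/pg_upgrade -b /opt/PostgreSQL/9.1/bin/ -B /opt/PostgreSQL/9.3/bin/ -d /tmp/Im_old_one/ -D /tmp/Hey_Im_New/ -p 5666 -P 5667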

Any suggestions/comments would be most welcome!

Hubert 'depesz' Lubaczewski: How to deal with timestamps?

From Planet PostgreSQL. Published on Apr 04, 2014.

Every now and then someone asks, on IRC or mailing lists, some question which shows deep misunderstanding (or lack of understanding) of timestamps – especially the ones with time zones. Since I got bitten by this before, let me describe what timestamps are, how to work with them, and what are the most common pitfalls […]

Oleg Bartunov: Nested hstore (hstore 2.0) git repository

From Planet PostgreSQL. Published on Apr 04, 2014.

We have kept our nested hstore project on GitHub for historical reasons. I don't recommend using it, since all development resources are now concentrated on jsonb.

Expect better jsonb indexing, such as support for structural queries and a jsonb query language, in 9.5 or 9.6.

PlanetDjango.org shutting down

By Adomas Paltanavičius from Planet Django. Published on Mar 08, 2014.

Is Open Source Consulting Dead?

By chrism from plope. Published on Sep 10, 2013.

Has Elvis left the building? Will we be able to sustain ourselves as open source consultants?

Consulting and Patent Indemnification

By chrism from plope. Published on Aug 09, 2013.

Article about consulting and patent indemnification

Python Advent Calendar 2012 Topic

By chrism from plope. Published on Dec 24, 2012.

An entry for the 2012 Japanese advent calendar at http://connpass.com/event/1439/

Why I Like ZODB

By chrism from plope. Published on May 15, 2012.

Why I like ZODB better than other persistence systems for writing real-world web applications.

A str.__iter__ Gotcha in Cross-Compatible Py2/Py3 Code

By chrism from plope. Published on Mar 03, 2012.

A bug caused by a minor incompatibility can remain latent for long periods of time in a cross-compatible Python 2 / Python 3 codebase.

In Praise of Complaining

By chrism from plope. Published on Jan 01, 2012.

In praise of complaining, even when the complaints are absurd.

2012 Python Meme

By chrism from plope. Published on Dec 24, 2011.

My "Python meme" replies.

In Defense of Zope Libraries

By chrism from plope. Published on Dec 19, 2011.

A much too long defense of Pyramid's use of Zope libraries.

Plone Conference 2011 Pyramid Sprint

By chrism from plope. Published on Nov 10, 2011.

An update about the happenings at the recent 2011 Plone Conference Pyramid sprint.

Jobs-Ification of Software Development

By chrism from plope. Published on Oct 17, 2011.

Try not to Jobs-ify the task of software development.

WebOb Now on Python 3

By chrism from plope. Published on Oct 15, 2011.

Report about porting to Python 3.

Open Source Project Maintainer Sarcastic Response Cheat Sheet

By chrism from plope. Published on Jun 12, 2011.

Need a sarcastic response to a support interaction as an open source project maintainer? Look no further!

Pylons Miniconference #0 Wrapup

By chrism from plope. Published on May 04, 2011.

Last week, I visited the lovely Bay Area to attend the 0th Pylons Miniconference in San Francisco.

Pylons Project Meetup / Minicon

By chrism from plope. Published on Apr 14, 2011.

In the SF Bay Area on the 28th, 29th, and 30th of this month (April), 3 separate Pylons Project events.

PyCon 2011 Report

By chrism from plope. Published on Mar 19, 2011.

My personal PyCon 2011 Report