
window functions in an UPDATE

Is there a trick to using window functions to SET columns in an UPDATE?
Here is the query I'd like to run:

UPDATE profiles
SET score_tier = percent_rank() OVER (PARTITION BY site_id ORDER BY score)

But that gives me an error on Postgres 9.1:

ERROR: cannot use window function in UPDATE

This alternate version works, but is messier and slower:

UPDATE profiles p
SET score_tier = x.perc
FROM (SELECT id,  -- join key assumed; not shown in the original post
             percent_rank() OVER (PARTITION BY site_id ORDER BY score) AS perc
      FROM profiles) x
WHERE p.id = x.id;

Using window functions to get the unpaginated count for paginated queries

Hi all,

Recently, during a performance improvement sweep for an application at my
company, one of the hotspots that was discovered was pagination.

In order to display the correct pagination links on the page, the
pagination library we used (like most pagination libraries, for that matter) ran
the query with OFFSET and LIMIT to get the paginated results, and then
re-ran the query without the OFFSET and LIMIT, wrapped in a SELECT
COUNT(*) FROM main_query, to get the total number of rows.

In an attempt to optimize this, we used a window function as follows:

Given a query that looked as follo
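The truncated query aside, the usual shape of this optimization (table and column names here are illustrative, not from the original post) is a single round trip in which every row carries the unpaginated total:

```sql
SELECT *,
       count(*) OVER () AS total_rows  -- window over the whole filtered set
FROM main_query
ORDER BY created_at
LIMIT 20 OFFSET 40;
```

The window function is evaluated after WHERE but before LIMIT/OFFSET, so total_rows reflects the full result set while only one page of rows is returned.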

general fear question about move PGDATA from one Disc to another

Dear listmembers,

I need to move PGDATA from the / partition to another disc.

Based on the

<a href="" title=""></a>

I plan the following.

0.) Mount the new pgroot; are there any hints for the mount command?

mount -t ext4 -o noatime,nodiratime,nodev,noexec,nosuid /dev/sde1 /pgroot

output from mount

/dev/sde1 on /pgroot type ext4

1.) create a full backup of the current DB

/usr/bin/pg_dumpall --file=/tmp/backup_before_

Detect the side effect of 'using' clause and adding columns

Hello Guys;

Today, I have noticed that my dumps are not working due to bad practice in writing SQL queries.

In the past, there was a scenario where I had two tables, one of them completely dependent on the other, i.e.

PG under OpenVZ?


I've found an old thread on OpenVZ:

(2008): <a href="" title=""></a>

And a more recent question that scared me a bit:

(2011): <a href="" title=""></a>

On the PostgreSQL general mailing list, I've only found 54 results when searching for OpenVZ. I'm wondering if OpenVZ is simply unpopular, or not used at all for PG. What experiences do you have with OpenVZ? Any performance problems?

We're buying bare metal to run our clusters on, and the supplier is late delivering the machines.

plpgsql cursor reuse

Came across this problem when trying to assign to a variable a field from a record that could come from multiple cursors. PG throws an error: "ERROR: type of parameter 7 (bigint) does not match that when preparing the plan (unknown)". If I make the null column in c1 null::bigint to match cursor c2, it works fine.

Where is this plan coming from? Why would it match c1 to a plan coming from c2? In reality, the two cursors in question are wildly different: a join of about 10 completely different tables.

PSA: XFS and Linux Cache Poisoning

Hey everyone,

We recently got bit by this, and I wanted to make sure it was known to
the general community.

In new(er) Linux kernels, including late versions of the 2.6 tree, XFS
has introduced dynamic speculative preallocation. What does this do? It
was added to prevent filesystem fragmentation by preallocating a large
chunk of memory to files so extensions to those files can go on the same
allocation. The "dynamic" part just means it adjusts the size of this
preallocation based on internal heuristics.

Unfortunately, they also changed the logic in how this extra space is

FATAL: sorry, too many clients already when I start the replication

I have a strange problem. I am trying to achieve streaming replication
between 2 PostgreSQL servers with version 9.2.1. The replication worked
just fine when the servers were without load. The problem is that now that
the servers are loaded, I cannot start the replication without receiving
this error:
FATAL: sorry, too many clients already

The slave server is out of the question and is stopped.

PostgreSQL and a clustered file system


Is anyone running PostgreSQL on a clustered file system on Linux? By
"clustered" I actually mean "shared", such that the same storage is
mounted by different servers at the same time (of course, only one
instance of PostgreSQL on only one server can be running on such a
setup, and there are a lot of other precautions that need to be satisfied).

VACUUM details (vacuum_freeze_min_age/vacuum_freeze_table_age)

Hi all.

After reading the manual's (v9.2) Chapter 23, and in particular section
23.1.5, I believe I've got some confusion with respect to VACUUM's details.

Assume we're at t0 and we just VACUUM'ed (simple, not FREEZE) a specific
table and txid_current()=1500 and the pg_class entry for this table shows

Q: The 'relfrozenxid' is the freeze cutoff XID for this table which
translates to: for this table all rows that show to have been modified by
this XID, i.e. 700, will always appear to be "in the past" to all normal
transactions regardless of wraparound issues.

Can DML recognize the partition table's rules and produce a good execution plan?

Hi all:
I made partition tables:

postgres=# create table ptest(id integer, name varchar(20));
postgres=# create table ctest01(CHECK(id<5000000)) inherits (ptest);
postgres=# create table ctest02(CHECK(id>=5000000)) inherits (ptest);
postgres=# create index on ctest01(id);
postgres=# create index on ctest02(id);

postgres=# CREATE OR REPLACE FUNCTION ptest_insert_trigger() RETURNS trigger AS $$
postgres$# BEGIN
postgres$#   IF ( NEW.id < 5000000 ) THEN INSERT INTO ctest01 VALUES (NEW.*);
postgres$#   ELSE INSERT INTO ctest02 VALUES (NEW.*); END IF;
postgres$#   RETURN NULL;
postgres$# END; $$ LANGUAGE plpgsql;

Enabling unaccent module on all schemas


I've installed unaccent on the public schema using CREATE EXTENSION.
However I have multiple schemas and they don't have the public schema on
their search path. I've tried calling "public".unaccent() but it fails with
'text search dictionary "unaccent" does not exist'. I suppose it is trying
to locate the unaccent dictionary on the wrong path because the search path
does not contain public. Is there any way to overcome this issue?

Many thanks in advance!
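One workaround, assuming the extension was installed in the public schema, is to schema-qualify both the function and the dictionary it uses, since the one-argument form resolves the dictionary via search_path:

```sql
-- Pass the dictionary explicitly instead of relying on the search path
SELECT public.unaccent('public.unaccent'::regdictionary, 'Hôtel São Paulo');
```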


Oracle to PostgreSQL replication with Goldengate

Hello all,
GoldenGate added PostgreSQL as a target database for replication. I tried
setting it up, and not finding any tutorial, put together a how to here -
<a href="" title=""></a>
I think near real-time replication might have quite a few use cases, such as
trying out the application on PostgreSQL before really cutting over from an
Oracle database, running reports off PostgreSQL, and so on.

Using COPY FROM on a subset of the file's columns


I'm trying to import a text file into a table using the COPY command.

The text file contains four columns: id, col_1, col_2, col_3 (in that order).
My target table only has id and col_1.

From the documentation I assumed that the following statement

copy foo (id, col_1)
from 'foo.txt'
with (format csv, delimiter ';', header true);

would only import id and col_1, but instead it fails with "ERROR: extra data after last expected column".

Am I missing something, or is this a limitation of the COPY command?
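As far as I know, COPY expects the file's columns to match the column list exactly, so extra file columns cannot simply be skipped. A common workaround is to COPY into a temporary staging table that mirrors the file and then INSERT only the wanted columns (a sketch; the column types are assumed):

```sql
CREATE TEMP TABLE foo_staging (id integer, col_1 text, col_2 text, col_3 text);

COPY foo_staging FROM 'foo.txt'
WITH (FORMAT csv, DELIMITER ';', HEADER true);

INSERT INTO foo (id, col_1)
SELECT id, col_1 FROM foo_staging;
```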


Re: [GENERAL] "could not write to output file: Permission denied" during pg_dump

Correction, that should have been Postgres 9.0.4 not 9.4.

"could not write to output file: Permission denied" during pg_dump

We have several Postgres 9.4 databases on Solaris 10 that are structural
clones but with different data. While running multiple concurrent
pg_dump exports for these databases, we get sporadic errors like this:

pg_dump: dumping contents of table attachment

pg_dump: [custom archiver] could not write to output file: Permission denied

pg_dump: *** aborted because of error

This is after successfully dumping several tables. The table that
triggers the failure varies from run to run, and is not (or, so far has
not been) the first table processed.

PG defaults and performance (was Re: [GENERAL] Unexpectedly high disk space usage RESOLVED (Manual reindex/vacuum))

On Sat, Nov 10, 2012 at 12:26 PM, Steve Crawford
< ... at pinpointresearch dot com> wrote:
Tuning a PostgreSQL database is a major science, but is there a
reasonably easy way to get a stable baseline for comparison? We've
been exploring different hosting options recently, and one thing we
want to know is how well Postgres will perform.

Understanding streaming replication


I've been struggling with understanding all the necessary pieces for
streaming replication. So I put down the pieces as I understood them,
and would appreciate it if you guys could point out anything I have
understood or done wrong.

The set up is pgpool + streaming replication + hot stand by. No load
balancing, stand-by nodes will not receive any application queries (I don't
have that big of a query load, and I don't want to risk inconsistent
reads). There are no shared file systems, but there is a way to rsync/scp
files between nodes.

Re: [GENERAL] Phantom read example for new Repeatable Read level

After posting a reply I remembered another recent thread which was
more-or-less on the same topic, but with a different slant. Perhaps
you would also find that useful:

<a href="" title=""></a>


Re: [GENERAL] Phantom read example for new Repeatable Read level

The repeatable read isolation level did not change in 9.1.

Phantom read example for new Repeatable Read level

Dear Everybody!

Can somebody show me an example of a phantom read in "Repeatable Read" mode
(in 9.1, with the new, serializable-style isolation levels)?

The documentation says it is possible:

Repeatable read: dirty read = not possible; nonrepeatable read = not possible; phantom read = possible

But I don't understand why, or how.

C1, C2 are clients.
Each of them is in an RR transaction.

What do I need to do in C1 to read C2's "phantom records"?
Can you show me the detailed steps?

Thanks for any info or help!

Best wishes: dd

View using dblink fails if not able to make connection

Hi Everyone,

I have a view made up of a local query unioned with a view comprised of a dblink query.

If the dblink query cannot establish a connection, I get the "could not connect to server" error and the whole view fails.

Setting fail_on_error to false does not work in this situation.

Is there any way to test the connection inside the view when the view is called and either return the whole view or just the local component if a dblink connection cannot be made?
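One approach (a sketch; the connection string, column list, and names are illustrative) is to hide the dblink call behind a set-returning function that traps the connection error, so the view degrades to just its local component:

```sql
CREATE OR REPLACE FUNCTION remote_rows()
RETURNS TABLE (id integer, val text) AS $$
BEGIN
    RETURN QUERY
        SELECT * FROM dblink('host=remotehost dbname=mydb',
                             'SELECT id, val FROM remote_table')
                      AS t(id integer, val text);
EXCEPTION WHEN OTHERS THEN
    -- Connection failed: return no remote rows instead of failing the view
    RETURN;
END;
$$ LANGUAGE plpgsql;
```

The view would then UNION the local query with SELECT * FROM remote_rows().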


control file errors

Hello there,

Once in awhile, as I am trying to run various versions of the Postgres DB
engine I get a message on startup indicating that my control file is not up
to snuff. Last time it happened with Postgres 9.1 on OpenIndiana 11.

So my questions are as follows:

1) Why does that generally happen?

2) When that does happen, is there a way to re-init that control file?



Out of Shared Memory: max_locks_per_transaction

I have a PGSQL 9.0.1 database which is on the back-end of an app I was
stress testing last night.

Folder filenode in tablespace location not identifiable by oid2name

Hi all.

A tablespace is created successfully and so is a test table in it:
# CREATE TABLESPACE ts_name LOCATION '/path/to/tablespace/location';
# SET default_tablespace=ts_name;
# CREATE TABLE ts_test_table (id int);  -- column list assumed; cut off in the original

From the command line:
$ cd /path/to/tablespace/location
$ ls
$ cd PG_9.1_201105231/
$ ls
$ cd 11955/
$ ls
24683 24687 24689

All files 2468x correspond to objects identifiable by oid2name:
$ oid2name -d postgres -f 24683
From database "postgres":
Filenode Table Name
But these give nothing:
$ oid2name -d

Revoke "drop database" even for superusers?

I have a bunch of databases that must never be dropped.

I was wondering if it is possible to revoke the "drop database" permission
for all users, so that even a superuser who wishes to drop a
database would first need to "grant drop database" to himself.

I know there is already a safeguard that does not allow dropping databases
in use; I just want to make it even harder.


explain plan visibility

Hi all:

I have one question about the visibility of explain plan.

Firstly, I was inserting data into a table. I use: [ insert into
ptest select * from test02; ]

And the test02 table has 10,000,000 records.

Using hstore, json, lo, etc from C extensions?

Hi all

With the growing number of extensions that expose useful and
increasingly widely used custom data types, I'm wondering: is there any
way to use them from a C extension without going through the SPI?

A look at the sources for hstore and json shows that they mostly define
static functions and don't present much of an external C API. But say
they did, or say I or someone else wrote a proper C-level API for using
hstore from other C extensions: could it then be used without the SPI?

Re: [GENERAL] Range constraint with unique index

Maybe you want an exclusion constraint:

<a href="" title=""></a>

For example:

CREATE EXTENSION btree_gist;  -- provides "=" operator support for int in GiST
CREATE TABLE t1 (
    c1 int NOT NULL,
    c2 int8range NOT NULL,
    EXCLUDE USING gist (c1 WITH =, c2 WITH &&));
INSERT INTO t1 VALUES (1, '[1,100]');

Now try to insert an overlapping row; it fails with a constraint violation:

INSERT INTO t1 VALUES (1, '[51,150]');


Message: incomplete startup packet

Hi Guys,

We are having a problem with our pgsql 9.1 on
Suddenly, the database stops working and the logs show
the statements below just before the problem.

pg_hba.conf directory?


I've searched the mailing list archives and google regarding using a
directory to contain pg_hba.conf snippets. Does such a feature exist
for any version of PG?

Would this be a better question for a pg dev mailing list?

Please Cc me, I am not (yet) subscribed to the list.


-Matt Zagrabelny

Range constraint with unique index

I have a table that has an integer and a int8range.

What I want is to add a constraint that stops anyone adding


Running out of memory while making a join


The following SQL join command runs the PostgreSQL server out of memory.
The server runs on a box with Red Hat Enterprise Linux Server release 6.3
(Santiago) and PostgreSQL 8.3.21.

select wm_nfsp from "5611_isarq".wm_nfsp
left join "5611_nfarq".nfe on
wm_nfsp.tpdoc = 7 where 1 = 1 and
wm_nfsp.codpre = 2866 and
wm_nfsp.compet = '10/2012';

Explain result:
Nested Loop Left Join (cost=7356.61..48466.46 rows=346312 width=32)
Join Filter: (wm_nfsp.tpdoc = 7)
-> Bitmap Heap Scan on wm_nfsp (cost=11.65..1162.37 rows=11 width=34)
Recheck Cond: (codpre = 2866)
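Note that the ON clause in the query above references only wm_nfsp, so nothing relates the two tables and the left join degenerates into a near cross join against all of nfe, which matches the large row estimate in the plan. If the intent was to join on a key, the ON clause should mention both tables (the join column below is hypothetical):

```sql
select wm_nfsp from "5611_isarq".wm_nfsp
left join "5611_nfarq".nfe
       on nfe.nfsp_id = wm_nfsp.id  -- hypothetical join key
      and wm_nfsp.tpdoc = 7
where wm_nfsp.codpre = 2866
  and wm_nfsp.compet = '10/2012';
```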

find a substring on a text (data type) column

I was trying to find a substring in a text (data type) column like 'cat foo
dog ...'.
I use the query below:
SELECT id FROM table WHERE name LIKE '% foo %';
Sometimes the query returns with nTuples=0 even though there are matching rows.
On retry, the query returns the expected results. Any ideas?

(postgres ver 9.2, libpq - C Library)

Use order by clause, got index scan involved

Hi all:

What confuses me is this: when I select data using an ORDER BY clause, I get
the following execution plan:

postgres=# set session


postgres=# explain SELECT * FROM pg_proc ORDER BY


Index Scan using pg_proc_oid_index on pg_proc (cost=0.00..321.60
rows=2490 width=552)



My Question is :

If I want to find a record using a WHERE clause on the id column,
the index scan

How is execution plan cost calculated for index scan

Hi all:

I want to see the explain plan for a simple query.

Unique/Primary key not inherited in partition, workaround?

Regarding the caveats here
<a href="" title=""></a>

I am attempting to logically structure my location data. Say for example I
have cities/states/countries. I have objects that reference a location, but
at any level. An object may reference a city or it may reference a state,
depending on how granular we know its location to be. If the city is known
for an object, the state and country can be inferred, so the object need
not point to all levels; in fact, that would be redundant and require
consistency checking.

How to verify pg_dump files

Can anyone suggest how I could verify that the files created by pg_dump
are "okay"? They are being created for backup purposes, and the last
thing I want to do is find out that the backups themselves are in some
way corrupt.

I know I can check the output of the command itself, but what if.. I
don't know... if there are problems with the disc it writes to, or
something like that. Is there any way to check whether the output file
is "valid" in the sense that it is complete and syntactically correct?
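For custom-format dumps, one cheap sanity check (not a full integrity guarantee) is to make pg_restore read the whole archive and emit its table of contents; it fails on a truncated or unreadable file:

```shell
pg_restore --list backup.dump > /dev/null \
    && echo "archive readable" \
    || echo "archive corrupt or truncated"
```

The only test that proves a dump is truly restorable, though, is restoring it into a scratch database.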

How to evaluate "explain analyze" correctly soon after "explain" for the same statement ?

Hi all:

I have one question about the cache clearing.

If I use the following soon after database startup (or the first time I use it):

postgres=# explain analyze select id,deptno from gaotab where id=200;


Index Only Scan using idx_id_dept on gaotab (cost=0.00..8.27 rows=1
width=8) (

actual time=30.912..30.915 rows=1 loops=1)

Index Cond: (id = 200)

Heap Fetches: 1

Total runtime: 47.39

The bug reporting form is unavailable

What I experienced when submitting a bug report is
Forbidden (403)

CSRF verification failed. Request aborted.

More information is available with DEBUG=True.


creating a function returning FALSE on NULL input ?

I'd like to create a function:
returning true if the length of the text arg is greater than 0;
false otherwise;

and also returning false when the arg is NULL. So I wrote:

CREATE FUNCTION has_infos(text) RETURNS boolean AS 'select

where i get the error :

ERROR: syntax error at or near "FALSE"

with "RETURNS NULL on NULL INPUT", that's OK.
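There is no "RETURNS FALSE ON NULL INPUT" clause, which explains the syntax error. The usual idiom is to keep the function non-strict and map the NULL case explicitly with coalesce (a sketch matching the description above):

```sql
CREATE FUNCTION has_infos(text) RETURNS boolean AS $$
    -- length(NULL) is NULL, so coalesce turns the NULL-input case into false
    SELECT coalesce(length($1) > 0, false);
$$ LANGUAGE sql IMMUTABLE;

SELECT has_infos('abc');  -- true
SELECT has_infos('');     -- false
SELECT has_infos(NULL);   -- false
```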

Comparing txid_current() to xmin

I am trying to make a trigger that updates a row once and only once per
transaction (even if this trigger gets fired multiple times). The general
idea is that for a user we have a version number. When we modify the
user's data, the version number is incremented then set on the object. We
only need to increment the version number once.

I am thinking about doing something like:

update user
set version=version+1
where txid_current() != xmin and user_id = 352395;

So I guess my questions are:

How dirty is this?
Will I run into issues?
Is there a better way of doing this?
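One wrinkle with the sketch above: xmin is of type xid, which has no direct comparison with the bigint returned by txid_current(), and txid_current() also carries a 32-bit epoch that xmin lacks. A commonly seen (and admittedly dirty) workaround is a text cast plus a modulo, reusing the table and id from the post:

```sql
-- "user" is a reserved word and must be quoted
UPDATE "user"
SET version = version + 1
WHERE user_id = 352395
  AND xmin::text::bigint <> txid_current() % 4294967296;
```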


sub query reference error or user error

Hi All,
For the test case below, shouldn't the update statement throw an error,
because the subquery projects a column which does not exist in the table?
For example, in the inner select, the nos1 column exists in t1 and not in t2.
Is this a bug or an OP error?

template1=# SELECT version();

Re: [GENERAL] Exclusion constraints with time expressions

<a href="mailto:hari. ... at gmail dot com">hari. ... at gmail dot com</a>

Can you give an example of where adding an interval based on *hours*
to TIMESTAMP WITH TIME ZONE would give a different value based on
client's time zone and DST rules?

Re: [GENERAL] Hot Standby Not So Hot Anymore

Darren Duncan wrote:

Note that the message level is LOG, not WARNING or ERROR or anything
more severe. It found the end of the valid WAL stream in WAL files it
was provided, and it's telling you how it decided it was at the end.

One thing I don't think you've really described is how you took your
base backup.

Tuning / performance questions

Hello experts! (and others like me).

We have a system that, due to recent events, is getting a lot heavier use.
The application makes extensive use of functions. These functions would
typically run sub-second but can now take several seconds.

I'm wondering what general ways there are to monitor and improve
performance. We look at pgadmin's server status, but that only shows the
function being run. Additionally, is there a good way to 'explain' a
Problem with streaming replication over SSL

I have streaming replication configured over SSL, and
there seems to be a problem with SSL renegotiation.

This is from the primary's log:

2012-11-06 00:13:10.990
ng 1E3/76D64000",2012-11-05 23:55:27 CET,4/0,0,LOG,08P01,"SSL
renegotiation failure",,,,,,,,,"walreceiver"

2012-11-06 00:13:10.998
ng 1E3/76D64000",2012-11-05 23:55:27 CET,4/0,0,LOG,08P01,"SSL error:
unexpected record",,,,,,,,,"walreceiver"

2012-11-06 00:13:10.998

Question about "ident_file" in postgres.conf

Hi, all,

I have a question regarding the "ident_file" configuration entry.

Why does the server refuse to start without "ident_file" specified, yet
never care whether the given "ident_file" is valid or not? In other words,
a misconfigured, non-existent path for "ident_file" still lets the
server start, with only a background message.

This does not make much sense to me. I think the behavior should
be the opposite: if the user didn't specify the "ident_file", the pg server
should ignore it.

Does PostgreSQL have complete functional test cases?


Does anyone know whether PostgreSQL has complete functional test cases? And
where can I find them?

Currently, I only find some test cases in "PG_SOURCE/src/tests", but it
seems that they are pretty simple and do not cover a lot of features of

Thanks a lot!

Exclusion constraints with time expressions


I am using 9.1.6, and I've set up a partitioned table as described in the
manual, with partitions based on a timestamptz column called 'time'. The
exclusion constraints work nicely when I select ranges of times with
literal constants.

Hot Standby Not So Hot Anymore

I had a 9.0.8 hot standby setup, one master, two slaves, working great.
Then, I tried to re-initialize by making a base backup, the way I've done
it many times before, but for some reason I can't get the standby to accept
connections. I copied the postgresql.conf and recovery.conf out of the
way, cleaned the data directory and extracted the backup files, then
replaced the conf files.

Unexpectedly high disk space usage

We upgraded to 9.1 from 8.4 over the summer a few months ago, to new DB
servers with more disk space and memory. Unexpectedly, the DB servers
have steadily increased their disk space usage since. Reported system
load doesn't seem to be affected. It's happening to all our DB servers
running 9.1.

When we reload all pg_dumps from our worst-affected server into an
offline server, the disk space usage is about 26 GB, but the production
database is using 166 GB.

Re: [GENERAL] Pg isolation levels: 3 or 2?

Peter Geoghegan wrote:

Quick estimate of num of rows & table size

Hi all,

I read somewhere that the following query gives a quick estimate of the #
of rows in a table regardless of the table's size (which would matter in a
simple SELECT count(*)?):

SELECT (CASE WHEN reltuples > 0 THEN
             -- scale the last-analyzed reltuples by the table's current size
             pg_relation_size('mytable') * reltuples / (8192 * relpages)
        END)::bigint AS estimated_row_count
FROM pg_class
WHERE oid = 'mytable'::regclass;

If relpages & reltuples are recorded accurately each time VACUUM is run,
wouldn't it be the same to just grab directly the value of reltuples like:

SELECT reltuples FROM pg_class WHERE oid='mytable'::regclass;

In the same manner, a

Difference between varchar and text?

Is there any practical difference between defining a column as a
varchar(n) vs. a varchar vs. a text field?

I've always been under the impression that if I am wanting to index a
varchar column, it is better to set a maximum length. Is this correct?
But more importantly, what's the practical difference between varchar with
no limit and text?

Thanks, and apologies if this is a FAQ...
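For what it's worth, an index works the same on both types; the only behavioral difference of varchar(n) is the length check, as a minimal illustration shows:

```sql
CREATE TABLE demo (
    a varchar(5),  -- rejects strings longer than 5 characters
    b text         -- no length limit; otherwise stored identically
);
CREATE INDEX demo_b_idx ON demo (b);  -- indexing text needs no declared limit

INSERT INTO demo VALUES ('abc', 'abc');    -- ok
INSERT INTO demo VALUES ('toolong', 'x');  -- fails: value too long for type character varying(5)
```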

