35473 Commits

Tom Lane
a4061eeaf3 Make pg_upgrade's test.sh less chatty.
Remove "set -x", and pass "-A trust" to initdb explicitly,
to suppress almost all of the noise this script used to emit
on stderr.

This back-patches commit eb9812f27 into out-of-support branches,
pursuant to newly-established project policy.  The point is to
suppress useless noise on stderr when running check-world.

Discussion: https://postgr.es/m/d0316012-ece7-7b7e-2d36-9c38cb77cb3b@enterprisedb.com
2021-12-12 16:14:25 -05:00
Tom Lane
70c64135d0 Add checks for valid multibyte character length in UtfToLocal, LocalToUtf.
This back-patches commit d9f37e666 into out-of-support branches,
pursuant to newly-established project policy.  The point is to
suppress "uninitialized variable" warnings so that people building
these branches needn't expend brain cells verifying that it's safe
to ignore the warnings.

Discussion: https://postgr.es/m/d0316012-ece7-7b7e-2d36-9c38cb77cb3b@enterprisedb.com
2021-12-12 15:59:00 -05:00
Tom Lane
fed9cf2cb9 Use return instead of exit() in configure
Using exit() requires stdlib.h, which is not included.  Use return
instead.  Also add return type for main().
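
As a hedged illustration of the kind of configure test program this
affects (a sketch, not the actual snippet from configure):

    /* Without <stdlib.h>, calling exit() is an implicit function
     * declaration, which recent clang-based toolchains reject;
     * returning from a properly typed main() sidesteps that. */
    int
    main(void)
    {
        return 0;           /* previously: exit(0); */
    }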

This back-patches commit 1c0cf52b3 into out-of-support branches,
pursuant to a newly-established project policy that we'll try to keep
out-of-support branches buildable on modern platforms for at least
ten major releases back, ensuring people can test pg_dump and psql
compatibility against servers that far back.  With the current
development branch being v15, that works out to keeping 9.2 and up
buildable as of today.

This fix is needed to get through 'configure' when using recent
macOS (and possibly other clang-based toolchains).  It seems to
be sufficient to get through 'check-world', although there are
annoyances such as compiler warnings, which will be dealt with
separately.

Original patch by Peter Eisentraut

Discussion: https://postgr.es/m/d0316012-ece7-7b7e-2d36-9c38cb77cb3b@enterprisedb.com
2021-12-12 14:54:37 -05:00
Tom Lane
8786f783ab Stamp 9.2.24. REL9_2_24 2017-11-06 17:17:39 -05:00
Tom Lane
203b965f27 Last-minute updates for release notes.
Security: CVE-2017-12172, CVE-2017-15098, CVE-2017-15099
2017-11-06 12:02:30 -05:00
Noah Misch
eda780281c start-scripts: switch to $PGUSER before opening $PGLOG.
By default, $PGUSER has permission to unlink $PGLOG.  If $PGUSER
replaces $PGLOG with a symbolic link, the server will corrupt the
link-targeted file by appending log messages.  Since these scripts open
$PGLOG as root, the attack works regardless of target file ownership.

"make install" does not install these scripts anywhere.  Users having
manually installed them in the past should repeat that process to
acquire this fix.  Most script users have $PGLOG writable by root only,
located in $PGDATA.  Just before updating one of these scripts, such
users should rename $PGLOG to $PGLOG.old.  The script will then recreate
$PGLOG with proper ownership.

Reviewed by Peter Eisentraut.  Reported by Antoine Scemama.

Security: CVE-2017-12172
2017-11-06 07:11:13 -08:00
Peter Eisentraut
f6a926757e Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: d2182acc2b014c9348a6ab60bfd1ce2576506338
2017-11-05 17:07:04 -05:00
Tom Lane
50714dd860 Release notes for 10.1, 9.6.6, 9.5.10, 9.4.15, 9.3.20, 9.2.24.
In the v10 branch, also back-patch the effects of 1ff01b390 and c29c57890
on these files, to reduce future maintenance issues.  (I'd do it further
back, except that the 9.X branches differ anyway due to xlog-to-wal
link tag renaming.)
2017-11-05 13:47:57 -05:00
Tom Lane
13406fe23d Doc: update URL for check_postgres.
Reported by Dan Vianello.

Discussion: https://postgr.es/m/e6e12f18f70e46848c058084d42fb651@KSTLMEXGP001.CORP.CHARTERCOM.com
2017-11-01 22:07:14 -04:00
Tom Lane
a4c11c1031 Dept of second thoughts: keep aliasp_item in sync with tlistitem.
Commit d5b760ecb wasn't quite right, on second thought: if the
caller didn't ask for column names then it would happily emit
more Vars than if the caller did ask for column names.  This
is surely not a good idea.  Advance the aliasp_item whether or
not we're preparing a colnames list.
2017-10-27 18:16:25 -04:00
Tom Lane
80e79718d0 Fix crash when columns have been added to the end of a view.
expandRTE() supposed that an RTE_SUBQUERY subquery must have exactly
as many non-junk tlist items as the RTE has column aliases for it.
This was true at the time the code was written, and is still true so
far as parse analysis is concerned --- but when the function is used
during planning, the subquery might have appeared through insertion
of a view that now has more columns than it did when the outer query
was parsed.  This results in a core dump if, for instance, we have
to expand a whole-row Var that references the subquery.

To avoid crashing, we can either stop expanding the RTE when we run
out of aliases, or invent new aliases for the added columns.  While
the latter might be more useful, the former is consistent with what
expandRTE() does for composite-returning functions in the RTE_FUNCTION
case, so it seems like we'd better do it that way.

Per bug #14876 from Samuel Horwitz.  This has been busted since commit
ff1ea2173 allowed views to acquire more columns, so back-patch to all
supported branches.

Discussion: https://postgr.es/m/20171026184035.1471.82810@wrigleys.postgresql.org
2017-10-27 17:10:21 -04:00
Tom Lane
adcfa7bd16 Rethink the dependencies recorded for FieldSelect/FieldStore nodes.
On closer investigation, commits f3ea3e3e8 et al were a few bricks
shy of a load.  What we need is not so much to lock down the result
type of a FieldSelect, as to lock down the existence of the column
it's trying to extract.  Otherwise, we can break it by dropping that
column.  The dependency on the result type is then held indirectly
through the column, and doesn't need to be recorded explicitly.

Out of paranoia, I left in the code to record a dependency on the
result type, but it's used only if we can't identify the pg_class OID
for the column.  That shouldn't ever happen right now, AFAICS, but
it seems possible that in future the input node could be marked as
being of type RECORD rather than some specific composite type.

Likewise for FieldStore.

Like the previous patch, back-patch to all supported branches.

Discussion: https://postgr.es/m/22571.1509064146@sss.pgh.pa.us
2017-10-27 12:18:57 -04:00
Tom Lane
f39fc2b27d Doc: mention that you can't PREPARE TRANSACTION after NOTIFY.
The NOTIFY page said this already, but the PREPARE TRANSACTION page
missed it.

Discussion: https://postgr.es/m/20171024010602.1488.80066@wrigleys.postgresql.org
2017-10-27 10:46:07 -04:00
Andrew Dunstan
d5fb450aab Improve gendef.pl diagnostic on failure to open sym file
There have been numerous buildfarm failures but the diagnostic is
currently silent about the reason for failure to open the file. Let's
see if we can get to the bottom of it.

Backpatch to all live branches.
2017-10-26 10:17:18 -04:00
Tom Lane
caeae886e2 Fix libpq to not require user's home directory to exist.
Some people like to run libpq-using applications in environments where
there's no home directory.  We've broken that scenario before (cf commits
5b4067798 and bd58d9d88), and commit ba005f193 broke it again, by making
it a hard error if we fail to get the home directory name while looking
for ~/.pgpass.  The previous precedent is that if we can't get the home
directory name, we should just silently act as though the file we hoped
to find there doesn't exist.  Rearrange the new code to honor that.
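
In sketch form, the intended behavior (using libpq-internal names such
as pqGetHomeDirectory() and PGPASSFILE as assumptions, not a quote of
the fe-connect.c patch):

    char        homedir[MAXPGPATH];
    char        pgpassfile[MAXPGPATH];
    struct stat stat_buf;

    /* No resolvable home directory: silently act as if ~/.pgpass is absent. */
    if (!pqGetHomeDirectory(homedir, sizeof(homedir)))
        return NULL;

    snprintf(pgpassfile, sizeof(pgpassfile), "%s/%s", homedir, PGPASSFILE);

    /* Treat any stat() failure, not just ENOENT, as "no such file". */
    if (stat(pgpassfile, &stat_buf) != 0)
        return NULL;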

Looking around, the service-file code added by commit 41a4e4595 had the
same disease.  Apparently, that escaped notice because it only runs when
a service name has been specified, which I guess the people who use this
scenario don't do.  Nonetheless, it's wrong too, so fix that case as well.

Add a comment about this policy to pqGetHomeDirectory, in the probably
vain hope of forestalling the same error in future.  And upgrade the
rather miserable commenting in parseServiceInfo, too.

In passing, also back off parseServiceInfo's assumption that only ENOENT
is an ignorable error from stat() when checking a service file.  We would
need to ignore at least ENOTDIR as well (cf 5b4067798), and seeing that
the far-better-tested code for ~/.pgpass treats all stat() failures alike,
I think this code ought to as well.

Per bug #14872 from Dan Watson.  Back-patch the .pgpass change to v10
where ba005f193 came in.  The service-file bugs are far older, so
back-patch the other changes to all supported branches.

Discussion: https://postgr.es/m/20171025200457.1471.34504@wrigleys.postgresql.org
2017-10-25 19:32:25 -04:00
Tom Lane
7e8d84c366 Update time zone data files to tzdata release 2017c.
DST law changes in Fiji, Namibia, Northern Cyprus, Sudan, Tonga,
and Turks & Caicos Islands.  Historical corrections for Alaska, Apia,
Burma, Calcutta, Detroit, Ireland, Namibia, and Pago Pago.
2017-10-23 18:16:04 -04:00
Tom Lane
1317d13013 Sync our copy of the timezone library with IANA release tzcode2017c.
This is a trivial update containing only cosmetic changes.  The point
is just to get back to being synced with an official release of tzcode,
rather than some ad-hoc point in their commit history, which is where
commit 47f849a3c left it.
2017-10-23 17:54:09 -04:00
Tom Lane
900a9fd64a Fix some oversights in expression dependency recording.
find_expr_references() neglected to record a dependency on the result type
of a FieldSelect node, allowing a DROP TYPE to break a view or rule that
contains such an expression.  I think we'd omitted this case intentionally,
reasoning that there would always be a related dependency ensuring that the
DROP would cascade to the view.  But at least with nested field selection
expressions, that's not true, as shown in bug #14867 from Mansur Galiev.
Add the dependency, and for good measure a dependency on the node's exposed
collation.

Likewise add a dependency on the result type of a FieldStore.  I think here
the reasoning was that it'd only appear within an assignment to a field,
and the dependency on the field's column would be enough ... but having
seen this example, I think that's wrong for nested-composites cases.

Looking at nearby code, I notice we're not recording a dependency on the
exposed collation of CoerceViaIO, which seems inconsistent with our choices
for related node types.  Maybe that's OK but I'm feeling suspicious of this
code today, so let's add that; it certainly can't hurt.

This patch does not do anything to protect already-existing views, only
views created after it's installed.  But seeing that the issue has been
there a very long time and nobody noticed till now, that's probably good
enough.

Back-patch to all supported branches.

Discussion: https://postgr.es/m/20171023150118.1477.19174@wrigleys.postgresql.org
2017-10-23 13:57:46 -04:00
Tom Lane
0270ad1f74 Fix typcache's failure to treat ranges as container types.
Like the similar logic for arrays and records, it's necessary to examine
the range's subtype to decide whether the range type can support hashing.
We can omit checking the subtype for btree-defined operations, though,
since range subtypes are required to have those operations.  (Possibly
that simplification for btree cases led us to overlook that it does
not apply for hash cases.)

This is only an issue if the subtype lacks hash support, which is not
true of any built-in range type, but it's easy to demonstrate a problem
with a range type over, eg, money: you can get a "could not identify
a hash function" failure when the planner is misled into thinking that
hash join or aggregation would work.

This was born broken, so back-patch to all supported branches.
2017-10-20 17:12:28 -04:00
Tom Lane
bc639d4885 Doc: fix missing explanation of default object privileges.
The GRANT reference page, which lists the default privileges for new
objects, failed to mention that USAGE is granted by default for data
types and domains.  As a lesser sin, it also did not specify anything
about the initial privileges for sequences, FDWs, foreign servers,
or large objects.  Fix that, and add a comment to acldefault() in the
probably vain hope of getting people to maintain this list in future.

Noted by Laurenz Albe, though I editorialized on the wording a bit.
Back-patch to all supported branches, since they all have this behavior.

Discussion: https://postgr.es/m/1507620895.4152.1.camel@cybertec.at
2017-10-11 16:57:03 -04:00
Tom Lane
525b09adae Fix low-probability loss of NOTIFY messages due to XID wraparound.
Up to now async.c has used TransactionIdIsInProgress() to detect whether
a notify message's source transaction is still running.  However, that
function has a quick-exit path that reports that XIDs before RecentXmin
are no longer running.  If a listening backend is doing nothing but
listening, and not running any queries, there is nothing that will advance
its value of RecentXmin.  Once 2 billion transactions elapse, the
RecentXmin check causes active transactions to be reported as not running.
If they aren't committed yet according to CLOG, async.c decides they
aborted and discards their messages.  The timing for that is a bit tight
but it can happen when multiple backends are sending notifies concurrently.
The net symptom therefore is that a sufficiently-long-surviving
listen-only backend starts to miss some fraction of NOTIFY traffic,
but only under heavy load.

The only function that updates RecentXmin is GetSnapshotData().
A brute-force fix would therefore be to take a snapshot before
processing incoming notify messages.  But that would add cycles,
as well as contention for the ProcArrayLock.  We can be smarter:
having taken the snapshot, let's use that to check for running
XIDs, and not call TransactionIdIsInProgress() at all.  In this
way we reduce the number of ProcArrayLock acquisitions from one
per message to one per notify interrupt; that's the same under
light load but should be a benefit under heavy load.  Light testing
says that this change is a wash performance-wise for normal loads.
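
Roughly, the new flow per notify interrupt (a sketch; the helper names,
e.g. XidInMVCCSnapshot(), and the queue-entry variable qe are
assumptions, not a quote of the async.c patch):

    Snapshot    snapshot = RegisterSnapshot(GetLatestSnapshot());

    /* ... for each queued notify message entry qe ... */
    if (XidInMVCCSnapshot(qe->xid, snapshot))
    {
        /* Source transaction still in progress: stop here, retry later. */
    }
    else if (TransactionIdDidCommit(qe->xid))
    {
        /* Committed: deliver the notification to the listening client. */
    }
    else
    {
        /* Aborted: discard the message. */
    }

    UnregisterSnapshot(snapshot);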

I looked around for other callers of TransactionIdIsInProgress()
that might be at similar risk, and didn't find any; all of them
are inside transactions that presumably have already taken a
snapshot.

Problem report and diagnosis by Marko Tiikkaja, patch by me.
Back-patch to all supported branches, since it's been like this
since 9.0.

Discussion: https://postgr.es/m/20170926182935.14128.65278@wrigleys.postgresql.org
2017-10-11 14:28:34 -04:00
Tom Lane
6d2ef1cb99 Fix access-off-end-of-array in clog.c.
Sloppy loop coding in set_status_by_pages() resulted in fetching one array
element more than it should from the subxids[] array.  The odds of this
resulting in SIGSEGV are pretty small, but we've certainly seen that happen
with similar mistakes elsewhere.  While at it, we can get rid of an extra
TransactionIdToPage() calculation per loop.

Per report from David Binderman.  Back-patch to all supported branches,
since this code is quite old.

Discussion: https://postgr.es/m/HE1PR0802MB2331CBA919CBFFF0C465EB429C710@HE1PR0802MB2331.eurprd08.prod.outlook.com
2017-10-06 12:20:13 -04:00
Alvaro Herrera
c9c37e335e Fix coding rules violations in walreceiver.c
1. Since commit b1a9bad9e744 we had pstrdup() inside a
spinlock-protected critical section; reported by Andreas Seltenreich.
Turn those into strlcpy() to stack-allocated variables instead.
Backpatch to 9.6.

2. Since commit 9ed551e0a4fd we had a pfree() uselessly inside a
spinlock-protected critical section.  Tom Lane noticed in code review.
Move down.  Backpatch to 9.6.

3. Since commit 64233902d22b we had GetCurrentTimestamp() (a kernel
call) inside a spinlock-protected critical section.  Tom Lane noticed in
code review.  Move it up.  Backpatch to 9.2.

4. Since commit 1bb2558046cc we did elog(PANIC) while holding spinlock.
Tom Lane noticed in code review.  Release spinlock before dying.
Backpatch to 9.2.
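
Taken together, the discipline these fixes enforce looks roughly like
this (a hedged sketch around walreceiver's shared WalRcvData; the local
variables are stand-ins, not the actual patch):

    char        conninfo[MAXCONNINFO];
    TimestampTz now;
    bool        bad_state;

    now = GetCurrentTimestamp();        /* kernel call: do it before locking */

    SpinLockAcquire(&walrcv->mutex);
    /* Only cheap, no-fail work while the spinlock is held:
     * no palloc/pfree, no elog, no kernel calls. */
    strlcpy(conninfo, walrcv->conninfo, MAXCONNINFO);
    walrcv->lastMsgReceiptTime = now;
    bad_state = (walrcv->walRcvState != WALRCV_STREAMING);
    SpinLockRelease(&walrcv->mutex);

    if (bad_state)
        elog(PANIC, "unexpected walreceiver state");    /* only after release */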

Discussion: https://postgr.es/m/87h8vhtgj2.fsf@ansel.ydns.eu
2017-10-03 14:58:25 +02:00
Tom Lane
72d4fd08ea Fix behavior when converting a float infinity to numeric.
float8_numeric() and float4_numeric() failed to consider the possibility
that the input is an IEEE infinity.  The results depended on the
platform-specific behavior of sprintf(): on most platforms you'd get
something like

ERROR:  invalid input syntax for type numeric: "inf"

but at least on Windows it's possible for the conversion to succeed and
deliver a finite value (typically 1), due to a nonstandard output format
from sprintf and lack of syntax error checking in these functions.

Since our numeric type lacks the concept of infinity, a suitable conversion
is impossible; the best thing to do is throw an explicit error before
letting sprintf do its thing.

While at it, let's use snprintf not sprintf.  Overrunning the buffer
should be impossible if sprintf does what it's supposed to, but this
is cheap insurance against a stack smash if it doesn't.
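
A self-contained sketch of the guard being described (the real code
reports the error via ereport(); this stand-alone version just shows
the shape):

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double  val = INFINITY;
        char    buf[64];

        /* numeric has no infinity, so reject it before snprintf gets a say */
        if (isinf(val))
        {
            fprintf(stderr, "cannot convert infinity to type numeric\n");
            return 1;
        }
        snprintf(buf, sizeof(buf), "%.*g", 17, val);
        printf("would feed \"%s\" to the numeric input parser\n", buf);
        return 0;
    }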

Problem reported by Taiki Kondo.  Patch by me based on fix suggestion
from KaiGai Kohei.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/12A9442FBAE80D4E8953883E0B84E088C8C7A2@BPXM01GP.gisp.nec.co.jp
2017-09-27 17:05:54 -04:00
Noah Misch
6525a3a709 Don't recommend "DROP SCHEMA information_schema CASCADE".
It drops objects outside information_schema that depend on objects
inside information_schema.  For example, it will drop a user-defined
view if the view query refers to information_schema.

Discussion: https://postgr.es/m/20170831025345.GE3963697@rfd.leadboat.com
2017-09-26 22:39:47 -07:00
Tom Lane
9d0d0fe57c Improve wording of error message added in commit 714805010.
Per suggestions from Peter Eisentraut and David Johnston.
Back-patch, like the previous commit.

Discussion: https://postgr.es/m/E1dv9jI-0006oT-Fn@gemulon.postgresql.org
2017-09-26 15:25:57 -04:00
Peter Eisentraut
2eb84e54a2 Fix saving and restoring umask
In two cases, we set a different umask for some piece of code and
restore it afterwards.  But if the contained code errors out, the umask
is not restored.  So add TRY/CATCH blocks to fix that.
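
In sketch form, the pattern being added (using the backend's PG_TRY
machinery; the particular umask value and the guarded body are
illustrative):

    mode_t  oldumask = umask(S_IRWXG | S_IRWXO);    /* some restrictive mask */

    PG_TRY();
    {
        /* ... code that must run under the restrictive umask ... */
    }
    PG_CATCH();
    {
        umask(oldumask);        /* restore even when the body errors out ... */
        PG_RE_THROW();
    }
    PG_END_TRY();

    umask(oldumask);            /* ... and on the normal path */
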
2017-09-23 10:14:30 -04:00
Tom Lane
a07105afac Sync our copy of the timezone library with IANA tzcode master.
This patch absorbs a few unreleased fixes in the IANA code.
It corresponds to commit 2d8b944c1cec0808ac4f7a9ee1a463c28f9cd00a
in https://github.com/eggert/tz.  Non-cosmetic changes include:

TZDEFRULESTRING is updated to match current US DST practice,
rather than what it was over ten years ago.  This only matters
for interpretation of POSIX-style zone names (e.g., "EST5EDT"),
and only if the timezone database doesn't include either an exact
match for the zone name or a "posixrules" entry.  The latter
should not be true in any current Postgres installation, but
this could possibly matter when using --with-system-tzdata.

Get rid of a nonportable use of "++var" on a bool var.
This is part of a larger fix that eliminates some vestigial
support for consecutive leap seconds, and adds checks to
the "zic" compiler that the data files do not specify that.

Remove a couple of ancient compatibility hacks.  The IANA
crew think these are obsolete, and I tend to agree.  But
perhaps our buildfarm will think different.

Back-patch to all supported branches, in line with our policy
that all branches should be using current IANA code.  Before v10,
this includes application of current pgindent rules, to avoid
whitespace problems in future back-patches.

Discussion: https://postgr.es/m/E1dsWhf-0000pT-F9@gemulon.postgresql.org
2017-09-22 00:04:21 -04:00
Tom Lane
e56facd8b3 Give a better error for duplicate entries in VACUUM/ANALYZE column list.
Previously, the code didn't think about this case and would just try to
analyze such a column twice.  That would fail at the point of inserting
the second version of the pg_statistic row, with obscure error messages
like "duplicate key value violates unique constraint" or "tuple already
updated by self", depending on context and PG version.  We could allow
the case by ignoring duplicate column specifications, but it seems better
to reject it explicitly.

The bogus error messages seem like arguably a bug, so back-patch to
all supported versions.

Nathan Bossart, per a report from Michael Paquier, and whacked
around a bit by me.

Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com
2017-09-21 18:13:11 -04:00
Tom Lane
4cd6cd21d3 Fix possible dangling pointer dereference in trigger.c.
AfterTriggerEndQuery correctly notes that the query_stack could get
repalloc'd during a trigger firing, but it nonetheless passes the address
of a query_stack entry to afterTriggerInvokeEvents, so that if such a
repalloc occurs, afterTriggerInvokeEvents is already working with an
obsolete dangling pointer while it scans the rest of the events.  Oops.
The only code at risk is its "delete_ok" cleanup code, so we can
prevent unsafe behavior by passing delete_ok = false instead of true.

However, that could have a significant performance penalty, because the
point of passing delete_ok = true is to not have to re-scan possibly
a large number of dead trigger events on the next time through the loop.
There's more than one way to skin that cat, though.  What we can do is
delete all the "chunks" in the event list except the last one, since
we know all events in them must be dead.  Deleting the chunks is work
we'd have had to do later in AfterTriggerEndQuery anyway, and it ends
up saving rescanning of just about the same events we'd have gotten
rid of with delete_ok = true.

In v10 and HEAD, we also have to be careful to mop up any per-table
after_trig_events pointers that would become dangling.  This is slightly
annoying, but I don't think that normal use-cases will traverse this code
path often enough for it to be a performance problem.

It's pretty hard to hit this in practice because of the unlikelihood
of the query_stack getting resized at just the wrong time.  Nonetheless,
it's definitely a live bug of ancient standing, so back-patch to all
supported branches.

Discussion: https://postgr.es/m/2891.1505419542@sss.pgh.pa.us
2017-09-17 14:50:01 -04:00
Tom Lane
a64a0bfaae Fix macro-redefinition warning on MSVC.
In commit 9d6b160d7, I tweaked pg_config.h.win32 to use
"#define HAVE_LONG_LONG_INT_64 1" rather than defining it as empty,
for consistency with what happens in an autoconf'd build.
But Solution.pm injects another definition of that macro into
ecpg_config.h, leading to justifiable (though harmless) compiler whining.
Make that one consistent too.  Back-patch, like the previous patch.
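
For illustration, the kind of redefinition that drew the warning
(simplified, not the actual header contents):

    #define HAVE_LONG_LONG_INT_64         /* one header: defined as empty */
    #define HAVE_LONG_LONG_INT_64 1       /* another: redefined as 1 -> MSVC
                                           * warns about macro redefinition */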

Discussion: https://postgr.es/m/CAEepm=1dWsXROuSbRg8PbKLh0S=8Ou-V8sr05DxmJOF5chBxqQ@mail.gmail.com
2017-09-03 11:01:08 -04:00
Peter Eisentraut
c109785e38 doc: Fix typos and other minor issues
Author: Alexander Lakhin <exclusion@gmail.com>
2017-09-01 23:23:33 -04:00
Tom Lane
f60a236bab Make [U]INT64CONST safe for use in #if conditions.
Instead of using a cast to force the constant to be the right width,
assume we can plaster on an L, UL, LL, or ULL suffix as appropriate.
The old approach to this is very hoary, dating from before we were
willing to require compilers to have working int64 types.

This fix makes the PG_INT64_MIN, PG_INT64_MAX, and PG_UINT64_MAX
constants safe to use in preprocessor conditions, where a cast
doesn't work.  Other symbolic constants that might be defined using
[U]INT64CONST are likewise safer than before.
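
A simplified sketch of the before and after (not the real c.h
definitions):

    /* Old style: force the width with a cast -- fine in C expressions,
     * but the preprocessor cannot evaluate casts, so it breaks in #if. */
    /* #define INT64CONST(x)  ((int64) x) */

    /* New style: paste on the appropriate suffix instead. */
    #define INT64CONST(x)   (x##LL)
    #define PG_INT64_MAX    INT64CONST(0x7FFFFFFFFFFFFFFF)

    #if PG_INT64_MAX > 0            /* now legal in a preprocessor condition */
    /* ... */
    #endif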

Also fix the SIZE_MAX macro to be similarly safe, if we are forced
to provide a definition for that.  The test added in commit 2e70d6b5e
happens to do what we want even with the hack "(size_t) -1" definition,
but we could easily get burnt on other tests in future.

Back-patch to all supported branches, like the previous commits.

Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us
2017-09-01 15:14:18 -04:00
Tom Lane
0bfcda9904 Ensure SIZE_MAX can be used throughout our code.
Pre-C99 platforms may lack <stdint.h> and thereby SIZE_MAX.  We have
a couple of places using the hack "(size_t) -1" as a fallback, but
it wasn't universally available; which means the code added in commit
2e70d6b5e fails to compile everywhere.  Move that hack to c.h so that
we can rely on having SIZE_MAX everywhere.
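
In outline, the fallback this adds to c.h (a sketch; the HAVE_STDINT_H
configure symbol is shown here as an assumption):

    #ifdef HAVE_STDINT_H
    #include <stdint.h>              /* defines SIZE_MAX on C99 platforms */
    #endif

    #ifndef SIZE_MAX
    #define SIZE_MAX ((size_t) -1)   /* pre-C99 fallback, the old local hack */
    #endif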

Per discussion, it'd be a good idea to make the macro's value safe
for use in #if-tests, but that will take a bit more work.  This is
just a quick expedient to get the buildfarm green again.

Back-patch to all supported branches, like the previous commit.

Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us
2017-09-01 13:52:54 -04:00
Tom Lane
2a82170a56 Doc: document libpq's restriction to INT_MAX rows in a PGresult.
As long as PQntuples, PQgetvalue, etc, use "int" for row numbers, we're
pretty much stuck with this limitation.  The documentation formerly stated
that the result of PQntuples "might overflow on 32-bit operating systems",
which is just nonsense: that's not where the overflow would happen, and
if you did reach an overflow it would not be on a 32-bit machine, because
you'd have OOM'd long since.

Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com
2017-08-29 15:38:46 -04:00
Tom Lane
a07058a6d4 Teach libpq to detect integer overflow in the row count of a PGresult.
Adding more than 1 billion rows to a PGresult would overflow its ntups and
tupArrSize fields, leading to client crashes.  It'd be desirable to use
wider fields on 64-bit machines, but because all of libpq's external APIs
use plain "int" for row counters, that's going to be hard to accomplish
without an ABI break.  Given the lack of complaints so far, and the general
pain that would be involved in using such huge PGresults, let's settle for
just preventing the overflow and reporting a useful error message if it
does happen.  Also, for a couple more lines of code we can increase the
threshold of trouble from INT_MAX/2 to INT_MAX rows.

To do that, refactor pqAddTuple() to allow returning an error message that
replaces the default assumption that it failed because of out-of-memory.
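
The shape of the guard, roughly (a sketch, not libpq's actual
pqAddTuple(); the message text and growth policy are illustrative):

    /* Refuse to grow past INT_MAX rows, and say why, rather than
     * reporting it as an out-of-memory failure. */
    if (res->ntups >= INT_MAX)
    {
        *errmsgp = "PGresult cannot support more than INT_MAX tuples";
        return false;
    }
    if (res->ntups >= res->tupArrSize)
    {
        /* grow geometrically, clamping the new size at INT_MAX */
        int     newSize;

        if (res->tupArrSize >= INT_MAX / 2)
            newSize = INT_MAX;
        else if (res->tupArrSize > 0)
            newSize = res->tupArrSize * 2;
        else
            newSize = 128;
        /* ... realloc res->tuples to newSize entries ... */
    }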

Along the way, fix PQsetvalue() so that it reports all failures via
pqInternalNotice().  It already did so in the case of bad field number,
but neglected to report anything for other error causes.

Because of the potential for crashes, this seems like a back-patchable
bug fix, despite the lack of field reports.

Michael Paquier, per a complaint from Igor Korot.

Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com
2017-08-29 15:18:01 -04:00
Tom Lane
d4ab7808bc Improve docs about numeric formatting patterns (to_char/to_number).
The explanation about "0" versus "9" format characters was confusing
and arguably wrong; the discussion of sign handling wasn't very good
either.  Notably, while it's accurate to say that "FM" strips leading
zeroes in date/time values, what it really does with numeric values
is to strip *trailing* zeroes, and then only if you wrote "9" rather
than "0".  Per gripes from Erwin Brandstetter.

Discussion: https://postgr.es/m/CAGHENJ7jgRbTn6nf48xNZ=FHgL2WQ4X8mYsUAU57f-vq8PubEw@mail.gmail.com
Discussion: https://postgr.es/m/CAGHENJ45ymd=GOCu1vwV9u7GmCR80_5tW0fP9C_gJKbruGMHvQ@mail.gmail.com
2017-08-29 09:34:21 -04:00
Tom Lane
2f4ffae5ad Stamp 9.2.23. REL9_2_23 2017-08-28 17:30:10 -04:00
Peter Eisentraut
f493532724 Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 93d2eb7f662bb15429730cecaaa4280298a4c50b
2017-08-28 10:07:56 -04:00
Tom Lane
bf0dcdb2eb Release notes for 9.6.5, 9.5.9, 9.4.14, 9.3.19, 9.2.23. 2017-08-27 17:35:04 -04:00
Andres Freund
8cda1fadd2 Backpatch introduction of TupleDescAttr(tupdesc, i).
2cd70845240 / c6293249d change the way individual attributes in a
TupleDesc are stored / accessed.  To reduce the effort of making
extensions compatible with postgresql 11, and to ease future
backpatching, backpatch introduction of TupleDescAttr() to all
releases.  Do not backpatch change in storage, as that'd be a breaking
change for existing and working extensions.
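
For extension code, the intended calling style looks like this (a
sketch, not code from the commit):

    int     i;

    for (i = 0; i < tupdesc->natts; i++)
    {
        /* Same source works whether attributes are stored as a pointer
         * array (v10 and earlier) or inline (v11 and later). */
        Form_pg_attribute attr = TupleDescAttr(tupdesc, i);

        if (attr->attisdropped)
            continue;
        elog(DEBUG1, "column %d is %s", i + 1, NameStr(attr->attname));
    }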

Author: Andres Freund
Discussion: https://postgr.es/m/20170820181723.tdswdinzptbcwhrr@alap3.anarazel.de
Backpatch: 9.2-
2017-08-22 07:47:52 -07:00
Tom Lane
f7e4783ddb Further tweaks to compiler flags for PL/Perl on Windows.
It now emerges that we can only rely on Perl to tell us we must use
-D_USE_32BIT_TIME_T if it's Perl 5.13.4 or later.  For older versions,
revert to our previous practice of assuming we need that symbol in
all 32-bit Windows builds.  This is not ideal, but inquiring into
which compiler version Perl was built with seems far too fragile.
In any case, we had not previously had complaints about these old
Perl versions, so let's assume this is Good Enough.  (It's still
better than the situation ante commit 5a5c2feca, in that at least
the effects are confined to PL/Perl rather than the whole PG build.)

Back-patch to all supported versions, like 5a5c2feca and predecessors.

Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
2017-08-17 13:15:46 -04:00
Michael Meskes
60b135c826 Changed ecpg parser to allow RETURNING clauses without attached C variables. 2017-08-16 13:30:20 +02:00
Peter Eisentraut
98e6784aa5 Include foreign tables in information_schema.table_privileges
This appears to have been an omission in the original commit
0d692a0dc9f.  All related information_schema views already include
foreign tables.

Reported-by: Nicolas Thauvin <nicolas.thauvin@dalibo.com>
2017-08-15 19:33:04 -04:00
Tom Lane
8ae41ceae8 Handle elog(FATAL) during ROLLBACK more robustly.
Stress testing by Andreas Seltenreich disclosed longstanding problems that
occur if a FATAL exit (e.g. due to receipt of SIGTERM) occurs while we are
trying to execute a ROLLBACK of an already-failed transaction.  In such a
case, xact.c is in TBLOCK_ABORT state, so that AbortOutOfAnyTransaction
would skip AbortTransaction and go straight to CleanupTransaction.  This
led to an assert failure in an assert-enabled build (due to the ROLLBACK's
portal still having a cleanup hook) or without assertions, to a FATAL exit
complaining about "cannot drop active portal".  The latter's not
disastrous, perhaps, but it's messy enough to want to improve it.

We don't really want to run all of AbortTransaction in this code path.
The minimum required to clean up the open portal safely is to do
AtAbort_Memory and AtAbort_Portals.  It seems like a good idea to
do AtAbort_Memory unconditionally, to be entirely sure that we are
starting with a safe CurrentMemoryContext.  That means that if the
main loop in AbortOutOfAnyTransaction does nothing, we need an extra
step at the bottom to restore CurrentMemoryContext = TopMemoryContext,
which I chose to do by invoking AtCleanup_Memory.  This'll result in
calling AtCleanup_Memory twice in many of the paths through this function,
but that seems harmless and reasonably inexpensive.

The original motivation for the assertion in AtCleanup_Portals was that
we wanted to be sure that any user-defined code executed as a consequence
of the cleanup hook runs during AbortTransaction not CleanupTransaction.
That still seems like a valid concern, and now that we've seen one case
of the assertion firing --- which means that exactly that would have
happened in a production build --- let's replace the Assert with a runtime
check.  If we see the cleanup hook still set, we'll emit a WARNING and
just drop the hook unexecuted.

This has been like this a long time, so back-patch to all supported
branches.

Discussion: https://postgr.es/m/877ey7bmun.fsf@ansel.ydns.eu
2017-08-14 15:43:20 -04:00
Tom Lane
e3335ec0b6 Absorb -D_USE_32BIT_TIME_T switch from Perl, if relevant.
Commit 3c163a7fc's original choice to ignore all #define symbols whose
names begin with underscore turns out to be too simplistic.  On Windows,
some Perl installations are built with -D_USE_32BIT_TIME_T, and we must
absorb that or we get the wrong result for sizeof(PerlInterpreter).

This effectively re-reverts commit ef58b87df, which injected that symbol
in a hacky way, making it apply to all of Postgres not just PL/Perl.
More significantly, it did so on *all* 32-bit Windows builds, even when
the Perl build to be used did not select this option; so that it fails
to work properly with some newer Perl builds.

By making this change, we would be introducing an ABI break in 32-bit
Windows builds; but fortunately we have not used type time_t in any
exported Postgres APIs in a long time.  So it should be OK, both for
PL/Perl itself and for third-party extensions, if an extension library
is built with a different _USE_32BIT_TIME_T setting than the core code.

Patch by me, based on research by Ashutosh Sharma and Robert Haas.
Back-patch to all supported branches, as commit 3c163a7fc was.

Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com
2017-08-14 11:48:59 -04:00
Tom Lane
5069017fec Remove AtEOXact_CatCache().
The sole useful effect of this function, to check that no catcache
entries have positive refcounts at transaction end, has really been
obsolete since we introduced ResourceOwners in PG 8.1.  We reduced the
checks to assertions years ago, so that the function was a complete
no-op in production builds.  There have been previous discussions about
removing it entirely, but consensus up to now was that it had some small
value as a cross-check for bugs in the ResourceOwner logic.

However, it now emerges that it's possible to trigger these assertions
if you hit an assert-enabled backend with SIGTERM during a call to
SearchCatCacheList, because that function temporarily increases the
refcounts of entries it's intending to add to a catcache list construct.
In a normal ERROR scenario, the extra refcounts are cleaned up by
SearchCatCacheList's PG_CATCH block; but in a FATAL exit we do a
transaction abort and exit without ever executing PG_CATCH handlers.

There's a case to be made that this is a generic hazard and we should
consider restructuring elog(FATAL) handling so that pending PG_CATCH
handlers do get run.  That's pretty scary though: it could easily create
more problems than it solves.  Preliminary stress testing by Andreas
Seltenreich suggests that there are not many live problems of this ilk,
so we rejected that idea.

There are more-localized ways to fix the problem; the most principled
one would be to use PG_ENSURE_ERROR_CLEANUP instead of plain PG_TRY.
But adding cycles to SearchCatCacheList isn't very appealing.  We could
also weaken the assertions in AtEOXact_CatCache in some more or less
ad-hoc way, but that just makes its raison d'etre even less compelling.
In the end, the most reasonable solution seems to be to just remove
AtEOXact_CatCache altogether, on the grounds that it's not worth trying
to fix it.  It hasn't found any bugs for us in many years.

Per report from Jeevan Chalke.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAM2+6=VEE30YtRQCZX7_sCFsEpoUkFBV1gZazL70fqLn8rcvBA@mail.gmail.com
2017-08-13 16:15:14 -04:00
Tom Lane
4e704aac18 Fix handling of container types in find_composite_type_dependencies.
find_composite_type_dependencies correctly found columns that are of
the specified type, and columns that are of arrays of that type, but
not columns that are domains or ranges over the given type, its array
type, etc.  The most general way to handle this seems to be to assume
that any type that is directly dependent on the specified type can be
treated as a container type, and processed recursively (allowing us
to handle nested cases such as ranges over domains over arrays ...).
Since a type's array type already has such a dependency, we can drop
the existing special case for the array type.

The very similar logic in get_rels_with_domain was likewise a few
bricks shy of a load, as it supposed that a directly dependent type
could *only* be a sub-domain.  This is already wrong for ranges over
domains, and it'll someday be wrong for arrays over domains.

Add test cases illustrating the problems, and back-patch to all
supported branches.

Discussion: https://postgr.es/m/15268.1502309024@sss.pgh.pa.us
2017-08-09 17:03:10 -04:00
Tom Lane
56fc4f7c9c Stamp 9.2.22. REL9_2_22 2017-08-07 17:19:50 -04:00
Peter Eisentraut
5fa4515a6a Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 6535410ecbfd22426b13c2ddb1302145d66c1b4d
2017-08-07 13:52:45 -04:00