<!--
$PostgreSQL: pgsql/doc/src/sgml/backup.sgml,v 2.39 2004/04/22 07:02:35 neilc Exp $
-->

<chapter id="backup">

<title>Backup and Restore</title>

<indexterm zone="backup"><primary>backup</></>
<para>
As with everything that contains valuable data, <productname>PostgreSQL</>
databases should be backed up regularly. While the procedure is
essentially simple, it is important to have a basic understanding of
the underlying techniques and assumptions.
</para>
<para>
There are two fundamentally different approaches to backing up
<productname>PostgreSQL</> data:
<itemizedlist>
<listitem><para><acronym>SQL</> dump</para></listitem>
<listitem><para>File system level backup</para></listitem>
</itemizedlist>
</para>
<sect1 id="backup-dump">
<title><acronym>SQL</> Dump</title>
<para>
The idea behind the SQL-dump method is to generate a text file with SQL
commands that, when fed back to the server, will recreate the
database in the same state as it was at the time of the dump.
<productname>PostgreSQL</> provides the utility program
<xref linkend="app-pgdump"> for this purpose. The basic usage of this
command is:
<synopsis>
pg_dump <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">outfile</replaceable>
</synopsis>
As you see, <application>pg_dump</> writes its results to the
standard output. We will see below how this can be useful.
</para>
<para>
<application>pg_dump</> is a regular <productname>PostgreSQL</>
client application (albeit a particularly clever one). This means
that you can perform this backup procedure from any remote host that has
access to the database. But remember that <application>pg_dump</>
does not operate with special permissions. In particular, you must
have read access to all tables that you want to back up, so in
practice you almost always have to be a database superuser.
</para>
<para>
To specify which database server <application>pg_dump</> should
contact, use the command line options <option>-h
<replaceable>host</></> and <option>-p <replaceable>port</></>. The
default host is the local host or whatever your
<envar>PGHOST</envar> environment variable specifies. Similarly,
the default port is indicated by the <envar>PGPORT</envar>
environment variable or, failing that, by the compiled-in default.
(Conveniently, the server will normally have the same compiled-in
default.)
</para>
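<para>
For example, to dump a database over the network from a server
reachable at the (hypothetical) host <literal>db.example.com</> on
the default port:
<programlisting>
pg_dump -h db.example.com -p 5432 mydb > mydb.sql
</programlisting>
(<literal>mydb</> stands in for your database name.)
</para>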
<para>
Like any other <productname>PostgreSQL</> client application,
<application>pg_dump</> will by default connect with the database
user name that is equal to the current operating system user name. To override
this, either specify the <option>-U</option> option or set the
environment variable <envar>PGUSER</envar>. Remember that
<application>pg_dump</> connections are subject to the normal
client authentication mechanisms (which are described in <xref
linkend="client-authentication">).
</para>
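<para>
For instance, to connect as the database user <literal>postgres</>
(a common superuser name, though your installation may use a
different one), either of these forms can be used:
<programlisting>
pg_dump -U postgres mydb > mydb.sql
PGUSER=postgres pg_dump mydb > mydb.sql
</programlisting>
</para>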
<para>
Dumps created by <application>pg_dump</> are internally consistent,
that is, updates to the database while <application>pg_dump</> is
running will not be in the dump. <application>pg_dump</> does not
block other operations on the database while it is working.
(Exceptions are those operations that need to operate with an
exclusive lock, such as <command>VACUUM FULL</command>.)
</para>
<important>
<para>
When your database schema relies on OIDs (for instance as foreign
keys) you must instruct <application>pg_dump</> to dump the OIDs
as well. To do this, use the <option>-o</option> command line
option. <quote>Large objects</> are not dumped by default,
either. See <xref linkend="app-pgdump">'s reference page if you
use large objects.
</para>
</important>
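<para>
For example, a dump that preserves OIDs could be taken like this
(<literal>mydb</> is a placeholder database name):
<programlisting>
pg_dump -o mydb > mydb.sql
</programlisting>
</para>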
<sect2 id="backup-dump-restore">
<title>Restoring the dump</title>
<para>
The text files created by <application>pg_dump</> are intended to
be read in by the <application>psql</application> program. The
general command form to restore a dump is
<synopsis>
psql <replaceable class="parameter">dbname</replaceable> < <replaceable class="parameter">infile</replaceable>
</synopsis>
where <replaceable class="parameter">infile</replaceable> is what
you used as <replaceable class="parameter">outfile</replaceable>
for the <application>pg_dump</> command. The database <replaceable
class="parameter">dbname</replaceable> will not be created by this
command; you must create it yourself from <literal>template0</> before executing
<application>psql</> (e.g., with <literal>createdb -T template0
<replaceable class="parameter">dbname</></literal>).
<application>psql</> supports options similar to those of <application>pg_dump</>
for controlling the database server location and the user name. See
its reference page for more information.
</para>
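<para>
Putting these steps together, a complete restore from scratch might
look like this, assuming the dump was written to the (hypothetical)
file <literal>mydb.sql</>:
<programlisting>
createdb -T template0 mydb
psql mydb < mydb.sql
</programlisting>
</para>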
<para>
If the objects in the original database were owned by different
users, then the dump will instruct <application>psql</> to connect
as each affected user in turn and then create the relevant
objects. This way the original ownership is preserved. This also
means, however, that all these users must already exist, and
furthermore that you must be allowed to connect as each of them.
It might therefore be necessary to temporarily relax the client
authentication settings.
</para>
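<para>
One way to relax the settings temporarily is to allow password-less
local connections in <filename>pg_hba.conf</> while the restore runs,
and to restore the original settings afterwards. A minimal sketch,
assuming the restore is done over local Unix-socket connections:
<programlisting>
# TYPE  DATABASE  USER  METHOD
local   all       all   trust
</programlisting>
</para>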
<para>
Once restored, it is wise to run <xref linkend="sql-analyze"
endterm="sql-analyze-title"> on each database so the optimizer has
useful statistics. An easy way to do this for all databases at once
is to run <command>vacuumdb -a -z</>, which performs a
<command>VACUUM ANALYZE</> on every database in the cluster.
</para>
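<para>
To analyze a single database manually, you can instead run the
command through <application>psql</>, for example:
<programlisting>
psql -d mydb -c "VACUUM ANALYZE;"
</programlisting>
(<literal>mydb</> is a placeholder database name.)
</para>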
<para>
The ability of <application>pg_dump</> and <application>psql</> to
write to or read from pipes makes it possible to dump a database
directly from one server to another; for example:
<programlisting>
pg_dump -h <replaceable>host1</> <replaceable>dbname</> | psql -h <replaceable>host2</> <replaceable>dbname</>
</programlisting>
</para>
<important>
<para>
The dumps produced by <application>pg_dump</> are relative to
<literal>template0</>. This means that any languages, procedures,
etc. added to <literal>template1</> will also be dumped by
<application>pg_dump</>. As a result, when restoring, if you are
using a customized <literal>template1</>, you must create the
empty database from <literal>template0</>, as in the example
above.
</para>
</important>
<para>
For advice on how to load large amounts of data into
<productname>PostgreSQL</productname> efficiently, refer to <xref
linkend="populate">.
</para>

</sect2>
<sect2 id="backup-dump-all">
<title>Using <application>pg_dumpall</></title>

<para>
The above mechanism is cumbersome and inappropriate when backing
up an entire database cluster. For this reason the <xref
linkend="app-pg-dumpall"> program is provided.
<application>pg_dumpall</> backs up each database in a given
cluster, and also preserves cluster-wide data such as users and
groups. The basic usage of this command is:
<synopsis>
pg_dumpall > <replaceable>outfile</>
</synopsis>
The resulting dump can be restored with <application>psql</>:
<synopsis>
psql template1 < <replaceable class="parameter">infile</replaceable>
</synopsis>
(Actually, you can specify any existing database name to start from,
but if you are reloading in an empty cluster then <literal>template1</>
is the only available choice.) It is always necessary to have
database superuser access when restoring a <application>pg_dumpall</>
dump, as that is required to restore the user and group information.
</para>

</sect2>
<sect2 id="backup-dump-large">
<title>Large Databases</title>

<para>
Since <productname>PostgreSQL</productname> allows tables larger
than the maximum file size on your system, it can be problematic
to dump such a table to a file, since the resulting file will likely
exceed that limit. Because <application>pg_dump</> can write to the
standard output, you can use standard Unix tools to work around
this possible problem.
</para>
<formalpara>
<title>Use compressed dumps.</title>
<para>
You can use your favorite compression program, for example
<application>gzip</application>:

<programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | gzip > <replaceable class="parameter">filename</replaceable>.gz
</programlisting>

Reload with

<programlisting>
createdb <replaceable class="parameter">dbname</replaceable>
gunzip -c <replaceable class="parameter">filename</replaceable>.gz | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>

or

<programlisting>
cat <replaceable class="parameter">filename</replaceable>.gz | gunzip | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
</para>
</formalpara>
<formalpara>
<title>Use <command>split</>.</title>
<para>
The <command>split</command> command
allows you to split the output into pieces that are
acceptable in size to the underlying file system. For example, to
make chunks of 1 megabyte:

<programlisting>
pg_dump <replaceable class="parameter">dbname</replaceable> | split -b 1m - <replaceable class="parameter">filename</replaceable>
</programlisting>

Reload with

<programlisting>
createdb <replaceable class="parameter">dbname</replaceable>
cat <replaceable class="parameter">filename</replaceable>* | psql <replaceable class="parameter">dbname</replaceable>
</programlisting>
</para>
</formalpara>
<formalpara>
<title>Use the custom dump format.</title>
<para>
If <productname>PostgreSQL</productname> was built on a system with the <application>zlib</> compression library
installed, the custom dump format will compress data as it writes it
to the output file. For large databases, this will produce similar dump
sizes to using <command>gzip</command>, but has the added advantage that the tables can be
restored selectively. The following command dumps a database using the
custom dump format:

<programlisting>
pg_dump -Fc <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">filename</replaceable>
</programlisting>

See the <xref linkend="app-pgdump"> and <xref
linkend="app-pgrestore"> reference pages for details.
</para>
</formalpara>

</sect2>
<sect2 id="backup-dump-caveats">
<title>Caveats</title>
<para>
For reasons of backward compatibility, <application>pg_dump</>
does not dump large objects by default.<indexterm><primary>large
object</primary><secondary>backup</secondary></indexterm> To dump
large objects you must use either the custom or the tar output
format, and use the <option>-b</> option in
<application>pg_dump</>. See the reference pages for details. The
directory <filename>contrib/pg_dumplo</> of the
<productname>PostgreSQL</> source tree also contains a program
that can dump large objects.
</para>
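<para>
For example, to include large objects in a custom-format dump:
<programlisting>
pg_dump -Fc -b <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">filename</replaceable>
</programlisting>
</para>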
<para>
Please familiarize yourself with the <xref linkend="app-pgdump">
reference page.
</para>

</sect2>

</sect1>
<sect1 id="backup-file">
<title>File system level backup</title>

<para>
An alternative backup strategy is to directly copy the files that
<productname>PostgreSQL</> uses to store the data in the database. In
<xref linkend="creating-cluster"> it is explained where these files
are located, but you have probably found them already if you are
interested in this method. You can use whatever method you prefer
for doing usual file system backups, for example

<programlisting>
tar -cf backup.tar /usr/local/pgsql/data
</programlisting>
</para>
<para>
There are two restrictions, however, which make this method
impractical, or at least inferior to the <application>pg_dump</>
method:

<orderedlist>
<listitem>
<para>
The database server <emphasis>must</> be shut down in order to
get a usable backup. Half-way measures such as disallowing all
connections will <emphasis>not</emphasis> work
(<command>tar</command> and similar tools do not take an atomic
snapshot of the state of the file system at a point in
time). Information about stopping the server can be found in
<xref linkend="postmaster-shutdown">. Needless to say, you
also need to shut down the server before restoring the data.
</para>
</listitem>
<listitem>
<para>
If you have dug into the details of the file system layout of the
database, you may be tempted to try to back up or restore only certain
individual tables or databases from their respective files or
directories. This will <emphasis>not</> work because these
files contain only half the truth. The other half is in the
commit log files <filename>pg_clog/*</filename>, which contain the
commit status of all transactions. A table file is only usable with
this information. Of course it is also impossible to restore only a
table and the associated <filename>pg_clog</filename> data
because that would render all other tables in the database
cluster useless.
</para>
</listitem>
</orderedlist>
</para>
<para>
An alternative file-system backup approach is to make a
<quote>consistent snapshot</quote> of the data directory, if the
file system supports that functionality (and you are willing to
trust that it is implemented correctly). The typical procedure is
to make a <quote>frozen snapshot</> of the volume containing the
database, then copy the whole data directory (not just parts, see
above) from the snapshot to a backup device, then release the frozen
snapshot. This will work even while the database server is running.
However, a backup created in this way saves
the database files in a state where the database server was not
properly shut down; therefore, when you start the database server
on the backed-up data, it will think the previous server instance
crashed and will replay the WAL log. This is not a problem; just be
aware of it.
</para>
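<para>
As an illustration only: on a Linux system using <application>LVM</>,
a frozen snapshot of a (hypothetical) volume
<filename>/dev/vg0/pgdata</> might be taken, copied, and released as
sketched below. The exact commands depend entirely on your volume
manager and file system:
<programlisting>
lvcreate -L 1G -s -n pgsnap /dev/vg0/pgdata
mount /dev/vg0/pgsnap /mnt/pgsnap
tar -cf backup.tar -C /mnt/pgsnap .
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap
</programlisting>
</para>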
<para>
If your database is spread across multiple volumes (for example,
data files and WAL log on different disks) there may not be any way
to obtain exactly-simultaneous frozen snapshots of all the volumes.
Read your file system documentation very carefully before trusting
the consistent-snapshot technique in such situations.
</para>
<para>
Note that a file system backup will not necessarily be
smaller than an SQL dump. On the contrary, it will most likely be
larger. (<application>pg_dump</application> does not need to dump
the contents of indexes for example, just the commands to recreate
them.)
</para>

</sect1>
<sect1 id="migration">
<title>Migration Between Releases</title>

<indexterm zone="migration">
<primary>upgrading</primary>
</indexterm>

<indexterm zone="migration">
<primary>version</primary>
<secondary>compatibility</secondary>
</indexterm>
<para>
As a general rule, the internal data storage format is subject to
change between major releases of <productname>PostgreSQL</> (where
the number after the first dot changes). This does not apply to
different minor releases under the same major release (where the
number after the second dot changes); these always have compatible
storage formats. For example, releases 7.0.1, 7.1.2, and 7.2 are
not compatible, whereas 7.1.1 and 7.1.2 are. When you update
between compatible versions, you can simply reuse the data
area on disk with the new executables. Otherwise you need to
<quote>back up</> your data and <quote>restore</> it on the new
server, using <application>pg_dump</>. (There are checks in place
that prevent you from doing the wrong thing, so no harm can be done
by confusing these things.) The precise installation procedure is
not the subject of this section; those details are in <xref
linkend="installation">.
</para>
<para>
The least downtime can be achieved by installing the new server in
a different directory and running both the old and the new servers
in parallel, on different ports. Then you can use something like

<programlisting>
pg_dumpall -p 5432 | psql -d template1 -p 6543
</programlisting>

to transfer your data. Or use an intermediate file if you want.
Then you can shut down the old server and start the new server at
the port the old one was running at. You should make sure that the
database is not updated after you run <application>pg_dumpall</>,
otherwise you will obviously lose that data. See <xref
linkend="client-authentication"> for information on how to prohibit
access. In practice you probably want to test your client
applications on the new setup before switching over.
</para>
<para>
If you cannot or do not want to run two servers in parallel, you can
do the backup step before installing the new version, bring down
the server, move the old version out of the way, install the new
version, start the new server, and restore the data. For example:

<programlisting>
pg_dumpall > backup
pg_ctl stop
mv /usr/local/pgsql /usr/local/pgsql.old
cd ~/postgresql-&version;
gmake install
initdb -D /usr/local/pgsql/data
postmaster -D /usr/local/pgsql/data
psql template1 < backup
</programlisting>

See <xref linkend="runtime"> about ways to start and stop the
server and other details. The installation instructions will advise
you of strategic places to perform these steps.
</para>
<note>
<para>
When you <quote>move the old installation out of the way</quote>
it is no longer perfectly usable. Some parts of the installation
contain information about where the other parts are located. This
is usually not a big problem but if you plan on using two
installations in parallel for a while you should assign them
different installation directories at build time.
</para>
</note>

</sect1>

</chapter>
|
2001-11-21 05:53:41 +00:00
|
|
|
|
|
|
|
<!-- Keep this comment at the end of the file
|
|
|
|
Local variables:
|
|
|
|
mode:sgml
|
|
|
|
sgml-omittag:nil
|
|
|
|
sgml-shorttag:t
|
|
|
|
sgml-minimize-attributes:nil
|
|
|
|
sgml-always-quote-attributes:t
|
|
|
|
sgml-indent-step:1
|
|
|
|
sgml-indent-tabs-mode:nil
|
|
|
|
sgml-indent-data:t
|
|
|
|
sgml-parent-document:nil
|
|
|
|
sgml-default-dtd-file:"./reference.ced"
|
|
|
|
sgml-exposed-tags:nil
|
|
|
|
sgml-local-catalogs:("/usr/share/sgml/catalog")
|
|
|
|
sgml-local-ecat-files:nil
|
|
|
|
End:
|
|
|
|
-->
|