
Using Oracle8


Chapter 25

Using SQL*Loader and Export/Import


Understand the Purpose and Capabilities of Import/Export and SQL*Loader

The Oracle8 Server comes with three utilities that help you move data into and out of a database: Export, Import, and SQL*Loader. The Import and Export utilities work together; Export writes database definitions and data to an export file, and Import reads that file to perform many different tasks. You can use Export and Import for many important database tasks, such as restoring a table, generating CREATE scripts, copying data among Oracle databases, migrating among Oracle versions, and moving tables from one schema to another.

SQL*Loader is similar to Import in that it can load data; the main difference is that Import can read only Oracle export files. SQL*Loader can read text files generated by non-Oracle databases. SQL*Loader uses a control file to guide the utility on how to load data.

Export and Import can do the following:

What are fixed-length and delimited files?
A fixed-length file has a specified number of bytes for each column in the table. For example, if the specification is for 50 bytes and there are only 20 bytes in the record's column, 30 blank spaces will be appended. This way Oracle knows which byte range to look at for each column.
A delimited file has a special character (usually a tab) separating each column for each record. The character shouldn't appear within the data itself, so if a text field might contain tabs, developers generally choose a different character as the delimiter. Non-Oracle products generally use delimited files more often than fixed-length files.
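For example, a hypothetical three-column record could look like this in each layout (the fixed widths of 10 bytes per column are only illustrative):

Fixed-length:   1001      Greene    Donald
Delimited:      1001,Greene,Donald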

SQL*Loader can do the following:

Control and Configure Import and Export

FULL, OWNER, and TABLE are the three types of exports. A FULL export exports all objects, structures, and data within the database for all schemas. OWNER exports only those objects owned by the specified user accounts. TABLE exports only the specified tables and partitions for specific user accounts.

Because dozens of object types can be exported, it's important to distinguish what gets exported with each export category:

You can use the Export and Import utilities in interactive or non-interactive mode. In interactive mode, Oracle steps you through the process, prompting for basic Import/Export session information. This method, however, limits you to the simple prompts that Oracle provides. You have more flexibility in non-interactive mode because you can supply up to 23 parameters for the Export utility and 24 parameters for the Import utility. The parameters may be supplied at the command line or in a command file that contains all parameters and values. Using a command file lets you automate Export and Import sessions rather than repeating them manually over and over.

The syntax of running Export and Import is as follows:

EXP80 KEYWORD=value or keyword=(value1,value2,...,valueN)
IMP80 KEYWORD=value or keyword=(value1,value2,...,valueN)
Different commands on different operating systems?
EXP80 and IMP80 are the commands for the Export and Import utilities in a Windows NT environment; exp and imp are the commands on UNIX and other operating systems. This book's examples use the EXP80 and IMP80 commands.

Replace KEYWORD with the parameter, and value with what should be the parameter's value. For example, use the following to export the entire database:

EXP80 USERID=SYSTEM/MANAGER FULL=Y

Use the following to import the EMP and DEPT tables owned by the SCOTT user account:

IMP80 USERID=SYSTEM/MANAGER FROMUSER=SCOTT TABLES=(EMP,DEPT)
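Either command can also be run non-interactively from a parameter file rather than the command line. A minimal sketch, assuming a parameter file named scott_tables.par (the filename is arbitrary):

USERID=SYSTEM/MANAGER
FROMUSER=SCOTT
TABLES=(EMP,DEPT)

IMP80 PARFILE=scott_tables.par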

More in-depth Export and Import sessions are discussed later in the section "Walkthroughs of Export and Import Examples."

As mentioned earlier, you can use 23 parameters during an Export session on the command line or in any specified parameter file. Table 25.1 lists all parameters that can be used with the Export utility, along with their default values (if any).

TABLE 25.1  Parameters for the Export utility
Parameter
Default Value
Description
BUFFER
OS-dependent
The size of BUFFER (in bytes) determines the memory buffer through which rows are exported. This should be larger than the size of the largest record multiplied by the number of rows that you want to fit within the buffer.
COMPRESS
Y
If COMPRESS=Y, the INITIAL storage parameter is set to the total size of all extents allocated for the object. The change takes effect only when the object is imported.
CONSISTENT
N
Setting CONSISTENT=Y exports all tables and references in a consistent state. This slows the export, as rollback space is used. If CONSISTENT=N and a record is modified during the export, the data will become inconsistent.
CONSTRAINTS
N
Specifies whether table constraints are exported.
DIRECT
N
If DIRECT=Y, Oracle bypasses the SQL command-processing layer, improving the speed of the export. Unfortunately, the new object types specific to Oracle8, such as LOBs, don't get exported.
FEEDBACK
0
Oracle displays a period for each group of records exported. FEEDBACK defines the size of the group. For example, if FEEDBACK=1000, a period is displayed for every 1,000 records exported. This parameter is useful for tracking the progress of large exports.
FILE
expdat.dmp
By default, expdat.dmp (stands for EXPort DATa.DuMP) is the filename. For a more meaningful name, change the FILE parameter.
FULL
N
The entire database will be exported if FULL=Y, including tablespace definitions.
GRANTS
Y
Specifies whether all grant definitions are exported for the objects being exported.
HELP
N
No other parameters are needed if you specify HELP=Y. A basic help screen will appear.
INCTYPE
 
The valid options for this parameter are COMPLETE, CUMULATIVE, and INCREMENTAL. A COMPLETE export lays down a full export on which the other two options rely for restores of the database. CUMULATIVE exports all tables and other objects that have changed since the last CUMULATIVE or COMPLETE export was taken; if one record in a table is altered, the entire table is exported. INCREMENTAL exports all tables and objects that have changed since the last INCREMENTAL, CUMULATIVE, or COMPLETE export. (Sample commands appear after this table.)

How can you easily see the Export/Import parameters and syntax?
If you don't have this book handy when necessary and haven't memorized the dozens of parameters, the Export and Import utilities have a handy reference built in. Typing EXP80 HELP=Y or IMP80 HELP=Y at a command prompt brings up a nice one-page reference manual.

Parameter
Default Value
Description
INDEXES
Y
Specifies whether user-defined indexes are exported. System indexes created with constraints (primary key, unique key) and OID indexes are automatically exported, regardless of this parameter's value.
LOG
 
Specifies the name of the file to spool the feedback from the export session. Unless otherwise specified, Oracle appends a .LOG extension to the file.
PARFILE
 
Rather than enter all parameters on the command line, some or all can be kept in a parameter file. This parameter specifies which file to use, if desired. This parameter is especially useful for non-interactive export sessions.
POINT_IN_TIME_RECOVER
N
Exports information for a point-in-time recovery for the tablespace listed with the TABLESPACES parameter.
RECORD
Y
If you use the INCTYPE parameter with RECORD=Y, the SYS data dictionary tables INCEXP, INCFIL, and INCVID are populated with export data such as owner, type of export, and the time of export.
RECORDLENGTH
OS-dependent
The RECORDLENGTH parameter is used only when you'll import the file on a machine whose record length differs from that of the machine where the export occurs. The default should be used in most export sessions.
RECOVERY_TABLESPACES
 
Used with POINT_IN_TIME_RECOVER; specifies which tablespaces can be recovered by using point-in-time recovery. This is important because imports otherwise can't recover transactions past the time of export.
ROWS
Y
Specifies whether table and object data will be exported. If ROWS=N, only object definitions are exported.
STATISTICS
ESTIMATE
Specifies whether table and index statistics are to be analyzed with COMPUTE or ESTIMATE when imported. Only those objects that already have statistics on them will be analyzed during import. Specify NONE if no objects should be analyzed.
TABLES
 
Specifies a comma-separated list of all tables to be exported. This parameter could be used with the OWNER parameter to specify which owner to associate the tables with. Tables can also be specified with the owner.table_name format. In a non-UNIX environment such as Windows NT, you must enclose the table list within parentheses.
TABLESPACES
 
List of tablespaces to be exported with the POINT_IN_TIME_RECOVER parameter.
USERID
 
Specifies the username and password for the user conducting the export. The format for the command is username/password. You can also use Net8's @connect_string format if you want.
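As a sketch of how the INCTYPE values might be combined in practice (the schedule here is only an illustration):

EXP80 USERID=SYSTEM/MANAGER FULL=Y INCTYPE=COMPLETE
EXP80 USERID=SYSTEM/MANAGER FULL=Y INCTYPE=CUMULATIVE
EXP80 USERID=SYSTEM/MANAGER FULL=Y INCTYPE=INCREMENTAL

The first command might be run weekly, the second midweek, and the third nightly; a restore then starts from the COMPLETE export and applies the later exports in order, as described for the Import utility's INCTYPE parameter in Table 25.2.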

You can use 24 parameters during an import session. They may be specified in the command line or any specified parameter file. Table 25.2 lists all the import parameters.

TABLE 25.2  Parameters for the Import utility
Parameter
Default Value
Description
ANALYZE
Y
Imported tables have their statistics analyzed if ANALYZE is set to Y. Only those tables that already had statistics on them during the export are computed. The tables will be ESTIMATED by default, unless the export was performed with the STATISTICS=COMPUTE parameter configuration.
BUFFER
OS-dependent
The BUFFER size (in bytes) determines the memory buffer through which rows are imported. This should be larger than the size of the largest record multiplied by the number of rows that you want to fit within the buffer.
CHARSET
 
An obsolete Oracle6 parameter, indicating whether the export was done in ASCII or EBCDIC. This information is processed automatically in Oracle7 and Oracle8.
COMMIT
N
By default, a COMMIT occurs after each table, nested table, and partition. If you're importing a large table, the rollback segments may grow large. To improve performance while loading large tables, you should set COMMIT=Y.
DESTROY
N
If you set DESTROY=Y and do a full import, Oracle will overwrite any data files that exist. If you use raw devices for your data files, they will be overwritten during a full import, as DESTROY=N won't prevent the overwriting of datafiles! Don't use this option unless you know what you're doing.
FEEDBACK
0
Oracle displays a period for each group of records inserted. FEEDBACK defines the size of the group. For example, if FEEDBACK=1000, a period is displayed for every 1,000 records imported. This parameter is useful for tracking the progress of large imports.

Export/Import and non-Oracle databases
The export file (expdat.dmp by default) is a binary file that only Oracle databases can read. Many databases and PC programs can export data into delimited text files, but the Export utility can't. To transfer data from Oracle to a non-Oracle database, you must create a delimited file yourself, for example by running a custom script in SQL*Plus or PL/SQL and spooling its output.

Parameter
Default Value
Description
FILE
expdat.dmp
By default, expdat.dmp (stands for EXPort DATa.DuMP) is the name of the file that Import will import from. If the file is something other than expdat.dmp, specify it with this parameter.
FROMUSER
 
Specifying this parameter imports only those objects owned by the FROMUSER user account.
FULL
N
The entire database will be imported if FULL=Y.
GRANTS
Y
Specifies whether all grants will be created for the exported objects.
HELP
N
No other parameters are needed if you specify HELP=Y. A basic help screen will appear.
IGNORE
N
If IGNORE=Y, object creation errors are ignored and records are inserted into the existing table. Duplicate records can result if no unique constraints exist on the table. Errors unrelated to object creation, such as operating system problems, are still reported.
INCTYPE
 
If you're importing an incremental export, tables are dropped and recreated. You must first restore from the last SYSTEM export (specify INCTYPE=SYSTEM). Then import every incremental export (specify INCTYPE=RESTORE) until the desired changes are applied to the database.
INDEXES
Y
Specifies whether user-defined indexes are imported. System indexes created with constraints (primary key, unique key) and OID indexes are automatically imported, regardless of this parameter's value.
INDEXFILE
 
Specifies the name of the file in which to generate CREATE INDEX statements. Unless otherwise specified, Oracle appends a .SQL extension to the file. (This parameter is explained in more detail later in this chapter.)

Using the LOG parameter
You can specify a log file for the Export and Import sessions with the LOG parameter. The file, which mirrors what's output to the screen, records all success and error messages so that they can be examined later. Some Oracle experts use this file during an automated load process and then check it for errors; a database administrator is paged if any appear.
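As a rough sketch of that idea on UNIX (the file names and the notification command are only placeholders):

imp system/manager PARFILE=nightly_load.par LOG=nightly_load.log
if grep -E "IMP-|ORA-" nightly_load.log > /dev/null
then
    mail -s "Import errors found" dba@example.com < nightly_load.log
fi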

Parameter
Default Value
Description
LOG
 
Specifies the name of the file to spool the feedback from the import session. Unless otherwise specified, Oracle appends a .LOG extension to the file.
PARFILE
 
Rather than enter all parameters on the command line, some or all may be kept in a parameter file. The PARFILE parameter specifies which file to use, if desired. This parameter is especially useful for non-interactive import sessions.
POINT_IN_TIME_RECOVER
N
Performs a point-in-time recovery for the tablespace exported with the TABLESPACES parameter.
RECORDLENGTH
OS-dependent
Used only when importing on a machine whose record length differs from that of the machine where the export occurred. The default should be used in most Import sessions.
SHOW
N
Displays the SQL statements in the export file without applying them to the database. When combined with the LOG parameter, the SQL statements can be saved to a file, viewed, and modified. (The SHOW parameter is explained in more detail later in this chapter.)
SKIP_UNUSABLE_INDEXES
N
Allows the postponement of index creation until the record data is imported. The indexes affected are only those set to an unusable state, and all other indexes are created if INDEXES=Y (the default value) is set.
TABLES
 
Specifies a comma-separated list of all tables to be imported. You should use this parameter with the FROMUSER parameter. In a non-UNIX environment, such as Windows NT, enclose the table list within parentheses.
TOUSER
 
Specifies the user account into which tables should be imported, if you want it to be different from the original owner of the tables. This parameter needs to be used with the FROMUSER parameter.
USERID
 
Specifies the username and password for the user conducting the import. The format for the command is username/password. You may also use Net8's @connect_string format if desired.

Walkthroughs of Export and Import Examples

The following sections walk you through a few examples of the Export and Import utilities, to demonstrate some of the most often used features and to point out a few pitfalls that you should avoid. You see how to copy objects among schemas, how Import works when it encounters errors (such as when a table already exists during an Import session), and how to export and import table partitions.

Copy Database Objects from One Schema to Another

The most widespread use of Export/Import is for backup and recovery (as explained earlier). The next most common is copying data from one schema to another, or from one database to another.

During the import step, you can create the tables in any database you want, as long as the export file is on the database server.

In this first example, assume that there are two tables in the QUE schema: BOOK_DISTRIBUTION and BOOK_SALES. We'll copy these tables from the QUE schema to the SALES schema. You can export all of QUE's objects or specify only the two tables. The latter is done in this walkthrough.

Make sure you have the proper permissions
To copy or move tables among schemas, the user accounts must have proper permissions. CREATE ANY TABLE is needed to create the tables in the destination user account. In addition, either the SELECT permission on the table(s) or the EXP_ANY_TABLE permission must be granted to the user performing the export. All tablespace quotas need to be sufficient for the table to be created.

Copy two tables from one schema to another

  1. Export the data with this command:
EXP80 USERID=system/manager TABLES=QUE.BOOK_DISTRIBUTION,QUE.BOOK_SALES
    You will see the following output:
Export: Release 8.0.3.0.0 Production on Fri May 13 10:16:16 1998
Copyright  1997 Oracle Corporation. All rights reserved.
Connected to: Oracle8 Server Release 8.0.3.0.0 - Production
With the distributed, heterogeneous, replication, objects
And parallel query options
PL/SQL Release 3.0.3.0.0 - Production
Export done in WE8ISO8859P1 character set and WE8ISO8859P1
 NCHAR character set
About to export QUE's objects ...
. about to export QUE's tables via Conventional Path ...
. . exporting table   BOOK_DISTRIBUTION  3280 rows exported
. . exporting table   BOOK_SALES       241031 rows exported
Export terminated successfully without warnings.
    The data is stored by default in the expdat.dmp export file, or in a different path or export file specified with the FILE parameter. If you specify a log file with the LOG parameter, the output is placed in there as well.

  2. Import the exported data. You need to consider three situations when importing: if the tables don't exist in the SALES schema, if the tables do exist in the SALES schema and are empty, and if the tables exist in the SALES schema and already contain data.
    IMP80 USERID=system/manager FROMUSER=QUE TOUSER=SALES
    TABLES=BOOK_DISTRIBUTION,BOOK_SALES IGNORE=Y
    

    3. Check the screen output, log files, or database for any possible errors.

    What if the table contains data?
    If the tables exist in the SALES schema and already contain data, you'll have two options: truncate the tables before beginning the Import session, or risk having duplicate or rejected data. Either way, use the same syntax when you are ready to import the data.
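    If you choose to truncate first, one way is from SQL*Plus before rerunning the import (using this example's table names):

    TRUNCATE TABLE SALES.BOOK_DISTRIBUTION;
    TRUNCATE TABLE SALES.BOOK_SALES;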

    How Import Reacts When It Encounters an Error

    There's always a chance of errors occurring. As briefly described in the preceding example, you can get errors if a table already exists with data. This is especially true if there are unique or primary key constraints on the data. In addition, errors can occur if a table exists in the database and IGNORE=Y wasn't specified. Read the following example (and don't do this at home or office, as you may end up creating duplicate records in your tables).

    Detect errors in an Export/Import session

    1. Export the QUE.BOOK_SALES table into the default expdat.dmp export file:
      EXP80 USERID=system/manager TABLES=QUE.BOOK_SALES
      
    2. At this point, the BOOK_SALES table already exists in the QUE schema. Assume that it contains 241,031 records and has a primary key called PK_BOOK_SALES. Assume that the following command is entered (IGNORE=Y makes Import insert the rows even though the table already exists):
      IMP80 USERID=system/manager FROMUSER=QUE TABLES=BOOK_SALES IGNORE=Y
      
      If data from the export file is loaded into the table, it will violate the constraint:
      Import: Release 8.0.3.0.0 Production on Fri May 13 10:22:17
      1998
      Copyright  1997 Oracle Corporation. All rights reserved.
      Connected to: Oracle8 Server Release 8.0.3.0.0 - Production
      With the distributed, heterogeneous, replication, objects
      And parallel query options
      PL/SQL Release 3.0.3.0.0 - Production
      
      Export file created by EXPORT:V08.00.03 via conventional path
      . importing QUE's objects into QUE
      . . importing table             "BOOK_SALES"
      IMP-00019: row rejected due to ORACLE error 1
      IMP-00003: ORACLE error 1 encountered
      ORA-00001: unique constraint (QUE.PK_BOOK_SALES) violated
      Column 1 4321
      Column 2 GREENE
      Column 3 ANDREA
      Column 4 613
      Column 5 16-JUL-70:00:00:00
      
      This would also appear in the log file if specified. If other tables were specified, Oracle would continue with the Import session until completed; only those records that violated the primary key wouldn't get loaded.

    Exporting and Importing Table Partitions

    The table partition is a new Oracle8 object type. The Export and Import utilities have been expanded to handle the export and import of table (and index) partitions. For this example, assume there's a BASEBALL_PLAYER table with five partitions based on the player's last name: A, E, J, O, and V.

    To export all partitions, you can simply specify the table's name:

    EXP80 USERID=system/manager TABLES=QUE.BASEBALL_PLAYER
    

    The result will look like this:

    Export: Release 8.0.3.0.0 Production on Fri May 13 10:16:16
     1998
    Copyright  1997 Oracle Corporation. All rights reserved.
    Connected to: Oracle8 Server Release 8.0.3.0.0 - Production
    With the distributed, heterogeneous, replication, objects
    And parallel query options
    PL/SQL Release 3.0.3.0.0 - Production
    Export done in WE8ISO8859P1 character set and WE8ISO8859P1
     NCHAR character set
    About to export QUE's objects ...
    . about to export QUE's tables via Conventional Path ...
    . . exporting table               BASEBALL_PLAYER
    . . exporting partition           A
    342 rows exported
    . . exporting partition           E    
    172 rows exported
    . . exporting partition           J
    613 rows exported
    . . exporting partition           O
    291 rows exported
    . . exporting partition           V
    716 rows exported
    Export terminated successfully without warnings.
    

    If you want to export only a specific partition of the BASEBALL_PLAYER table, use TABLE_NAME:PARTITION_NAME as the format of the TABLES parameter specification:

    EXP80 USERID=system/manager TABLES=QUE.BASEBALL_PLAYER:J
    

    The result will look like this:

    Export: Release 8.0.3.0.0 Production on Fri May 13 10:16:16
     1998
    Copyright  1997 Oracle Corporation. All rights reserved.
    Connected to: Oracle8 Server Release 8.0.3.0.0 - Production
    With the distributed, heterogeneous, replication, objects
    And parallel query options
    PL/SQL Release 3.0.3.0.0 - Production
    Export done in WE8ISO8859P1 character set and WE8ISO8859P1
     NCHAR character set
    About to export QUE's objects ...
    . about to export QUE's tables via Conventional Path ...
    . . exporting table              BASEBALL_PLAYER
    . . exporting partition          J
                       613 rows exported
    Export terminated successfully without warnings.
    

    The syntax for using Import to import partitions is similar to the Export syntax: specify the partition with TABLE_NAME:PARTITION_NAME. The following example imports two partitions from a previous export:

    IMP80 USERID=system/manager FROMUSER=QUE
    TABLES=BASEBALL_PLAYER:A,BASEBALL_PLAYER:J
    

    The result will look like this:

    Import: Release 8.0.3.0.0 Production on Fri May 13 10:22:17
     1998
    Copyright  1997 Oracle Corporation. All rights reserved.
    Connected to: Oracle8 Server Release 8.0.3.0.0 - Production
    With the distributed, heterogeneous, replication, objects
    And parallel query options
    PL/SQL Release 3.0.3.0.0 - Production
    
    Export file created by EXPORT:V08.00.03 via conventional path
    . . importing partition            "BASEBALL_PLAYER":"A"
             342 rows imported
    . . importing partition            "BASEBALL_PLAYER":"J"
             613 rows imported
    Import terminated successfully without warnings.
    

    Additional Export and Import Features

    The Export and Import utilities have additional useful features. The following sections explain how to generate various CREATE statement scripts with the SHOW option, generate CREATE INDEX statement scripts with the INDEXFILE option, and defragment a tablespace with the Export and Import utilities.

    Using the SHOW Parameter

    The Import utility's SHOW parameter is used to show the SQL statements that Oracle would perform if the import were to occur. It doesn't physically do anything to the database; its purpose is for information only.

    The purpose of the SHOW option
    You can use this parameter to generate all SQL statements used to create the database's data files and all its objects. This includes creating comments, tablespaces, users, privilege grants, roles and their assignments, quota definitions, audit definitions, rollback segments, sequences, tables, constraints, indexes, packages, procedures, partitions, user-defined datatypes, and so on.
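    For example, to list the statements for an entire export file without touching the database, you might run something like this (the file names are illustrative):

    IMP80 USERID=system/manager FILE=expdat.dmp FULL=Y SHOW=Y LOG=create_db.log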

    When you also specify the LOG parameter, you can save the SQL statements into a script file, listed in the proper order of dependencies. For instance, a table is created before an index, primary keys are created before foreign keys, and so on. A sample portion of the output from specifying SHOW=Y follows:

    "ALTER SCHEMA = "QUE""
    "CREATE UNIQUE INDEX "U_NAME" ON "BASEBALL_PLAYER"
     ("LAST_NAME" , "FIRST_NAME", "MIDDLE_INITIAL" )  PC"
    "TFREE 10 INITRANS 2 MAXTRANS 255 STORAGE
     (INITIAL 10240 NEXT 10240 MINEXTEN"
    "TS 1 MAXEXTENTS 121 PCTINCREASE 50 FREELISTS 1)
     TABLESPACE "USER_DATA" LOGG"
    "ING"
    "ALTER TABLE "BASEBALL_PLAYER" ADD CHECK
     (LAST_NAME IS NULL OR FIRST_NAME IS NULL "
    ") ENABLE"
    "ALTER TABLE "BASEBALL_PLAYER" ADD PRIMARY KEY
     ("PLAYER_ID") ENABLE"
    "GRANT SELECT ON "BASEBALL_PLAYER" TO PUBLIC"
    "ALTER TABLE "BASEBALL_PLAYER" ADD FOREIGN KEY
     ("PLAYER_ID") REFERENCES "STATISTICS" ("P"
    "LAYER_ID") ENABLE"
    

    Notice that the output's formatting is syntactically incorrect (quotation marks are inappropriately placed and some words are split across two lines). The log file can be edited into a script that re-creates almost any part of the database. Each line begins and ends with a quotation mark; be sure to strip these marks from the beginning and end of each line. In addition, Oracle doesn't wrap words between lines in the output, so some statements have words and numbers cut in two. To remedy this, you must manually join the lines in each statement. The preceding sample portion could be cleaned up to look like the following script:

    ALTER SCHEMA = QUE;
    
    CREATE UNIQUE INDEX U_NAME
     ON BASEBALL_PLAYER (LAST_NAME, FIRST_NAME, MIDDLE_INITIAL)
     PCTFREE 10 INITRANS 2 MAXTRANS 255
     STORAGE (INITIAL 10240 NEXT 10240 MINEXTENTS 1
     MAXEXTENTS 121
                 PCTINCREASE 50 FREELISTS 1)
       TABLESPACE USER_DATA LOGGING;
    ALTER TABLE BASEBALL_PLAYER ADD CHECK (LAST_NAME IS NULL OR
       FIRST_NAME IS NULL) ENABLE;
    ALTER TABLE BASEBALL_PLAYER ADD PRIMARY KEY (PLAYER_ID)
     ENABLE;
    GRANT SELECT ON BASEBALL_PLAYER TO PUBLIC;
    ALTER TABLE BASEBALL_PLAYER ADD FOREIGN KEY (PLAYER_ID)
    REFERENCES STATISTICS (PLAYER_ID) ENABLE;
    

    Using the INDEXFILE Parameter

    The INDEXFILE parameter is used to generate CREATE INDEX statements. Unlike the SHOW parameter, which produces extra quotation marks and words cut up across different lines, INDEXFILE generates a clean and usable file. The value of the INDEXFILE parameter specifies the name of the file to be created. By default, Oracle appends a .SQL extension unless otherwise specified.

    Generic statements don't execute
    Generic table-creation statements are shown but are commented out so that they don't execute if the script is run.
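    Such a file might be produced by a command along these lines (the import reads the dump file but only writes the index file; no data is loaded):

    IMP80 USERID=system/manager FILE=expdat.dmp FULL=Y INDEXFILE=X.LOG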

    The following is part of the output file X.LOG from an import with INDEXFILE=X.LOG specified. Notice how Oracle wraps all lines appropriately and doesn't add quotation marks before and after each line, as it does with SHOW=Y:

    REM  CREATE TABLE "QUE"."BASEBALL__PLAYER"
     ("PLAYER_ID" NUMBER(6, 0), "LAST_NAME"
    
    REM  VARCHAR2(60), "FIRST_NAME " VARCHAR2(60),
     "MIDDLE_INITIAL " VARCHAR2(5), "START_DATE"
    REM  DATE) PCTFREE 10 PCTUSED 40 INITRANS 1
     MAXTRANS 255 LOGGING
    REM  STORAGE(INITIAL 10240 NEXT 10240 MINEXTENTS 1
     MAXEXTENTS 121
    REM  PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1)
     TABLESPACE "USER_DATA" ;
    REM  ... 58 rows
    CONNECT QUE;
    CREATE UNIQUE INDEX "QUE"."U_NAME" ON "BASEBALL_PLAYER"
     ("LAST_NAME ",
    "FIRST_NAME", "MIDDLE_INITIAL" ) PCTFREE 10 INITRANS 2
     MAXTRANS 255 STORAGE (INITIAL 10240
    NEXT 10240 MINEXTENTS 1 MAXEXTENTS 121 PCTINCREASE 50
     FREELISTS 1)
    TABLESPACE "USER_DATA" LOGGING ;
    

    Reorganizing a Fragmented Tablespace

    Tablespace fragmentation occurs when objects are created, dropped, enlarged, and shrunk. The more often this occurs, the more fragmented the tablespace becomes. This causes free space to be broken into many separated island-like chunks of space. In addition, if an object has multiple extents, they could be spread out over different parts of the tablespace; this causes performance and maintenance problems. With many small blocks of free space instead of fewer large free spaces, you may be unable to create some objects that you otherwise could have. By defragmenting the tablespace, the data is reorganized so that all free space is put into one contiguous area, and each object's extents are grouped next to each other.

    The Export and Import utilities can fix fragmentation in two ways:

    The easiest and most dramatic method fixes both fragmentation problems for the entire database. The database is shut down during the process. This forces all sessions against the database to stop (which is why it's the most dramatic method).

    Fixing a storage parameter
    Fixing a table's storage parameter is fairly easy compared to the entire tablespace, as only one table is affected. To fix an entire tablespace, you'll have to drop and recreate it, along with all the tables within it.

    Fix both fragmentation problems quickly

    1. Export the entire database, specifying FULL=Y and COMPRESS=Y. FULL=Y exports the entire database; COMPRESS=Y changes the INITIAL storage parameter of each table in the database (if necessary) to fit within one extent each. For example, if a table has five 10M extents, the COMPRESS=Y specification changes the table to one 50M extent when imported.
    2. Drop and recreate the entire database.
    3. Import the entire database with the FULL=Y parameter.

    Defragment just one tablespace (a less dramatic method)

    1. Determine which tables exist in the tablespace and export just those tables.
    2. Drop all the tables in the tablespace.
    3. Issue the ALTER TABLESPACE COALESCE command (or drop and recreate the tablespace). This will coalesce all the free space within the tablespace into one contiguous area.
    4. Import the tables. Oracle will create the tables adjacent to each other within the tablespace, and all free space will still be contiguous in one chunk of space.
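    As a sketch of those four steps, assume the USER_DATA tablespace holds only the two BOOK tables from the earlier example (the names are illustrative):

    EXP80 USERID=system/manager TABLES=QUE.BOOK_DISTRIBUTION,QUE.BOOK_SALES

    SQL> DROP TABLE QUE.BOOK_DISTRIBUTION;
    SQL> DROP TABLE QUE.BOOK_SALES;
    SQL> ALTER TABLESPACE USER_DATA COALESCE;

    IMP80 USERID=system/manager FROMUSER=QUE TABLES=(BOOK_DISTRIBUTION,BOOK_SALES)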

    Control and Configure SQL*Loader

    You can call the SQL*Loader utility from the operating system's command line and, if needed, supply its parameters in a parameter file. The load itself is described by a control file, which contains all the logic and formatting rules that SQL*Loader uses for an automated loading session.

    You can enter the parameters in any order. You pass them to the SQLLDR command (issued at a command prompt), as in this sample session:

    SQLLDR USERID=system/manager CONTROL=test_control.ctl
    DATA=the_file.txt LOG=the_log.log
    

    First, it's important to know what options are available at the command line (see Table 25.3).

    What's the BAD file used for?
    The BAD file is a text file filled with improperly formatted data. For example, if a record in the text file has too few columns or a character where a number was expected, that record won't be loaded into the database and will instead be put into the BAD file, if specified with the BAD parameter.

    TABLE 25.3  Parameters for the SQL*Loader command line
    Parameter
    Default Value
    Description
    BAD
    control_filename.bad
    Specifies the file in which all bad data is kept. The default filename is the control filename with a .BAD extension.
    BINDSIZE
    System-dependent
    Size of the bind array (in bytes) for conventional path loads.
    CONTROL
     
    Specifies the name of the control file. If left blank, only command-line parameters are used. If supplied with no extension, Oracle prompts you to enter the extension.
    DATA
    control_file.dat
    Specifies the file that contains all data to be loaded. The default filename is the control filename with a .DAT extension.
    DIRECT
    FALSE
    If set to TRUE, direct mode is used. Conventional path will otherwise be used. (Direct mode is discussed at length later in the section "Conventional Path and Direct Path Modes.")
    DISCARD
     
    Specifies the file where all discarded data is kept. Discarded records are those that don't meet any conditional settings defined in the control file. Any data inserted before the SQL*Loader utility stops (due to exceeding the DISCARDMAX parameter) will be committed in the database.
    DISCARDMAX
    ALL
    The maximum number of discarded records that may be encountered before the SQL*Loader session stops.
    ERRORS
    50
    Specifies how many total errors can be encountered before SQL*Loader stops. Any data inserted before the SQL*Loader utility stops (due to exceeding the ERRORS parameter) will be committed in the database.

    What's the DISCARD file used for?
    The DISCARD file is a text file, like the BAD file, but is populated with all records that don't meet an IF condition specified in the control file (if any such IF clauses exist). For example, if there's a condition that only employees older than 25 get loaded into the database, those who are 25 or younger will be placed into the discard file.

    Parameter
    Default Value
    Description
    LOAD
    ALL
    Specifies the maximum number of records to load before stopping.
    LOG
    controlfile.log
    Specifies the log file's name, where information on the success or failure of the SQL*Loader session is reported.
    PARALLEL
    FALSE
    If set to TRUE, loads are performed in parallel where possible.
    PARFILE
     
    An additional file that contains more parameter specifications.
    ROWS
    64 in conventional mode
    Number of rows to put in the bind array, for conventional path loads. For direct path loads, ROWS specifies the number of rows to read before a data save is performed.
    SILENT
     
    You can choose from many options, including ALL, DISCARDS, ERRORS, FEEDBACK, HEADER, and PARTITIONS. The related messages will then be suppressed from the output.
    SKIP
    0
    The number of records to skip before starting the load. This parameter is important for restarting a load process after stopping an earlier session. SKIP can specify how many records were loaded already and where to continue.
    USERID
     
    Specifies the username and password for the user conducting the SQL*Loader session. The format for the command is username/password. You may also use Net8's @connect_string format if desired.

    What the different SILENT options mean
    The SILENT parameter has six possible keywords. DISCARDS suppresses messages created for each record written to the discard file. ERRORS suppresses Oracle data-error messages that cause entries to be added to the BAD file. FEEDBACK suppresses all "commit point reached" messages. HEADER suppresses header messages from the screen; they still show up in the log file. PARTITIONS suppresses the partition statistics messages. ALL suppresses the messages of all five other keywords.
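    For instance, a scripted load that should report only real problems might suppress the header and commit-point messages (the control and log file names are from the first walkthrough below):

    SQLLDR system/manager CONTROL=LOAD1.CTL LOG=LOAD1.LOG SILENT=(HEADER,FEEDBACK)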

    SQL*Loader reads in parameters and instructions from a control file and, if requested, generates a bad file, a discard file, and a log file. The control file controls how the external text file is mapped to the Oracle database. The syntax of several control files is shown later in the "Walkthroughs of SQL*Loader Examples" section. There are so many possibilities that it could take several chapters to describe them all. In fact, this has already been done; refer to the Oracle Server Utilities User's Guide (which comes with your Oracle software) for a full reference. The control file is extremely flexible and consists of many subsections.

    The SQL*Loader utility, like the Export and Import utilities, has a log file. The log file has many sections, including the environment, a running account of transactions, and summary statistics at the end.

    Walkthroughs of SQL*Loader Examples

    The following sections show three of the most common SQL*Loader tasks: loading from a delimited file, loading with conditional checking, and loading into multiple database tables. The examples give you a good idea of the SQL*Loader utility, and give you concrete examples of how control files are set up.

    For these examples, assume that two tables are created with the following commands:

    CREATE TABLE baseball_player (
    
           player_id              NUMBER(7)     NOT NULL,
           last_name              VARCHAR2(40)  NOT NULL,
           first_name              VARCHAR2(30)  NOT NULL,
           middle_initial         VARCHAR2(5),
           start_date             DATE)
    /
    CREATE TABLE player_statistics (
    
           player_id              NUMBER(7)      NOT NULL,
           year                   NUMBER         NOT NULL,
           batting_average        NUMBER,
           home_runs              NUMBER,
           stolen_bases           NUMBER)
    /
    

    Also assume that there are three text data files. PLAYER.TXT is as follows:

    0001,Taub,Caleb,S,24-APR-98
    0002,Sandor,Penya,,12-MAR-92
    0003,Glickman,Gayle,,27-JUN-94
    0004,Murphy,Ann,,25-FEB-95
    0005,Greene,Donald,G,16-JUL-70
    0006,Greene,Jennifer,R,12-SEP-92
    0007,Deutsch,Jon,,04-OCT-97
    0008,Hurley,John,,02-MAR-98
    0009,Klimczak,Rhonda,,09-MAR-70
    0010,Kaplan,Todd,R,11-FEB-68
    0011,Hudson,Hoyt,,05-AUG-94
    0012,Buberel,Jason,,09-NOV-96
    0013,Verberkmoes,Ryan,,17-DEC-97
    0014,Booey,Baba,,12-NOV-95
    

    STATS.TXT is as follows:

    0001,1996,320,10,4
    0001,1997,330,12,7
    0002,1997,230,0,3
    0003,1995,110,3,0
    0003,1996,186,6,3
    0003,1997,205,12,7
    0004,1997,313,33,9
    0005,1997,330,40,35
    0006,1995,280,5,12
    0006,1996,297,9,20
    0006,1997,310,23,20
    

    MORE_STATS.TXT is as follows:

    0001 1996 Taub    Caleb 320
    0002 1997 Sandor  Penya 230
    0004 1996 Murphy  Ann   186
    0007 1997 Deutsch John  205
    
    Comma-delimited files
    Notice how a comma separates each field in the PLAYER.TXT and STATS.TXT files. These are comma-delimited files. Many programs use tab-delimited files, where tabs separate the fields. If the data itself contained commas, such as text fields with English sentences in them, SQL*Loader would mistake those commas for field separators. In this case, however, the data is simple enough that this isn't a problem.

    Load from a Comma-Delimited File

    Use the following example control file to load the PLAYER.TXT data file into the BASEBALL_PLAYER table:

    LOAD DATA
    INFILE 'PLAYER.TXT'
    INTO TABLE BASEBALL_PLAYER
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      (player_id,last_name,first_name,middle_initial,start_date)
    

    Assume that this control file is named LOAD1.CTL. Notice that it starts with LOAD DATA (all control files start with this). Also note that Oracle automatically converts the date from DD-MON-YY to the internal Oracle date format. If dates were stored differently in the data file, you would specify a date format mask for that column in the control file.
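    For instance, if the dates in the data file were written as 1998/04/24 instead, the start_date entry might carry an explicit mask, as in this sketch (this variation isn't used by the example files in this chapter):

    LOAD DATA
    INFILE 'PLAYER.TXT'
    INTO TABLE BASEBALL_PLAYER
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      (player_id,last_name,first_name,middle_initial,
       start_date DATE "YYYY/MM/DD")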

    Type the following to invoke SQL*Loader from the command line:

    SQLLDR system/manager CONTROL=LOAD1.CTL LOG=LOAD1.LOG
     BAD=LOAD1.BAD DISCARD=LOAD1.DSC
    

    The output will look similar to this:

    SQL*Loader: Release 8.0.3.0.0 - Production on
     Fri Apr 09 10:39:32 1998
     Copyright 1997 Oracle Corporation. All rights reserved.
    Commit point reached - logical record count 14
    

    Load with Conditional Checking

    The first example was the simplest form of loading. This example takes it one step further by using conditional checking. Suppose that you want to load only those records from the STATS.TXT file where the year is 1997. The control file would then have a WHEN clause, like the following LOAD2.CTL file:

    LOAD DATA
    INFILE 'STATS.TXT'
    INTO TABLE PLAYER_STATISTICS
    WHEN YEAR = "1997"
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      (player_id,year,batting_average,home_runs,stolen_bases)
    

    Invoke SQL*Loader with the following statement at the command line:

    SQLLDR system/manager CONTROL=LOAD2.CTL BAD=LOAD2.BAD
     DISCARD=LOAD2.DSC LOG=LOAD2.LOG
    

    This will load only 1997 data, as shown in this output:

    SQL*Loader: Release 8.0.3.0.0 - Production on
     Fri Apr 09 11:21:16 1998
     Copyright 1997 Oracle Corporation. All rights reserved.
    Control File:   load2.ctl
    Data File:      STATS.TXT
    Bad File:       load2.bad
    Discard File:   load2.dsc
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array:     64 rows, maximum of 65536 bytes
    Continuation:   none specified
    Path used:      Conventional
    Table PLAYER_STATISTICS, loaded when YEAR = 1997
    Insert option in effect for this table: INSERT
    Record 1: Discarded - failed all WHEN clauses.
    Record 4: Discarded - failed all WHEN clauses.
    Record 5: Discarded - failed all WHEN clauses.
    Record 9: Discarded - failed all WHEN clauses.
    Record 10: Discarded - failed all WHEN clauses.
    Table PLAYER_STATISTICS:
    6 Rows successfully loaded.
    5 Rows not loaded because all WHEN clauses were failed.
    Space allocated for bind array:
                      7168 bytes (64 rows)
    Space allocated for memory besides bind array:      0 bytes
    Total logical records skipped:         0
    Total logical records read:           11
    Total logical records rejected:        0
    Total logical records discarded:       5
    Run began on Fri Apr 09 11:21:16 1998
    Run ended on Fri Apr 09 11:21:18 1998
    Elapsed time was:     00:00:02.12
    Elapsed CPU time was: 00:00:00.35
    

    Load Data into Multiple Tables

    This example uses the MORE_STATS.TXT data file, which contains the player_id, year, last_name, first_name, and batting_average in a fixed-length format. player_id is stored from position 1 to 4, year from 6 to 9, last_name from 11 to 17, first_name from 19 to 23, and batting_average from 25 to 27. This will be denoted in the LOAD3.CTL control file, as follows:

    LOAD DATA
    INFILE 'MORE_STATS.TXT'
    BADFILE 'LOAD3.BAD'
    DISCARDFILE 'LOAD3.DSC'
    INTO TABLE baseball_player
           (player_id       POSITION(1:4)   INTEGER EXTERNAL,
            last_name       POSITION(11:17) CHAR,
            first_name       POSITION(19:23) CHAR)
    INTO TABLE player_statistics
           (player_id       POSITION(1:4)   INTEGER EXTERNAL,
            year            POSITION(6:9)   INTEGER EXTERNAL,
            batting_average POSITION(25:27) CHAR)
    
    Each table uses player_id
    Note how the player_id is used in each table, so that each record in the MORE_STATS.TXT data file is loaded into the BASEBALL_PLAYER and PLAYER_STATISTICS tables.

    The POSITION clause is used because the file is of fixed length. All NOT NULL constraints are met for both tables, so no records will be rejected. This is shown in the resulting log file, as follows:

    SQL*Loader: Release 8.0.3.0.0 - Production on
     Fri Apr 09 11:38:23 1998
     Copyright 1997 Oracle Corporation. All rights reserved.
    Control File:   load3.ctl
    Data File:      MORE_STATS.TXT
    Bad File:       load3.bad
    Discard File:   load3.dsc
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array:     64 rows, maximum of 65536 bytes
    Continuation:   none specified
    Path used:      Conventional
    Insert option in effect for this table: INSERT
    Table BASEBALL_PLAYER, loaded from every logical record.
    Column Name      Position    Len   Term   Encl    Datatype
    PLAYER_ID        1:4         4                    INTEGER
    LAST_NAME        11:17       7                    CHAR
    FIRST_NAME       19:23       5                    CHAR
    Table PLAYER_STATISTICS, loaded from every logical record.
    Column Name      Position    Len   Term   Encl    Datatype
    PLAYER_ID        1:4         4                    INTEGER
    YEAR             6:9         4                    INTEGER
    BATTING_AVERAGE  25:27       3                    CHAR
    
    Table BASEBALL_PLAYER:
             4 Rows successfully loaded.
    Table PLAYER_STATISTICS:
             4 Rows successfully loaded.
    
    Space allocated for bind array:        4241 bytes (64 rows)
    Space allocated for memory besides bind array:      0 bytes
    Total logical records skipped:         0
    Total logical records read:            4
    Total logical records rejected:        0
    Total logical records discarded:       0
    Run began on Fri Apr 09 11:38:23 1998
    Run ended on Fri Apr 09 11:38:25 1998
    Elapsed time was:     00:00:03.18
    Elapsed CPU time was: 00:00:00.75
    

    Additional SQL*Loader Features

    The SQL*Loader utility has many additional features. The most important include the conventional/direct path modes, the UNRECOVERABLE clause, special Oracle8 index usability, and loading data in parallel.

    Conventional Path and Direct Path Modes

    SQL*Loader can run in two modes: conventional (the default mode) and direct path loading. Conventional mode is less efficient but is needed if other users are accessing the tables during the load. Direct path mode bypasses some Oracle overhead processing for better performance. More specifically, conventional path loading uses SQL INSERT statements and bind-array buffer processing to load records into the database. This will also fire INSERT triggers if they exist. Direct path load bypasses all this by first formatting the data into Oracle data block format, and then writing directly into the database's data files. Direct path load can also work with the PARALLEL parameter for even better performance. Specify DIRECT=TRUE to use direct path mode.
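    For example, the first walkthrough's session could be switched to direct path simply by adding the parameter:

    SQLLDR system/manager CONTROL=LOAD1.CTL LOG=LOAD1.LOG DIRECT=TRUE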

    The UNRECOVERABLE Clause
    Use the UNRECOVERABLE clause to greatly improve performance for direct path loading. That way, SQL*Loader doesn't record changes to the redo logs; it just records a relatively small volume of invalidation redo. This saves overhead, especially for larger data loads.
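    The UNRECOVERABLE keyword goes in the control file, in front of LOAD DATA. A minimal sketch based on the earlier LOAD1.CTL example (remember that it applies only to direct path loads, so run the session with DIRECT=TRUE):

    UNRECOVERABLE LOAD DATA
    INFILE 'PLAYER.TXT'
    INTO TABLE BASEBALL_PLAYER
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      (player_id,last_name,first_name,middle_initial,start_date)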

    Oracle8 Index Usability

    Oracle8 introduced new index maintenance with SQL*Loader. Specifically, the SKIP_UNUSABLE_INDEXES parameter applies if any tables have indexes in the IU (Index Unusable) state. If this parameter is set to TRUE, these indexes won't be maintained during the SQL*Loader session. The indexes are built one at a time after the load session, which reduces sort processing. In addition, the SKIP_INDEX_MAINTENANCE parameter applies only for direct path mode and, if set to TRUE, stops maintenance on indexes during the load.

    Parallel Loading

    You can load a table in parallel with SQL*Loader by using direct path load. Doing so requires some setup, as the text file must be broken into several smaller files. Assume that you're about to load 40 million records into a table with the SQL*Loader utility. You would break the text file (data.txt for this example) into four files: data1.txt, data2.txt, data3.txt, and data4.txt, with about 10 million records each. You can then run four separate SQLLDR sessions in parallel by issuing the following commands (assuming that the LOAD1.CTL, LOAD2.CTL, LOAD3.CTL, and LOAD4.CTL control files are properly configured):

    SQLLDR USERID=system/manager CONTROL=LOAD1.CTL
     PARALLEL=TRUE DIRECT=TRUE
    SQLLDR USERID=system/manager CONTROL=LOAD2.CTL
     PARALLEL=TRUE DIRECT=TRUE
    SQLLDR USERID=system/manager CONTROL=LOAD3.CTL
     PARALLEL=TRUE DIRECT=TRUE
    SQLLDR USERID=system/manager CONTROL=LOAD4.CTL
     PARALLEL=TRUE DIRECT=TRUE
    

    The combined result will be about four times faster than if one process were to load the entire 40 million records.


    © Copyright, Macmillan Computer Publishing. All rights reserved.