Emp is a partitioned table with two composite partitions, p1 and p2. Delete the table from the database, create it again with the new partitions, and import the table data. This procedure lets you repartition a table on a different column.

Starting Import from the command line with no parameters initiates the interactive method. The interactive method does not provide prompts for all Import functionality and is provided only for backward compatibility.
You may not see all the prompts in a given interactive Import session because some prompts depend on your responses to other prompts. Note: If you specify No at the previous prompt, Import prompts you for a schema name and the table names you want to import for that schema.
Entering a null table list causes all tables in the schema to be imported. You can specify only one schema at a time when you use the interactive method.

Because an incremental export extracts only tables that have changed since the last incremental, cumulative, or complete export, an import from an incremental export file imports the table's definition and all its data, not just the changed rows.
Because imports from incremental export files are dependent on the method used to export the data, you should also read Incremental, Cumulative, and Complete Exports. It is important to note that, because importing an incremental export file imports new versions of existing objects, existing objects are dropped before new ones are imported.
This behavior differs from a normal import. During a normal import, objects are not dropped and an error is usually generated if the object already exists. The order in which incremental, cumulative, and complete exports are done is important.
A set of objects cannot be restored until a complete export has been run on a database. Once that has been done, the process of restoring objects follows the steps listed below.
Import the most recent complete export file. When restoring tables with this method, you should always start with a clean database (that is, one with no user tables) before starting the import sequence.
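The full restore sequence might look like the following sketch. The file names and the system/manager credentials are illustrative, and the exact file list depends on your export history; INCTYPE=SYSTEM imports system objects from the most recent file, after which each file is applied in chronological order with INCTYPE=RESTORE.

```shell
# Sketch: restoring from a chain of complete, cumulative, and incremental exports.
# File names and credentials are illustrative.

# 1. Import system objects from the most recent incremental export file.
imp system/manager FULL=y INCTYPE=SYSTEM FILE=incr2.dmp

# 2. Import the most recent complete export file.
imp system/manager FULL=y INCTYPE=RESTORE FILE=complete.dmp

# 3. Apply cumulative and incremental files in chronological order.
imp system/manager FULL=y INCTYPE=RESTORE FILE=cumul1.dmp
imp system/manager FULL=y INCTYPE=RESTORE FILE=incr1.dmp
imp system/manager FULL=y INCTYPE=RESTORE FILE=incr2.dmp
```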
Importing Object Types and Foreign Function Libraries from an Incremental Export File

For incremental imports only, object types and foreign function libraries are handled as system objects.
This imports the most recent definition of the object type (including the object identifier) and the most recent definition of the library specification. If the object type does not exist, or if it exists but its object identifier does not match, the table is not imported. This indicates that the object type was dropped or replaced after the incremental export, which required that all tables dependent on the object also be dropped.

This section describes the behavior of Import with respect to index creation and maintenance.
Import provides you with the capability of delaying index creation and maintenance services until after completion of the import and insertion of exported data. Performing index re-creation or maintenance after Import completes is generally faster than updating the indexes for each row inserted by Import. Index creation can be time consuming, and therefore can be done more efficiently after the import of all other objects has completed. The index-creation commands that would otherwise be issued by Import are instead stored in the specified file. This approach saves on index updates during import of existing tables. Delayed index maintenance may cause a violation of an existing unique integrity constraint supported by the index. For example, assume that partitioned table t with partitions p1 and p2 exists on the Import target system.
Assume that partition p1 contains a much larger amount of data in the existing table t , compared with the amount of data to be inserted by the Export file expdat.
Assume that the reverse is true for p2.

A database with many non-contiguous, small blocks of free space is said to be fragmented. A fragmented database should be reorganized to make space available in contiguous, larger blocks.
You can reduce fragmentation by performing a full database export and import as follows: perform a full database export; shut down Oracle after all users are logged off; delete the database (see your Oracle operating system-specific documentation for information on how to delete a database); re-create the database (see the Oracle8i Administrator's Guide for more information about creating databases); and perform a full database import.
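As a sketch, the reorganization cycle looks like this. The credentials and file name are illustrative, and the database deletion and re-creation steps are platform-specific.

```shell
# 1. Full database export while the database is still intact.
exp system/manager FULL=y FILE=full.dmp
# 2. Shut down Oracle after all users are logged off.
# 3. Delete and re-create the database (see your OS-specific documentation).
# 4. Full database import; extents are re-allocated in contiguous blocks.
imp system/manager FULL=y FILE=full.dmp
```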
By default, Import displays all error messages. If you specify a log file by using the LOG parameter, Import writes the error messages to the log file in addition to displaying them on the terminal. You should always specify a log file when you import. Also see your operating system-specific documentation for information on redirecting output.
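For example, a minimal invocation that captures all messages in a log file might look like this (credentials and file names are illustrative):

```shell
imp scott/tiger FILE=expdat.dmp FULL=y LOG=import.log
```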
When an import completes without errors, the message "Import terminated successfully without warnings" is issued. If one or more non-fatal errors occurred, and Import was able to continue to completion, the message "Import terminated successfully with warnings" occurs. If a fatal error occurs, Import ends immediately with the message "Import terminated unsuccessfully."
Additional Information: Messages are documented in Oracle8i Error Messages and your operating system-specific documentation. If a row is rejected due to an integrity constraint violation or invalid data, Import displays a warning message but continues processing the rest of the table. Some errors, such as "tablespace full," apply to all subsequent rows in the table. These errors cause Import to stop processing the current table and skip to the next table.
A row error is generated if a row violates one of the integrity constraints in force on your system. Row errors can also occur when the column definition for a table in a database is different from the column definition in the export file. The error is caused by data that is too long to fit into a new table's columns, by invalid data types, or by any other INSERT error. Errors can occur for many reasons when you import database objects, as described in this section.
When such an error occurs, import of the current database object is discontinued. Import then attempts to continue with the next database object in the export file. If a database object to be imported already exists in the database, an object creation error occurs and the object is not replaced; for tables, this means that the rows contained in the export file are not imported. If you specify IGNORE=y, the object creation error is suppressed: the database object is still not replaced, but, if the object is a table, rows are imported into it. Note that only object creation errors are ignored; all other errors, such as operating system, database, and SQL errors, are reported and processing may stop.
This could occur, for example, if Import were run twice. If sequence numbers need to be reset to the value in an export file as part of an import, you should drop sequences.
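For example, to let Import re-create a sequence with the value captured in the export file, drop it first (the sequence name here is illustrative):

```sql
-- Drop the sequence so that a subsequent import re-creates it
-- with the value recorded in the export file.
DROP SEQUENCE scott.emp_seq;
```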
A sequence that is not dropped before the import is not set to the value captured in the export file, because Import does not drop and re-create a sequence that already exists.

Resource limitations can cause objects to be skipped. When you are importing tables, for example, resource errors can occur as a result of internal problems, or when a resource such as memory has been exhausted. If a resource error occurs while you are importing a row, Import stops processing the current table and skips to the next table.
If COMMIT=y was not specified, a rollback of the current table occurs before Import continues. When a fatal error occurs, such as an error in the SQL that Import issues, Import terminates.

This section describes factors to take into account when using Export and Import across a network. When transferring an export file across a network, be sure to transmit the file using a protocol that preserves the integrity of the file. For example, when using FTP or a similar file transfer protocol, transmit the file in binary mode.
Transmitting export files in character mode causes errors when the file is imported. Net8 lets you export and import over a network. For example, running Import locally, you can read data into a remote Oracle database. For the exact syntax of this clause, see the user's guide for your Net8 protocol. For more information on Net8, see the Net8 Administrator's Guide.
See also Oracle8i Distributed Database Systems. Note: In certain situations, particularly those involving data warehousing, snapshots may be referred to as materialized views.
This section retains the term snapshot. The three interrelated objects in a snapshot system are the master table, the optional snapshot log, and the snapshot itself. These objects (the master table, the snapshot log table definition, and the snapshot tables) can be exported independently of one another. Snapshot logs can be exported only if you export the associated master table. You can export snapshots using full database or user-mode Export; you cannot use table-mode Export.
This section discusses how fast refreshes are affected when these objects are imported. Oracle8i Replication provides more information about snapshots and snapshot logs. The imported data is recorded in the snapshot log if the master table already exists for the database to which you are importing and it has a snapshot log.
As a result, each ROWID snapshot's first attempt to do a fast refresh fails, generating an error indicating that a complete refresh is required. After you have done a complete refresh, subsequent fast refreshes will work properly.
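The required complete refresh can be requested through the DBMS_SNAPSHOT package; the snapshot name below is illustrative:

```sql
-- 'C' requests a complete refresh; later refreshes can then use 'F' (fast).
EXECUTE DBMS_SNAPSHOT.REFRESH('scott.emp_snap', 'C');
```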
In contrast, when a primary key snapshot log is exported, the keys' values do retain their meaning upon Import. Therefore, primary key snapshots can do a fast refresh after the import.
See Oracle8i Replication for information about primary key snapshots. A snapshot that has been restored from an export file has "gone back in time" to a previous state. On import, the time of the last refresh is imported as part of the snapshot table definition. The function that calculates the next refresh time is also imported. Each refresh leaves a signature.
A fast refresh uses the log entries that date from the time of that signature to bring the snapshot up to date. When the fast refresh is complete, the signature is deleted and a new signature is created. Any log entries that are not needed to refresh other snapshots are also deleted (all log entries with times before the earliest remaining signature). When you restore a snapshot from an export file, you may encounter a problem under certain circumstances.
Assume that a snapshot is refreshed at time A, exported at time B, and refreshed again at time C. Then, because of corruption or other problems, the snapshot needs to be restored by dropping the snapshot and importing it again. The newly imported version has the last refresh time recorded as time A.
However, log entries needed for a fast refresh may no longer exist. If the log entries do exist (because they are needed for another snapshot that has yet to be refreshed), they are used, and the fast refresh completes successfully. Otherwise, the fast refresh fails, generating an error that says a complete refresh is required. Snapshots, snapshot logs, and related items are exported with the schema name explicitly given in the DDL statements; therefore, snapshots and their related items cannot be imported into a different schema.
If a user without the correct privileges attempts to import from an export file that contains tables with fine-grained access policies, a warning message will be issued. If the tablespace no longer exists, or the user does not have sufficient quota in the tablespace, the system uses the default tablespace for that user, unless the table is partitioned, is a type table, contains LOB or VARRAY columns, or is an index-only table with an overflow segment. See Reorganizing Tablespaces to see how you can use this to your advantage. Tables are exported with their current storage parameters.
If you alter the storage parameters of existing tables prior to export, the tables are exported using those altered storage parameters. Note that LOB data might not reside in the same tablespace as the containing table. If LOB data resides in a tablespace that does not exist at the time of import, or if the user does not have the necessary quota in that tablespace, the table will not be imported. Because there can be multiple tablespace clauses, including one for the table, Import cannot determine which tablespace clause caused the error.
You may want to pre-create large tables with different storage parameters before importing the data. By default at export time, storage parameters are adjusted to consolidate all data into its initial extent.

Read-only tablespaces can be exported. If you want read-only functionality, you must manually make the tablespace read-only after the import.
You can drop a tablespace by redefining the objects to use different tablespaces before the import. In many cases, you can drop a tablespace by doing a full database export, then creating a zero-block tablespace with the same name (before logging off) as the tablespace you want to drop.
All objects from that tablespace will be imported into their owner's default tablespace, with the exception of partitioned tables, type tables, tables that contain LOB or VARRAY columns, and index-only tables with overflow segments. Import cannot determine which tablespace caused the error. Objects are not imported into the default tablespace if the tablespace does not exist or the user does not have the necessary quotas for their default tablespace. If a user's quotas allow it, the user's tables are imported into the same tablespace from which they were exported.
However, if the tablespace no longer exists or the user does not have the necessary quota, the system uses the default tablespace for that user as long as the table is unpartitioned, contains no LOB or VARRAY columns, is not a type table, and is not an index-only table with an overflow segment.
This scenario can be used to move a user's tables from one tablespace to another. For example, you need to move JOE's tables from tablespace A to tablespace B after a full database export.
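Sketched in SQL and an Import invocation, the move might look like this. The user, tablespace, and file names come from the example; the quota change on tablespace A is an assumption added to keep imported tables out of A.

```sql
-- Performed after the full database export has been taken.
DROP TABLE joe.emp;                   -- drop JOE's tables from tablespace A
ALTER USER joe QUOTA 0 ON a;          -- assumed step: leave no quota on A
ALTER USER joe DEFAULT TABLESPACE b   -- make B the default tablespace
               QUOTA UNLIMITED ON b;  -- and give JOE a quota on B
-- Then, from the operating system:
--   imp system/manager FILE=full.dmp FROMUSER=joe
```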
Follow these steps:

1. Drop JOE's tables from tablespace A.
2. Give JOE a quota on tablespace B and make it his default tablespace.
3. Import JOE's tables.

Note: Role revokes do not cascade. Therefore, users who were granted other roles by JOE will be unaffected.

Character Set and NLS Considerations

This section describes the character set conversions that can take place during export and import operations.
If the character set in the export file is different than the Import user session character set, Import performs a character set conversion to its user session character set. Import can perform this conversion only if the ratio of the width of the widest character in its user session character set to the width of the smallest character in the export file character set is 1. A final character set conversion may be performed if the target database's character set is different from Import's user session character set.
To minimize data loss due to character set conversions, it is advisable to ensure that the export database, the export user session, the import user session, and the import database all use the same character set. If the national character set of the source database is different than the national character set of the import database, a conversion is performed.
Some 8-bit characters can be lost (that is, converted to 7-bit equivalents) when importing an 8-bit character set export file. Most often, this is seen when accented characters lose the accent mark. Refer to the section Character Set Conversion. For multi-byte character sets, Import can convert data to the user-session character set only if the ratio of the width of the widest character in the import character set to the width of the smallest character in the export character set is 1.
If the ratio is not 1, the user-session character set should be set to match the export character set, so that Import does no conversion. During the conversion, any characters in the export file that have no equivalent in the target character set are replaced with a default character.
The default character is defined by the target character set.

The Oracle server assigns object identifiers to uniquely identify object types, object tables, and rows in object tables. These object identifiers are preserved by Import. Be sure you feel very confident of your knowledge of type validation and how it works before attempting to import with this feature disabled.
Import uses the following criteria to decide how to handle object types, object tables, and rows in object tables. Rows are imported into the object table; import of rows may fail if rows with the same object identifier already exist in the object table. If an object table was created using the OID AS option to assign it the same object identifier as another table, both tables cannot be imported.
One may be imported, but the second receives an error because the object identifier is already in use.

Importing Existing Object Tables and Tables That Contain Object Types

Users frequently pre-create tables before import to reorganize tablespace usage or change a table's storage parameters. The tables must be created with the same definitions as were previously used, or a compatible format (except for storage parameters).
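For example, a table might be pre-created with new storage parameters and then loaded with IGNORE=y, so that the object creation error is suppressed and the rows are inserted. The names and storage parameters below are illustrative.

```sql
CREATE TABLE scott.emp (
  empno NUMBER(4),
  ename VARCHAR2(10)
) TABLESPACE users
  STORAGE (INITIAL 10M NEXT 10M);
-- Then, from the operating system:
--   imp scott/tiger FILE=expdat.dmp TABLES=(emp) IGNORE=y
```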
For object tables and tables that contain columns of object types, format compatibilities are more restrictive. For tables containing columns of object types, the same object type must be specified and that type must have the same object identifier as the original. Export writes information about object types used by a table in the Export file, including object types from different schemas.
Object types from different schemas used as top-level columns are verified for matching name and object identifier at import time. Object types from different schemas that are nested within other object types are not verified. If the object type already exists, its object identifier is verified. Import retains information about what object types it has created, so that if an object type is used by multiple tables, it is created only once. Note: In all cases, the object type must be compatible in terms of the internal format used for storage.
Import does not verify that the internal format of a type is compatible. If the exported data is not compatible, the results can be unpredictable. Inner nested tables are exported separately from the outer table. Therefore, situations may arise where data in an inner nested table might not be properly imported: If fatal errors occur inserting data in outer tables, the rest of the data in the outer table is skipped, but the corresponding inner table rows are not skipped.
This may result in inner table rows not being referenced by any row in the outer table. If an insert into an inner table fails with a non-fatal error, its outer table row will already have been inserted, and data will continue to be inserted into the outer table and into any other inner tables of the containing table.
This circumstance results in a partial logical row. If fatal errors occur inserting data into an inner table, the import skips the rest of that inner table's data but does not skip the outer table or other nested tables. You should always carefully examine the log file for errors in outer tables and inner tables. To be consistent, table data may need to be modified or deleted.
Because inner nested tables are imported separately from the outer table, attempts to access data from them while importing may produce unexpected results. For example, if an outer row is accessed before its inner rows are imported, an incomplete row may be returned to the user. Export and Import do not copy data referenced by BFILE columns and attributes from the source database to the target database.
Import does not verify that the directory alias or file exists. If the directory alias or file does not exist, an error occurs when the user accesses the BFILE data. For operating system directory aliases, if the directory syntax used in the export system is not valid on the import system, no error is reported at import time.
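One way to make BFILE data accessible on the import system is to re-create the directory alias there; the alias name, path, and grantee are illustrative:

```sql
CREATE OR REPLACE DIRECTORY bfile_dir AS '/u01/app/oracle/bfiles';
GRANT READ ON DIRECTORY bfile_dir TO scott;
```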
The interactive method is provided only for backward compatibility. When you invoke Import interactively, the response given by Import depends on what you enter at the command line. Table shows the possibilities. Additionally, if you omit the password and allow Import to prompt you for it, you cannot specify the instance string as well. You can specify instance only with username. After Import is invoked, it displays the following prompts.
You may not see all prompts in a given Import session because some prompts depend on your responses to other prompts. Some prompts show a default answer. If the default is acceptable, press Enter.
Entering a null table list causes all tables in the schema to be imported. You can specify only one schema at a time when you use the interactive method. This section describes the different types of messages issued by Import and how to save them in a log file.
You can capture all Import messages in a log file, either by using the LOG parameter or, for those systems that permit it, by redirecting Import's output to a file. The Import utility writes a log of detailed information about successful loads and any errors that may occur. Import does not terminate after recoverable errors. For example, if an error occurs while importing a table, Import displays or logs an error message, skips to the next table, and continues processing.
These recoverable errors are known as warnings. For example, if a nonexistent table is specified as part of a table-mode import, the Import utility imports all other tables. Then it issues a warning and terminates successfully. Some errors are nonrecoverable and terminate the Import session. These errors typically occur because of an internal problem or because a resource, such as memory, is not available or has been exhausted. If one or more recoverable errors occur but Import is able to continue to completion, Import displays the message "Import terminated successfully with warnings".
If a nonrecoverable error occurs, Import terminates immediately and displays the message "Import terminated unsuccessfully". See Oracle9i Database Error Messages and your Oracle operating system-specific documentation.

Import provides the results of an import operation immediately upon completion. Depending on the platform, Import may report the outcome in a process exit code as well as recording the results in the log file. This enables you to check the outcome from the command line or script.
Table shows the exit codes that are returned for various results. If a row is rejected due to an integrity constraint violation or invalid data, Import displays a warning message but continues processing the rest of the table. Some errors, such as "tablespace full," apply to all subsequent rows in the table. These errors cause Import to stop processing the current table and skip to the next table.
A row error is generated if a row violates one of the integrity constraints in force on your system. Row errors can also occur when the column definition for a table in a database is different from the column definition in the export file. The error is caused by data that is too long to fit into a new table's columns, by invalid datatypes, or by any other INSERT error.
Errors can occur for many reasons when you import database objects, as described in this section. When these errors occur, import of the current database object is discontinued. Import then attempts to continue with the next database object in the export file. If a database object to be imported already exists in the database, an object creation error occurs. The current database object is not replaced. For tables, this behavior means that rows contained in the export file are not imported.
If you specify IGNORE=y, the object creation error is suppressed; the database object is still not replaced, but, if the object is a table, rows are imported into it. Note that only object creation errors are ignored; all other errors, such as operating system, database, and SQL errors, are reported and processing may stop. This could occur, for example, if Import were run twice. If sequence numbers need to be reset to the value in an export file as part of an import, you should drop the sequences before importing.
If a sequence is not dropped before the import, it is not set to the value captured in the export file, because Import does not drop and re-create a sequence that already exists. Resource limitations can cause objects to be skipped.
When you are importing tables, for example, resource errors can occur as a result of internal problems, or when a resource such as memory has been exhausted. If a resource error occurs while you are importing a row, Import stops processing the current table and skips to the next table. If COMMIT=y was specified, a partial import of the current table takes effect; if not, a rollback of the current table occurs before Import continues.

For each specified table, table-level Import imports all rows of the table.
With table-level Import, if the table does not exist and the exported table was partitioned, table-level Import creates a partitioned table.
If the table creation is successful, table-level Import reads all source data from the export file into the target table. After Import, the target table contains the partition definitions of all partitions and subpartitions associated with the source table in the Export file. This operation ensures that the physical and logical attributes including partition bounds of the source partitions are maintained on Import.
Partition-level Import can only be specified in table mode. It lets you selectively load data from specified partitions or subpartitions in an export file. Keep the following guidelines in mind when using partition-level import. If you specify a partition name for a composite partition, all subpartitions within the composite partition are used as the source. In the following example, the partition specified by the partition-name is a composite partition.
All of its subpartitions will be imported. The following example causes row data of partitions qc and qd of table scott.e to be imported. If table e does not exist in the Import target database, it is created and data is inserted into the same partitions. If table e existed on the target system before Import, the row data is inserted into the partitions whose range allows insertion. The row data can end up in partitions of names other than qc and qd.
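The partition-level import described above might be invoked as follows; the credentials and file name are illustrative, and the table:partition syntax selects the source partitions:

```shell
imp scott/tiger FILE=expdat.dmp TABLES=(e:qc,e:qd)
```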
This section describes the behavior of Import with respect to index creation and maintenance. Import provides you with the capability of delaying index creation and maintenance services until after completion of the import and insertion of exported data. Performing index creation, re-creation, or maintenance after Import completes is generally faster than updating the indexes for each row inserted by Import.
Index creation can be time consuming, and therefore can be done more efficiently after the import of all other objects has completed. The index-creation statements that would otherwise be issued by Import are instead stored in the specified file. This approach saves on index updates during import of existing tables. Delayed index maintenance may cause a violation of an existing unique integrity constraint supported by the index.
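A sketch of the delayed-index workflow using the INDEXFILE and INDEXES parameters follows; names are illustrative.

```shell
# 1. Extract the index-creation statements to a file; this step imports no data.
imp scott/tiger FILE=expdat.dmp FULL=y INDEXFILE=indexes.sql
# 2. Import the data without building indexes.
imp scott/tiger FILE=expdat.dmp FULL=y INDEXES=n
# 3. After the load completes, review and run indexes.sql from SQL*Plus.
```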
For example, assume that partitioned table t with partitions p1 and p2 exists on the Import target system. Assume that partition p1 contains a much larger amount of data in the existing table t , compared with the amount of data to be inserted by the Export file expdat.
Assume that the reverse is true for p2.

A database with many noncontiguous, small blocks of free space is said to be fragmented. A fragmented database should be reorganized to make space available in contiguous, larger blocks. You can reduce fragmentation by performing a full database export and import.
See the Oracle9i Database Administrator's Guide for more information about creating databases.

This section describes factors to take into account when using Export and Import across a network.
Because the export file is in binary format, use a protocol that supports binary transfers to prevent corruption of the file when you transfer it across a network. For example, use FTP or a similar file transfer protocol to transmit the file in binary mode.
Transmitting export files in character mode causes errors when the file is imported. With Oracle Net, you can perform exports and imports over a network. For example, if you run Export locally, you can write data from a remote Oracle database into a local export file.
If you run Import locally, you can read data into a remote Oracle database. For the exact syntax of this clause, see the user's guide for your Oracle Net protocol.
This section describes the character set conversions that can take place during export and import operations. The following sections describe character conversion as it applies to user data and DDL. If the character sets of the source database are different than the character sets of the import database, a single conversion is performed. To minimize data loss due to character set conversions, ensure that the export database, the export user session, the import user session, and the import database all use the same character set.
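The user session character set is controlled by the NLS_LANG environment variable; setting it to match the export file, as in this illustrative sketch, avoids conversion on import:

```shell
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
export NLS_LANG
imp scott/tiger FILE=expdat.dmp FULL=y
```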
Some 8-bit characters can be lost (that is, converted to 7-bit equivalents) when you import an 8-bit character set export file. Most often, this is apparent when accented characters lose the accent mark. During character set conversion, any characters in the export file that have no equivalent in the target character set are replaced with a default character.
The default character is defined by the target character set. See the Oracle9i Database Globalization Support Guide.

The following sections describe points you should consider when you import particular database objects.
The Oracle database server assigns object identifiers to uniquely identify object types, object tables, and rows in object tables. These object identifiers are preserved by Import. To do this, Import compares the type's unique identifier (TOID) with the identifier stored in the export file. If those match, Import then compares the type's unique hashcode with that stored in the export file. Import will not import table rows if the TOIDs or hashcodes do not match.
Be sure you are confident of your knowledge of type validation and how it works before attempting to perform an import operation with this feature disabled. Import uses the following criteria to decide how to handle object types, object tables, and rows in object tables.
Users frequently create tables before importing data to reorganize tablespace usage or to change a table's storage parameters. The tables must be created with the same definitions as were previously used or a compatible format except for storage parameters. For object tables and tables that contain columns of object types, format compatibilities are more restrictive. For object tables and for tables containing columns of objects, each object the table references has its name, structure, and version information written out to the Export file.
Export also includes object type information from different schemas, as needed. Import verifies the existence of each object type required by a table prior to importing the table data. This verification consists of a check of the object type's name followed by a comparison of the object type's structure and version from the import system with that found in the Export file. If an object type name is found on the import system, but the structure or version do not match that from the Export file, an error message is generated and the table data is not imported.
Inner nested tables are exported separately from the outer table. Therefore, situations may arise where data in an inner nested table might not be properly imported:
You should always carefully examine the log file for errors in outer tables and inner tables. Table data may need to be modified or deleted to make the outer and inner tables consistent. Because inner nested tables are imported separately from the outer table, attempts to access their data while the import is in progress may produce unexpected results.
For example, if an outer row is accessed before its inner rows are imported, an incomplete row may be returned to the user.

Export and Import do not copy data referenced by BFILE columns and attributes from the source database to the target database. Import does not verify that the directory alias or file exists. If the directory alias or file does not exist, an error occurs when the user accesses the BFILE data. For directory aliases, if the operating system directory syntax used on the export system is not valid on the import system, no error is reported at import time.
Subsequent access to the file data receives an error. It is the responsibility of the DBA or user to ensure that the directory alias is valid on the import system.

Import does not verify that the location referenced by a foreign function library is correct. If the formats for directory and filenames used in the library's specification in the export file are invalid on the import system, no error is reported at import time.
Subsequent usage of the callout functions will receive an error. It is the responsibility of the DBA or user to manually move the library and ensure that the library's specification is valid on the import system.

The compilation of an imported stored procedure, function, or package takes place the next time the procedure, function, or package is used. If the compilation is successful, it can be accessed by remote procedures without error.

When you import Java objects into any schema, the Import utility leaves the resolver unchanged.
The resolver is the list of schemas used to resolve Java full names. This means that after an import, all user classes are left in an invalid state until they are either implicitly or explicitly revalidated. An implicit revalidation occurs the first time the classes are referenced; an explicit revalidation occurs when the SQL statement ALTER JAVA CLASS ... RESOLVE is used. Both methods result in the user classes being resolved successfully and becoming valid.

Import does not verify that the location referenced by an external table is correct.
If the formats for directory and filenames used in the table's specification in the export file are invalid on the import system, no error is reported at import time. It is the responsibility of the DBA or user to manually move the table and ensure that the table's specification is valid on the import system.

Importing a queue table also imports any underlying queues and the related dictionary information.
A queue can be imported only at the granularity level of the queue table. When a queue table is imported, export pre-table and post-table action procedures maintain the queue dictionary.

LONG columns can be up to 2 gigabytes in length. In importing and exporting, LONG columns must fit into memory with the rest of each row's data.
You can use the Import utility to convert LONG columns to CLOB columns. To do this, first create a table specifying the new CLOB column.

Views are exported in dependency order. In some cases, Export must determine the ordering, rather than obtaining it from the database server.
In doing so, Export may not always be able to duplicate the correct ordering, resulting in compilation warnings when a view is imported, and in the failure to import column comments on such views.
In particular, if viewa uses the stored procedure procb, and procb uses the view viewc, Export cannot determine the proper ordering of viewa and viewc. If viewa is exported before viewc, and procb already exists on the import system, viewa receives compilation warnings at import time.
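When such compilation warnings occur, the affected views can usually be revalidated once all of their dependencies have been imported. A hedged sketch, reusing the illustrative names viewa, viewc, and procb from above:

```sql
-- Once viewc and procb exist on the import system, recompile
-- the view that received warnings at import time.
ALTER VIEW viewa COMPILE;

-- List any objects in the current schema that are still invalid.
SELECT object_name, object_type
FROM   user_objects
WHERE  status = 'INVALID';
```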
Grants on views are imported even if a view has compilation errors. A view could have compilation errors if an object it depends on, such as a table, procedure, or another view, does not exist when the view is created. Access violations could occur when the view is used if the grantor does not have the proper privileges after the missing tables are created.
If the importer has not been granted the privilege required on the objects that the views reference, the views will be imported in an uncompiled state. Note that granting the privilege to a role is insufficient.
For the view to be compiled, the privilege must be granted directly to the importer.

You can export tables with fine-grained access control policies enabled. When doing so, keep the following considerations in mind. To restore the fine-grained access control policies, the user who imports from an export file containing such tables must have the following privileges:
If a user without the correct privileges attempts to import from an export file that contains tables with fine-grained access control policies, a warning message is issued. Therefore, it is advisable for security reasons that the exporter and importer of such tables be the DBA. See the Oracle9i Application Developer's Guide - Fundamentals for more information about fine-grained access control.

In certain situations, particularly those involving data warehousing, snapshots may be referred to as materialized views.
This section retains the term snapshot. The three interrelated objects in a snapshot system are the master table, the optional snapshot log, and the snapshot itself. These tables (the master table, the snapshot log table definition, and the snapshot tables) can be exported independently of one another.
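As a point of reference, the three interrelated objects might be created as follows. This is a sketch using the materialized view syntax that is synonymous with snapshots; all names are illustrative:

```sql
-- The master table.
CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  amount   NUMBER
);

-- The optional snapshot log, which records changes to the master table.
CREATE MATERIALIZED VIEW LOG ON orders;

-- The snapshot itself, refreshed from the master table.
CREATE MATERIALIZED VIEW orders_mv
  REFRESH FAST
  AS SELECT order_id, amount FROM orders;
```

Because each of the three can be exported independently, importing only the snapshot without its master table, for example, leaves the snapshot unable to refresh until the master is restored.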