Introduction — HPE Shadowbase Online Loader (SOLV) and HPE Shadowbase ETL

Figure 1 depicts a typical HPE Shadowbase Online Loader (SOLV) offline loading scenario. In this example, a point-in-time copy of the source database is taken and applied into a target database environment. This form of loading is referred to as ‘offline’ because any changes the application makes to the source database after the load starts (and after the data has been loaded) will not subsequently be replicated into the target database. This form of loading is most suitable when the source database is not actively being updated, or when a point-in-time snapshot of the source database is required.

In Figure 1, an application is processing user requests and reading and/or updating the source database. SOLV is loading the source database into the target database via the HPE Shadowbase replication engine. This component performs data and other transformations, then prepares and applies the data into the target database. Note that the application has access to the source database, and optionally the target database, while the load progresses.

Diagram of SOLV Flow Chart - (please see the paragraph that starts with "Figure 1 depicts" for a full image description).

Figure 1 — HPE Shadowbase Online Loader (SOLV) Offline Loading
(Change Data Replication is not Active)

SOLV allows the entire source file/table (or key ranges) to be specified for the load. Limited source data filtering (e.g., by partition or data content) is also supported – see the SOLV solution brief. Additionally, the target can be empty or contain data when the load starts. To refresh the target data, SOLV overwrites existing target data with incoming data from the source. If additional data exists in the target that is not in the source, the user should delete that target data before performing the load or refresh operation.
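The refresh behavior described above can be pictured as a simple insert-or-update pass over the target. The following is an illustrative sketch only, using plain Python dicts to stand in for keyed source and target tables; the function name and data are invented for the example and are not part of any Shadowbase API.

```python
def refresh_load(source: dict, target: dict) -> dict:
    """Apply a SOLV-style refresh: every incoming source row overwrites
    (or inserts) the matching target row by key; rows present only in
    the target are left untouched, which is why stale target-only rows
    must be deleted beforehand if an exact copy is required."""
    for key, row in source.items():
        target[key] = row  # insert new rows, overwrite existing ones
    return target

source = {1: "alice-v2", 3: "carol"}
target = {1: "alice-v1", 2: "bob"}   # key 2 exists only in the target
refresh_load(source, target)
# -> {1: "alice-v2", 2: "bob", 3: "carol"}; key 2 survives the refresh
```

Note how the row that exists only in the target (key 2) is untouched by the load, matching the guidance that such data should be deleted before the refresh.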

Figure 2 depicts a typical SOLV online loading scenario. In this example, an application is actively processing user requests and reading and/or updating the source database. SOLV is loading the source database into the target database via the HPE Shadowbase replication engine. Note that the application has full access to the source database, and optionally the target database, while the load progresses. The HPE Shadowbase replication engine is responsible for merging the application’s source database changes (DML and DDL activity), collected in the Audit Trail (or other database change log), with the data being loaded before applying them as a merged stream into the target database. SOLV and the HPE Shadowbase replication engine then keep the target database synchronized with the changes made to the source after the load completes.
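Conceptually, the merge in Figure 2 means loader records and application change events arrive as one ordered stream, and applying them in order keeps the target consistent. The sketch below illustrates that idea only; it is a hypothetical model, not the patented Shadowbase merge algorithm, and all names and event shapes are invented for the example.

```python
def apply_merged_stream(events, target=None):
    """Apply an ordered, already-merged stream of loader records and
    replicated change events to a dict standing in for the target table.

    events: iterable of (op, key, value) tuples, where op is one of
    'load', 'insert', 'update', or 'delete'."""
    target = {} if target is None else target
    for op, key, value in events:
        if op == "delete":
            target.pop(key, None)   # replicated delete removes the row
        else:
            target[key] = value     # load/insert/update all upsert
    return target

events = [
    ("load",   1, "row1"),
    ("update", 1, "row1'"),  # app change replicated after the load copy
    ("load",   2, "row2"),
    ("delete", 2, None),     # app deleted row 2 while the load ran
]
apply_merged_stream(events)
# -> {1: "row1'"}: later change events win over the loaded copies
```

Because the stream is applied in order, a change event that follows the loaded copy of the same row correctly supersedes it, which is the essence of keeping the target consistent while the source stays live.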

Diagram of SOLV Integrated Data Flow Chart - (please see the paragraph that starts with "Figure 2 depicts" for a full image description).

Figure 2 — HPE Shadowbase Online Loader (SOLV) Integrated Data
(Change Data is Actively Being Applied to the Source Database)

SOLV loading has special patented features that allow it to properly merge the data being loaded with the data being replicated. Note, however, that SOLV loading can occur without HPE Shadowbase replication being active/in use. In this case, SOLV acts as a stand-alone data conversion utility, reading and converting the source data format into the target data format, and applying that information into the target environment.
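In the stand-alone case, the work reduces to a per-record mapping from the source format to the target schema. The sketch below is purely illustrative: the field names, the source record shape, and the mapping rules are all invented for the example and do not reflect any actual Shadowbase configuration.

```python
def convert_record(src: dict) -> dict:
    """Map a hypothetical source (e.g., Enscribe-style) record to a
    hypothetical target SQL row: rename fields, convert types, and
    normalize values along the way."""
    return {
        "CUST_ID": int(src["cust-id"]),               # string key -> integer
        "NAME": src["cust-name"].strip().title(),     # trim and normalize case
        "BALANCE": round(float(src["bal"]), 2),       # string -> numeric(.,2)
    }

convert_record({"cust-id": "042", "cust-name": " ada lovelace ", "bal": "100.5"})
# -> {"CUST_ID": 42, "NAME": "Ada Lovelace", "BALANCE": 100.5}
```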

The utility can load audited and non-audited HPE NonStop Enscribe source files and HPE NonStop SQL tables into any target environment and database combination supported by the Shadowbase line of data replication products (e.g., HPE NonStop Enscribe or SQL targets, or Other Server targets such as Oracle, Sybase, SQL Server, DB2, and MySQL).

The HPE Shadowbase ETL utility uses and extends SOLV’s loading capabilities to read and inject events from flat files into the HPE Shadowbase replication engine for processing, as well as to produce flat files of database data or database change events that can then be processed by an ETL tool.
