HPE Shadowbase Streams for Data and Application Integration
Powerful and Flexible Facilities to Integrate Existing Applications and Databases
Use HPE Shadowbase Streams to:
- replicate database changes from a source database into a target process, adapter, API, or environment (called “application integration”)
- replicate database changes from a source database into a target database (called “data integration”)
Shadowbase data streaming performs change data capture (CDC) on the source database, then filters, transforms, cleanses, and packages the change events for delivery to the target database or environment. In essence, Shadowbase builds an event-driven architecture (EDA) that shares the source data across the enterprise, enabling new value-add services and eliminating data silos.
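For illustration only, the sketch below shows how a packaged change event and a filter/transform step might look. The event fields, table, and column names are hypothetical and do not represent Shadowbase's actual event format.

```python
# Hypothetical shape of a packaged change-data-capture (CDC) event.
# Field names are illustrative only; the actual event format is defined
# by the product and the configured target adapter.
change_event = {
    "source_table": "ACCOUNTS",
    "operation": "UPDATE",          # INSERT, UPDATE, or DELETE
    "commit_timestamp": "2024-05-01T12:34:56Z",
    "before": {"ACCT_ID": 1001, "BALANCE": 250.00},
    "after":  {"ACCT_ID": 1001, "BALANCE": 175.00},
}

def filter_and_transform(event):
    """Example of the filter/transform/cleanse step: keep only UPDATEs
    to the ACCOUNTS table and reshape them for a downstream consumer."""
    if event["source_table"] != "ACCOUNTS" or event["operation"] != "UPDATE":
        return None
    return {
        "account": event["after"]["ACCT_ID"],
        "new_balance": event["after"]["BALANCE"],
        "changed_at": event["commit_timestamp"],
    }

print(filter_and_transform(change_event))
```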
Note: Several integration combinations are possible. Data integration is more common than application integration, but purchasing HPE Shadowbase Streams enables a user to perform both.
Solution Brief:
HPE Shadowbase Streams — Integrate Data and Applications to Create New Solutions
- Video: Shadowbase Data and Application Integration from Shadowbase Software (available on Vimeo and YouTube).
Applications
Integrate existing applications and services in real-time
“Use real data [and] real applications to find real problems early.”
– Credit Union Application Developer
Use Shadowbase Streams for applications that:
- are isolated, legacy, siloed, or “stale,”
- cannot be directly modified,
- were never intended or designed to work together,
- or whose original developers are unavailable or have left the company.
Shadowbase application integration provides a suite of adapters to easily integrate the source data with the target environment (a consumer sketch follows the list below). Available adapters include:
- Apache Kafka
- IBM MQ environments
- JSON output format
- Flat files (CSV, fixed-position, tab-delimited)
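For example, assuming the Kafka adapter is configured to publish change events as JSON to a topic, a downstream consumer could be as simple as the following sketch. The topic name, broker address, and message fields are assumptions, not product defaults, and the client shown is the open-source kafka-python package.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Assumed topic name and broker address; the actual values depend on
# how the Kafka adapter is configured at your site.
consumer = KafkaConsumer(
    "shadowbase.account-changes",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # The consuming application simply reacts to each delivered event,
    # e.g. feeding a fraud-detection or analytics service.
    print(event.get("operation"), event.get("source_table"))
```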
White Paper:
HPE Shadowbase Streams for Application Integration
Stream events across applications, creating a data pipeline from a data pump
As a source application generates changes and events, Shadowbase Streams immediately distributes them across the enterprise. This architecture lets a consuming application simply use the events that Shadowbase Streams delivers.
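One common way to consume the delivered stream is to register lightweight event handlers, so new services can plug into the flow without touching the source application. A minimal sketch, with hypothetical handler names:

```python
# Minimal event-driven dispatch sketch; handler names are hypothetical.
# Shadowbase Streams delivers the events; the consuming application
# only decides what to do with each one.

def detect_fraud(event):
    print("fraud check for account", event.get("account"))

def update_dashboard(event):
    print("refresh sales dashboard with", event)

HANDLERS = [detect_fraud, update_dashboard]

def on_delivered_event(event):
    """Called once per change event delivered by the streaming layer."""
    for handler in HANDLERS:
        handler(event)

# Example: process one delivered event.
on_delivered_event({"account": 1001, "operation": "UPDATE"})
```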
Rapidly create new value-add business functionality and services that were not possible before integration, all without modifying existing application code
HPE Shadowbase Streams modernizes legacy applications and databases by enabling the creation of new and valuable business services to enhance competitiveness, reduce costs or increase revenue, and satisfy regulatory requirements.
Many benefits
- Create a local copy of data updates from a source application
- Enhance competitiveness
- Expand existing functionality
- Generally, improve the user experience
- Help satisfy regulatory requirements
- Improve end-user response times
- Increase revenue
- Rapidly generate new and valuable services
- Reduce costs
Examples
- Real-time fraud detection
- Real-time business intelligence
- Real-time sales analysis and online analytical processing (OLAP)
Databases
Integrate existing database environments in real-time
Use Shadowbase Streams for databases to:
- rapidly deliver (or ingest) information where and when it is needed, in the required format, and
- distribute updated information in real-time throughout an enterprise.
White Paper:
HPE Shadowbase Streams for Data Integration
Change data capture
With data integration, Shadowbase Streams uses change data capture (CDC) technology to capture changes and maintain a real-time copy of selected data from a source system’s database in a target system’s database.
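As a sketch of the target-apply side of data integration, each captured change event maps to the corresponding SQL statement on the target database. SQLite is used below purely to keep the example self-contained; the table and column names are assumptions:

```python
import sqlite3

# Illustrative target database (SQLite keeps the example self-contained);
# the real target could be any supported database.
target = sqlite3.connect(":memory:")
target.execute("CREATE TABLE ACCOUNTS (ACCT_ID INTEGER PRIMARY KEY, BALANCE REAL)")

def apply_change(event):
    """Apply one captured change event to the target copy of the table."""
    op = event["operation"]
    if op == "INSERT":
        row = event["after"]
        target.execute("INSERT INTO ACCOUNTS VALUES (?, ?)",
                       (row["ACCT_ID"], row["BALANCE"]))
    elif op == "UPDATE":
        row = event["after"]
        target.execute("UPDATE ACCOUNTS SET BALANCE = ? WHERE ACCT_ID = ?",
                       (row["BALANCE"], row["ACCT_ID"]))
    elif op == "DELETE":
        target.execute("DELETE FROM ACCOUNTS WHERE ACCT_ID = ?",
                       (event["before"]["ACCT_ID"],))
    target.commit()

apply_change({"operation": "INSERT", "after": {"ACCT_ID": 1001, "BALANCE": 250.0}})
```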
Examples
- Feed data warehouses
- Populate data marts
- Build online query processing (OLQP) systems to offload reporting from the host
- Create localized data sets for distributed applications