Automating and accelerating data pipelines for data lakes and the cloud
Attunity is changing data integration: the latest release of the Attunity platform delivers analytics-ready data to diverse platforms, both on-premises and in the cloud.
Unlike the traditional batch-oriented and inflexible ETL approaches of the last decade, Attunity provides the modern, real-time architecture you need to harness the agility and efficiencies of new data lakes and cloud offerings.
Streaming:
Universal Stream Generation
Databases can now publish events to all major streaming services, including Kafka, Confluent, Amazon Kinesis, Microsoft Azure Event Hub, and MapR Streams.
Optimized Data Streaming
Flexible message formats, including JSON and Avro, along with the separation of data and metadata into separate topics, allow for smaller data messages and easier integration of metadata into various schema registries.
Cloud:
AWS S3 and Kinesis
Data can now land in S3 via bulk load or change data capture, or be published to Amazon Kinesis.
Snowflake Data Loading
The entire catalog of Attunity-supported data sources can now feed a Snowflake data warehouse via bulk load or change data capture.
Preferred data movement solution for Amazon Web Services and Microsoft Azure
Deep partnerships and broad product integration with industry leaders
Data Lakes:
Automate the Creation of Analytics-Ready Data Lakes
Data lands in optimized time-based partitions, and Attunity automatically creates the schema and structures in the Hive Catalog for Operational Data Stores (ODS) and Historical Data Stores (HDS) – with no manual coding. Supported distributions include Hortonworks, AWS Elastic MapReduce, and Cloudera.
Ensure Data Consistency
Attunity automatically reconciles continuous data inserts, updates and deletions, while providing ACID compliance, without manual coding or disruption. Attunity also recognizes and responds to source data structure changes (DDL) and automatically applies changes to your data lake.
Enterprise Management & Control:
Enterprise-wide Control and Management
Scalability to thousands of tasks; resiliency and recovery to maintain data integration processes across multiple data centers and hybrid cloud environments
Performance and Data Flow Analytics
Comprehensive historical and real-time reporting for improved capacity planning and performance monitoring of all data flows
Operational Metadata Creation and Discovery
Central repository shared across the Attunity platform and with third-party tools for enterprise-wide reporting
Microservices API
New REST and .NET APIs designed for invoking and managing Attunity services using a standard web-based UI
Downloading the software:
https://4web.s3.amazonaws.com/Files/Replicate/AttunityReplicate_Express_Linux_X64.rpm
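For example, assuming the Linux server has outbound internet access and wget installed, the package can be pulled straight onto the machine:
# Download the Attunity Replicate Express RPM into the current directory
wget https://4web.s3.amazonaws.com/Files/Replicate/AttunityReplicate_Express_Linux_X64.rpm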
Installing Attunity
Prerequisites
1. Windows or Linux 64-bit server (dedicated to running the 'Attunity Replicate' software)
   - Located in the same local network as the source database(s)
   - Will host the 'Attunity Replicate' software (download available via the Welcome Email sent automatically when one activates the AMI)
   - Minimum requirements:
     - Windows Server 2008 R2, 2012 or 2012 R2
     - Red Hat Enterprise Linux 6.2 and above
     - SUSE Linux 11 and above
     - Quad-core processor, 8 GB RAM, and 320 GB of disk space
2. The Attunity Replicate Console is web-based and requires one of the following browsers:
   - Microsoft Internet Explorer Version 9 or higher
   - Mozilla Firefox Version 38 and above
   - Google Chrome
Installation:
Run the downloaded RPM "AttunityReplicate_Express_Linux_X64.rpm" as shown below.
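A minimal sketch, assuming the RPM sits in the current directory and you have root (or sudo) privileges on the Linux server:
# Install the Attunity Replicate Express package
# (by default the software is placed under /opt/attunity/replicate)
sudo rpm -ivh AttunityReplicate_Express_Linux_X64.rpm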
Verifying that the Attunity Replicate Server is Running:
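One way to verify, assuming the default Linux install location of /opt/attunity/replicate, is to query the bundled service script and scan the process list:
# Ask the Replicate service script for its status (path assumes the default install location)
sudo /opt/attunity/replicate/bin/areplicate status
# As a cross-check, look for Replicate processes in the process list
ps -ef | grep -i replicate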
Accessing the Attunity Replicate Express Console:
Attunity Replicate Server on Windows:
https://<computer name>/AttunityReplicate
Attunity Replicate Server on Linux:
https://<computer name>:<port>/AttunityReplicate
Where <computer name> is the name or IP address of the computer where the Attunity Replicate Server is installed and <port> is the UI Server port (3552 by default).
Open a browser and enter the URL below:
https://localhost:3552/AttunityReplicate
Setting console password:
Set the password as shown below, then restart the server.
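A sketch of the Linux commands, assuming the default install path and the repctl utility shipped with Replicate; <your_password> is a placeholder for the password you choose:
# Set the Replicate server (console) password
sudo /opt/attunity/replicate/bin/repctl SETSERVERPASSWORD <your_password>
# Restart the Replicate service so the new password takes effect
sudo /opt/attunity/replicate/bin/areplicate stop
sudo /opt/attunity/replicate/bin/areplicate start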
Open the console now
Configuring Source:
Open the 'Attunity Replicate Console' and select 'Manage Endpoint Connections'.
Click 'New Endpoint Connection'.
Configure the source via 'Add Database' with these values:
Test the connection and click 'Save'.
Configuring Target:
Open the 'Attunity Replicate Console' and select 'Manage Endpoint Connections'.
Click 'New Endpoint Connection'.
Configure the target via 'File' with these values:
Test the connection and click 'Save'.
Creating, Running, and Monitoring Replicate Tasks
1: Open Attunity Replicate and select 'New Task'. In the 'New Task' window, enter a unique name, then select 'OK'.
2: Drag your recently configured source database to the 'Drop source database here' area on the left.
3: Drag your recently configured target database to the 'Drop target database here' area on the right.
4: Select 'Table Selection' on the right. The 'Select Tables' window opens.
5: From the 'Schema' drop-down list, select a schema.
6: Click 'Search' to find the tables in the schema. You may then 'add' / 'remove' / 'add all' / 'remove all'. Then click 'OK'.
Note: As this is a trial version, not all features are available. Attunity Replicate is not open-source software.