Ataccama ONE Big Data Processing & Integration is architected for high performance and scalability, offering rich, built-in features that respond to a range of data transformation needs. Quickly understand and explore data in your data lake, rapidly process enormous volumes of data, and conduct a detailed data quality analysis before executing necessary transformations.
Ataccama ONE Big Data Processing covers the entire data integration, ingestion, transformation, preparation, and management process, including data extraction, import into a data lake or Hadoop, cleansing, and general processing. This ensures data is ready for further analytics at the right place and time, and in the right format. All configuration happens through an intuitive interface, allowing business users to process and shape data without Hadoop-specific expertise such as MapReduce or Spark.
USE CASES
General data processing & integration
IoT & streaming data integration
Data catalog & business glossary for data lakes
Transformations, aggregations, data enrichment, and more
Matching on Hadoop
Quality control in transactional and analytical applications through a business-friendly interface
Cleansing & unification in system migrations
Quality assurance in software integration projects
Data quality improvement in address and contact information
Data cleansing & unification for client identification purposes
Profiling, validation & correction of incomplete records as part of data integration project analysis
Detecting inconsistencies & irregular patterns for fraud prevention and more
Data preparation for further analytical use
Big Data Processing & Data Integration
Rich data integration and transformation capabilities for data engineers, data scientists, DevOps, and business users.
[Screenshot: the Ataccama ONE "Add new source" dialog. Big Data connectors include Hive, HDFS, HBase, Azure Data Lake Storage, Kafka, Avro, and Parquet; integrations include Salesforce, Tableau, ThoughtSpot, SugarCRM, Zoho, Power BI, Pipedrive, MicroStrategy, Marketo, Looker, HubSpot, Google Drive, Einstein Analytics, Dropbox, and Amazon S3.]
CLIENT REVIEWS
GARTNER PEER INSIGHTS
Read about our clients’ experience with our tools.
Review our products.
GARTNER.COM/REVIEWS (ATACCAMA)
FEATURES
Seamless migration between local and big data environments: existing configurations can run in any environment without changes or recompilation.
Data lake ready: Integrate, transform, and enrich your data with external sources, with support for HDFS, Azure Data Lake Storage, Amazon S3, and other S3-compatible object stores. AWS Glue Data Catalog, Hive, HBase, Kafka, Avro, Parquet, ORC, TXT, CSV, and Excel are also supported.
Support for elastic computing/big data processing on demand: Profile, process, and cleanse your data on automatically provisioned clusters with support for Azure HDInsight, Amazon EMR, Google Dataproc, Databricks, Cloudera, Hortonworks, and MapR clusters. MapReduce, Spark, and Spark 2 engines are utilized.
Support for IoT and Spark Streaming, including streaming integration with Apache Kafka, Apache NiFi, and Amazon Kinesis.
Unique profiling: Enjoy rapid data analysis and advanced semantic profiling functionality.
Advanced core functionality: Built-in algorithms perform approximate matching during record unification and hierarchical unification by identifier keys, irrespective of internal data structures.
Hadoop MapReduce and Apache Spark native support: All calculations and processing are executed directly on a cluster, with no need to remove any data from Hadoop. Based on cluster characteristics, the solution is automatically translated into a series of MapReduce jobs, or directly utilizes Spark. Ataccama ONE supports all major Hadoop distributions.
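Ataccama does not expose the matching algorithms above as public code, but the idea behind approximate matching during record unification can be sketched in plain Python using the standard library's difflib. This is an illustrative stand-in for the product's matching engine, not its actual API; the record fields and threshold are assumptions chosen for the example.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how closely two normalized strings match."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def unify(records, threshold=0.85):
    """Greedily group records whose names approximately match.

    Each group collects records that match the group's first ("master")
    record at or above the similarity threshold.
    """
    groups = []
    for rec in records:
        for group in groups:
            if similarity(rec["name"], group[0]["name"]) >= threshold:
                group.append(rec)
                break
        else:
            groups.append([rec])
    return groups

records = [
    {"id": 1, "name": "Acme Corp"},
    {"id": 2, "name": "Acme Corp."},   # near-duplicate of id 1
    {"id": 3, "name": "Globex Ltd"},
]
groups = unify(records)
# groups[0] holds ids 1 and 2; groups[1] holds id 3
```

A production engine would add blocking keys, phonetic comparators, and survivorship rules; the sketch only shows the fuzzy-grouping core that replaces exact-key joins.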
Rich data integration and data preparation capabilities for data engineers and data scientists. Profile, assess, transform, and join your datasets in a data lake and in the cloud.
BIG DATA ENGINE | High-Level Architecture

[Diagram: Configurations are built in the Ataccama IDE. Ad-hoc deployment either reads a file / connects to a database and runs on the desktop (non-Hadoop environment) or runs as a MapReduce or Spark job on the Hadoop cluster. Production deployment uploads the configuration to the Ataccama Server, which runs it on the server or as a MapReduce or Spark job on the cluster. Results are written to DB storage when they need to be available outside the cluster.]
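"Run as a MapReduce or Spark job" means a visually built configuration is translated into map and reduce steps executed on the cluster. As a purely illustrative sketch (plain Python, not the code Ataccama generates), a value-frequency profiling step follows the classic map, shuffle, reduce shape:

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit a (normalized value, 1) pair for every input value."""
    for value in records:
        yield (value.strip().lower(), 1)

def shuffle(pairs):
    """Shuffle: group emitted pairs by key, as the framework would do."""
    grouped = defaultdict(list)
    for key, count in pairs:
        grouped[key].append(count)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts per key to get value frequencies."""
    return {key: sum(counts) for key, counts in grouped.items()}

# A toy column of country codes with inconsistent casing and whitespace
countries = ["CZ", "cz", "US", "DE", "US "]
frequencies = reduce_phase(shuffle(map_phase(countries)))
# frequencies == {"cz": 2, "us": 2, "de": 1}
```

On a real cluster the same three phases run distributed across nodes, which is why results stay on the cluster unless they are explicitly exported to database storage.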
GET STARTED
ASK FOR A DEMO
See what we can do for your data project. Get in touch to see a full demo of Ataccama ONE.
INFO@ATACCAMA.COM
TRY ONE-CLICK DATA PROFILING
Free account. Instant insights.
ONE.ATACCAMA.COM
LOOK FOR US IN THE MAGIC QUADRANTS
Gartner Magic Quadrant for Data Quality Tools
Gartner Magic Quadrant for MDM Solutions
READ HOW MACHINE LEARNING TRANSFORMS DATA MANAGEMENT
ATACCAMA.COM/FREEMIUM
ATACCAMA ONE: AI-POWERED DATA CURATION FOR DIGITAL TRANSFORMATION
Discover more platform modules at ataccama.com
Data Discovery & Profiling
Metadata Management & Data Catalog
Data Quality Management
Master & Reference Data Management
Big Data Processing & Data Integration