DataStage Overview
What Is DataStage?
Sometimes DataStage is sold to and installed in an organization, and its IT support staff are expected to maintain it and to solve DataStage users' problems. In some cases IT support is outsourced and may not even become aware of DataStage until after it has been installed. Two questions then immediately arise: "What is DataStage?" and "How do we support DataStage?".
This white paper addresses the first of those questions from the point of view of the IT support provider. Manuals, web-based resources and instructor-led training are available to help answer the second. DataStage is actually two separate things.
• In production (and, of course, in development and test environments) DataStage is just another application on the server: an application that connects to data sources and targets and processes ("transforms") the data as they move through it. DataStage is therefore classed as an "ETL tool", the initials standing for extract, transform and load respectively (a minimal sketch of such a flow follows this list). DataStage "jobs", as they are known, can execute on a single server or on multiple machines in a cluster or grid environment. Like all applications, DataStage jobs consume resources: CPU, memory, disk space, I/O bandwidth and network bandwidth.
• DataStage also has a set of Windows-based graphical tools that allow ETL processes to be designed, the metadata associated with them to be managed, and the ETL processes to be monitored. These client tools connect to the DataStage server because all of the design information and metadata are stored on the server. On the DataStage server, work is organized into one or more "projects". There are also two DataStage engines, the "server engine" and the "parallel engine".
The server engine is located in a directory called DSEngine whose location is recorded in a hidden file called /.dshome (that is, a hidden file called .dshome in the root directory) and/or as the value of the environment variable DSHOME. (On Windows-based DataStage servers the folder name is Engine, not DSEngine, and its location is recorded in the Windows registry rather than in /.dshome.)
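As a quick illustration of how a support script might locate the engine on a UNIX-style server, here is a small Python sketch that follows the lookup just described (the DSHOME environment variable, then the hidden /.dshome file); the function name is invented for the example.

    import os
    from pathlib import Path

    def find_dshome():
        """Locate the DataStage server engine directory on a UNIX-style server.

        Checks the DSHOME environment variable first, then the hidden
        /.dshome file in the root directory.
        """
        dshome = os.environ.get("DSHOME")
        if dshome:
            return Path(dshome)
        marker = Path("/.dshome")
        if marker.exists():
            return Path(marker.read_text().strip())
        return None  # not found: perhaps a Windows server (check the registry)

    print(find_dshome() or "DSEngine location not found")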
DataStage Engines
The server engine is the original DataStage engine and, as its name suggests, is restricted to running jobs on the server. The parallel engine results from the acquisition of Orchestrate, a parallel execution technology developed by Torrent Systems. This technology enables work (and data) to be distributed over multiple logical "processing nodes", whether these are in a single machine or in multiple machines in a cluster or grid configuration. It also allows the degree of parallelism to be changed without any change to the design of the job.
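The following Python sketch illustrates that principle (it is a conceptual stand-in, not the parallel engine itself): the "job design" is a single transform function, and only a configuration value, the degree of parallelism, decides how many worker processes the rows are spread across.

    from multiprocessing import Pool

    def transform(row):
        # The "job design": one transform applied to every row.
        return {**row, "amount": row["amount"] * 1.1}

    def run_job(rows, degree_of_parallelism):
        # Only the worker count changes; the transform itself does not.
        with Pool(processes=degree_of_parallelism) as pool:
            return pool.map(transform, rows)

    if __name__ == "__main__":
        rows = [{"id": i, "amount": float(i)} for i in range(8)]
        print(run_job(rows, degree_of_parallelism=2))
        print(run_job(rows, degree_of_parallelism=4))  # same design, more nodes

In DataStage the equivalent knob is the parallel configuration file, which defines the logical processing nodes on which a job runs.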
Design-Time Architecture
Let us take a look at the design-time infrastructure. At its simplest, there is a DataStage server and a local area network to which one or more DataStage client machines are connected. When clients are remote from the server, a wide area network may be used, or some form of remote-access technology (such as Citrix MetaFrame) may be used instead.
InfoSphere DataStage provides these features and benefits:
• Powerful, scalable ETL platform—supports the collection, integration and transformation of large volumes of data, with data structures ranging from simple to complex.
• Support for big data and Hadoop—enables you to directly access big data on a distributed file system, and helps clients more efficiently leverage new data sources by providing JSON support and a new JDBC connector.
• Near real-time data integration—delivers data in near real time and provides connectivity between data sources and applications.
• Workload and business rules management—helps you optimize hardware utilization and prioritize mission-critical tasks.
• Ease of use—helps improve speed, flexibility and effectiveness to build, deploy, update and manage your data integration infrastructure.
• Rich support for DB2Z and DB2 for z/OS—including data load optimization for DB2Z and balanced optimization for DB2 on z/OS.
Powerful, scalable ETL platform
• Manages data arriving in near real-time as well as data received on a periodic or scheduled basis.
• Provides high-performance processing of very large data volumes.
• Leverages the parallel processing capabilities of multiprocessor hardware platforms to help you manage growing data volumes and shrinking batch windows.
• Supports heterogeneous data sources and targets in a single job including text files, XML, ERP systems, most databases (including partitioned databases), web services, and business intelligence tools.
Support for big data and Hadoop
• Includes support for the Hadoop Distributed File System (HDFS) as provided by IBM InfoSphere BigInsights, Cloudera, Apache Hadoop and Hortonworks.
• Offers Balanced Optimization for Hadoop capabilities to push processing to the data and improve efficiency.
• Supports big-data governance including features such as impact analysis and data lineage.
Workload and business rules management
• Helps enable policy-driven control of system resources and prioritization of different classes of workloads.
• Helps you optimize hardware utilization and prioritize tasks, control job activities where resources exceed specified thresholds, and assess and reassign the priority of jobs as they are submitted into the queue (a toy sketch of such a priority queue follows this list).
• Integrates with IBM Operational Decision Management (formerly ILOG JRules), allowing you to implement decision logic within IBM InfoSphere Information Server.
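As a toy illustration of priority-based queuing (a sketch of the underlying idea, not IBM's workload manager), the Python snippet below submits jobs with priorities and then reassesses one of them while it is still queued; all job names and priority values are invented.

    import heapq
    import itertools

    queue = []                   # a toy job queue: lower number = higher priority
    counter = itertools.count()  # tie-breaker keeps submission order stable

    def submit(job_name, priority):
        heapq.heappush(queue, (priority, next(counter), job_name))

    def reprioritize(job_name, new_priority):
        # Reassess a queued job's priority, as a workload manager might.
        for i, (p, c, name) in enumerate(queue):
            if name == job_name:
                queue[i] = (new_priority, c, name)
        heapq.heapify(queue)

    submit("nightly_load", priority=5)
    submit("critical_feed", priority=1)
    reprioritize("nightly_load", new_priority=0)  # bump it ahead of everything
    while queue:
        print(heapq.heappop(queue)[2])  # jobs drain in (re)assigned priority order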
Near real-time data integration
• Captures messages from Message Oriented Middleware (MOM) queues using Java Message Services (JMS) or WebSphere MQ adapters, allowing you to combine data into conforming operational and historical analysis perspectives (a small consumer sketch follows this list).
• Provides a service-oriented architecture (SOA) for publishing data integration logic as shared services that can be reused over the enterprise.
• Can simultaneously support the high-speed, high-reliability requirements of transactional processing and the large-volume bulk-data requirements of batch processing.
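DataStage itself captures messages through JMS or WebSphere MQ adapters. Purely as a conceptual stand-in, here is a short Python sketch using the open-source pika client against a RabbitMQ broker, consuming messages from a queue as they arrive; the queue name and host are assumptions for the example.

    import pika  # open-source RabbitMQ client, used here as a stand-in

    def on_message(channel, method, properties, body):
        # In DataStage the captured message would feed a job's data flow;
        # here we simply print the payload.
        print("captured:", body.decode())

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="events")
    channel.basic_consume(queue="events", on_message_callback=on_message, auto_ack=True)
    channel.start_consuming()  # blocks, handling messages as they arrive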
Ease of use
• Includes an operations console and interactive debugger for parallel jobs to help you enhance productivity and accelerate problem resolution.
• Helps reduce the development and maintenance cycle for data integration projects by simplifying administration and maximizing development resources.
• Offers operational intelligence capabilities, smart management of metadata and metadata imports, and parallel debugging capabilities to help enhance productivity when working with partitioned data.