
What's new with Oracle Data Integrator 11.1.1.5 - Load Plans

By FX on May 29, 2011

Oracle Data Integrator 11gR1 PS1 introduces a major new feature called Load Plans. This post will give you an overview of this feature.

The documentation defines the load plan as an executable object that can contain a hierarchy of steps that can be executed conditionally, in parallel, or in series. The leaves of this hierarchy are Scenarios. Packages, interfaces, variables, and procedures can be added to Load Plans for execution in the form of scenarios. In a nutshell, Load Plans are extremely powerful objects for organizing and launching scenarios in a production context. They should help you get rid of manual scripts coded to start scenarios in the correct order, and of packages used to launch other scenarios in cascade.

Creating a Load Plan

Load Plans appear in both the Designer and Operator Navigators as shown below. They are available for editing in both development and production repositories, and can be organized into scenario folders.

Creating a load plan is straightforward: right-click and select New (Load Plan), then specify a name for the Load Plan. As a Load Plan will be launching scenarios, you can define at that level how these scenarios will be logged (the Log Session, Log Session Step, and related options).

The real work with Load Plans takes place on the Steps tab. There, you can define a hierarchy of steps. The leaves of this hierarchy will be Scenarios that will be started in sequence, in parallel, and/or conditionally based on the values of variables.

In the example below, the Datawarehouse Load Plan does the following in sequence (a serial step):

1. First it runs an Initialization step (this step starts a scenario called INITIALIZATION).

2. It refreshes the Dimensions in parallel (more information below).

3. Then it evaluates the value of the IS_LOAD_FACT variable. This variable is passed as a startup parameter of this load plan.

If this value is 1, it runs the LOAD_SALES and then the FINALIZE_FACT_LOADING scenarios.

If this value is 2, it runs the LOAD_SALES scenario only.

Otherwise, it runs the FINALIZE_FACT_LOADING scenario

Refreshing the Dimensions in Parallel implies that we perform two actions simultaneously:

Load the Products (this is done by the LOAD_PRODUCTS scenario)

Load Geographies (this is a package loading a set of country/region/city tables) and then load Customers (this is a second package).

This step embeds a serial step within a parallel step. It is possible in load plans to embed steps within steps, creating a complete execution flow in the hierarchy, as sketched below.
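Sketched as an outline, the example's hierarchy reads:

Datawarehouse Load Plan (Serial)
  1. Initialization (runs the INITIALIZATION scenario)
  2. Refresh Dimensions (Parallel)
     - Load Products (runs LOAD_PRODUCTS)
     - Load Geographies, then Load Customers (Serial)
  3. Case on IS_LOAD_FACT
     - When 1: run LOAD_SALES, then FINALIZE_FACT_LOADING
     - When 2: run LOAD_SALES
     - Else: run FINALIZE_FACT_LOADING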

To add steps, you can either use the wizards (available with the "+" button in the toolbar), or drag and drop scenarios, interfaces, procedures, and so on directly from the Designer tree view into the step hierarchy; this automatically creates a scenario for the component and adds that scenario as a step in the load plan.

If you prefer top-down development, you can create a load plan and use the wizard to add scenarios that do not exist yet. In the example below, the scenario added with the wizard does not exist yet; by using version number -1, we simply tell the load plan to execute the latest version of this scenario.

In addition, from a load plan step, you can access the object from which the scenario was created, or regenerate the scenario. Reorganizing the load plan is also extremely simple, as it is just a matter of drag and drop!

Running Load Plans

After saving your Load Plan, you can run it by clicking the execute button in the toolbar. The running load plan will be shown in the Load Plan Executions accordion of the Operator. The Steps tab of the Load Plan Run will show you the steps executed, their status, and statistics. This whole tab reflects the executions in progress and can be refreshed while the executions take place. The sessions started by the load plan still appear in the Sessions list, but the Steps tab is ten times more useful for monitoring the overall execution of all these sessions. By clicking the Session ID link (in blue) in this tab, you open the Session editor and can drill down into the session.

Like Scenarios, Load Plans can also be started from a command line or a web service interface. They can of course be scheduled using an external scheduler or the built-in scheduler. A command-line sketch follows.
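For example, from a standalone agent installation, a run could be launched from the command line. This is a minimal sketch: it assumes the startloadplan script in the agent's bin directory, the load plan name, context code, and agent URL are hypothetical, and the exact parameter syntax should be verified against the tool reference.

startloadplan.bat DATAWAREHOUSE GLOBAL "-AGENT_URL=http://prod-host:20910/oraclediagent"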

Note that Load Plans require a JEE or Standalone agent to run. They cannot run within the Studio Local Agent. This is due to the fact that the Load Plan execution flow is distributed across the agents running the sessions started from the Load Plan. With this architecture, there is no single technical point of failure that could prevent a load plan from proceeding with its execution flow when the execution takes place on multiple agents.

Exception Handling

Exception handling and restartability behavior are among the coolest things in Load Plans.

An Exception is simply a group of steps (like a mini Load Plan) that can be executed in case of a failure.

In the example above, I have defined two Exceptions (Minor Exception and Major Exception). Each starts a scenario that mails the administrator; the major one in addition starts a scenario to dump the log. These exceptions can be triggered on step failure.

Every step has a property that indicates whether an exception should be executed when this step fails, and whether the failure of this step should be raised to the parent step in the hierarchy. By raising a failure, you can escalate it up to the root step, which fails the whole Load Plan. By ignoring the failure, you flag this step's failure as a minor error.

In the example below, if any of the parallel steps refreshing the dimensions fails ("Max Error Child Count=0"), the Refresh Dimensions step is considered failed. In the event of such a failure, I will run the Minor Exception and continue the load. Even if not all dimensions are refreshed, the facts can still be loaded, as I am using the ODI data integrity framework to isolate facts that would reference dimensions not correctly refreshed.

This example also illustrates the restartability of such a step. If I decide to restart this load plan, only the failed children would be restarted, as defined by the Restart Type option.

Note that when restarting an existing load plan, ODI does not overwrite the first load plan run, but copies it and restarts the copy. Each Load Plan Run is preserved for error identification and tracking.

Load Plans vs. Packages

Users already knowledgeable about ODI may now wonder: are Load Plans a new type of package? Well, although there are similarities between these two objects, they do not have the same objective:

Packages are (simply said) technical workflows with a strong transactional nature, produced mainly by data integration developers.

Load Plans aim at simplifying the functional design and administration of production runs, and are produced by production users and data integration project leads/architects.

Let's discuss the differences:

Capability | Load Plans | Packages | Comments

Edition | Design time and run time | Design time only; packages are compiled into scenarios at run time | If production needs to modify the execution flow, it is preferable to deliver a load plan.

Starting/Monitoring | UI, command line, web services, scheduling | UI, command line, web services, scheduling | Both features are equivalent.

Transactions | Each Load Plan step contains its own transactions | Package steps may share a transaction | If the workflow requires a transaction that spans several steps, use a package.

Parallelism | Yes, using parallel steps; parallel execution is easy to follow in the Operator | Yes, by starting other scenarios; parallel execution is hard to follow in the Operator | When there is a strong need for parallel step execution, prefer Load Plans.

Restartability | Yes; the status of previous runs is persisted | Yes; the status of previous executions is overwritten, and database transactions are not continued, so restarting the whole package is often needed | Due to their transactional nature, and because their execution log is overwritten by the new execution, packages are often restarted as atomic units of work. Load Plans provide better flexibility for restartability.

Branching/Loops | Branching (Case/When) is supported; loops are not supported | Branching and looping are supported | If there is a need for looping in a workflow, prefer packages.

That's all for today. Stay tuned for more deep dives into the 11.1.1.5 new features!

What's New in ODI 11g? - Part 4: Core Features

By FX on Aug 31, 2010

Oracle Data Integrator 11gR1 includes a large number of features and enhancements over the 10gR3 release. In this blog series I will try to explain the major directions taken by the product and give a quick overview of these features and enhancements.

For a detailed list of features, the following documentation link can be used as a reference.

There are four major areas of change in this release:

New Architecture for Enterprise-Scale Deployment

New Design-Time Experience

New Run-Time Experience

Core Enhancements to E-LT and Declarative Design

Features that enhance developers' productivity, including the new ODI Studio, are already detailed in Part 2. In this fourth and last part, I will focus on an overview of the changes related to the core product engine. These changes are enhancements to both the E-LT architecture and the Declarative Design approach.

Datasets and Set-Based Operators

Datasets are a big leap in interface design in ODI 11g. Imagine that you want to union, into a target table, information coming from a set of flat files with information from a set of tables (with different transformations, of course). With ODI 10g, you start thinking about multiple interfaces, procedures, and packages. Well, ODI 11g allows you to do this in a single interface.

In an ODI 11g interface, Datasets represent flows of data that are merged into a single target. Each flow corresponds to a set of sources and has its own set of transformations (mappings, joins, filters, and so forth). These different datasets are merged using set-based operators (UNION, INTERSECT and so forth).

ODI is able to generate the code corresponding to all the datasets and merge all these data flows into a single flow that can be checked and then integrated using any of the existing integration strategies.
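Conceptually, the merged flow resembles the SQL below. This is an illustration only, not ODI's exact generated code, and the table and column names are hypothetical:

-- Dataset 1 (a relational source) merged with dataset 2 (flat-file staging)
-- by the UNION set-based operator, before the flow is checked and integrated
SELECT CUST_ID, UPPER(CUST_NAME) AS CUST_NAME
FROM   SRC_CUSTOMER
UNION
SELECT CUST_ID, TRIM(CUST_NAME) AS CUST_NAME
FROM   C$_CUSTOMER_FILE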


Figure 1: Datasets can be added and managed from the mapping tab. Each dataset appears as a sub-tab of the mapping and will contain different sources, joins, filters, mappings, and so forth.

Derived Select for Temporary Interfaces

When using a temporary interface as a source in another interface, it is possible not to persist the temporary datastore and generate instead a Derived Select (sub-select) statement. The temporary interface no longer needs to be executed to load the temporary datastore, and developments using temporary tables are greatly simplified.
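As an illustration (with hypothetical names), instead of first loading a SRC_CUSTOMER temporary table, the generated code inlines the temporary interface as a sub-select:

SELECT SRC_CUSTOMER.CUST_ID, SRC_CUSTOMER.TOTAL_AMOUNT
FROM   (SELECT CUST_ID, SUM(AMOUNT) AS TOTAL_AMOUNT
        FROM   SRC_ORDERS
        GROUP BY CUST_ID) SRC_CUSTOMER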


Figure 2: If the Use Temporary Interface as Derived Table option is selected, the temporary datastore SRC_CUSTOMER will not be persisted; it is turned into a sub-select instead.

Lookups

Oracle Data Integrator 11g introduces the concept of Lookup in the interfaces. Lookups are created using a wizard, have a specific graphical artifact and a dedicated property inspector in the interface window. Lookups are generated by ODI for processing in the database engine in the form of a Left Outer Join in the FROM clause or as an expression in the SELECT clause (in-memory lookup with nested loop).
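The two generated forms look roughly as follows (illustrative SQL with hypothetical names; the second form is shown as an equivalent scalar expression, whereas ODI actually performs that variant as an in-memory nested loop in the agent):

-- Lookup as a Left Outer Join in the FROM clause
SELECT ORD.ORDER_ID, LKP.CUST_NAME
FROM   SRC_ORDERS ORD LEFT OUTER JOIN LKP_CUSTOMER LKP
       ON (ORD.CUST_ID = LKP.CUST_ID)

-- Lookup as an expression in the SELECT clause
SELECT ORD.ORDER_ID,
       (SELECT LKP.CUST_NAME FROM LKP_CUSTOMER LKP
        WHERE  LKP.CUST_ID = ORD.CUST_ID) AS CUST_NAME
FROM   SRC_ORDERS ORD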


Figure 3: The Lookup Wizard simplifies lookup creation.

Other Changes

In addition to these three major changes, other improvements have been made to the core product to support more database capabilities.

Partitioning

Partitioning information can be reverse-engineered into datastores, and specific partitions can be selected when such a datastore is used as a source or a target.


Figure 4: The partitions reverse-engineered with the TRG_CUSTOMER datastore can be selected when this datastore is used as a target (or a source) in an interface.

Temporary Indexing

Joins and filters in interfaces can be automatically indexed for better performance. By selecting index types on a join or a filter, the user requests that ODI create temporary indexes on the columns participating in the join or filter while the interface runs, as sketched below.
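The DDL issued by the knowledge module would resemble the following (hypothetical index and table names; the temporary index is dropped once the interface completes):

CREATE INDEX IX_SRC_ORDERS_CUST ON SRC_ORDERS (CUST_ID)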


Figure 5: Temporary indexing for a join. An index type has to be specified for both sides of the join.

Native Sequences

ODI Sequences can now map directly to native sequences defined in a database. These sequences can be reverse-engineered. Such a sequence is used with the ODI sequence syntax (for example, #PROJECT001.MYSEQUENCE) and is automatically converted to the database's syntax in the generated code.
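For instance, on Oracle the conversion is roughly:

-- Mapping expression in ODI:       #PROJECT001.MYSEQUENCE
-- Generated Oracle code (roughly): MYSEQUENCE.NEXTVAL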


Figure 6: A native sequence is declared and selected from the list of sequences present in an Oracle schema.

Natural Joins

Joins in interfaces now support the Natural Join type. This join does not require any join expression; it is handled by the execution engine, which automatically matches columns with the same name.
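On an engine that supports the ANSI syntax, the generated join is then simply of the form (illustrative, with hypothetical table names):

SELECT *
FROM   SRC_ORDERS NATURAL JOIN SRC_CUSTOMER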


Parallel Processing in ODI

By Christophe Dupupet on Nov 20, 2009

This post assumes that you have some level of familiarity with ODI. The concepts of Packages, Interfaces, Procedures, and Scenarios are used here assuming that you understand them in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for detailed information.

ODI: Parallel Processing

A common question in ODI is how to run processes in parallel. When you look at a typical ODI package, all steps are described in a serial fashion and will be executed in sequence.


However, this same package can parallelize and synchronize processes if needed.

PARALLEL PROCESSES

The first piece of the puzzle, if you want to parallelize your executions, is that a package can invoke other packages once they have been compiled into scenarios (the process of generating scenarios is described later in this post). You can then have a master package that will orchestrate the other scenarios. There is no limit to how many levels of nesting you can have, as long as your processes make sense: your master package invokes a secondary package which, in turn, invokes another package...

When you invoke these scenarios, you have two possible execution modes: synchronous and asynchronous.


A synchronous execution will serialize the scenario execution with other steps in the package: ODI executes the scenario, and only after its execution is completed, runs the next step.

An asynchronous execution will only invoke the scenario but will immediately execute the next step in the calling package: the scenario will then run in parallel with the next step. You can use this option to start multiple scenarios concurrently: they will all run in parallel, independently of one another.

SYNCHRONIZING PROCESSES

Once we have started multiple processes in parallel, a common requirement is to synchronize these processes: some steps may run in parallel, but at times we will need all separate threads to be completed before we proceed with a final series of steps. ODI provides a tool for this: OdiWaitForChildSession.


An interesting feature is that as you start your different processes in parallel, they can each be assigned a keyword (this is just one of the parameters you can set when you start a scenario). When you synchronize the processes, you can select which processes will be synchronized based on a selection of keywords; see the sketch below.
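As a rough sketch, such a synchronization step could look like the following tool call. The parameter names are assumptions from memory (a keyword-filter parameter also exists, whose exact name should be checked in the OdiWaitForChildSession reference):

OdiWaitForChildSession "-PARENT_SESS_NO=<%=odiRef.getSession("SESS_NO")%>" "-POLL_INT=5"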

ADDING SCENARIOS TO YOUR PACKAGE FOR PARALLEL PROCESSING

To add a scenario to your package, simply drag and drop the generated scenario into the package, and edit the execution parameters as needed. In particular, remember to set the execution mode to Asynchronous.

You can generate a scenario from a package, from an interface, or from a procedure. The last two will be more atomic (one interface or one procedure only per execution unit). The typical way to generate a scenario is to right-click on one of these objects and to select Generate Scenario.

The generation of scenarios can also be automated with ODI processes that would invoke the ODI tool OdiGenerateAllScen. The parameters of this tool will let you define which scenarios are being generated automatically.

In all cases, scenarios can be found in the object tree, under the object they were generated from - or in the Operator interface, in the Scenarios tab.

While you are developing your different objects, keep in mind that you can Regenerate existing scenarios. This is faster than deleting existing ones only to re-create them with the same version number. To re-generate a scenario, simply right-click on the existing version and select Regenerate ... .

From an execution perspective, you can specify that the scenario you will execute is version -1 (negative one) to ensure that the latest version number is always the one executed. This is a lot easier than editing the parameters with each new release.
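For example, using the startscen script shown later in this document (the scenario and context names here are hypothetical), requesting version -1 looks like this:

startscen.bat LOAD_CUSTOMERS -1 GLOBAL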

DISPLAYING PARALLEL PROCESSING

You will notice that, as of 10.1.3.4, ODI does not graphically differentiate between serialized and parallelized executions: all are represented in a serial manner. One way to make parallel executions more visible is to stack up the objects vertically, versus the more natural horizontal layout for serialized objects. (If we have electricians reading this, the layout will be very familiar to them, but this is only a coincidence...)


OTHER OBJECTS THAN SCENARIOS

Scenarios are not the only objects that allow for parallel (or asynchronous) execution. If you look at the ODI tool OdiOSCommand, you will notice a Synchronous option that lets you define whether the external component you are executing will run in parallel with the current process, or whether it will be serialized in your process. The same is true for the Data Quality tool OdiDataQuality.

EXECUTION LOGS

As you start running more processes in parallel, be ready to see more processes being executed concurrently in the Operator interface. If you are only interested in seeing the master processes though, the Hierarchy tab will allow you to limit your view to parent processes. Child processes will be listed under the Child Sessions entry under each session.

Likewise, when you access the logs from the web front end, you can view the Parent processes only.

Enjoy!

Setup of ODI 11g Agents for High Availability

By Alex Kotopoulis on Dec 13, 2010

Thanks to Sachin Thatte for contributing this article!

Introduction

Oracle introduced the latest release of Oracle Data Integrator (ODI) Enterprise Edition 11g in the summer of 2010. With this offering, Oracle raised the bar on performance, scalability, and high availability for data movement and transformation solutions. With its unique E-LT (Extract, Load and Transform) approach, the total cost of ownership for ODI is a fraction of that of its competitors.

In this article we will show how to take advantage of ODI's E-LT data movement and transformation capabilities in a highly scalable and highly available way.

ODI performs the execution and orchestration of the E-LT jobs via the lightweight ODI runtime Agent. The ODI Agent can be deployed in a standalone environment as well as on the industry-leading WebLogic Server. It is with the WebLogic Server deployment that ODI achieves scalability and high availability. For deployment on WebLogic Server, ODI ships with a JEE application and a domain configuration deployment template to assist in configuring the agent. ODI also supports using an Oracle RAC database for storing ODI Master and Work Repository data.

Deployment Topology

The following is a depiction of the ODI agent deployed in a cluster, connected to an Oracle RAC repository.


By deploying the ODI Agent on a clustered WebLogic Server, the incoming requests can be served by a farm of machines, each able to take on a slice of the incoming requests. The proxy or load balancer that receives all the incoming requests can intelligently distribute the load across the managed servers to maximize capacity. Depending on your business needs, the number of managed servers on which ODI Agents are deployed can be increased or decreased without disrupting your business. It is recommended that the ODI Repositories (Master and Work) be deployed on Oracle RAC for load balancing and high availability on the database side. Using Oracle RAC will allow ODI to retry failed connections in case one of the RAC nodes goes down, and to continue the execution of running ODI Sessions without failing the E-LT tasks.
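For reference, a RAC-aware JDBC URL for the repository data sources would look like the following (host and service names are hypothetical):

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=ON)
  (ADDRESS=(PROTOCOL=TCP)(HOST=rac-node1)(PORT=1521))
  (ADDRESS=(PROTOCOL=TCP)(HOST=rac-node2)(PORT=1521)))
  (CONNECT_DATA=(SERVICE_NAME=ODI_REPO)))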

The ODI agent includes an internal scheduling service. This scheduling service is used to execute the E-LT jobs automatically based on a schedule that a user can define. The schedule for a job can be one-time or recurring. There are various fault handling options to handle any unforeseen errors caused by environmental problems. This scheduling service runs as a singleton when deployed on a WebLogic cluster. If, for any reason, the managed server where the scheduling service is deployed goes down or is brought down for maintenance, the scheduling service automatically migrates to one of the available managed servers in the cluster. It thus provides a fail-safe and highly available scheduling service for executing ODI schedules.

How do I set up ODI for HA?

1. Configure a named ODI Agent in ODI Studio

ODI Agents are declared and defined in the ODI Studio Topology panel. Define the ODI Agent that will be used in the WebLogic cluster for highly available and scalable deployment. The host/port defined in the agent configuration must match the load balancer host/port address. All ODI Agent requests will be routed through this host/port address.

2. Generate the ODI Agent dynamic template for deployment

Using ODI Studio, generate a deployment template for the ODI Agent. When generating the deployment template, you can choose the Data Servers that should be deployed as JEE Data Sources so that they are managed and pooled via the WebLogic configuration.

3. Deploy the ODI Agent using the dynamically generated agent deployment template

Use the WebLogic configuration wizard to deploy the template generated in the previous step to a WebLogic cluster. This will allow you to create the set of managed servers that are part of the WebLogic cluster to which the ODI Agent should be deployed. You can also deploy the ODI Agent template on an existing WebLogic domain.

4. Configure Coherence cache properties in JAVA_OPTIONS for managed server startup.

a. The tangosol.coherence.localport configuration parameter defines the port that a node in the cluster uses for Coherence communication. Other agent nodes ping it to detect an existing Coherence cluster.

b. All the ODI Agents deployed on a cluster must be connected to the same Coherence cluster. This enables the agents to share knowledge of the tasks performed by each of them, and allows the Scheduling Service to migrate when needed. The following properties are introduced to configure the Coherence listen addresses, where N = 1..10:

oracle.odi.coherence.wkaN: the host name of a managed server
oracle.odi.coherence.wkaN.port: the Coherence unicast port configured on that managed server

For example:

Node 1: "-Dtangosol.coherence.localport=8095 -Doracle.odi.coherence.wka1= -Doracle.odi.coherence.wka1.port=8095 -Doracle.odi.coherence.wka2= -Doracle.odi.coherence.wka2.port=8096"

Node 2: "-Dtangosol.coherence.localport=8096 -Doracle.odi.coherence.wka1= -Doracle.odi.coherence.wka1.port=8095 -Doracle.odi.coherence.wka2= -Doracle.odi.coherence.wka2.port=8096"
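Filled in with hypothetical host names (odi-ms1 and odi-ms2 stand for the two managed-server hosts), the settings would read:

Node 1: "-Dtangosol.coherence.localport=8095 -Doracle.odi.coherence.wka1=odi-ms1 -Doracle.odi.coherence.wka1.port=8095 -Doracle.odi.coherence.wka2=odi-ms2 -Doracle.odi.coherence.wka2.port=8096"
Node 2: "-Dtangosol.coherence.localport=8096 -Doracle.odi.coherence.wka1=odi-ms1 -Doracle.odi.coherence.wka1.port=8095 -Doracle.odi.coherence.wka2=odi-ms2 -Doracle.odi.coherence.wka2.port=8096"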

Such an ODI agent deployment will be highly available and will scale to address the load on the agent based on your business needs.

ODI11g: Creating and Scheduling an ODI Scenario

Purpose

This tutorial walks you through the steps that are needed to create and schedule an Oracle Data Integrator (ODI) scenario.

Time to Complete

Approximately 30 minutes.

Overview

When a set of objects is complete and tested, a good practice is to create an ODI scenario for each object. When a scenario is created, the code for the object is generated and stored in the scenario; after that, the scenario cannot be edited. The created scenarios can also be scheduled to run on virtually any time interval within ODI. In this OBE, you create and schedule a scenario for the ODI procedure that was created in the OBE titled Creating an ODI Project and Procedure to Create and Populate a Relational Table.

Scenario

Linda works as a database administrator for Global Enterprise. In Global Enterprise, Linda is responsible for performing database management and integration tasks on various resources within the organization. In particular, Linda is responsible for data loading, transformation, and validation. To begin working on her projects, she created the new Master repository and Work repository. Linda also created the project and the procedure to create a relational table and populate it with data. She set up and installed an ODI Agent as a service. Now Linda needs to create and schedule an ODI scenario to run the procedure at the appropriate time.

Software and Hardware Requirements

The system should include the following installed products:

Oracle Database 11g

Oracle Data Integrator 11gR1

If not done before, start the services and components for Oracle Database 11g

Prerequisites

Before you start the tasks, make sure that your system environment meets the following requirements:

1. You have installed Oracle Database 11g. If not done before, start the services and components for Oracle Database 11g.

2. You have installed Oracle Data Integrator 11gR1.

3. Before attempting this OBE, you should have successfully completed the following OBEs:

ODI11g: Creating and Connecting to ODI Master and Work Repositories.

ODI11g: Creating and Connecting to ODI Agent

ODI11g: Creating an ODI Procedure to Create and Populate an RDBMS Table

To access these OBEs, click HERE.

Create a New ODI Scenario with Oracle Data Integrator

To create a new ODI scenario with ODI, perform the following steps:

1. Start ODI Designer: Start > Programs > Oracle > Oracle Data Integrator > ODI Studio. Select WORKREP1 from the Login Name drop-down list if not already selected. Enter SUPERVISOR in the User field and SUNOPSIS in the Password field. Click OK to log in.

2. Expand the PRD-create-populate_table procedure and expand Scenarios. Right-click the PRD_CREATE_POPULATE_TABLE scenario, and then click Generate Scenario as shown in the following screenshot.

Note: The scenario has now been successfully created. You can now execute the scenario directly, use the scenario within a package, or schedule the scenario within ODI.

Scheduling the ODI Scenario

Now you need to schedule an ODI scenario with the ODI Agent. To schedule the scenario, perform the following steps:

1. In Topology Navigator, open: Physical Architecture > Agents > localagent. Click Test to verify the connection to the ODI agent, as shown below.

Note: If the ODI Agent is not set up and running, you have to perform the steps specified in the OBE "Setting Up and Installing an ODI Agent".

2. Expand the PRD-create-populate-table procedure. Expand Scenarios > PRD_CREATE_POPULATE_TABLE Version 001. Right-click Scheduling and select New Scheduling.

Note: To schedule a scenario, an ODI Agent must be set up. If an ODI Agent is not set up within the ODI Topology Manager, perform the OBE "Setting Up and Installing an ODI Agent".

3. On the screen that follows, select the agent where the scheduled scenario will run: localagent. Set Context to Global and the log level to 5. Set Execution to Simple and click the button. Set the execution time to approximately 5 minutes from the current system time as shown in the following screenshot. Click the Save button.

4. Expand Scheduling and verify that the DEVELOPMENT / localagent entry is now inserted under Scheduling.

5. Open Topology Navigator to review the scheduling of the agent. In the Physical Architecture, expand the Agents node and double-click localagent. On the localagent screen, click Update Schedule. On the screen that follows, click OK. Click OK again.

6. Click the View Schedule button. The screen that appears shows you the scheduling information.

Verify Execution of the Scheduled ODI Scenario

To verify the execution of the scheduled scenario, perform the following steps:

1. Click the ODI Operator tab to open ODI Operator. In ODI Operator, click the Session List tab. Wait until the scheduled execution time to view the execution results, and then click the Refresh icon.

2. Expand: Physical Agent > localagent -1 > PRD_CREATE_POPULATE_TABLE, and view the execution results for the PRD-create-populate-table procedure. Note the execution time of this scenario; that is the time that you scheduled with the ODI Agent. You have now successfully scheduled a scenario with an agent scheduler.

Export/Import DWR to DWR Having the Same Master Repository

For this particular example I am exporting a Project, Models, and all the other objects under them from the ODI_DEV [DWR] environment to ODI_TEST [DWR], where DWR is the Development Work Repository.

Step 1 Export the Project

Specify the directory to be exported

Step 2 Export the models as shown above, and you will find that all the exported objects are created as XML files, as shown below.

Let's now move the objects to the other ODI_TEST [DWR] environment.

Step 1 Import the Models

Log in to ODI_TEST [DWR] and right-click the Import Model folder option under the Model tab.

Go to the folder where the objects were created.

Use only the INSERT_UPDATE option, as this maintains the internal IDs of the objects.

Don't worry about this warning; click OK.

The models are imported. The next step is to import the project.

Repeat the above step to import using the INSERT_UPDATE mode and select the project to be imported; as you can see, all the packages, interfaces, and KMs are also imported.

Now let's test the interface for its validity.

Voilà, it works!

For this example I have not used any variables; in that case, I would suggest this order for import:

1. Import Models

2. Import Global Variables, Sequences, Functions

3. Import Project

Let's look at the internal IDs in both environments: they are the same. You can compare other objects too, and you will find that their internal IDs are the same.

Using ODI Solutions

Right-click on the Solution and click Insert Solution.

As you can see, the Project and the Model are created with the required versions.

Now log into the other DWR environment. Since the solution is stored in the Master Repository, and we have the same Master Repository, the solution will be visible in the other DWR.

Right-click and select Restore All Elements to restore all the objects of the solution.

Click OK on all the warnings.

Do so for all the other objects and finally all the objects will be restored to the required version.

I ran the package again and it worked.

Look out for my future posts on more best practices and methods for the export and import of ODI objects.

ODI: Automating deployment of scenarios to production in Oracle Data Integrator

Posted by Uli Bethke on Oct 12, 2009 in Best Practice, Oracle Data Integrator (ODI)

In this post I will show you how you can automatically deploy scenarios in ODI.

It is rather cumbersome to manually deploy scenarios from a test/development environment to a production environment.

Typically it involves the following steps:

- Manually (re-)generate all scenarios that need to be deployed, plus any child scenarios they reference.
- Manually export the (re-)generated scenarios.
- Log in to the Operator module for the production environment and manually import the scenarios.

You do the above once or twice manually before getting extremely fed up. Actually I was rather patient (unlike my normal self) and did this about ten times before I got very, very annoyed.

In this post I will show you how you can automate the deployment process. The proposed solution will allow you to logically group scenarios together via marker groups for automatic deployment, thus giving you more flexibility and allowing you to deploy just a subset of the scenarios in your project.

To achieve our goal we will make use of the following Oracle Data Integrator features:

- Marker groups
- OdiGenerateAllScen tool
- OdiExportScen tool
- OdiImportScen tool
- Meta-information in the ODI work repository
- Execution of scenarios from a Windows batch file

As a first step, we create a new marker group. We will use the marker group to logically group together the scenarios we want to deploy. We will subsequently use the marker group with the OdiGenerateAllScen, OdiExportScen, and OdiImportScen tools.

Log on to Designer > Expand Markers > Right click Markers > Select Insert Marker Group

As you can see from the figure above, I have named the marker group Scenario and added three markers. One of the markers is named XLS (short for scenarios that load data from Excel sheets). Flagging these scenarios (or rather their packages) via a marker will allow us to deploy them separately.

Next we will add the XLS marker to those packages that we wish to logically group together for deployment. In our particular case, all of the Excel related packages.

Right click package > Add Marker > Scenario XLS

In the next step we will create a package that will (re-)generate scenarios for all packages marked with XLS.

Create a new package, name it GENERATE_SCENARIOS_XLS, add the OdiGenerateAllScen tool to it, and use the following parameters for the tool:

Project: the name of your project
Mode: Replace. This will overwrite the latest version of your scenario.
Marker Group: Scenario
Marker: XLS

Leave the default values for the other parameters.
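Dropped into the package as a tool step, the call would look roughly like this. The flag names are assumptions from memory (the wizard labels them Project, Mode, Marker Group, and Marker); verify them against the OdiGenerateAllScen tool reference before relying on them:

OdiGenerateAllScen "-PROJECT=MY_PROJECT" "-MODE=REPLACE" "-MARKER_GRP=Scenario" "-MARKER=XLS"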

In the next step we need to export the (re-)generated scenarios to XML. Unfortunately, the OdiExportScen tool does not allow us to use marker groups to logically group together scenarios for export. To achieve our goal we need a workaround.

As ODI is a metadata-driven ELT tool, we can retrieve information about the marker groups from the ODI work repository.

The query below does exactly this. It returns all packages that are marked as XLS.

SELECT
  scen_name AS pack_name
FROM (
  SELECT
    LAST_VALUE(d.scen_name) OVER
      (PARTITION BY c.pack_name ORDER BY scen_version) AS scen_name,
    MAX(scen_version) OVER
      (PARTITION BY c.pack_name) max_scen_version,
    scen_version,
    c.pack_name
  FROM
    snp_obj_state a
    JOIN snp_state2 b ON (a.i_state = b.i_state)
    JOIN snp_package c ON (a.i_instance = c.i_package)
    JOIN snp_scen d ON (c.i_package = d.i_package)
  WHERE
    state_code = 'XLS'
)
WHERE
  scen_version = max_scen_version

We will employ this query as an implicit cursor in the Command on Source of an ODI procedure. You will first need to create a data server for the ODI work repository in Topology Manager for this to work. As per the figure below, set the Technology to the technology of your work repository (in my case Oracle) and set the schema to the logical schema of your work repository (in my case ORCL_ODIWORK_SRC).

We will then use the result set together with the OdiExportScen tool to create XMLs for our scenarios and write them to disk.

OdiExportScen "-SCEN_NAME=#pack_name" "-SCEN_VERSION=-1"
"-FILE_NAME=D:\ODI\SCEN_#pack_name Version 001.xml"
"-FORCE_OVERWRITE=YES" "-RECURSIVE_EXPORT=YES"
"-XML_VERSION=1.0" "-XML_CHARSET=ISO-8859-1"
"-JAVA_CHARSET=ISO8859_1"

The important parameters here are:

SCEN_NAME: #pack_name. This is the bind variable from our Command on Source.
SCEN_VERSION: -1. This means that we will export the scenario that was generated last.
FILE_NAME: the path on your file system where the XMLs will be generated.

Make sure that you have set the Technology to Sunopsis API.

In the next step, add the ODI procedure we just created to our package.

Finally, manually create a scenario for this procedure via ODI Designer.

Now we are in a position to import the exported scenarios from their XML files.

Once again we will use a procedure to achieve this.

For the Command on Source, use the following query:

SELECT
  scen_name AS pack_name
FROM (
  SELECT
    LAST_VALUE(d.scen_name) OVER
      (PARTITION BY c.pack_name ORDER BY scen_version) AS scen_name,
    MAX(scen_version) OVER
      (PARTITION BY c.pack_name) max_scen_version,
    scen_version,
    c.pack_name
  FROM
    snp_obj_state a
    JOIN snp_state2 b ON (a.i_state = b.i_state)
    JOIN snp_package c ON (a.i_instance = c.i_package)
    JOIN snp_scen d ON (c.i_package = d.i_package)
  WHERE
    state_code = 'XLS'
)
WHERE
  scen_version = max_scen_version

For the Command on Target, use the following command:

OdiImportScen "-FILE_NAME=D:\ODI\SCEN_#pack_name Version 001.xml"
"-IMPORT_MODE=SYNONYM_INSERT_UPDATE"

Name the above procedure IMPORT_SCEN_XLS and generate a scenario for it.

We now have all the components that we need for automated scenario deployment; we just have to glue them together. We will do this via a batch file. In my particular case this is a Windows batch file. Of course, you can achieve the same in a Linux etc. environment.

ODI allows you to execute scenarios via the startscen.bat. You can find this batch file in the oracledi\bin folder in your ODI home. Three parameters are mandatory for executing this batch:

%1: The name of the scenario. In our case these are the GENERATE_SCENARIOS_XLS and IMPORT_SCEN_XLS scenarios.
%2: The version number of the scenario. In our case -1, as we want to execute the last version of the marked scenarios.
%3: The execution context. In our case we deploy the scenarios from UAT to PRD. So for the export of our scenarios we will use the UAT context, as this is where we (re-)generate and export the marked scenarios. For the import of the marked scenarios we will use the PRD context, as we want to import the exported XMLs into the production environment.

You will need to have agents installed on the same server (one for each environment); I explain how you can install multiple agents on one server in a separate post. Alternatively, you should be able to use a shared folder that both agents can access (I haven't tried this out).

@echo off
cls
d:
echo generate xls scenarios
cd D:\app\oracle\product\odi_home\oracledi\bin\
call startscen.bat GENERATE_SCENARIOS_XLS -1 UAT
echo import xls scenarios
cd D:\app\oracle\product\odi_home\oracledi\agent_prd\
call startscen.bat IMPORT_SCEN_XLS -1 PRD

I would like to hear from you how you deploy your scenarios.

Importing Scenarios to the Production Environment

This topic describes how to export the Integration scenarios from the Oracle Data Integrator development environment and import the scenarios into the production environment. It also describes how to modify scenario variables in the production environment. For detailed information on importing and exporting Oracle Data Integrator objects, see Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator. This task is a step in Process of Implementing the Integration in an Oracle Data Integrator Production Environment.

When Oracle Data Integrator scenarios are generated in a development environment, the default values of the global and project variables used in the scenarios relate to the development environment. When Oracle Data Integrator is then deployed in the production environment, these default values might not apply.

To ensure that the values of the variables used in scenarios are appropriate when scenarios are transferred to the production environment, use one of the following methods:

Change the values of the Global and Project variables in the development environment to the correct values for the production environment, and then generate the scenarios. For more information, see Configuring Integration Variables.

Manually modify the Global and Project variables used in the scenarios in the production environment. For more information, see Modifying Scenario Variables in the Production Environment.

Pass the correct values of the Global and Project variables for the production environment to Oracle Data Integrator each time a scenario is run. Oracle Data Integrator overrides the default value of the variable with the value that is passed.

This option is available only if you have installed the Oracle Data Integrator Standalone Agent and can run scenarios from the command line. For additional information, see Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator 11g Release 1.

Transferring Scenarios Between Repositories

The following procedure describes how to export scenarios from the Oracle Data Integrator development environment and import the scenarios into the production environment.

To transfer scenarios between repositories

1. Perform the following steps to export the scenarios from the development environment work repository:

a. After generating the Integration scenarios in a development environment as described in Generating Scenarios for Integration Packages, open the Operator navigator and select Scenarios.

b. Right-click the scenario that you want to export, then select the Export menu item.

c. In the Export Directory field, enter the path to the directory where you want to export the scenario, then click OK.

d. Repeat Step c for each scenario that you want to export.

2. To import the scenarios into the production environment work repository, perform the following steps:

a. In the production environment, start ODI Studio, and connect to the ODI work repository.

b. Open the Operator navigator, right-click Scenarios, then select the Import Scenario menu item.

c. In the File import directory field, enter the name of the file import directory. Specify the name of the directory that you exported the scenario to in Step 1.

d. Select the files that you want to import, and then click OK.

e. Click Yes to continue the import if a warning message appears stating that you are about to delete or replace objects.

When the import process is completed, a list of the imported objects is displayed.

Modifying Scenario Variables in the Production Environment

You must update the variables in the scenarios that are imported to the production environment by replacing any variable value that contains information specific to the development environment with the corresponding values for the production environment. The following procedure describes this task.

To modify scenario variables in the production environment

1. In the Operator navigator, navigate to Scenarios, Scenario Name, Variables, Variable Name

where:

Scenario Name is the name of the imported scenario

Variable Name is the name of the variable to be updated

2. In the Scenario Variable editor, change the value in the Default Value field to the appropriate value for the production environment.

3. For each of the imported scenarios, repeat Step 1 and Step 2 for the following variables:

FINS_BIB_Default_Organization

FINS_BIB_EIM_ErrorFlags

FINS_BIB_EIM_SQLFlags

FINS_BIB_EIM_TraceFlags

FINS_BIB_Enterprise_Server

FINS_BIB_Gateway_Server

FINS_BIB_Log_Path

FINS_BIB_Organization

FINS_BIB_Password

FINS_BIB_Siebel_Path

FINS_BIB_Siebel_Server

FINS_BIB_Username

Creating and Scheduling an ODI Scenario

Overview

When a set of objects is complete and tested, a good practice is to create an ODI scenario for each object. When a scenario is created, the code for the object is generated and stored in the scenario; after that, the scenario cannot be edited. The created scenarios can also be scheduled to run on virtually any time interval within ODI. In this OBE, you create and schedule a scenario for the ODI procedure that was created in the OBE titled Creating an ODI Project and Procedure to Create and Populate a Relational Table.

Scenario

Linda works as a database administrator for Global Enterprise. In Global Enterprise, Linda is responsible for performing database management and integration tasks on various resources within the organization. In particular, Linda is responsible for data loading, transformation, and validation. To begin working on her projects, she created the new Master repository and Work repository. Linda also created the project and the procedure to create a relational table and populate it with data. She set up and installed an ODI Agent as a service. Now Linda needs to create and schedule an ODI scenario to run the procedure at the appropriate time.

Verifying the Prerequisites

Before you start the tasks, make sure that the following requirements are met:

The system should include the following installed products:

Oracle Database 10g XE

Oracle Data Integrator 10g (10.1.3.4)

You should have successfully completed the OBE titled Creating and Connecting to ODI Master and Work Repositories before attempting this OBE. To access this OBE, click HERE.

You should have successfully completed the OBE titled Creating an ODI Procedure to Create and Populate a Relational Table. To access this OBE, click HERE.

You should have successfully completed the OBE titled Setting Up and Installing an ODI Agent as a Background. To access this OBE, click HERE.

If not done before, start the services and components for Database 10g XE and Oracle Data Integrator 10g (10.1.3.4).


Creating a New Scenario with Oracle Data Integrator

To create a new scenario, perform the following steps:

1. Start ODI Designer: Start > All Programs > Oracle > Oracle Data Integrator > Designer. Select WORKREP from the Login Name drop-down list if not already selected. Enter SUPERVISOR as the User name and SUNOPSIS as the Password. Click OK to log in.

2. In ODI Designer, click the Projects tab. On the Projects tab, expand the project: ODIcreate_table > First Folder > Procedures. Right-click the PRGcreate-populate_table procedure and select Generate Scenario.

3. Name the scenario CREATE_AND_POP_SALES_PERSON. Set the Version to 001. Click OK.

4. Expand the PRGcreate-populate_table procedure and expand Scenarios. Right-click the CREATE_AND_POP_SALES_PERSON scenario to view the possible options as shown in the following screenshot.

Note: The scenario has now been successfully created. You can now execute the scenario directly, use the scenario within a package, or schedule the package within ODI.


Scheduling a New Scenario with Oracle Data Integrator

To schedule the scenario, perform the following steps:

1. Open the Services window (select All Programs > Administrative Tools > Services) and stop the Oracle DI Agent service if it is started. Close the Services window.

2. Open (if not already open) the Oracle XE Database Home page: Start > All Programs > Oracle Database 10g Express Edition > Go To Database Home Page. The Login screen appears. Log in to the database as user ODI_STAGE3. The password for this user is password.

3. On the Oracle Database 10g Express Edition Home screen, select SQL Commands > Enter Command from the SQL drop-down list. Enter and run the statement provided below. If the SRC_SALES_PERSON table exists, it is dropped. Otherwise, the message "table or view does not exist" is displayed.

drop table ODI_STAGE3.SRC_SALES_PERSON

4. Open the Command window and change the directory to the ODI_HOME\bin directory (for example, I:\ODI\oracledi\bin). To start ODI Agent as a scheduler, execute the agentscheduler.bat file by using the following command:

agentscheduler.bat NAME=localagent

Note: To perform this step, an ODI Agent must be set up. To set up an ODI Agent, see the OBE titled Setting Up and Installing an ODI Agent as a Background. To access this OBE, click HERE.

5. Expand the PRGcreate-populate_table procedure. Expand Scenarios > CREATE_AND_POP_SALES_PERSON. Right-click Scheduling, and then select Insert Scheduling.

Note: If an ODI Agent is not set up within the ODI Topology Manager, the following error message appears: "To use the scheduling, please set an agent in your repository." To schedule a scenario, an Agent must be set up. To set up an ODI Agent, see the OBE titled Setting Up and Installing an ODI Agent as a Background. To access this OBE, click HERE.

6. On the screen that follows, select the agent where the scheduled scenario will run: localagent. Set Context to Global and the log level to 5. Set Execution to Simple and click the button. Set the execution time to approximately 5 minutes from the current system time as shown in the following screenshot. Click OK. Click OK again.

7. Expand Scheduling and verify that the localagent entry is now inserted under Scheduling.

8. Click the icon to start Topology Manager to review the scheduling of the agent. Expand the Agents node, right-click localagent, and select Edit. On the Agent: localagent screen, click Update Scheduling. On the screen that follows, click OK.

9. Click the Scheduling information button. The screen that appears shows you the scheduling information.


Verifying the Execution of the Scheduled Scenario with Oracle Data Integrator

To verify the execution of the scheduled scenario, perform the following steps:

1. To verify that your scheduled scenario was executed successfully, you need to open ODI Operator. Click the ODI Operator icon on the menu bar to start ODI Operator. In ODI Operator, click the Session List tab. Expand Date > Today > CREATE_AND_POP_SALES_PERSON, and view the execution results for the PRG_create-populate_table procedure.

2. Expand Physical Agent > localagent > CREATE_AND_POP_SALES_PERSON, and view the execution results for the PRG_create-populate_table procedure. Note the execution time of this scenario, that is, the time that you scheduled with ODI Agent. You have now successfully scheduled a scenario with an agent scheduler. The scenario was executed by the agent successfully.

Note: If you want to execute the procedure again, you must first drop the SRC_SALES_PERSON table as described in step 2 of the "Scheduling a New Scenario with Oracle Data Integrator" section.


Migrate Existing ODI from DEV to TEST to Production

by nathalieroman

In the previous posts on this blog you've read about my experiences with Oracle Data Integrator, and now a new important milestone was reached: import and export the ODI interfaces, datastores, etc. from the DEV environment to the TST environment.

I've asked on OTN what the best way was to accomplish this, and different views were given on how to solve it.

You could use the import/export functionality from ODI itself, e.g. if you want to partially export a certain interface or datastore, you could export a specific object. If you want to migrate from the DEV environment to TEST, you could use the import/export functionality of the Oracle Database, because my master and work repository are stored in an Oracle 10g Database. I needed to move my development environment onto the test environment to be able to run the ODI functionality against real-time test data and test the performance more accurately. Another important reason was that my development environment, my notebook, had crashed a couple of times already, so I needed to back up all the important data.

What did I do to accomplish this import/export:

- I exported the master and work repository used by ODI (the snpw and snpm schemas)
- I copied over the ODI folder from my Program Files folder, just to make sure

After backing up all the other stuff on my laptop, I needed to format my laptop and have a closer look at the problems I was facing concerning my hard drive, etc.

Alea iacta est. A new laptop, a new environment; let's start installing all the needed software, and then the time had come to get back up and running with my ODI environment.

First I tried to import the existing Master Repository, but the wizard didn't really guide me a lot during this process, so another approach was needed.

At last, the successful steps to import/export from environment A to environment B, such as DEV to TEST, are the following:

Export the existing master and work repository into a dump file using the Oracle database export feature

Import the master and work repository into your new schemas

Create a new connection to the imported master repository when you connect to the Topology Manager in ODI

Create a new connection to the imported work repository when you connect to the Designer module in ODI

Now your work is done: if you check out the Models and Projects tabs in your Designer view, you will notice all interfaces, datastores, models, etc. are imported successfully.

Repository Architecture: Two or More Masters (Part 3)

Hi Friends,

I got some emails and comments asking me to publish about how to work with more than one Master Repository and, as you will see, it isn't so complicated.

I will use the post Repository Architecture - Just One Master (Part 2) as a base to show how to have a multiple Master Repositories environment.

First:

Question: Why have two or more Master Repositories?

Answer: When there is any restriction, physical or by policy, on contact between environments; i.e., the Development environment has no physical connection to Production. This is a very common architecture for financial institutions like banks.

Take a look at the following image; we will discuss it:

Three environments with no connection between them

Here is how it will work:

1. Development

One Master Repository (MR)

One Development Work Repository (DWR)

The ODI modules Designer and Operator are used for development, tests, and log viewing

Topology has only connections to the development database (source and target)

CENTRAL POINT OF ARCHITECTURE: ALL LOGICAL SCHEMAS AND LOGICAL AGENTS NEED TO BE DEFINED EXACTLY AS THEY WILL BE DEFINED IN THE OTHER ENVIRONMENTS

2. Test

One Master Repository (MR)

One Execution Work Repository (EWR)

Operator will be used to import and export scenarios and to create scheduling processes

Metadata Navigator will be used for tests (manual execution by Business Analysts)

Topology has only connections to the test database (source and target)

CENTRAL POINT OF ARCHITECTURE: ALL LOGICAL SCHEMAS AND LOGICAL AGENTS NEED TO BE DEFINED EXACTLY AS THEY WILL BE DEFINED IN THE OTHER ENVIRONMENTS

3. Production

One Master Repository (MR)

One Execution Work Repository (EWR)

Operator will be used to import and export scenarios and to create scheduling processes

Metadata Navigator will be used for log viewing and manual execution by final users (when necessary)

Topology has only connections to the production database (source and target)

CENTRAL POINT OF ARCHITECTURE: ALL LOGICAL SCHEMAS AND LOGICAL AGENTS NEED TO BE DEFINED EXACTLY AS THEY WERE DEFINED IN THE OTHER ENVIRONMENTS

About the CENTRAL POINT OF ARCHITECTURE

ODI works with Logical Objects (Schemas and Agents) to separate the development from physical changes, I mean, changes of IP, user, password, hardware, etc.

Using this concept, once exactly the same logical objects are declared (created) in the three environments, it is absolutely possible to migrate scenarios from Development to Test to Production.

The connections and agents (physical objects) created in Topology can be created, with no problem, in each respective environment with its own parameters, since they will differ as each one points to distinct hardware. I mean, no ODI import/export is necessary for physical objects.

Another possible question is "Why is there just one Development Work Repository?", but that one I will answer in my next post. I hope this has helped you learn more about ODI repositories.

See you soon!

ODI Incremental Update and Surrogate Key using Database Sequence

I am writing this because it made me lose my sanity for some time. If:

You're using Oracle Data Integrator

You've defined a dimension table with a surrogate key, even if it is not an SCD Type 2 (as recommended by Kimball)

The surrogate key is maintained through a database sequence (an Oracle sequence in my case)

Your IKM is Incremental Update

You're frustrated about how to control updates to a dimension that has a sequence-based surrogate key

Then read on for the solution that worked for me. The solution is pretty standard; you only need to know what you're doing and set the right options.

Step 1: Create the database sequence object

For Oracle the command is similar to the following:

CREATE SEQUENCE <schema>.<sequence_name> CACHE 20 MAXVALUE <max_value> MINVALUE 1 INCREMENT BY 1 START WITH 1

For example, to create the sequence named CUST_GROUP_SEQ in schema ABDW with a maximum value of 99999999, execute:

CREATE SEQUENCE ABDW.CUST_GROUP_SEQ CACHE 20 MAXVALUE 99999999 MINVALUE 1 INCREMENT BY 1 START WITH 1

Step 2: Reverse-engineer your models

Well, that's obvious, isn't it?
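Before wiring the sequence into an interface, you can sanity-check it from SQL*Plus or SQL Developer, using the ABDW example above (note that every call to NEXTVAL consumes a value):

SELECT ABDW.CUST_GROUP_SEQ.NEXTVAL FROM DUAL;
-- returns 1 the first time, 2 the next time, and so on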

Step 3: Configure the target datastore

In the Definition tab of your datastore, select Dimension as the OLAP Type.

Step 4: Design your interface

1. Create a new interface

2. Open the Diagram tab

3. Drag the target datastore in place

4. Drag the source datastore(s) in place

5. Except for the surrogate key, do all the required column mappings

6. Click on the name of the target datastore; in the properties panel, select <Undefined> as the Update Key

7. Select the surrogate key column of the target datastore

1. Select Execute On as Target

2. Make sure the Key checkbox is not selected

3. Clear the Check Not Null checkbox (as this is an auto-generated sequence)

4. Clear the Update checkbox (as this is an auto-generated sequence)

5. In the implementation box, write the following code (this is important): <schema>.<sequence_name>.nextval (for our example, ABDW.CUST_GROUP_SEQ.NEXTVAL)

8. [MOST IMPORTANT] Select the natural key or unique key column of the target datastore

1. Check the Key checkbox; this is the key that ODI will use for comparisons during the IKM's inserts/updates

2. Also, clear the Update checkbox

3. If more than one column identifies a row as unique, then mark each of them as a Key column in a similar fashion

9. Go through the other columns and mark or clear the Insert/Update checkboxes as appropriate for your needs

10. Save the interface

That's it! Execute the interface and everything should be fine. Let me know about your experiences.
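For the curious, with the options above the insert statement produced by an Oracle Incremental Update IKM ends up looking roughly like the following. This is a simplified sketch with made-up table and column names, not the literal generated code:

-- New rows (flagged 'I' in the I$ flow table) get a fresh surrogate key;
-- updates match on the natural key and never touch the sequenced column
INSERT INTO ABDW.DIM_CUST_GROUP (CUST_GROUP_KEY, CUST_GROUP_CODE, CUST_GROUP_NAME)
SELECT ABDW.CUST_GROUP_SEQ.NEXTVAL, CUST_GROUP_CODE, CUST_GROUP_NAME
FROM ABDW.I$_DIM_CUST_GROUP
WHERE IND_UPDATE = 'I';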


ODI How to Reverse Flat Files

The most important thing to remember before using flat files as datastores in ODI is that Designer looks for them on the client PC where it runs. I haven't tried using a UNC path (\\computer\share\), but that should work.

ODI comes pre-configured with a folder for dealing with flat files: in the Topology Manager you'll find the FILE_GENERIC data server set up to serve files from the ODI_HOME/oracledi/demo/file directory. But if you want your own file data server, then follow these steps:

1. Open Topology Manager

2. In the Physical Architecture tab, right-click Technologies>File and choose Insert Data Server

3. Give a name in the Definition tab, then click on the JDBC tab

4. Choose Sunopsis File JDBC Driver (com.sunopsis.jdbc.driver.file.FileDriver) as the driver and jdbc:snps:dbfile as the URL

5. Test the connection, and then close the dialog

6. Right-click the data server you just created and choose Insert Physical Schema

7. Decide on a directory on your PC (where ODI is being run) that will hold all the flat files you want to use as datastores

8. Put the complete path (or one relative to ODI_HOME/oracledi) of the directory in Directory (Schema) and Directory (Work Schema) of the Definition tab

9. Make sure you check Default

10. Click on the Context tab

11. Click the small grid button to insert a new context (it defaults to Global), and then give a name for the associated Logical Schema; ODI will create it for you

12. Close the dialog

13. You're done!
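For reference, a typical set of values for the data server and physical schema above, with a made-up local directory (any path readable by the Designer client works):

Name:                    MY_LOCAL_FILES
JDBC Driver:             com.sunopsis.jdbc.driver.file.FileDriver
JDBC URL:                jdbc:snps:dbfile
Directory (Schema):      C:\odi_files
Directory (Work Schema): C:\odi_files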

Before you can reverse a flat (text) file, it should be in either of the following formats:

Delimited text files: the field delimiter can be anything, e.g.

ID,Name,DoB,Address
1,Himu,01/01/2000,123 Racoon City
2,Jumanji,01/01/2001,"456, Darling Street"

(Note the quotes around "456, Darling Street": some sort of text delimiter is required if the value contains the field separator character.)

Text files with fixed-width columns: each column starts at the same position in each line, e.g.

ID  Name     DoB         Address
1   Himu     01/01/2000  123 Racoon City
2   Jumanji  01/01/2001  456, Darling Street

You need a model to reverse the files; do the following:

1. Place the file in the directory you set up as the physical schema

2. Open Designer

3. Go to the Models tab and insert a new model

4. Give a name

5. In Technology, choose File

6. Select the Logical Schema you created earlier

7. Click the Reverse tab and choose the Context (usually it's Global)

8. Click OK to save the model

Now, the actual reversal process:

1. Right-click the newly created model and choose Insert Datastore

2. Give a name of your choosing and a good alias if you want to

3. Click the ellipsis button next to the Resource Name text box and choose the file you want to reverse

4. Click the Files tab

5. Choose the file format: Fixed or Delimited

6. If the file has header lines (giving column names as in the examples above), then put the number of lines to be treated as headers in Heading (usually 1)

7. Choose the appropriate Field Separator (tab or comma is common)

8. If the contents have text delimiters as shown in the first example above, then specify it in Text Delimiter; this is commonly the double quote (") if the CSV files are generated from Excel

9. Click the Apply button; ODI will ask if you want to lock the object; make sure you click No, or the column reversal later might not work; also, don't close the dialog

10. Click the Columns tab

11. Click the Reverse button and you should get suggestions from ODI itself

12. Adjust the physical and logical column sizes as appropriate; usually these should match the target datastore you ultimately plan to populate

Sometimes, ODI cannot reverse the columns when you click Reverse. If this happens, make sure the datastore is not in locked mode.

ODI Load Plan Exception Handling

When scheduling with packages, you drag and drop the package scenarios and at the end add an error-mail step for exceptions; this keeps you informed about errors.

In a load plan, you instead create a package with a single send-mail step (see the sketch below) and generate a scenario from it.
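That send-mail step is typically the OdiSendMail tool. A minimal sketch of the step command, with a made-up SMTP host and addresses; the message body is the free text that follows the parameters:

OdiSendMail -MAILHOST=smtp.example.com "-FROM=odi@example.com" "-TO=etl-team@example.com" "-SUBJECT=ODI load plan failed"
A load plan step raised an exception. Check Operator for the session logs.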


Then go to the load plan's Exceptions tab, add an exception step, and drag and drop the package scenario onto it.

Select "Send Error Mail" at exception step of Load planroot step properties , exception behavior must be run exception and raise.

When the load plan hits an error, it sends you an email.

When using packages for ETL, if an interface fails you can run it from outside the package, mark that step as Done in the package, and then restart the package. For the equivalent behaviour in a load plan, choose Restart from failed children in the step's restart options.
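As a side note, a stopped load plan instance can also be restarted from the standalone agent's command line. A hedged sketch only: the instance id is made up, and the exact arguments may differ in your version, so check the scripts in the agent's bin directory and the documentation:

# From ODI_HOME/oracledi/agent/bin: restart load plan instance 1234 at log level 5
./restartloadplan.sh 1234 5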