INFORMATICA 7.1.1

TRANSFORMATION SUMMARY

S.No  Transformation                    Abbr  Type    Ports       Important
1     Source Qualifier                  SQ    A/C     I, O        Distinct
2     Filter                            FIL   A/C     I, O        Expression
3     Expression                        EXP   P/C     I, O, V     Expression
4     Sequence Generator                SEQ   P/C     O           NEXTVAL, CURRVAL
5     Router                            RTR   A/C     I, O        Expression
6     Union                             UN    A/C     I, O
7     Transaction Control               TC    A/C     I, O
8     Sorter                            SRT   A/C     I, O        Distinct
9     Normalizer                        NRM   A/C     I, O        Sequence, restart
11    Stored Procedure                  SP    P/C-UC  I, O, V     Bad file created
12    Joiner                            JNR   A/C     I, O, M     Sorted input
13    Aggregator                        AGG   A/C     I, O, V     Group by port, sorted input
14    Rank                              RNK   A/C     I, O, V, R  Group by port, expression
15    Look-up                           LKP   P/C-UC  I, O, L, R
16    XML Source Qualifier              XML   A/C
17    Custom                            CT    A-P/C   I, O
18    External Procedure                EP    P/C-UC  I, O, V
19    Update Strategy                   UPD   A/C     I, O        DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT; update as update, update as insert, update else insert; rejected rows go to the reject file
20    Midstream XML Parser
21    Midstream XML Generator
22    Application Source Qualifier
23    MQ Series Qualifier
24    Application Multi-Group Qualifier
M.SHANMUGAM Transformation Details Page 1 / 24
SHANMUGAM.M
1. AGGREGATOR TRANSFORMATION (A/C - AGG)
DEFINITION
The Aggregator transformation is active and connected. It lets you perform aggregate calculations, and you can use it to perform calculations on groups of rows. Rows that meet the condition are passed to the target; rows that do not meet the condition are rejected and written to the reject (bad) file directory.
AGGREGATOR FUNCTIONS
AVG, COUNT, FIRST, LAST, MAX, MEDIAN, MIN, PERCENTILE, STDDEV, SUM, VARIANCE
PORTS
INPUT (I) - each input port receives data
OUTPUT (O) - passes data to the next transformation
VARIABLE (V) - stores intermediate results; it can reference input ports but not output ports
GROUP BY PORT - defines the groups for the aggregation
PROPERTIES
Cache Directory - $PMCacheDir
Tracing Level - Normal (Terse / Normal / Verbose initialization / Verbose data)
Sorted Input -
Aggregator Data Cache - 2,000,000 bytes
Aggregator Index Cache - 1,000,000 bytes
Transformation Scope - All Input (Transaction / All Input)
COMPONENT
Aggregate Expression - non-aggregate expression / conditional clause
Aggregate Cache -
Group by Port - the column you want to group by, e.g. Dept
Sorted Input - reduces the amount of data cached

AGGREGATOR CACHE
The PowerCenter Server (PCS) stores data in the aggregate cache until it completes the aggregate calculation.
Index Cache - stores the group values, as configured in the group by ports
Data Cache - stores row data and output values, based on the group by ports
OPTIMIZATION
- Group by simple columns such as numbers instead of strings or dates
- Use sorted input
- Use incremental aggregation
- Minimize the aggregate function calls
- Filter before the Aggregator where possible (reduces the data)
- Unconnected lookups and stored procedures can be called from the aggregate expression
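The "use sorted input" tip can be illustrated outside Informatica. This is a rough Python sketch (not PowerCenter code, and the field names "dept" and "salary" are hypothetical): when rows arrive pre-sorted on the group-by port, each group can be aggregated and flushed as soon as the key changes, so the cache only ever holds one group at a time.

```python
# Illustrative sketch of sorted-input aggregation.
from itertools import groupby
from operator import itemgetter

def aggregate_sorted(rows, key, value):
    """SUM(value) per key, assuming rows are already sorted on key."""
    result = []
    for k, group in groupby(rows, key=itemgetter(key)):
        # Only the current group is held in memory, mirroring the smaller
        # aggregator data/index caches when Sorted Input is enabled.
        result.append({key: k, "sum": sum(r[value] for r in group)})
    return result

rows = [{"dept": 10, "salary": 100},
        {"dept": 10, "salary": 200},
        {"dept": 20, "salary": 300}]
print(aggregate_sorted(rows, "dept", "salary"))
```

With unsorted input, the engine would instead have to keep every group's running total cached until the last row arrives, which is exactly the extra cache cost the optimization list warns about.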
2. EXPRESSION TRANSFORMATION (P/C - EXP)
DEFINITION
The Expression transformation is passive and connected.
It calculates values within a single row before the row is written to the target (row-by-row calculation) and can perform any non-aggregate function.
EXPRESSION FUNCTIONS
The Expression transformation is used for data cleansing and scrubbing.
There are over 80 functions within PowerCenter, such as CONCAT, INSTR, RPAD and LTRIM, and many of them are used in the Expression transformation. We can also create derived columns and variables in the Expression transformation.
COMPONENT
Expression - can call an unconnected Stored Procedure and an unconnected Lookup
PORTS
INPUT (I) - each input port receives data
OUTPUT (O) - provides the value to either the target or the next transformation in the mapping
VARIABLE (V) - stores intermediate results; it can reference input ports

PROPERTIES
Tracing Level - Normal (Terse / Normal / Verbose initialization / Verbose data)
OPTIMIZATION
- Factor out common logic
- Minimize aggregate function calls, e.g. use SUM(A + B) instead of SUM(A) + SUM(B)
- Replace common sub-expressions with local variables
- Prefer numeric over string operations
- Prefer the DECODE function over a LOOKUP operation
- Prefer the || operator over CONCAT, e.g. Fname || Lname instead of CONCAT(Fname, Lname)
- You can enter multiple expressions in a single Expression transformation
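The "factor out common logic" tip and the variable-port idea can be sketched in Python (an analogy only, not Informatica expression syntax; the port names qty, price, TOTAL and TOTAL_WITH_TAX are hypothetical):

```python
# A variable port computes an intermediate value once per row; two output
# ports then reuse it instead of repeating the same expression twice.
def expression_row(row):
    v_total = row["qty"] * row["price"]             # variable port (V)
    return {
        "TOTAL": v_total,                           # output port (O)
        "TOTAL_WITH_TAX": round(v_total * 1.1, 2),  # output port reusing V
    }

print(expression_row({"qty": 3, "price": 2.50}))
```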
3. FILTER TRANSFORMATION (A/C - FIL)
DEFINITION
- This is an active and connected transformation used to filter source rows based on a condition.
- Only rows that meet the condition pass through to the target.
- A Filter transformation can be used with any kind of source.
- The filter condition drops data that does not match the condition.
- One or more conditions can be combined with the AND and OR operators.
- Discarded rows do not appear in the session log or reject files.
PORTS
INPUT (I) - receives data from the source
OUTPUT (O) - passes data to the next transformation

PROPERTIES
Filter Condition - <condition>
Tracing Level - Normal (Normal / Terse / Verbose initialization / Verbose data)
OPTIMIZATION (TIPS)
- Use the Filter transformation as early in the mapping as possible, or filter in the Source Qualifier.
- The filter condition is case sensitive, and queries in some databases do not take this into account.

TROUBLESHOOTING
Case sensitivity - the filter condition is case sensitive
Appended spaces - use the RTRIM function to remove trailing spaces

FUNCTIONS
- You can use one or more conditions in a Filter transformation, combined with the AND and OR logical operators.
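A rough Python analogy of the behaviour above (dropped rows simply disappear, string comparison stays case sensitive unless trimmed or normalized; the field names and values are hypothetical):

```python
# Rows failing the condition are discarded -- no reject file is written,
# matching the Filter transformation's behaviour.
def filter_rows(rows):
    kept = []
    for r in rows:
        # AND of two conditions; an RTRIM-style rstrip() guards against
        # appended spaces while the comparison stays case sensitive.
        if r["salary"] >= 1000 and r["dept"].rstrip() == "SALES":
            kept.append(r)
    return kept

rows = [{"dept": "SALES ", "salary": 2000},
        {"dept": "sales", "salary": 2000},   # dropped: case sensitive
        {"dept": "SALES", "salary": 500}]    # dropped: salary too low
print(filter_rows(rows))
```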
4. JOINER TRANSFORMATION (A/C - JNR)
DEFINITION
- This is an active and connected transformation.
- It can be used to join two sources coming from two different locations or the same location.
- Homogeneous and heterogeneous sources can be used.
- It can join a flat file and a relational source, two flat files, or a relational source and an XML source.

CONDITION
1) The two sources must have at least one matching port or column.
2) The two sources should have a primary key / foreign key relationship.
PORTS
INPUT (I) - receives data from the source
OUTPUT (O) - passes data to the next transformation
MASTER (M) - if checked, the port belongs to the master (smaller) source; otherwise to the detail (larger) source (used to switch the master/detail relationship for a source)

PROPERTIES
1  Case Sensitive String Comparison - (enabled for character data only)
2  Cache Directory - $PMCacheDir
3  Join Condition -
4  Join Type - Normal (Normal / Master Outer / Detail Outer / Full Outer)
5  Null Ordering in Master - null is highest value (null is lowest value)
6  Null Ordering in Detail - null is highest value (null is lowest value)
7  Tracing Level - Normal (Normal / Terse / Verbose initialization / Verbose data)
8  Joiner Data Cache Size - 2,000,000
9  Joiner Index Cache Size - 1,000,000
10 Sorted Input -
11 Transformation Scope - All Input (Transaction / All Input)

CACHE
Index cache - stores index values for the master source table as configured in the join condition.
Data cache - stores master source rows (output values).
FUNCTIONS
The following types of sources can be used in a Joiner:
- Two relational tables existing in separate databases
- Two flat files in potentially different file systems
- Two different ODBC sources
- Two instances of the same XML source
- A relational table and a flat file source
- A relational table and an XML source

A Joiner cannot be used when:
- Both pipelines begin with the same original data source.
- Both input pipelines originate from the same Source Qualifier transformation.
- Both input pipelines originate from the same Normalizer transformation.
- Both input pipelines originate from the same Joiner transformation.
- Either input pipeline contains an Update Strategy transformation.
- Either input pipeline contains a connected or unconnected Sequence Generator transformation.
PERFORMANCE
- Use sorted input (flat file or relational data).
- Minimize disk input and output.
- Place the Joiner after a Sorter transformation.
- For an unsorted Joiner transformation, designate the source with fewer rows as the master source.
- For a sorted Joiner transformation, designate the source with fewer duplicate key values as the master source.
- The following transformations cannot be used directly before a Joiner transformation: Sequence Generator (directly), Update Strategy.

TIPS
- Sorted input improves session performance.
- Do not place any of the following transformations between the sort origin and the Joiner transformation: Custom, unsorted Aggregator, Normalizer, Rank.
- Use the same sort order for both tables (master and detail).
- A normal or master outer join performs faster than a full outer or detail outer join.

JOIN TYPES
Normal     - matching rows from the master and detail sources
Master     - all rows from the detail source and the matching rows from the master source
Detail     - all rows from the master source and the matching rows from the detail source
Full Outer - all rows from both the master and detail sources
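The "smaller source as master" rule is essentially a hash join's build/probe choice. A hedged Python sketch (not PowerCenter internals; the tables and column names are hypothetical):

```python
# Normal join: the master rows are cached (indexed) first, then the detail
# rows stream past that cache -- so a smaller master means a smaller cache.
def normal_join(master, detail, key):
    index = {}
    for m in master:                       # build phase: cache the master
        index.setdefault(m[key], []).append(m)
    out = []
    for d in detail:                       # probe phase: stream the detail
        for m in index.get(d[key], []):    # normal join: matches only
            out.append({**m, **d})
    return out

depts = [{"deptno": 10, "dname": "SALES"}]       # master (smaller source)
emps = [{"deptno": 10, "ename": "A"},
        {"deptno": 20, "ename": "B"}]            # detail (larger source)
print(normal_join(depts, emps, "deptno"))
```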
5. RANK TRANSFORMATION (A/C - RNK)
DEFINITION
This is an active and connected transformation.
It is used to select the top or bottom rank of data based on a condition: the largest or smallest numeric values in a port or group. A Rank transformation can also return the strings at the top or bottom of the session sort order.

FUNCTIONS
ASCII - binary sort order
UNICODE - session sort order set in the session properties (code page)

Binary sort order: strings are compared by binary value, and rows with the highest binary values for the string are returned.
PORTS
INPUT (I) - minimum of one
OUTPUT (O) - minimum of one
VARIABLE (V) - stores values or calculations to use in an expression
RANK (R) - only one (default output-only port; returns the rank)
EXPRESSION -
GROUP BY PORT -

PROPERTIES
Cache Directory - $PMCacheDir
Top/Bottom - Top
Number of Ranks -
Case Sensitive String Comparison -
Tracing Level - Normal
Rank Data Cache Size - 2,000,000
Rank Index Cache Size - 1,000,000
Transformation Scope - All Input (All Input / Transaction)
CACHE
Index Cache - stores group values as configured in the group by ports.
Data Cache - stores ranking information based on the group by ports.
Very large caches may require running the session on a 64-bit PowerCenter Server.

PERFORMANCE
- Configure ASCII mode.
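The top-N-per-group behaviour can be sketched in Python with a bounded heap per group, a rough stand-in for the rank index and data caches (field names hypothetical):

```python
import heapq
from collections import defaultdict

def top_n_per_group(rows, group_key, rank_key, n):
    """Keep the n largest rank_key values per group_key."""
    heaps = defaultdict(list)        # one small heap per group
    for i, r in enumerate(rows):
        h = heaps[r[group_key]]
        heapq.heappush(h, (r[rank_key], i, r))  # i breaks ties between rows
        if len(h) > n:
            heapq.heappop(h)         # evict the smallest -> only top n remain
    return {g: sorted((r for _, _, r in h),
                      key=lambda r: r[rank_key], reverse=True)
            for g, h in heaps.items()}

rows = [{"dept": 10, "sal": 100}, {"dept": 10, "sal": 300},
        {"dept": 10, "sal": 200}, {"dept": 20, "sal": 50}]
print(top_n_per_group(rows, "dept", "sal", 2))
```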
6. SORTER TRANSFORMATION (A/C - SRT)
DEFINITION
- Sorts data in ascending or descending order according to a specified sort key (field).
- Can also be configured for case-sensitive sorting and to make the output rows distinct.

FUNCTIONS
- Sorts data from relational or flat file sources.
- The Sorter transformation treats the data passing through each successive sort key port as a secondary sort of the previous port.
COMPONENT
Direction - ascending or descending

PORTS
INPUT (I) - receives data from the source
OUTPUT (O) - passes data to the next transformation
KEY - the port(s) to sort on, ascending or descending

PROPERTIES
Sorter Cache Size - 10,000,000; roughly #input rows * [(Σ column size) + 16]
Case Sensitive - (enabled) sorts uppercase higher than lowercase
Work Directory - $PMTempDir (temporary files are stored here during the sort)
Distinct - when enabled, eliminates duplicate rows in the output
Tracing Level - Normal (Terse / Normal / Verbose initialization / Verbose data)
Null Treated Low - (when enabled, treats null values as lower than any other value)
Transformation Scope - All Input (Transaction / All Input)
SORT DATA
Each successive sort key port acts as a secondary sort of the previous port.

FORMULA
cache size ≈ #input rows * [(Σ column size) + 16]

PERFORMANCE
- Use a Sorter transformation to sort data before it passes through an Aggregator transformation configured to use sorted input.
- Configure the sort criteria the PCS applies to all sort key ports.
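The "each successive sort key is a secondary sort of the previous port" rule maps directly onto a tuple sort key in Python (illustrative only; port names hypothetical):

```python
def sorter(rows, keys, distinct=False):
    """Sort on keys in order: keys[1] is a secondary sort within keys[0]."""
    out = sorted(rows, key=lambda r: tuple(r[k] for k in keys))
    if distinct:                      # Distinct: drop duplicate output rows
        seen, deduped = set(), []
        for r in out:
            t = tuple(sorted(r.items()))
            if t not in seen:
                seen.add(t)
                deduped.append(r)
        out = deduped
    return out

rows = [{"dept": 20, "sal": 1}, {"dept": 10, "sal": 2},
        {"dept": 10, "sal": 1}, {"dept": 10, "sal": 1}]
print(sorter(rows, ["dept", "sal"], distinct=True))
```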
7. SOURCE QUALIFIER TRANSFORMATION (A/C - SQ)
DEFINITION
When adding a relational or flat file source definition to a mapping, you must connect it to a Source Qualifier transformation.
The Source Qualifier transformation represents the rows that the PowerCenter Server reads when it runs a session.

FUNCTIONS
- Overriding the default SQL query (relational sources only)
- Filtering the records (relational sources only)
- Joining data from two or more tables in the same source database

IMPORTANT TOPICS
Target load order - constraint-based loading
Parameters and variables - $$$SessStartTime
Default query -
SQL override - overrides the default SQL query (user-defined join, source filter, number of sorted ports, select distinct setting)

PORTS
INPUT (I) - receives data from the source
OUTPUT (O) - passes data to the next transformation

PROPERTIES
SQL Query - (custom query replaces the default query)
User Defined Join - (user-defined join)
Source Filter - (filter condition)
Number of Sorted Ports - 0 (the ORDER BY includes this number of ports, in sort order)
Tracing Level - Normal (Terse / Normal / Verbose initialization / Verbose data)
Select Distinct - (enabled: unique values from the source)
Pre SQL - (runs before the PCS reads the source)
Post SQL - (runs after the PCS writes to the target)
OPTIMIZATION
- Use the Source Qualifier to filter. The Source Qualifier limits the row set extracted from the source, whereas a Filter transformation limits the row set sent to a target.

PERFORMANCE
- Join data originating from the same source database.
- Filter rows when the PCS reads the source data.
- Specify an outer join rather than the default inner join.
- Specify sorted ports.
- Select only distinct values from the source.
- Create a custom query to issue a special SELECT statement for the PCS to read source data.
- Data types cannot be changed; if you change them, the mapping becomes invalid.

TARGET LOAD ORDER
- Multiple Source Qualifiers connected to multiple targets.
- When one Source Qualifier provides data to multiple targets, you can enable constraint-based loading in a session to have the PCS load data based on the target tables' primary key / foreign key relationships.

DEFAULT JOIN
- Primary key / foreign key relationship
- Matching data types

CUSTOM JOIN
- The columns do not have a primary key / foreign key relationship.
- The data types of the columns used for the join do not match.

OUTER JOIN SUPPORT
- The default query is rewritten as a nested query with an outer join statement (left outer, right outer, full outer).
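The difference between a source filter (pushed into the generated SELECT) and a downstream Filter transformation can be sketched with SQLite standing in for the source database; the table and filter here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, deptno INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(1, 10), (2, 20), (3, 10)])

# Source-qualifier style: the filter becomes part of the generated SQL,
# so only matching rows are ever extracted from the source.
source_filter = "deptno = 10"
extracted = conn.execute(
    f"SELECT empno, deptno FROM emp WHERE {source_filter} ORDER BY empno"
).fetchall()
print(extracted)   # the dept-20 row never left the database
```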
8. ROUTER TRANSFORMATION (A/C - RTR)
DEFINITION
- This is an active and connected transformation.
- Similar to the Filter transformation.
- Single input, multiple outputs (the opposite of a Union transformation).
- Processes the incoming data only once, passes the output to multiple groups, and routes rows that do not meet any condition to the default output group.

FUNCTIONS
- With a Router transformation in a mapping, the PCS processes the incoming data only once.
- A Router transformation is more efficient than a set of Filter transformations.
- A Router transformation has one input group and multiple output groups (many user-defined output groups and exactly one default output group).
PORTS
Input ports - input group only
Output ports - not editable in the output groups, because they are defined by the groups

GROUPS
Input group - receives the incoming data
Output groups -
1. User-defined groups - test a condition based on the incoming data
2. Default group - created automatically; it cannot be modified or deleted
- Connect only the output groups you need to transformations or targets.
- If you want the PCS to drop all rows in the default group, do not connect it to a transformation or a target in the mapping.
- If a row meets more than one group filter condition, the PCS passes that row multiple times (once per matching group).
PROPERTIES
Tracing Level -

COMPONENTS
Input and output groups
Input and output ports
Group filter conditions

PERFORMANCE & TIPS
- One group can be connected to more than one transformation or target.
- One output port in a group can be connected to multiple transformations or targets.
- Multiple output ports in one group can be connected to multiple transformations or targets.
- More than one group cannot be connected to one transformation or target.
- You cannot connect more than one group to a multiple-input-group transformation, except for Joiner transformations, when you connect each output group to a different input group.
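The single-pass, multi-group behaviour can be sketched in Python (illustrative only; the group names and predicates are hypothetical):

```python
def route(rows, groups):
    """groups: list of (name, predicate). Adds a DEFAULT group for misses."""
    out = {name: [] for name, _ in groups}
    out["DEFAULT"] = []
    for r in rows:                       # incoming data processed only once
        matched = False
        for name, pred in groups:
            if pred(r):
                out[name].append(r)      # rows matching several groups are
                matched = True           # passed to each of them
        if not matched:
            out["DEFAULT"].append(r)     # no condition met -> default group
    return out

groups = [("HIGH", lambda r: r["sal"] >= 100),
          ("DEPT10", lambda r: r["dept"] == 10)]
print(route([{"dept": 10, "sal": 150}, {"dept": 20, "sal": 50}], groups))
```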
9. SEQUENCE GENERATOR TRANSFORMATION (P/C - SEQ)
DEFINITION
The Sequence Generator transformation generates numeric values.
Use it to create unique primary key values or to cycle through a sequential range of numbers.
Common uses:
- When you perform multiple loads to a single target.
- Replacing missing values.
When NEXTVAL is connected to more than one transformation, the PCS distributes the values in blocks, so each target transformation receives its own block of values.

FUNCTIONS
CURRVAL - equals NEXTVAL plus the Increment By value. If you connect the CURRVAL port without connecting the NEXTVAL port, the PowerCenter Server passes a constant value for each row in a block.
NEXTVAL - connect to downstream transformations to generate unique primary key values; the sequence is generated based on the current value.

PORTS
NEXTVAL and CURRVAL - the two default output ports; they pass the generated values to other transformations and cannot be edited or deleted.
PROPERTIES
Start Value - 0; with the Cycle option, the value the sequence restarts from after it completes a cycle.
Increment By - 1; the difference between two consecutive values from the NEXTVAL port (default 1).
End Value - up to 2,147,483,647; the maximum value PowerCenter generates. If the sequence is not configured to cycle, the session fails when this value is exceeded.
Current Value - the first value you want the PowerCenter Server to use in the sequence; must be greater than or equal to the start value and less than the end value.
Cycle - if selected, the sequence repeats over its range; if not selected, the session fails with an overflow error when the end value is reached.
Number of Cached Values - 1; determines the number of values the PowerCenter Server caches at one time.
Reset - if selected, the PCS generates values based on the original current value for each session; otherwise the current value reflects the last generated value. (Reset is disabled for reusable Sequence Generator transformations.)
Tracing Level - level of information logged.
PERFORMANCE
Non-reusable Sequence Generator
- With caching enabled (Number of Cached Values greater than 0), unused cached values are discarded, so values can be skipped.

Reusable Sequence Generator
- Enable caching up to some limit, e.g. 1,000.

OPTIMIZATION
- Use a reusable Sequence Generator if the same sequence is to be used in more than one session.
- Optimize performance by connecting only the NEXTVAL port in a mapping.
- Make the Sequence Generator reusable and use it in multiple mappings.
FUNCTIONS
Perform the following tasks with a Sequence Generator transformation:
- Create keys
- Replace missing values
- Cycle through a sequential range of numbers
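The NEXTVAL/CURRVAL, Cycle and overflow behaviour described above can be sketched as a small Python class (an analogy, not PowerCenter internals):

```python
class SequenceGenerator:
    """Mimics Start Value, Increment By, End Value and Cycle."""
    def __init__(self, start=0, increment=1, end=2147483647, cycle=False):
        self.start, self.increment = start, increment
        self.end, self.cycle = end, cycle
        self.current = start                  # Current Value property

    def nextval(self):
        if self.current > self.end:
            if not self.cycle:                # no Cycle: overflow error
                raise OverflowError("sequence exhausted; session fails")
            self.current = self.start         # Cycle: wrap to the start value
        value = self.current
        self.current += self.increment
        return value

    def currval(self):
        return self.current                   # CURRVAL = NEXTVAL + Increment By

seq = SequenceGenerator(start=1, increment=1, end=3, cycle=True)
print([seq.nextval() for _ in range(4)])      # wraps after the End Value
```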
10. UPDATE STRATEGY TRANSFORMATION (A/C - UPD)
DEFINITION
This is an active and connected transformation.
It is used to update data in a target table, either to maintain a history of the data or only the recent changes.
It flags records for insert, update, delete or reject in the target database.
It is used in slowly changing dimensions (SCD types 1, 2 and 3) to update the target table.
FUNCTIONS
The update strategy is set at two levels:
1. Within a session - treat all records in the same way (for example, treat all records as inserts).
2. Within a mapping - flag records for insert, update, delete or reject:
   DD_INSERT - 0, DD_UPDATE - 1, DD_DELETE - 2, DD_REJECT - 3
• Insert. Select this option to insert a row into a target table.
• Delete. Select this option to delete a row from a table.
• Update. You have the following options in this situation:
Update as update. Update each row flagged for update if it exists in the target table.
Update as insert. Insert each row flagged for update.
Update else Insert. Update the row if it exists. Otherwise, insert it.
• Truncate table. Select this option to truncate the target table before loading data.
PORTS
INPUT (I) - receives data from the source
OUTPUT (O) - passes data to the next transformation

PROPERTIES
Update Strategy Expression - 0 (DD_INSERT - 0, DD_UPDATE - 1, DD_DELETE - 2, DD_REJECT - 3)
Forward Rejected Rows - when enabled, flags the rows for reject and writes them to the session reject file.
Tracing Level - Normal (Terse / Normal / Verbose initialization / Verbose data)
FUNCTION
SCD-1 - keeps only the most recent values in the target.
SCD-2 - keeps the full historical business information in the target; the full history is maintained by inserting new records.
SCD-3 - keeps the previous value and the current value.
PERFORMANCE
1. Whenever you use a dynamic lookup cache, you must use an Update Strategy transformation.
2. With a dynamic lookup you must: 1. select Insert, 2. select Update as Update, 3. not select Delete.
3. UPD -> AGG: use only Update, Insert, Delete.
4. AGG -> UPD: use Update, Insert, Delete, Reject.

UPDATE STRATEGY WITH A DYNAMIC LOOKUP
Business logic example:
IIF(current_date > previous_date, DD_REJECT, DD_UPDATE)
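Mapping-level flagging with the DD_ constants can be sketched in Python; the business rule below is hypothetical (reject stale rows, update known keys, insert new ones), not the exact rule from the notes:

```python
# The DD_ constants are the numeric row flags listed above.
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flag_row(row, previous_date_by_id):
    """Hypothetical rule: reject stale rows, update known ids, insert new ones."""
    prev = previous_date_by_id.get(row["id"])
    if prev is None:
        return DD_INSERT              # unseen key -> insert
    if row["date"] <= prev:
        return DD_REJECT              # stale row -> reject file (if forwarded)
    return DD_UPDATE                  # newer row -> update

history = {1: "2011-01-01"}
print(flag_row({"id": 1, "date": "2011-05-20"}, history))
print(flag_row({"id": 2, "date": "2011-05-20"}, history))
```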
11. LOOKUP TRANSFORMATION (P/C & UC - LKP)
DEFINITION
A Lookup transformation looks up data in a flat file or in a relational table, view or synonym.
A Lookup transformation is used to perform the following tasks:
- Get a related value
- Perform a calculation
- Update slowly changing dimensions

FUNCTIONS
Relational lookup - can use a dynamic cache
Flat file lookup - must use a static cache; can be configured for sorted input

PORTS
INPUT (I) -
OUTPUT (O) -
LOOKUP (L) -
RETURN (R) -
PROPERTIES
Lookup SQL Override (R) -
Lookup Table Name (R) -
Lookup Caching Enabled (R/F) -
Lookup Policy on Multiple Match -
Lookup Condition -
Connection Information -
Source Type -
Recache if Stale -
Tracing Level -
Lookup Cache Directory Name -
Lookup Cache Initialize -

COMPONENTS
Lookup table, Ports, Properties, Condition, Metadata Extensions

LOOKUP CACHES
Persistent cache, Recache from database, Static cache, Dynamic cache, Shared cache

PERFORMANCE
Cached lookup: index the columns used in the lookup ORDER BY.
LOOKUP TRANSFORMATION
A lookup looks up data in a flat file, table, synonym or view. You can use multiple Lookup transformations in a mapping. It performs the following tasks:
- Get a related value
- Perform calculations
- Update slowly changing dimension tables

Connected lookup
Static cache - returns a value from the lookup query.
Dynamic cache - Case 1: no row found in the cache - inserts the record.
                Case 2: row found in the cache - updates the record.

Unconnected lookup
- Commonly used to update slowly changing dimension tables.
- Returns one value through the return port of the Lookup transformation.

Connected or unconnected - receive input and send output in different ways.
Relational or flat file lookup.
Cached or uncached:
Dynamic cache - relational lookups only
Static cache - flat file lookups
Cached - better performance (the values are stored, and the cache is consulted instead of the lookup table)
Uncached - the lookup queries the source for each value.
Connected vs. unconnected Lookup transformation

Relational and flat file lookups
Relational lookups - can use a dynamic cache
Flat file lookups - can use sorted input
- can use an indirect file
- can sort null dates high
- can use case-sensitive string comparison

LOOKUP COMPONENTS
Lookup source - cached lookup uses ORDER BY; uncached lookup uses SELECT
Ports
Properties
Condition

PORTS
INPUT (I) -
OUTPUT (O) -
LOOKUP (L) -
RETURN (R) - only in an unconnected Lookup transformation

PROPERTIES
Lookup SQL Override (R) -
Lookup Table Name (R) - table, synonym or view
Lookup Caching Enabled (R/F) -
Lookup Policy on Multiple Match (F/R) - use first value, use last value, or report an error
Lookup Condition (F/R) -
Connection Information (R) -
Source Type (R/F) -
Tracing Level (R/F) -
Lookup Cache Directory Name (F/R) -
Lookup Cache Persistent (F/R) -
Lookup Data Cache Size (F/R) -
Lookup Index Cache Size (F/R) -
Dynamic Lookup Cache (R) - inserts or updates rows (only with lookup caching enabled)
Output Old Value on Update (F/R) - use only with a dynamic cache
Cache File Name Prefix (F/R) - use only with a persistent lookup cache; the name prefix for the persistent cache files
Recache from Lookup Source (F/R) - rebuilds the persistent cache file
Insert Else Update (R) - use only with a dynamic cache
Update Else Insert (R) - use only with a dynamic cache
Datetime Format (F) -
Thousand Separator (F) - default is no separator (',' or '.')
Decimal Separator (F) - default is period ('.' or ',')
Case Sensitive String Comparison (F) -
Null Ordering (F) -
Sorted Input (F) -
Configuring lookup properties in a session
Flat file lookups - file name and file type
Relational lookups - you can define the $Source and $Target variables in the session

Configuring flat file lookup properties in a session
Lookup Source File Directory - $LookupFileDir (default)
Lookup Source File Name -
Lookup Source File Type - direct

Configuring relational lookups in a session
- Choose any relational connection
- Use a connection variable, e.g. $DBConnection
- Specify the database connections for $Source and $Target
Lookup query
Default lookup query:
- SELECT - can be replaced with the lookup SQL override
- ORDER BY - generated when caching is enabled (you cannot view it)

Overriding the lookup query, you can:
- Override the ORDER BY statement
- Quote a lookup table name or column that contains a reserved word
- Use mapping parameters and variables
- Handle a lookup column name containing a slash (/) character
- Add a WHERE statement
- Other
Reserved words
- If the lookup or column names contain a database reserved word such as MONTH or YEAR, the session fails.
- Add the word to resword.txt (in the PCS initialization directory) so it is quoted automatically.
Guidelines for overriding the lookup query
- The lookup SQL override applies to relational lookups only.
- If caching is not enabled, the PCS does not recognize the override.
- Whether you use the default query or an override, adding or subtracting ports from the SELECT statement relative to the lookup/output ports makes the session fail.
- To filter before the lookup when using a dynamic cache, add a WHERE clause to the lookup SQL override.
- Overriding the ORDER BY statement incorrectly makes the session fail (when it does not contain the condition ports); do not simply suppress the generated ORDER BY.
- If a reserved word makes the session fail, enclose it in quotes (" ").

Steps to override the lookup query:
- Enter the SQL override on the Properties tab.
- Validate to test it.
Lookup condition
1. The data types in the condition must match.
2. Multiple conditions are combined with AND.
3. For a flat file lookup with sorted input, the session fails if the condition columns are not grouped; group the condition columns in sorted order.

Uncached or static cache:
1. The operators =, >, <, >=, <=, != are allowed.
2. Multiple conditions are combined with AND.
3. With more than one lookup condition, place conditions with an equality operator (=) first.

Dynamic cache:
1. Only the = operator is allowed.
2. Multiple matches cannot be handled; otherwise the PCS fails the session.
Lookup cache
Index cache - condition values
Data cache - output values

Cache types:
1) Persistent cache
2) Recache from lookup source
3) Static cache
4) Dynamic cache
5) Shared cache
Configuring an unconnected Lookup transformation
- Use the :LKP reference qualifier to call the lookup within another transformation.
- The same lookup can be called multiple times in one mapping.
- Syntax: :LKP.lookup_transformation_name(argument, argument, ...)
Steps to use an unconnected lookup:
- Add input ports
- Add the lookup condition
- Designate a return value
- Call the lookup from another transformation

Add input ports (designed for the source and target), e.g.:
  ITEM_ID (output)
  IN_ITEM_ID (input)

Add the lookup condition:
  ITEM_ID = IN_ITEM_ID
- If the lookup condition is false, the lookup returns NULL.

Designate a return value
- Multiple inputs but a single output only.
- Typically used in an update strategy or filter expression.

Call the lookup through an expression, e.g.:
  IIF(ISNULL(:LKP.lkpitems_dim(item_id, price)), DD_UPDATE, DD_REJECT)
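An unconnected lookup behaves like a function call with a single return port. A hedged Python analogy of the :LKP call and the expression above (the lookup data is hypothetical):

```python
# Stand-in for the cached lookup source of lkpitems_dim.
ITEMS_DIM = {101: 9.99, 102: 4.50}

def lkp_items_dim(item_id):
    """Like :LKP.lkpitems_dim(item_id): one return port, NULL on no match."""
    return ITEMS_DIM.get(item_id)        # None plays the role of NULL

def flag(item_id):
    # Mirrors IIF(ISNULL(:LKP...), DD_UPDATE, DD_REJECT) from the text.
    return "DD_UPDATE" if lkp_items_dim(item_id) is None else "DD_REJECT"

print(flag(101), flag(999))
```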
Creating a Lookup transformation
1. Choose an existing table or file definition.
2. Import a definition from a relational table or file.
3. Skip and create a manual definition.

TIPS
1. Add an index to the columns used in the lookup condition.
2. Place conditions with an equality operator (=) first.
LOOKUP CACHE
Index cache - condition values (built by the PCS)
Data cache - output values
Default directory - $PMCacheDir
- If the data does not fit in the memory cache, the PCS stores the overflow values in cache files.
- A flat file lookup can use sorted input.
1. Persistent cache - save the cache files and reuse them the next time the session runs.
2. Recache from source - when the persistent cache is not synchronized with the lookup table, rebuild it.
3. Static cache - read-only; the default cache; the PCS does not update it.
4. Dynamic cache - inserts new rows or updates existing rows in the cache and passes the data to the target table; cannot be used with flat files.
5. Shared cache - a cache that can be used by multiple Lookup transformations.
1. PERSISTENT CACHE
- The PCS saves or deletes lookup cache files after a successful session based on the Lookup Cache Persistent property.
- If the lookup table does not change between sessions, you can configure the Lookup transformation to use a persistent lookup cache.
- The PCS then saves and reuses cache files from session to session, eliminating the time required to read the lookup table.

Non-persistent cache
- With caching enabled in the Lookup transformation, the PCS deletes the cache files at the end of the session.
- The next time you run the session, the PCS rebuilds the memory cache from the database.

Persistent cache
- If you want to save and reuse the cache files, configure the transformation accordingly.
- Use a persistent cache when the lookup table does not change between session runs.
- If the lookup table changes occasionally, you can override the session properties to recache the lookup from the database.
- With a persistent cache you can specify a name for the cache files.
PCS server handling of persistent caches
2. REBUILDING THE LOOKUP CACHES
- Rebuild the lookup cache if the lookup source has changed since the last time the PCS built the cache.
- When you rebuild the cache, the PCS creates new cache files, overwriting the existing persistent cache files.
- The PCS writes a message to the session log file when it rebuilds the caches.
- Even if you do not choose to recache from the lookup source, the PCS rebuilds the persistent cache when necessary.
3. STATIC CACHE (or UNCACHED LOOKUP)
- The PCS builds the cache when it processes the first lookup request.
- The PCS does not update the cache.
- Condition true - a connected Lookup transformation returns the values represented by the lookup/output ports.
- Condition true - an unconnected Lookup transformation returns the value represented by the return port.
- Condition false - a connected Lookup transformation returns default values to the output ports.
- Condition false - an unconnected Lookup transformation returns NULL to the return port.
- With multiple partitions, the PCS creates one memory cache for each partition.
4. DYNAMIC LOOKUP CACHE
For each row the PCS can:
- Insert the row into the cache
- Update the row in the cache
- Make no change to the cache
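The three outcomes can be sketched as a Python function returning the NewLookupRow value the PCS exposes (0 = no change, 1 = insert, 2 = update); the key name cust_id is hypothetical:

```python
def dynamic_lookup(cache, row, key="cust_id"):
    """Probe the cache on the condition port and keep it up to date."""
    k = row[key]
    if k not in cache:
        cache[k] = dict(row)
        return 1                  # NewLookupRow = 1: inserted into the cache
    if cache[k] != row:
        cache[k] = dict(row)
        return 2                  # NewLookupRow = 2: updated the cache
    return 0                      # NewLookupRow = 0: cache unchanged

cache = {}
print(dynamic_lookup(cache, {"cust_id": 1, "name": "A"}))   # insert
print(dynamic_lookup(cache, {"cust_id": 1, "name": "B"}))   # update
print(dynamic_lookup(cache, {"cust_id": 1, "name": "B"}))   # no change
```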
Some situations where you can use a dynamic lookup cache:
1. Updating a master customer table with new and updated customer information
   - Static lookup cache - flat file
   - Dynamic lookup cache - relational table
2. Loading data into a slowly changing dimension table and a fact table
3. Use with a Router or Filter transformation
4. With multiple partitions in a pipeline that uses a dynamic lookup cache, the PCS creates one memory cache and one disk cache for each transformation.

NewLookupRow port
- Keeps the target table synchronized with the cache
- Ignore Null Inputs for Updates
- Ignore in Comparison
Ignoring null values / using the associated port:
- You must associate each lookup/output port with an input/output port or a sequence ID.
- The PCS uses the data in the associated port to insert or update rows in the lookup cache.

Sequence ID generation:
- When the PCS creates the dynamic lookup cache, it tracks the range of values in the cache associated with any port using a sequence ID.
- The maximum value for a sequence ID is 2,147,483,647.
Working with Lookup transformation values:
- When you associate an input/output port or a sequence ID with a lookup/output port, the following values match by default:
• Input value – the value the PCS passes into the transformation.
• Lookup value – the value the PCS inserts into the cache.
• Input/output port output value – the value the PCS passes out of the input/output port.
- Output old value on update – the PCS outputs the value that existed in the cache before it updated the row.
- Output new value on update – the PCS outputs the updated value that it writes into the cache.
- When the PCS updates the dynamic lookup cache and the target table, it can handle null values in the following ways:
• Insert null values
• Ignore null values (insert only non-null values)
- When you run a session that uses a dynamic lookup cache, the PCS compares the values in all lookup ports with the values in the associated input ports.
- It compares the values to determine whether to update the row in the lookup cache.
Update Strategy transformation with a dynamic cache:
1. Rows entering the Lookup transformation: by default, the row type of all rows entering a Lookup transformation is insert.
2. Rows leaving the Lookup transformation: the PCS may change the lookup cache, but it does not change the row type.
- When you use an Update Strategy transformation with a dynamic lookup cache, you must define certain session properties; otherwise the lookup cache and the target table might become unsynchronized.
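The insert/update/no-change decision described above can be sketched as a minimal Python model (illustrative only, not Informatica code): each input row either inserts into the cache, updates it, or leaves it unchanged, and a NewLookupRow-style flag reports which happened (0 = no change, 1 = insert, 2 = update).

```python
# NewLookupRow-style flag values
NO_CHANGE, INSERT, UPDATE = 0, 1, 2

def apply_to_dynamic_cache(cache, key, values):
    """Insert, update, or ignore one row; return the NewLookupRow flag."""
    if key not in cache:
        cache[key] = dict(values)   # row not in cache: insert it
        return INSERT
    if cache[key] != values:
        cache[key] = dict(values)   # row changed: update the cache
        return UPDATE
    return NO_CHANGE                # row unchanged: leave the cache alone

cache = {}
print(apply_to_dynamic_cache(cache, 10, {"name": "Ann"}))   # 1: insert
print(apply_to_dynamic_cache(cache, 10, {"name": "Ann"}))   # 0: no change
print(apply_to_dynamic_cache(cache, 10, {"name": "Anna"}))  # 2: update
```

Routing rows downstream on this flag (as a Router or Filter transformation would) is what keeps the target table synchronized with the cache.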
5. SHARING THE LOOKUP CACHE
- You can configure multiple Lookup transformations in a mapping to share a single lookup cache.
- You can share caches that are unnamed and named.
1. Unnamed cache:
- The Lookup transformations must have compatible caching structures.
- The PCS shares the cache by default. You can share static unnamed caches.
2. Named cache:
- Use a persistent named cache when you want to share cache files across mappings or share a dynamic and a static cache.
Sharing an Unnamed Lookup Cache:
- Two Lookup transformations can share an unnamed cache when they have compatible caching structures.
- You can share only static unnamed caches.
Sharing a Named Lookup Cache:
- You can share the cache between multiple Lookup transformations by using a persistent named cache.
- You can share one cache between Lookup transformations in the same mapping or across mappings.
- Named cache – the PCS looks in the cache directory for cache files with the same file name prefix.
- Specify the cache file directory.
- The PCS rebuilds the memory cache from the persisted files.
- If the cache structures do not match, the PCS fails the session.
- The PCS can process multiple sessions simultaneously when the Lookup transformations only need to read the cache files.
- A named cache created by a dynamic Lookup transformation with a lookup policy
TIPS
- The PCS saves and reuses cache files from session to session, eliminating the time required to read the lookup table.
14. UNION TRANSFORMATION A/C - UN
DEFINITION
- The Union transformation is a multiple input group transformation that merges data coming from more than one source.
- The Union transformation can also merge data from heterogeneous sources.
- The Union transformation was newly introduced in Informatica version 7.1.
- The Union transformation is equivalent to the SQL UNION ALL statement.
- The Union transformation is developed using the Custom transformation.
FUNCTIONS
- You can create multiple input groups, but only one output group.
- All input groups and the output group must have matching ports. The precision, data type, and scale must be identical across all groups.
- The Union transformation does not remove duplicate rows.
- To remove duplicate rows, you must add another transformation upstream from the Union transformation.
- You cannot use a Sequence Generator or Update Strategy transformation upstream from a Union transformation.
- The Union transformation does not generate transactions.
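Since the Union transformation behaves like the SQL UNION ALL statement, its merge step can be sketched in Python (a toy model, not Informatica code): rows from every input group are concatenated in turn, and duplicates are kept.

```python
def union_all(*input_groups):
    """Merge rows from all input groups into one output; keep duplicates."""
    output = []
    for group in input_groups:
        output.extend(group)   # no duplicate removal, like UNION ALL
    return output

g1 = [("Ann", 1), ("Bob", 2)]
g2 = [("Bob", 2), ("Cal", 3)]   # ("Bob", 2) appears in both groups
merged = union_all(g1, g2)
print(merged)        # the duplicate ("Bob", 2) survives
print(len(merged))   # 4 rows
```

To deduplicate, you would add a distinct step before or after the merge, which mirrors the note above about adding another transformation to remove duplicate rows.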
COMPONENTS
- Transformation tab: you can rename the transformation and add a description.
- Properties tab: you can specify the tracing level.
- Groups tab: you can create and delete input groups (the Designer displays the groups you create on the Ports tab).
- Group Ports tab: you can create and delete ports for the input groups.
- You cannot modify the Ports, Initialization Properties, Metadata Extensions, or Port Attribute Definitions tabs.
PORTS
Groups and ports:
- A Union transformation has multiple input groups and one output group.
- The Designer creates the output group by default; you cannot edit or delete the output group.
MAPPING
- The Union transformation is a non-blocking multiple input group transformation.
- When you add a Union transformation to a mapping, verify that you connect the same ports in all input groups.
- If you connect all ports in one input group but do not connect a port in another input group, the PCS passes NULLs to the unconnected ports.
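The NULL-padding rule above can be sketched the same way (a toy Python model, not Informatica code): when an input group does not supply a value for a port, `None` stands in for the NULL that the PCS passes through the unconnected port.

```python
def union_with_null_padding(ports, *input_groups):
    """Merge dict-rows from several groups; missing ports become None (NULL)."""
    output = []
    for group in input_groups:
        for row in group:
            # Every output row carries every port; unconnected ports get None
            output.append({port: row.get(port) for port in ports})
    return output

g1 = [{"id": 1, "name": "Ann"}]   # both ports connected in this group
g2 = [{"id": 2}]                  # "name" port not connected in this group
print(union_with_null_padding(["id", "name"], g1, g2))
```

The second row comes out with `name=None`, which is why you should connect the same ports in all input groups unless you intend to load NULLs.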
PROPERTIES (Mapping level / Session level)
Module Identifier - Pmuniontrans
Function Identifier - pmunionfunc
Runtime Location -
Tracing Level - Normal
Is Partitionable -
Inputs Must Block -
Is Active -
Update Strategy Transformation -
Transformation Scope - Row
Generate Transaction -
Output Repeatable - Never
PERFORMANCE