Informatica Interview QnA



INFORMATICA INTERVIEW QUESTIONS –

1. Informatica - Why do we use lookup transformations?

QUESTION #1 Lookup transformations can access data from relational tables that are not sources in the mapping. With a Lookup transformation, we can accomplish the following tasks:
Get a related value - get the Employee Name from the Employee table based on the Employee ID.
Perform a calculation.
Update slowly changing dimension tables - we can use an unconnected Lookup transformation to determine whether a record already exists in the target or not.

sithusithu (January 19, 2006):
Nice question. If we don't have a lookup, our data warehouse will have more unwanted duplicates. Use a Lookup transformation in your mapping to look up data in a relational table, view, or synonym. Import a lookup definition from any relational database to which both the Informatica Client and Server can connect. You can use multiple Lookup transformations in a mapping.
Cheers, Sithu
=======================================
Lookup transformations are used to search data from relational tables or flat files that are not used in the mapping. Types of lookup:
1. Connected lookup
2. Unconnected lookup
=======================================
The main use of a lookup is to get a related value, either from relational sources or flat files.
=======================================
The following are reasons for using lookups:
1) We use Lookup transformations to query the largest amounts of data in order to improve overall performance; by doing that we can reduce the number of lookups on the same table.
2) If a mapping contains Lookup transformations, we enable lookup caching if this option is not already enabled. We use a persistent cache to improve performance of the lookup whenever possible. We explore the possibility of using concurrent caches to improve session performance. We use the Lookup SQL Override option to add a WHERE clause to the default SQL statement if it is not defined. We add an ORDER BY clause to the lookup SQL statement if there is no ORDER BY defined. We use the SQL override to suppress the default ORDER BY statement and enter an override ORDER BY with fewer columns.
Indexing the lookup table - we can improve performance for the following types of lookups:
For cached lookups, index the lookup table using the columns in the lookup ORDER BY statement.
For uncached lookups, index the lookup table using the columns in the lookup WHERE condition.
3) In some cases we use a lookup instead of a Joiner, since a lookup is faster than a joiner in some cases, for example when the lookup contains only the master data.
4) Lookups also help with performance tuning of the mappings.
=======================================
A Lookup transformation is like a set of reference data for the target table. For example, suppose you are travelling by auto rickshaw: in the morning you notice the auto driver showing you a card and saying that from today onwards there is a hike in petrol, so you have to pay more. The card he is showing is a set of reference data for his customers. The Lookup transformation works in the same way. Lookups are of two types:
a) Connected lookup
b) Unconnected lookup
A connected lookup is connected in a single pipeline from a source to a target, whereas an unconnected lookup is isolated within the mapping and is called with the help of an Expression transformation.
=======================================
Lookup transformations are used to:
get a related value
update slowly changing dimensions
calculate expressions
=======================================
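To make the "get a related value" case concrete: a lookup is essentially a keyed reference table that each source row is matched against. Below is a minimal conceptual sketch in Python; the employee IDs, names, and column names are invented for illustration, and this is not Informatica syntax.

```python
# Conceptual sketch of a lookup: enrich source rows with a related value
# fetched from a reference (lookup) table by key. Illustrative data only.

employee_lookup = {            # plays the role of the cached lookup table
    101: "Alice",
    102: "Bob",
}

source_rows = [
    {"emp_id": 101, "sales": 500},
    {"emp_id": 103, "sales": 250},   # no match in the lookup table
]

for row in source_rows:
    # like a connected Lookup: every row passes through and picks up emp_name
    row["emp_name"] = employee_lookup.get(row["emp_id"])  # None when no match
    print(row)
```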

2. Informatica - While importing the relational source definition from the database, what metadata of the source do you import?

QUESTION #2 Source name, database location, column names, datatypes, and key constraints.

srinvas vadlakonda (September 28, 2006):
Source name, data types, key constraints, database location.
=======================================
Relational sources are tables, views, and synonyms: source name, database location, column name, datatype, and key constraints. For synonyms you will have to create the constraints manually.
=======================================

3. Informatica - How many ways can you update a relational source definition and what are they?

QUESTION #3 Two ways:
1. Edit the definition
2. Reimport the definition

gazulas (January 30, 2006):
We can do it in two ways:
1) by reimporting the source definition
2) by editing the source definition
=======================================

4. Informatica - Where should you place the flat file to import the flat file definition into the Designer?

QUESTION #4 Place it in the local folder.

rishi (December 13, 2005):
There is no such restriction on where to place the source file. From a performance point of view it is better to place the file in the server's local src folder. If you need the path, check the server properties available in Workflow Manager. That doesn't mean we cannot place it in any other folder; it just means that if we place it in the server src folder, the src directory will be selected by default at the time of session creation.
=======================================
The file must be in a directory local to the client machine.
=======================================
Basically the flat file should be stored in the src folder in the Informatica server folder. Logically it should pick up the file from any location, but it gives an error of "invalid identifier" or "not able to read the first row", so it is better to keep the file in the src folder, which is already created when Informatica is installed.
=======================================
We can place the source file anywhere on the network, but it will consume more time to fetch data from the source file. If the source file is present in the server's src folder, it will fetch data from the source up to 25 times faster than otherwise.
=======================================

5. Informatica - To provide support for mainframe source data, which files are used as source definitions?

QUESTION #5 COBOL files

Shaks Krishnamurthy (October 07, 2005):
COBOL copybook files
=======================================
The mainframe files are used as VSAM files in Informatica by using the Normalizer transformation.
=======================================

6. Informatica - Which transformation do you need when using COBOL sources as source definitions?

QUESTION #6 The Normalizer transformation, which is used to normalize the data, since COBOL sources often consist of denormalized data.

Submitted by: sithusithu
Normalizer transformation
Cheers, Sithu
=======================================
The Normalizer transformation, which is used to normalize the data.
=======================================
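As a rough picture of what normalizing denormalized COBOL data means, the sketch below expands a record that carries a repeating group into one row per occurrence. It is a conceptual Python illustration with invented field names, not Informatica behaviour verbatim.

```python
# Conceptual sketch of what a Normalizer does: a COBOL-style record with a
# repeating group (three quarterly amounts in one record) is expanded into
# one output row per occurrence. Field names are invented for illustration.

denormalized = [
    {"account": "A1", "q1": 100, "q2": 110, "q3": 90},
    {"account": "A2", "q1": 200, "q2": 180, "q3": 210},
]

normalized = []
for record in denormalized:
    for occurrence, quarter in enumerate(("q1", "q2", "q3"), start=1):
        normalized.append({
            "account": record["account"],
            "quarter": occurrence,          # generated key for the occurrence
            "amount": record[quarter],
        })

for row in normalized:
    print(row)   # one row per (account, quarter)
```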

7. Informatica - How can you create or import a flat file definition into the Warehouse Designer?

QUESTION #7 You cannot create or import a flat file definition into the Warehouse Designer directly. Instead, you must analyze the file in the Source Analyzer, then drag it into the Warehouse Designer. When you drag the flat file source definition into the Warehouse Designer workspace, the Warehouse Designer creates a relational target definition, not a file definition. If you want to load to a file, configure the session to write to a flat file. When the Informatica server runs the session, it creates and loads the flat file.

Praveen (August 22, 2005):
You can create a flat file definition in the Warehouse Designer. In the Warehouse Designer you can create a new target: select the type as flat file, save it, and you can enter various columns for that created target by editing its properties. Once the target is created, save it. You can import it from the Mapping Designer.
=======================================
Yes, you can import a flat file directly into the Warehouse Designer. This way it will import the field definitions directly.
=======================================
1) Manually create the flat file target definition in the Warehouse Designer.
2) Create a target definition from a source definition. This is done by dropping a source definition in the Warehouse Designer.
3) Import a flat file definition using the flat file wizard (the file must be local to the client machine).
=======================================
While creating flat files manually, we drag and drop the structure from the Source Qualifier if the structure we need is the same as that of the source; for this we need to check in the source and then drag and drop it into the flat file. If not, all the columns in the source will be changed to primary keys.
=======================================

8. Informatica - What is a mapplet?

QUESTION #8 A mapplet is a set of transformations that you build in the Mapplet Designer and can use in multiple mappings.

phani (December 08, 2005):
For example, suppose we have several fact tables that require a series of dimension keys. Then we can create a mapplet which contains a series of Lookup transformations to find each dimension key, and use it in each fact table mapping instead of creating the same lookup logic in each mapping.
=======================================
A part (subset) of a mapping is known as a mapplet.
Cheers, Sithu
=======================================
A set of transformations whose logic can be reused.
=======================================
A mapplet should have a mapplet Input transformation, which receives input values, and an Output transformation, which passes the final modified data back to the mapping. When the mapplet is displayed within the mapping, only the input and output ports are displayed, so that the internal logic is hidden from the end user's point of view.
=======================================
A reusable mapping is known as a mapplet, and a reusable transformation can be used within a mapplet.
=======================================
A mapplet is reusable business logic which can be used in mappings.
=======================================
A mapplet is a reusable object which contains one or more transformations used to populate the data from source to target based on the business logic, and we can use the same logic in different mappings without creating the mapping again.
=======================================
Mapplets are created in the Mapplet Designer.
=======================================
A mapplet is a reusable object that represents a set of transformations. A mapplet can be designed using the Mapplet Designer in Informatica PowerCenter.
=======================================
Basically a mapplet is a subset of a mapping in which we can keep the logic for each dimension key by keeping the different mappings created individually. If we want a series of dimension keys in the final fact table, we will use the Mapping Designer.
=======================================

9. Informatica - What is a transformation?

QUESTION #9 It is a repository object that generates, modifies, or passes data.

sir (November 23, 2005):
A transformation is a repository object that passes data to the next stage (i.e. to the next transformation or target) with or without modifying the data.
=======================================
It is a process of converting a given input to the desired output.
=======================================
A set of operations.
Cheers, Sithu
=======================================
A transformation is a repository object for converting a given input to the desired output. It can generate, modify, and pass data.
=======================================
A transformation is a repository object that generates, modifies, or passes data. The Designer provides a set of transformations that perform specific functions. For example, an Aggregator transformation performs calculations on groups of data.
=======================================

10. Informatica - What are the Designer tools for creating transformations?

QUESTION #10 Mapping Designer, Transformation Developer, Mapplet Designer

MANOJ KUMAR PANIGRAHI (February 21, 2007):
There are 2 types of tools used for creating transformations:
Mapping Designer
Mapplet Designer
=======================================
Mapping Designer
Mapplet Designer
Transformation Developer - for reusable transformations
=======================================

11. Informatica - What are active and passive transformations?

QUESTION #11 An active transformation can change the number of rows that pass through it. A passive transformation does not change the number of rows that pass through it.

sithusithu (January 24, 2006):
Transformations can be active or passive. An active transformation can change the number of rows that pass through it, such as a Filter transformation that removes rows that do not meet the filter condition. A passive transformation does not change the number of rows that pass through it, such as an Expression transformation that performs a calculation on data and passes all rows through the transformation.
Cheers, Sithu
=======================================
Active transformation: a transformation which changes the number of rows as data flows from source to target.
Passive transformation: a transformation which does not change the number of rows as data flows from source to target.
=======================================
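The Filter-versus-Expression contrast above can be illustrated with a small Python sketch; the column names and the filter condition are made up for illustration.

```python
# Conceptual sketch: an active transformation (filter) can change the row
# count, a passive transformation (expression) keeps it the same.
# Column names and the filter condition are illustrative only.

rows = [{"id": 1, "amount": 50}, {"id": 2, "amount": 500}, {"id": 3, "amount": 20}]

# "Filter transformation": active -- rows that fail the condition are dropped.
filtered = [r for r in rows if r["amount"] >= 100]
print(len(rows), "->", len(filtered))   # 3 -> 1, row count changed

# "Expression transformation": passive -- every row passes through,
# possibly with a derived column added.
expressed = [{**r, "amount_with_tax": r["amount"] * 1.1} for r in rows]
print(len(rows), "->", len(expressed))  # 3 -> 3, row count unchanged
```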

12. Informatica - What are connected and unconnected transformations?

QUESTION #12 An unconnected transformation is not connected to other transformations in the mapping. A connected transformation is connected to other transformations in the mapping.

Praveen (August 22, 2005):
An unconnected transformation can't be connected to another transformation, but it can be called inside another transformation.
=======================================
Here is the deal: a connected transformation is a part of your data flow in the pipeline, while an unconnected transformation is not - much like calling a program by name versus by reference. Use unconnected transformations when you want to call the same transformation many times in a single mapping.
=======================================
In addition to the first answer: unconnected transformations are not directly connected and can be used by as many other transformations as needed. If you are using a transformation several times, use an unconnected one; you get better performance.
=======================================
Connected transformation: a transformation which participates in the mapping data flow. A connected transformation can receive multiple inputs and provide multiple outputs.
Unconnected transformation: an unconnected transformation does not participate in the mapping data flow. It can receive multiple inputs but provides a single output.
Thanks, Rekha
=======================================

13. Informatica - How many ways can you create ports?

QUESTION #13 Two ways:
1. Drag the port from another transformation.
2. Click the Add button on the Ports tab.

srinivas.vadlakonda (September 28, 2006):
Two ways:
1. Drag the port from another transformation.
2. Click the Add button on the Ports tab.
=======================================
We can also copy and paste ports in the Ports tab.
=======================================

14. Informatica - What are reusable transformations?

QUESTION #14 Reusable transformations can be used in multiple mappings. When you need to incorporate such a transformation into a mapping, you add an instance of it to the mapping. Later, if you change the definition of the transformation, all instances of it inherit the changes. Since the instance of a reusable transformation is a pointer to that transformation, you can change the transformation in the Transformation Developer and its instances automatically reflect these changes. This feature can save you a great deal of work.

Submitted by: sithusithu
A transformation that can be reused is known as a reusable transformation. You can design one using 2 methods:
1. using the Transformation Developer
2. create a normal one and promote it to reusable
Cheers, Sithu
=======================================
Hi to all friends out there. The transformation that can be reused is called a reusable transformation. As the property suggests, it has to be reused, and we can do this in two different ways:
1) by creating a normal transformation and making it reusable by ticking the checkbox in the properties of the Edit Transformation dialog.
2) by using the Transformation Developer: whatever transformation is developed there is reusable, and it can be used in the Mapping Designer where we can further change its properties as per our requirement.
=======================================
1. A reusable transformation can be used in multiple mappings.
2. The Designer stores each reusable transformation as metadata, separate from any mappings that use the transformation.
3. Every reusable transformation falls within a category of transformations available in the Designer.
4. One can only create an External Procedure transformation as a reusable transformation.
=======================================

15. Informatica - What are the methods for creating reusable transformations?

QUESTION #15 Two methods:
1. Design it in the Transformation Developer.
2. Promote a standard transformation from the Mapping Designer. After you add a transformation to the mapping, you can promote it to the status of a reusable transformation.
Once you promote a standard transformation to reusable status, you can demote it to a standard transformation at any time.
If you change the properties of a reusable transformation in a mapping, you can revert to the original reusable transformation properties by clicking the Revert button.

Praveen Vasudev (September 12, 2005):
PLEASE THINK TWICE BEFORE YOU POST AN ANSWER.
Answer: Two methods:
1. Design it in the Transformation Developer; by default it is then a reusable transformation.
2. Promote a standard transformation from the Mapping Designer. After you add a transformation to the mapping, you can promote it to the status of a reusable transformation.
Once you promote a standard transformation to reusable status, you CANNOT demote it to a standard transformation at any time.
If you change the properties of a reusable transformation in a mapping, you can revert to the original reusable transformation properties by clicking the Revert button.
=======================================
You can design using 2 methods:
1. using the Transformation Developer
2. create a normal one and promote it to reusable
Cheers, Sithu
=======================================

16. Informatica - What are the unsupported repository objects for a mapplet?

QUESTION #16 COBOL source definitions, Joiner transformations, Normalizer transformations, non-reusable Sequence Generator transformations, pre- or post-session stored procedures, target definitions, PowerMart 3.5-style LOOKUP functions, XML source definitions, IBM MQ source definitions.

sithusithu (January 19, 2006):
- Source definitions. Definitions of database objects (tables, views, synonyms) or files that provide source data.
- Target definitions. Definitions of database objects or files that contain the target data.
- Multi-dimensional metadata. Target definitions that are configured as cubes and dimensions.
- Mappings. A set of source and target definitions along with transformations containing business logic that you build into the transformation. These are the instructions that the Informatica Server uses to transform and move data.
- Reusable transformations. Transformations that you can use in multiple mappings.
- Mapplets. A set of transformations that you can use in multiple mappings.
- Sessions and workflows. Sessions and workflows store information about how and when the Informatica Server moves data. A workflow is a set of instructions that describes how and when to run tasks related to extracting, transforming, and loading data. A session is a type of task that you can put in a workflow. Each session corresponds to a single mapping.
Cheers, Sithu
=======================================
Hi, the following answer is from the Informatica help documentation. You cannot include the following objects in a mapplet:
- Normalizer transformations
- COBOL sources
- XML Source Qualifier transformations
- XML sources
- Target definitions
- Pre- and post-session stored procedures
- Other mapplets
Shivaji Thaneru
=======================================
Normalizer, XML Source Qualifier, and COBOL sources cannot be used.
=======================================
Normalizer transformations
COBOL sources
XML Source Qualifier transformations
XML sources
Target definitions
Pre- and post-session stored procedures
Other mapplets
PowerMart 3.5-style LOOKUP functions
Non-reusable Sequence Generators
=======================================

17. Informatica - What are mapping parameters and mapping variables?

QUESTION #17 A mapping parameter represents a constant value that you can define before running a session. A mapping parameter retains the same value throughout the entire session. When you use a mapping parameter, you declare and use the parameter in a mapping or mapplet, then define the value of the parameter in a parameter file for the session.
Unlike a mapping parameter, a mapping variable represents a value that can change throughout the session. The Informatica server saves the value of a mapping variable to the repository at the end of the session run and uses that value the next time you run the session.

Praveen Vasudev (September 12, 2005):
Please refer to the documentation for more understanding. Mapping variables have two identities: a start value and a current value.
Start value = current value when the session starts the execution of the underlying mapping.
Start value <> current value while the session is in progress and the variable value changes on one or more occasions.
The current value at the end of the session is nothing but the start value for the subsequent run of the same session.
=======================================
You can use mapping parameters and variables in the SQL query, user-defined join, and source filter of a Source Qualifier transformation. You can also use the system variable $$$SessStartTime. The Informatica Server first generates an SQL query and scans the query to replace each mapping parameter or variable with its start value. Then it executes the query on the source database.
Cheers, Sithu
=======================================
A mapping parameter represents a constant value defined before the mapping runs. Mapping reusability can be achieved by using mapping parameters.
A mapping variable represents a value that can change during the mapping run. Mapping variables can be used in an incremental loading process.
=======================================
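As a loose illustration of the parameter/variable distinction (a constant per run versus a value persisted between runs for incremental loading), here is a conceptual Python sketch. The $$-style names, the state file, and the parameter dictionary are all invented stand-ins, not Informatica parameter-file syntax.

```python
# Conceptual sketch: a "mapping parameter" is a constant read per run,
# a "mapping variable" is persisted at the end of a run and reused next run
# (the basis of incremental loading). Names and files are illustrative only.
import json, os
from datetime import datetime

PARAMS = {"$$COUNTRY": "IN"}          # stands in for a parameter file entry
STATE_FILE = "mapping_state.json"     # stands in for the repository

def load_last_value(default="1970-01-01"):
    if os.path.exists(STATE_FILE):
        return json.load(open(STATE_FILE))["$$LAST_RUN"]
    return default

def save_last_value(value):
    json.dump({"$$LAST_RUN": value}, open(STATE_FILE, "w"))

def run_session(rows):
    country = PARAMS["$$COUNTRY"]     # parameter: constant for the whole run
    last_run = load_last_value()      # variable: start value from the previous run
    picked = [r for r in rows if r["country"] == country and r["updated"] > last_run]
    save_last_value(datetime.now().strftime("%Y-%m-%d"))  # saved for the next run
    return picked

rows = [{"country": "IN", "updated": "2009-04-01"},
        {"country": "US", "updated": "2009-04-01"}]
print(run_session(rows))
```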

18. Informatica - Can you use the mapping parameters or variables created in one mapping in another mapping?

QUESTION #18 No. We can use mapping parameters or variables in any transformation of the same mapping or mapplet in which we created them.

Submitted by: Ray
No. You might want to use a workflow parameter/variable if you want it to be visible to other mappings/sessions.
=======================================
Hi, the following sentences are extracted from the Informatica help as-is; they support the above answers. After you create a parameter, you can use it in the Expression Editor of any transformation in a mapping or mapplet. You can also use it in Source Qualifier transformations and reusable transformations.
Shivaji Thaneru
=======================================
I differ on this; we can use global variables in sessions as well as in mappings. This provision is provided in Informatica 7.1.x versions; I have used it. Please check this in the properties.
Regards, Vaibhav
=======================================
Hi, thanks Shivaji, but the statement does not completely answer the question. A mapping parameter can be used in a reusable transformation, but does that mean you can use the mapping parameter wherever the instances of the reusable transformation are used?
=======================================
The scope of a mapping variable is the mapping in which it is defined. A variable Var1 defined in mapping Map1 can only be used in Map1. You cannot use it in another mapping, say Map2.
=======================================

19. Informatica - Can you use the mapping parameters or variables created in one mapping in any other reusable transformation?

QUESTION #19 Yes, because a reusable transformation is not contained within any mapplet or mapping.

mahesh4346 (February 02, 2007):
But when one can't use the mapping parameters and variables of one mapping in another mapping, how can they be used in a reusable transformation, when reusable transformations themselves can be used among multiple mappings? So I think one can't use mapping parameters and variables in reusable transformations. Please correct me if I am wrong.
=======================================
Hi, you can use mapping parameters or variables in a reusable transformation. When you use the transformation in a mapping, during execution of the session it validates whether the mapping parameter used in the transformation is defined in that mapping or not. If not, the session fails.
=======================================

20. Informatica - How can you improve session performance in an Aggregator transformation?

QUESTION #20 Use sorted input.

Praveen Vasudev (September 12, 2005):
Use sorted input:
1. Use a Sorter before the Aggregator.
2. Do not forget to check the option on the Aggregator that tells the Aggregator that the input is sorted on the same keys as the group by. The key order is also very important.
=======================================
Hi, you can use the following guidelines to optimize the performance of an Aggregator transformation.
Use sorted input to decrease the use of aggregate caches: sorted input reduces the amount of data cached during the session and improves session performance. Use this option together with the Sorter transformation to pass sorted data to the Aggregator transformation.
Limit connected input/output or output ports: this reduces the amount of data the Aggregator transformation stores in the data cache.
Filter before aggregating: if you use a Filter transformation in the mapping, place it before the Aggregator transformation to reduce unnecessary aggregation.
Shivaji T
=======================================
The following are the 3 ways in which we can improve session performance:
a) Use sorted input to decrease the use of aggregate caches.
b) Limit connected input/output or output ports.
c) Filter before aggregating (if you are using any filter condition).
=======================================
By using incremental aggregation we can also improve performance, because it passes only the new data to the mapping and uses historical data to perform the aggregation.
=======================================
To improve session performance in an Aggregator transformation, enable the session option Incremental Aggregation.
=======================================
Use sorted input to decrease the use of aggregate caches.
Limit connected input/output or output ports, to reduce the amount of data the Aggregator transformation stores in the data cache.
Filter the data before aggregating it.
=======================================
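Why sorted input reduces the aggregate cache can be seen in a small sketch: with unsorted input the aggregator has to keep one cache entry per group until all rows are read, while with input sorted on the group-by key each group can be emitted as soon as the key changes. A conceptual Python illustration with invented data:

```python
# Conceptual sketch: unsorted aggregation needs a cache entry per group,
# sorted aggregation can emit each group as soon as its key changes.
from itertools import groupby
from operator import itemgetter

rows = [("A", 10), ("B", 5), ("A", 7), ("B", 3)]   # (group_key, amount)

# Unsorted input: cache every group until all rows are read.
cache = {}
for key, amount in rows:
    cache[key] = cache.get(key, 0) + amount        # cache grows with the number of groups
print(cache)

# Sorted input: one group in memory at a time, emitted when the key changes.
for key, grp in groupby(sorted(rows, key=itemgetter(0)), key=itemgetter(0)):
    print(key, sum(amount for _, amount in grp))
```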

21. Informatica - What is the aggregate cache in an Aggregator transformation?

QUESTION #21 The Aggregator stores data in the aggregate cache until it completes the aggregate calculations. When you run a session that uses an Aggregator transformation, the Informatica server creates index and data caches in memory to process the transformation. If the Informatica server requires more space, it stores overflow values in cache files.

sithusithu (January 19, 2006):
When you run a workflow that uses an Aggregator transformation, the Informatica Server creates index and data caches in memory to process the transformation. If the Informatica Server requires more space, it stores overflow values in cache files.
Cheers, Sithu
=======================================
The aggregate cache contains data values while aggregate calculations are being performed. The aggregate cache is made up of an index cache and a data cache. The index cache contains group values and the data cache consists of row values.
=======================================
When the server runs a session with an Aggregator transformation, it stores data in memory until it completes the aggregation. When you partition a source, the server creates one memory cache and one disk cache for each partition. It routes the data from one partition to another based on the group key values of the transformation.
=======================================

22. Informatica - What are the differences between a Joiner transformation and a Source Qualifier transformation?

QUESTION #22 You can join heterogeneous data sources in a Joiner transformation, which you cannot achieve in a Source Qualifier transformation.
You need matching keys to join two relational sources in a Source Qualifier transformation, whereas you don't need matching keys to join two sources in a Joiner.
The two relational sources should come from the same data source in a Source Qualifier; with a Joiner you can join relational sources that come from different sources as well.

sithusithu (January 27, 2006):
Source Qualifier: homogeneous sources.
Joiner: heterogeneous sources.
Cheers, Sithu
=======================================
Hi, the Source Qualifier transformation provides an alternate way to filter rows. Rather than filtering rows from within a mapping, the Source Qualifier transformation filters rows when they are read from a source. The main difference is that the Source Qualifier limits the row set extracted from a source, while the Filter transformation limits the row set sent to a target. Since a Source Qualifier reduces the number of rows used throughout the mapping, it provides better performance. However, the Source Qualifier transformation only lets you filter rows from relational sources, while the Filter transformation filters rows from any type of source. Also note that since it runs in the database, you must make sure that the filter condition in the Source Qualifier transformation only uses standard SQL.
Shivaji Thaneru
=======================================
Hi, as per my knowledge you need matching keys to join two relational sources both in the Source Qualifier as well as in the Joiner transformation. But the difference is that in the Source Qualifier both keys must have a primary key - foreign key relation, whereas in the Joiner transformation that is not needed.
=======================================
The Source Qualifier is used for reading the data from the database, whereas the Joiner transformation is used for joining two data tables. The Source Qualifier can also be used to join two tables, but the condition is that both tables should be from a relational database and should have a primary key with the same data structure. Using a Joiner we can join data from two heterogeneous sources, such as two flat files, or one relational table and one flat file.

=======================================

23. Informatica - In which conditions can we not use a Joiner transformation (limitations of the Joiner transformation)?

QUESTION #23 Both pipelines begin with the same original data source.
Both input pipelines originate from the same Source Qualifier transformation.
Both input pipelines originate from the same Normalizer transformation.
Both input pipelines originate from the same Joiner transformation.
Either input pipeline contains an Update Strategy transformation.
Either input pipeline contains a connected or unconnected Sequence Generator transformation.

Surendra (January 25, 2006):
This is no longer valid in version 7.2. Now we can use a Joiner even if the data is coming from the same source.
SK
=======================================
You cannot use a Joiner transformation in the following situations (according to Informatica 7.1):
- Either input pipeline contains an Update Strategy transformation.
- You connect a Sequence Generator transformation directly before the Joiner transformation.
=======================================
I don't understand the second one, which says we have a Sequence Generator. Can you please explain that one?
=======================================
Can you please let me know the correct and clear answer for the limitations of the Joiner transformation?
Swapna
=======================================
You cannot use a Joiner transformation in the following situation (according to Informatica 7.1): when you connect a Sequence Generator transformation directly before the Joiner transformation. For more information check the Informatica 7.1 manual.
=======================================
What about join conditions? Can we have a != condition in a Joiner?
=======================================
No, in a Joiner transformation you can only use equal to (=) as a join condition. Any other comparison operator is not allowed: >, <, !=, <> etc. are not allowed as a join condition.
Utsav
=======================================
Yes, the Joiner only supports an equality condition. The Joiner transformation does not match null values. For example, if both EMP_ID1 and EMP_ID2 from the example above contain a row with a null value, the PowerCenter Server does not consider them a match and does not join the two rows. To join rows with null values, you can replace the null input with default values and then join on the default values.
=======================================
We cannot use a Joiner transformation in the following two conditions:
1. When our data comes through an Update Strategy transformation; in other words, after an Update Strategy we cannot add a Joiner transformation.
2. We cannot connect a Sequence Generator transformation directly before the Joiner transformation.
=======================================
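The two points about the join condition above (only equality is supported, and NULL does not match NULL) can be sketched as follows; the data and the chosen default value are invented for illustration.

```python
# Conceptual sketch: a joiner equality condition does not treat NULL (None)
# as matching NULL. To join such rows, replace NULL with a default value on
# both sides first and join on the default. All data is invented.
master_rows = [(None, "Unknown"), (1, "Sales")]          # (dept_id, dept_name)
detail_rows = [(None, "E100"), (1, "E200")]              # (dept_id, emp_id)

def equi_join(master, detail):
    # SQL-style equality: NULL = NULL is not a match
    index = {k: v for k, v in master if k is not None}
    return [(emp, index[k]) for k, emp in detail if k is not None and k in index]

print(equi_join(master_rows, detail_rows))               # only the dept_id=1 row joins

DEFAULT = -1                                             # replace NULLs, then join
master_fixed = [(k if k is not None else DEFAULT, v) for k, v in master_rows]
detail_fixed = [(k if k is not None else DEFAULT, v) for k, v in detail_rows]
print(equi_join(master_fixed, detail_fixed))             # now both rows join
```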

24. Informatica - What are the settings that you use to configure the Joiner transformation?

QUESTION #24 Master and detail source, type of join, condition of the join.

Submitted by: sithusithu
- Master and detail source
- Type of join
- Condition of the join
The Joiner transformation supports the following join types, which you set in the Properties tab:
- Normal (default)
- Master Outer
- Detail Outer
- Full Outer
Cheers, Sithu
=======================================
There are a number of properties that you use to configure a Joiner transformation:
1) CASE SENSITIVE STRING COMPARISON: to join strings on a case-sensitive basis.
2) WORKING DIRECTORY: where to create the caches.
3) JOIN CONDITION: e.g. join on a.s = v.n.
4) JOIN TYPE: normal, master outer, detail outer, or full outer.
5) NULL ORDERING IN MASTER
6) NULL ORDERING IN DETAIL
7) TRACING LEVEL: level of detail about the operations.
8) INDEX CACHE: stores the group values of the input, if any.
9) DATA CACHE: stores the value of each row of data.
10) SORTED INPUT: a check box; check it if the input to the Joiner is sorted.
11) TRANSFORMATION SCOPE: the data taken into consideration (Transaction or All Input). Use Transaction if the result depends only on the rows being processed, for example a Joiner using the same source in the pipeline, where the data is within that scope; use All Input if it depends on other data when it processes a row, for example when using a lookup, or when a dynamic cache is enabled and the transformation has to consider the other incoming data.
=======================================

25. Informatica - What are the join types in the Joiner transformation?

QUESTION #25 Normal (default), master outer, detail outer, full outer.

Praveen Vasudev (September 12, 2005):
Normal (default) -- only matching rows from both master and detail.
Master outer -- all detail rows and only matching rows from master.
Detail outer -- all master rows and only matching rows from detail.
Full outer -- all rows from both master and detail (matching or non-matching).
(See the sketch after this question for an illustration.)
=======================================
Follow these steps:
1. In the Mapping Designer choose Transformation-Create. Select the Joiner transformation, enter a name, and click OK. The naming convention for Joiner transformations is JNR_TransformationName. Enter a description for the transformation; this description appears in the Repository Manager, making it easier for you or others to understand or remember what the transformation does. The Designer creates the Joiner transformation. Keep in mind that you cannot use a Sequence Generator or Update Strategy transformation as a source to a Joiner transformation.
2. Drag all the desired input/output ports from the first source into the Joiner transformation. The Designer creates input/output ports for the source fields in the Joiner as detail fields by default. You can edit this property later.
3. Select and drag all the desired input/output ports from the second source into the Joiner transformation. The Designer configures the second set of source fields as master fields by default.
4. Double-click the title bar of the Joiner transformation to open the Edit Transformations dialog box.
5. Select the Ports tab.
6. Click any box in the M column to switch the master/detail relationship for the sources. Change the master/detail relationship if necessary by selecting the master source in the M column. Tip: designating the source with fewer unique records as master increases performance during a join.
7. Add default values for specific ports as necessary. Certain ports are likely to contain NULL values, since the fields in one of the sources may be empty. You can specify a default value if the target database does not handle NULLs.
8. Select the Condition tab and set the condition. Click the Add button to add a condition; you can add multiple conditions. The master and detail ports must have matching datatypes. The Joiner transformation only supports equivalent (=) joins.
9. Select the Properties tab and enter any additional settings for the transformation.
10. Click OK.
11. Choose Repository-Save to save changes to the mapping.
Cheers, Sithu
=======================================
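The four join types defined earlier in this question can be pictured with a small dictionary-based sketch in which the master side is indexed (cached) first and the detail rows are then matched against it. The table contents and column meanings are invented for illustration.

```python
# Conceptual sketch of the four join types. The master side is indexed
# (cached) first, then detail rows are matched against it. Data is invented.
master = {1: "Sales", 2: "HR"}                      # dept_id -> dept_name
detail = [(101, 1), (102, 1), (103, 3)]             # (emp_id, dept_id)

def join(join_type):
    out, matched_master = [], set()
    for emp_id, dept_id in detail:
        if dept_id in master:
            out.append((emp_id, dept_id, master[dept_id]))
            matched_master.add(dept_id)
        elif join_type in ("master outer", "full outer"):
            out.append((emp_id, dept_id, None))          # keep unmatched detail row
    if join_type in ("detail outer", "full outer"):
        for dept_id in master.keys() - matched_master:
            out.append((None, dept_id, master[dept_id])) # keep unmatched master row
    return out

for jt in ("normal", "master outer", "detail outer", "full outer"):
    print(jt, "->", join(jt))
```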

26. Informatica - What are the joiner caches?

QUESTION #26 When a Joiner transformation occurs in a session, the Informatica Server reads all the records from the master source and builds index and data caches based on the master rows. After building the caches, the Joiner transformation reads records from the detail source and performs the joins.

Submitted by: bneha15
For version 7.x and above: when the PowerCenter Server processes a Joiner transformation, it reads rows from both sources concurrently and builds the index and data cache based on the master rows. The PowerCenter Server then performs the join based on the detail source data and the cache data. To improve performance for an unsorted Joiner transformation, use the source with fewer rows as the master source. To improve performance for a sorted Joiner transformation, use the source with fewer duplicate key values as the master.
=======================================
From a performance perspective, always make the smaller of the two joining tables the master.
=======================================
The cache directory specifies the directory used to cache master records and the index to these records. By default the cache files are created in a directory specified by the server variable $PMCacheDir. If you override the directory, make sure the directory exists and contains enough disk space for the cache files. The directory can be a mapped or mounted drive.
Cheers, Sithu
=======================================

27. Informatica - What is the Lookup transformation?

QUESTION #27 Use a Lookup transformation in your mapping to look up data in a relational table, view, or synonym. The Informatica server queries the lookup table based on the lookup ports in the transformation. It compares the Lookup transformation port values to lookup table column values based on the lookup condition.

phani (December 09, 2005):
Using it we can access data from a relational table which is not a source in the mapping. For example, suppose the source contains only Empno but we want Empname as well in the mapping. Then, instead of adding another table which contains Empname as a source, we can look up the table and get the Empname in the target.
=======================================
A lookup is a simple single-level reference structure with no parent/child relationships. Use a lookup when you have a set of reference members that you do not need to organize hierarchically.
=======================================
In DecisionStream, a lookup is a simple single-level reference structure with no parent/child relationships. Use a lookup when you have a set of reference members that you do not need to organize hierarchically.
HTH
=======================================
Use a Lookup transformation in your mapping to look up data in a relational table, view, or synonym. Import a lookup definition from any relational database to which both the Informatica Client and Server can connect. You can use multiple Lookup transformations in a mapping.
Cheers, Sithu
=======================================
A Lookup transformation in a mapping is used to look up data in a flat file or a relational table, view, or synonym. You can import a lookup definition from any flat file or relational database to which both the PowerCenter Client and Server can connect. You can use multiple Lookup transformations in a mapping. I hope this is helpful for you.
Cheers, Sridhar
=======================================

28. Informatica - Why use the Lookup transformation?

QUESTION #28 To perform the following tasks:
Get a related value. For example, if your source table includes the employee ID, but you want to include the employee name in your target table to make your summary data easier to read.
Perform a calculation. Many normalized tables include values used in a calculation, such as gross sales per invoice or sales tax, but not the calculated value (such as net sales).
Update slowly changing dimension tables. You can use a Lookup transformation to determine whether records already exist in the target.

samba (August 21, 2006):
A lookup is nothing but a lookup on a table, view, synonym, or flat file. By using a lookup we can get a related value with a join condition and perform calculations. Two types of lookups exist:
1) connected
2) unconnected
A connected lookup is within the pipeline only, but an unconnected lookup is not connected to the pipeline; an unconnected lookup returns a single column value only. Let me know if you want any additional information.
Cheers, samba
=======================================
Hey, with regard to lookups, is there a dynamic lookup and a static lookup? If so, how do you set it? And is there a combination of dynamic connected lookups and static unconnected lookups?
=======================================
Lookups have two types, connected and unconnected. Usually we use a lookup so as to get a related value from a table. It has input ports, output ports, lookup ports, and a return port, where the lookup port looks up the corresponding column for the value and the return port returns the value. We usually use it when there are no columns in common.
=======================================
For maintaining the slowly changing dimensions.
=======================================
Hi, the answer to your question is yes. There are 2 types of lookups: dynamic and normal (which you have termed static). To configure this, just double-click on the Lookup transformation and go to the Properties tab. There will be an option "dynamic lookup cache"; select that. If you don't select this option then the lookup is merely a normal lookup. Please let me know if there are any questions.
Thanks.
=======================================
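The dynamic-cache behaviour described above (look the row up and, if it is missing, insert it into the cache so it is treated as a new row) is what makes lookups handy for slowly changing dimensions. A rough Python sketch with invented keys and values:

```python
# Conceptual sketch of a dynamic lookup cache: if the incoming key is not in
# the cache, it is inserted and the row is flagged as "new" (e.g. for an SCD
# insert); otherwise the cached value is returned. Keys/values are invented.
cache = {"CUST-1": "Alice"}              # pre-loaded from the lookup table

def dynamic_lookup(cust_id, cust_name):
    if cust_id in cache:
        return cache[cust_id], False     # found: existing dimension row
    cache[cust_id] = cust_name           # not found: insert into the cache
    return cust_name, True               # flag as a new row for the target

for cust_id, name in [("CUST-1", "Alice"), ("CUST-2", "Bob"), ("CUST-2", "Bob")]:
    value, is_new = dynamic_lookup(cust_id, name)
    print(cust_id, value, "new" if is_new else "existing")
```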

29. Informatica - What are the types of lookup?

QUESTION #29 Connected and unconnected.

swati (November 08, 2005):
i) connected
ii) unconnected
iii) cached
iv) uncached
=======================================
1. Connected lookup
2. Unconnected lookup
Cache types:
1. Persistent cache
2. Re-cache from database
3. Static cache
4. Dynamic cache
5. Shared cache
Cheers, Sithu
=======================================
Hello boss/madam, there are only two types of lookup:
1) Connected lookup
2) Unconnected lookup
I don't understand why people are specifying the cache types; I want to know whether nowadays caches are also taken into this category of lookup. If yes, do specify it on the answer list.
Thank you.
=======================================

30. Informatica - Differences between connected and unconnected lookup?

QUESTION #30
Connected lookup: receives input values directly from the pipeline.
Unconnected lookup: receives input values from the result of an :LKP expression in another transformation.
Connected lookup: you can use a dynamic or static cache.
Unconnected lookup: you can use a static cache.
Connected lookup: the cache includes all lookup columns used in the mapping.
Unconnected lookup: the cache includes all lookup/output ports in the lookup condition and the lookup/return port.
Connected lookup: supports user-defined default values.
Unconnected lookup: does not support user-defined default values.

Prasanna (February 03, 2006):
In addition: a connected lookup can return/pass multiple rows/groups of data, whereas an unconnected lookup can return only one port.
=======================================
In addition to this: in a connected lookup, if the condition is not satisfied it returns '0'. In an unconnected lookup, if the condition is not satisfied it returns 'NULL'.
=======================================
Hi, the differences between connected and unconnected lookups are:
Connected lookup: receives input values directly from the pipeline.
Unconnected lookup: receives input values from the result of an :LKP expression in another transformation.
Connected lookup: you can use a dynamic or static cache.
Unconnected lookup: you can use a static cache.
Connected lookup: the cache includes all lookup columns used in the mapping (that is, lookup source columns included in the lookup condition and lookup source columns linked as output ports to other transformations).
Unconnected lookup: the cache includes all lookup/output ports in the lookup condition and the lookup/return port.
Connected lookup: can return multiple columns from the same row, or insert into the dynamic lookup cache.
Unconnected lookup: you designate one return port (R); it returns one column from each row.
Connected lookup: if there is no match for the lookup condition, the PowerCenter Server returns the default value for all output ports. If you configure dynamic caching, the PowerCenter Server inserts rows into the cache or leaves it unchanged.
Unconnected lookup: if there is no match for the lookup condition, the PowerCenter Server returns NULL.
Connected lookup: if there is a match for the lookup condition, the PowerCenter Server returns the result of the lookup condition for all lookup/output ports. If you configure dynamic caching, the PowerCenter Server either updates the row in the cache or leaves the row unchanged.
Unconnected lookup: if there is a match for the lookup condition, the PowerCenter Server returns the result of the lookup condition into the return port.
Connected lookup: passes multiple output values to another transformation; you link lookup/output ports to another transformation.
Unconnected lookup: passes one output value to another transformation; the lookup/output/return port passes the value to the transformation calling the :LKP expression.
Connected lookup: supports user-defined default values.
Unconnected lookup: does not support user-defined default values.
Shivaji Thaneru
=======================================

31. Informatica - what is meant by lookup caches?
QUESTION #31 The informatica server builds a cache in memory when it processes the first row of data in a cached Lookup transformation. It allocates memory for the cache based on the amount you configure in the transformation or session properties. The informatica server stores condition values in the index cache and output values in the data cache.
September 28, 2006 06:34:33 #1 srinivas vadlakonda
RE: what is meant by lookup caches?
=======================================
A lookup cache is the temporary memory that is created by the informatica server to hold the lookup data and to evaluate the lookup conditions.
=======================================
A lookup cache is a temporary memory area created by the Informatica Server, which stores the lookup data based on certain conditions. The caches are of four types: 1) Persistent 2) Dynamic 3) Static and 4) Shared.

=======================================

32. Informatica - What r the types of lookup caches?
QUESTION #32
Persistent cache: You can save the lookup cache files and reuse them the next time the informatica server processes a lookup transformation configured to use the cache.
Recache from database: If the persistent cache is not synchronized with the lookup table, you can configure the lookup transformation to rebuild the lookup cache.
Static cache: You can configure a static or read-only cache for any lookup table. By default the informatica server creates a static cache. It caches the lookup table and lookup values in the cache for each row that comes into the transformation. When the lookup condition is true, the informatica server does not update the cache while it processes the lookup transformation.
Dynamic cache: If you want to cache the target table and insert new rows into the cache and the target, you can create a lookup transformation that uses a dynamic cache. The informatica server dynamically inserts data into the target table.
Shared cache: You can share the lookup cache between multiple transformations. You can share an unnamed cache between transformations in the same mapping.
December 13, 2005 06:02:36 #1 Sithu
RE: What r the types of lookup caches?
=======================================
Cache types:
1. Static cache
2. Dynamic cache
3. Persistent cache
Sithu
=======================================
Caches are of three types, namely dynamic cache, static cache and persistent cache.
Cheers
Sithu
=======================================
Dynamic cache
Persistent cache
Recache
Shared cache
=======================================
Hi, could anyone give me information on where you would use these caches for lookups and how you set them?
Thanks
infoseeker
=======================================
There are 4 types of lookup cache - Persistent, Recache, Static & Dynamic.
Bye
Stephen
=======================================
Types of caches are:
1) Dynamic cache
2) Static cache
3) Persistent cache
4) Shared cache
5) Unshared cache
=======================================
There are five types of caches, such as static cache, dynamic cache, persistent cache, shared cache etc.
=======================================

33. Informatica - Difference between static cache and dynamic cache
QUESTION #33
Static cache: You cannot insert or update the cache. The informatica server returns a value from the lookup table or cache when the condition is true. When the condition is not true, the informatica server returns the default value for connected transformations and NULL for unconnected transformations.
Dynamic cache: You can insert rows into the cache as you pass them to the target. The informatica server inserts rows into the cache when the condition is false. This indicates that the row is not in the cache or target table. You can pass these rows to the target table.
Submitted by: vp
Let's say, for example, your lookup table is your target table. When you create the lookup selecting the dynamic cache, what it does is look up values, and if there is no match it inserts the row into both the target and the lookup cache (hence the term dynamic cache: it builds up as you go along); if there is a match it updates the row in the target. Static caches, on the other hand, do not get updated when you do a lookup.
Above answer was rated as good by the following members: ssangi, ananthece
=======================================
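To make the dynamic-cache behaviour described above concrete: a Lookup transformation with a dynamic cache exposes a NewLookupRow port (0 = the cache was left unchanged, 1 = the row was inserted into the cache, 2 = the row was updated in the cache). A common pattern, sketched here with hypothetical group names rather than anything taken from the answers above, is to route on that port and flag rows downstream:

    Router group GRP_INSERTS:  NewLookupRow = 1   (flag with DD_INSERT in an Update Strategy)
    Router group GRP_UPDATES:  NewLookupRow = 2   (flag with DD_UPDATE in an Update Strategy)
    Rows with NewLookupRow = 0 can be dropped or sent to the default group.

With a static cache none of this applies, since the cache is read-only for the life of the session.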

34. Informatica - Which transformation should we use to normalize the COBOL and relational sources?
QUESTION #34 Normalizer Transformation.
When you drag a COBOL source into the Mapping Designer workspace, the Normalizer transformation automatically appears, creating input and output ports for every column in the source.
January 19, 2006 01:08:06 #1 sithusithu
RE: Which transformation should we use to normalize th...
=======================================
The Normalizer transformation normalizes records from COBOL and relational sources, allowing you to organize the data according to your own needs. A Normalizer transformation can appear anywhere in a data flow when you normalize a relational source. Use a Normalizer transformation instead of the Source Qualifier transformation when you normalize a COBOL source. When you drag a COBOL source into the Mapping Designer workspace, the Normalizer transformation automatically appears, creating input and output ports for every column in the source.
Cheers
Sithu
=======================================

35. Informatica - How the informatica server sorts the string values in Rank transformation?
QUESTION #35 When the informatica server runs in the ASCII data movement mode, it sorts session data using a binary sort order. If you configure the session to use a binary sort order, the informatica server calculates the binary value of each string and returns the specified number of rows with the highest binary values for the string.
December 09, 2005 00:25:27 #1 phani
RE: How the informatica server sorts the string values...
=======================================
When the Informatica Server runs in UNICODE data movement mode, it uses the sort order configured in the session properties.
=======================================

36. Informatica - What is the Rankindex in Rank transformation?
QUESTION #36 The Designer automatically creates a RANKINDEX port for each Rank transformation. The Informatica Server uses the Rank Index port to store the ranking position for each record in a group. For example, if you create a Rank transformation that ranks the top 5 salespersons for each quarter, the rank index numbers the salespeople from 1 to 5.
January 12, 2006 04:41:57 #1 sithusithu
RE: What is the Rankindex in Rank transformation?
=======================================
The port on which you want to generate the rank is known as the rank port, and the generated values are known as the rank index.
Cheers
Sithu
=======================================

37. Informatica - What is the Router transformation?
QUESTION #37 A Router transformation is similar to a Filter transformation because both transformations allow you to use a condition to test data. However, a Filter transformation tests data for one condition and drops the rows of data that do not meet the condition. A Router transformation tests data for one or more conditions and gives you the option to route rows of data that do not meet any of the conditions to a default output group.
If you need to test the same input data based on multiple conditions, use a Router transformation in a mapping instead of creating multiple Filter transformations to perform the same task.
January 19, 2006 04:46:42 #1 sithusithu
RE: What is the Router transformation?
=======================================
A Router transformation is similar to a Filter transformation because both transformations allow you to use a condition to test data. A Filter transformation tests data for one condition and drops the rows of data that do not meet the condition. However, a Router transformation tests data for one or more conditions and gives you the option to route rows of data that do not meet any of the conditions to a default output group.
Cheers
Sithu
=======================================
Note: I think the definition and purpose of the Router transformation given by sithusithu and sithu is not clear and not fully correct, as they have mentioned "A Router transformation tests data for one or more conditions". Sorry sithu and sithusithu, but I want to clarify that in a Filter transformation we can also combine several conditions together, e.g. empno = 1234 AND sal > 25000 (two conditions).
The actual purposes of the Router transformation are:
1. Similar to the Filter transformation, to filter the source data according to the condition applied.
2. To load data into different target tables from the same source but with a different condition for each target table, as per the target tables' requirements (see the sketch after this answer).
e.g. From the emp table we want to load data into three different target tables: T1 (where deptno = 10), T2 (where deptno = 20) and T3 (where deptno = 30). If we use Filter transformations we need three of them; so instead of using three Filter transformations we use only one Router transformation.
Advantages:
1. Better performance, because with a Router transformation the Informatica server processes the input data only once, instead of three times as with Filter transformations.
2. Less complexity, because we use only one Router transformation instead of multiple Filter transformations.
The Router transformation is active and connected.
=======================================
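A minimal sketch of the deptno example above, assuming user-defined group names GRP_DEPT10, GRP_DEPT20 and GRP_DEPT30 (the names are illustrative, not part of the original answer). Each group holds one filter condition written in the expression language, and each group's output ports are linked to its own target:

    GRP_DEPT10:  DEPTNO = 10   -->  target T1
    GRP_DEPT20:  DEPTNO = 20   -->  target T2
    GRP_DEPT30:  DEPTNO = 30   -->  target T3
    DEFAULT group: rows that match none of the conditions

Rows that satisfy none of the three conditions land in the default group, which can be linked to an error table or simply left unconnected.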

38. Informatica - What r the types of groups in Router transformation?
QUESTION #38 Input group and output group.
The Designer copies property information from the input ports of the input group to create a set of output ports for each output group.
Two types of output groups:
User-defined groups
Default group
You cannot modify or delete default groups.
December 09, 2005 00:35:44 #1 phani
RE: What r the types of groups in Router transformatio...
=======================================
The input group contains the data coming from the source. We can create as many user-defined groups as required, one for each condition we want to specify. The default group contains all the rows of data that don't satisfy the condition of any group.
=======================================
A Router transformation has the following types of groups:
- Input
- Output
Input group: The Designer copies property information from the input ports of the input group to create a set of output ports for each output group.
Output groups: There are two types of output groups:
- User-defined groups
- Default group
You cannot modify or delete output ports or their properties.
Cheers
Sithu
=======================================

39. Informatica - Why we use stored procedure transformation?
QUESTION #39 For populating and maintaining databases.
January 19, 2006 04:41:34 #1 sithusithu
RE: Why we use stored procedure transformation?
=======================================
A Stored Procedure transformation is an important tool for populating and maintaining databases. Database administrators create stored procedures to automate time-consuming tasks that are too complicated for standard SQL statements.
Cheers
Sithu
=======================================
You might use stored procedures to do the following tasks:
- Check the status of a target database before loading data into it.
- Determine if enough space exists in a database.
- Perform a specialized calculation.
- Drop and recreate indexes.
Shivaji Thaneru
=======================================
We use a Stored Procedure transformation to execute a stored procedure, which in turn might do the above things in a database and more.
=======================================
Can you give me a real-time scenario please?
=======================================
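As one possible real-time scenario for the question above, here is a minimal sketch (the procedure, index and table names are assumptions, not from the document): a database procedure that rebuilds an index after the load, called from a Stored Procedure transformation configured to run after the target is loaded.

    CREATE OR REPLACE PROCEDURE rebuild_sales_idx AS
    BEGIN
      EXECUTE IMMEDIATE 'ALTER INDEX idx_sales_cust REBUILD';
    END;

In the mapping, the Stored Procedure transformation would reference rebuild_sales_idx, with its stored procedure type set to run in the post-load of the target, so the index is rebuilt only after all rows are written.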

40. Informatica - What is source qualifier transformation?
QUESTION #40 When you add a relational or a flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The Source Qualifier transformation represents the records that the informatica server reads when it runs a session.
Submitted by: Rama Rao B.
The Source Qualifier also appears as a table; it acts as an intermediary between the source and target metadata, and it generates the SQL used when mapping between the source and target metadata.
Thanks,
Rama Rao
Above answer was rated as good by the following members: him.life
=======================================
When you add a relational or a flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The Source Qualifier represents the rows that the Informatica Server reads when it executes a session. You can use it to:
- Join data originating from the same source database. You can join two or more tables with primary-foreign key relationships by linking the sources to one Source Qualifier.
- Filter records when the Informatica Server reads source data. If you include a filter condition, the Informatica Server adds a WHERE clause to the default query.
- Specify an outer join rather than the default inner join. If you include a user-defined join, the Informatica Server replaces the join information specified by the metadata in the SQL query.
- Specify sorted ports. If you specify a number for sorted ports, the Informatica Server adds an ORDER BY clause to the default SQL query.
- Select only distinct values from the source. If you choose Select Distinct, the Informatica Server adds a SELECT DISTINCT statement to the default SQL query.
- Create a custom query to issue a special SELECT statement for the Informatica Server to read source data. For example, you might use a custom query to perform aggregate calculations or execute a stored procedure.
Cheers
Sithu
=======================================
Def: the transformation which converts the source (relational or flat file) datatypes to Informatica datatypes, so it works as an intermediary between the source and the informatica server.
Tasks performed by the Source Qualifier transformation:
1. Join data originating from the same source database.
2. Filter records when the Informatica Server reads source data.
3. Specify an outer join rather than the default inner join.
4. Specify sorted ports.
5. Select only distinct values from the source.
6. Create a custom query to issue a special SELECT statement for the Informatica Server to read source data.
=======================================
The Source Qualifier transformation is the beginning of the pipeline; its main purpose is to read the data from the relational or flat file source and pass it into the mapping so that it can flow on to the other transformations.
=======================================
Source Qualifier is the transformation that comes with every source definition when the source is a relational database. The Source Qualifier fires a SELECT statement on the source database. With every source definition you get a Source Qualifier; without it your mapping is invalid and you cannot define the pipeline to the other instances. If the source is COBOL, then for that source definition you get a Normalizer transformation, not a Source Qualifier.
=======================================

41. Informatica - What r the tasks that source qualifier performs?
QUESTION #41
Join data originating from the same source database.
Filter records when the informatica server reads source data.
Specify an outer join rather than the default inner join.
Specify sorted records.
Select only distinct values from the source.
Create a custom query to issue a special SELECT statement for the informatica server to read source data.
January 24, 2006 03:42:08 #1 sithusithu
RE: What r the tasks that source qualifier performs?
=======================================
- Join data originating from the same source database. You can join two or more tables with primary-foreign key relationships by linking the sources to one Source Qualifier.
- Filter records when the Informatica Server reads source data. If you include a filter condition, the Informatica Server adds a WHERE clause to the default query.
- Specify an outer join rather than the default inner join. If you include a user-defined join, the Informatica Server replaces the join information specified by the metadata in the SQL query.
- Specify sorted ports. If you specify a number for sorted ports, the Informatica Server adds an ORDER BY clause to the default SQL query.
- Select only distinct values from the source. If you choose Select Distinct, the Informatica Server adds a SELECT DISTINCT statement to the default SQL query.
- Create a custom query to issue a special SELECT statement for the Informatica Server to read source data. For example, you might use a custom query to perform aggregate calculations or execute a stored procedure. (An illustration of the generated SQL follows this answer.)
Cheers
Sithu
=======================================
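To illustrate how these options shape the query, here is a rough sketch of the SQL a Source Qualifier might generate for a hypothetical EMPLOYEES source (the table and column names are assumptions, and the exact SQL the server produces depends on the source definition):

    Default query:
        SELECT EMPLOYEE_ID, EMPLOYEE_NAME, DEPT_ID, SALARY
        FROM EMPLOYEES

    With a source filter (SALARY > 25000), Select Distinct and two sorted ports:
        SELECT DISTINCT EMPLOYEE_ID, EMPLOYEE_NAME, DEPT_ID, SALARY
        FROM EMPLOYEES
        WHERE SALARY > 25000
        ORDER BY EMPLOYEE_ID, EMPLOYEE_NAME

The filter becomes a WHERE clause, the sorted-ports count becomes an ORDER BY over the first connected ports, and Select Distinct adds the DISTINCT keyword, exactly as the list above describes.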

42. Informatica - What is the target load order?
QUESTION #42 You specify the target load order based on the source qualifiers in a mapping. If you have multiple source qualifiers connected to multiple targets, you can designate the order in which the informatica server loads data into the targets.
March 01, 2006 14:27:34 #1 saritha
RE: What is the target load order?
=======================================
A target load order group is the collection of source qualifiers, transformations and targets linked together in a mapping.
=======================================

43. Informatica - What is the default join that source qualifier provides?
QUESTION #43 Inner equi-join.
January 24, 2006 03:40:28 #1 sithusithu
RE: What is the default join that source qualifier pro...
=======================================
The Joiner transformation supports the following join types, which you set in the Properties tab:
- Normal (default)
- Master Outer
- Detail Outer
- Full Outer
Cheers
Sithu
=======================================
An equi-join on a key common to the sources drawn by the Source Qualifier.
=======================================
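For the default Source Qualifier join named above, a minimal sketch of the generated SQL for two hypothetical tables EMP and DEPT joined in one Source Qualifier (names are illustrative) would be an inner equi-join on the shared key:

    SELECT EMP.EMPNO, EMP.ENAME, DEPT.DNAME
    FROM EMP, DEPT
    WHERE EMP.DEPTNO = DEPT.DEPTNO

A user-defined join or an outer-join option on the Source Qualifier replaces or extends this default WHERE-clause join.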

44. Informatica - What r the basic needs to join two sources in a source qualifier?
QUESTION #44 The two sources should have a primary and foreign key relationship. The two sources should have matching data types.
December 14, 2005 10:32:44 #1 rishi
RE: What r the basic needs to join two sources in a so...
=======================================
Both tables should have a common field with the same datatype. It is not necessary that they follow a primary-foreign key relationship, but if such a relationship exists it helps from a performance point of view.
=======================================
Also, if you are using a lookup in your mapping and the lookup table is small, try to join that lookup in the Source Qualifier to improve performance.
Regards
SK
=======================================
Both the sources must be from the same database.
=======================================

45. Informatica - what is update strategy transformation?
QUESTION #45 This transformation is used to maintain history data, or just the most recent changes, in the target table.
January 19, 2006 04:33:23 #1 sithusithu
RE: what is update strategy transformation?
=======================================
The model you choose constitutes your update strategy, i.e. how to handle changes to existing rows. In PowerCenter and PowerMart you set your update strategy at two different levels:
- Within a session. When you configure a session, you can instruct the Informatica Server to either treat all rows in the same way (for example, treat all rows as inserts) or use instructions coded into the session mapping to flag rows for different database operations.
- Within a mapping. Within a mapping, you use the Update Strategy transformation to flag rows for insert, delete, update or reject.
Cheers
Sithu
=======================================
The Update Strategy transformation is used for flagging records for insert, update, delete and reject.
In Informatica PowerCenter you can develop the update strategy at two levels:
- use an Update Strategy transformation in the mapping design
- use the target table options in the session
The target table options are:
Insert
Update
Delete
Update as Insert
Update else Insert
Thanks
Rekha
=======================================
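A minimal sketch of the mapping-level flagging described above, written in the expression language on the Update Strategy transformation (the CUST_EXISTS port and the lookup feeding it are assumptions for illustration): rows found in the target are flagged for update, new rows for insert.

    IIF(ISNULL(CUST_EXISTS), DD_INSERT, DD_UPDATE)

DD_INSERT, DD_UPDATE, DD_DELETE and DD_REJECT are the built-in constants (0, 1, 2 and 3); the session must be set to Data driven for these flags to take effect, as covered in the next two questions.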

46. Informatica - What is the default source option for update strategy transformation?
QUESTION #46 Data driven.
March 28, 2006 05:03:53 #1 Gyaneshwar
RE: What is the default source option for update strat...
=======================================
DATA DRIVEN
=======================================

47. Informatica - What is Datadriven?
QUESTION #47 The informatica server follows the instructions coded into Update Strategy transformations within the session mapping to determine how to flag records for insert, update, delete or reject. If you do not choose the Data driven option setting, the informatica server ignores all Update Strategy transformations in the mapping.
January 19, 2006 04:36:22 #1 sithusithu
RE: What is Datadriven?
=======================================
The Informatica Server follows instructions coded into Update Strategy transformations within the session mapping to determine how to flag rows for insert, delete, update or reject. If the mapping for the session contains an Update Strategy transformation, this field is marked Data Driven by default.
Cheers
Sithu
=======================================
When the Data driven option is selected in the session properties, the session considers the update strategy flags (DD_UPDATE, DD_INSERT, DD_DELETE, DD_REJECT) used in the mapping and not the options selected in the session properties.
=======================================

48. Informatica - What r the options in the target session of update strategy transformation?
QUESTION #48
Insert
Delete
Update
Update as update
Update as insert
Update else insert
Truncate table
February 03, 2006 03:46:07 #1 Prasanna
RE: What r the options in the target session of update...
=======================================
Update as Insert: this option specifies that all the update records from the source are to be flagged as inserts in the target. In other words, instead of updating the records in the target, they are inserted as new records.
Update else Insert: this option enables informatica to flag the records either for update, if they already exist, or for insert, if they are new records from the source.
=======================================

49. Informatica - What r the types of maping wizards that r to be provided in Informatica?
QUESTION #49 The Designer provides two mapping wizards to help you create mappings quickly and easily. Both wizards are designed to create mappings for loading and maintaining star schemas, a series of dimensions related to a central fact table.
Getting Started Wizard: creates mappings to load static fact and dimension tables, as well as slowly growing dimension tables.
Slowly Changing Dimensions Wizard: creates mappings to load slowly changing dimension tables based on the amount of historical dimension data you want to keep and the method you choose to handle historical dimension data.
January 09, 2006 02:43:25 #1 sithusithu
RE: What r the types of maping wizards that r to be pr...
=======================================
Simple Pass Through
Slowly Growing Target
Slowly Changing Dimension:
Type 1 - most recent values
Type 2 - full history (version, flag or date)
Type 3 - current and one previous value
=======================================
In the Informatica Designer: Mapping -> Wizards -->
1) Getting Started --> Simple pass-through mapping --> Slowly growing target
2) Slowly Changing Dimensions ---> SCD 1 (only recent values) ---> SCD 2 (history, using flag or version or time) ---> SCD 3 (just recent values)
One important point is that dimensions are of two types: 1) slowly growing targets and 2) slowly changing dimensions.
=======================================

50. Informatica - What r the types of maping in Getting Started Wizard?
QUESTION #50
Simple Pass Through mapping: loads a static fact or dimension table by inserting all rows. Use this mapping when you want to drop all existing data from your table before loading new data.
Slowly Growing Target: loads a slowly growing fact or dimension table by inserting new rows. Use this mapping to load new data when existing data does not require updates.
January 09, 2006 02:46:25 #1 sithusithu
RE: What r the types of maping in Getting Started Wiza...
=======================================
1. Simple Pass Through
2. Slowly Growing Target
Cheers
Sithu
=======================================

51. Informatica - What r the mapings that we use for slowly changing dimension table?
QUESTION #51
Type 1: Rows containing changes to existing dimensions are updated in the target by overwriting the existing dimension. In the Type 1 Dimension mapping, all rows contain current dimension data. Use the Type 1 Dimension mapping to update a slowly changing dimension table when you do not need to keep any previous versions of dimensions in the table.
Type 2: The Type 2 Dimension Data mapping inserts both new and changed dimensions into the target. Changes are tracked in the target table by versioning the primary key and creating a version number for each dimension in the table. Use the Type 2 Dimension/Version Data mapping to update a slowly changing dimension table when you want to keep a full history of dimension data in the table. Version numbers and versioned primary keys track the order of changes to each dimension.
Type 3: The Type 3 Dimension mapping filters source rows based on user-defined comparisons and inserts only those found to be new dimensions into the target. Rows containing changes to existing dimensions are updated in the target. When updating an existing dimension, the Informatica Server saves existing data in different columns of the same row and replaces the existing data with the updates.
June 03, 2006 09:39:20 #1 mamatha
RE: What r the mapings that we use for slowly changing...
=======================================
Hello sir, I want complete information on slowly changing dimensions, and also a project on slowly changing dimensions in informatica. Thanking you, sir.
mamatha
=======================================
1. Update Strategy transformation
2. Lookup transformation
=======================================
An SCD mapping in general uses the following links:
Source to SQ - 1
SQ to LKP - 2
SQ/LKP to EXP - 3
EXP to FTR - 4
FTR to UPD - 5
UPD to TGT - 6
Sequence Generator to TGT - 7
I think these are the 7 links used for SCD in general.
For Type 1: the flow is doubled, one branch for insert and the other for update.
For Type 2: the flow is tripled, one branch for insert, a second for update and a third to keep the old row (this is where the history is stored).
For Type 3: the flow is doubled, one branch to insert the new row and another to keep the previous value in an additional column.
Cheers
Prasath
=======================================

52. Informatica - What r the different types of Type2 dimension maping?
QUESTION #52
Type2 Dimension/Version Data mapping: In this mapping the updated dimension in the source gets inserted into the target along with a new version number, and a newly added dimension in the source is inserted into the target with a new primary key.
Type2 Dimension/Flag Current mapping: This mapping is also used for slowly changing dimensions. In addition, it creates a flag value for a changed or new dimension. The flag indicates whether the dimension is new or newly updated. Recent dimensions are saved with a current flag value of 1, and superseded dimensions are saved with the value 0.
Type2 Dimension/Effective Date Range mapping: This is another flavour of Type 2 mapping used for slowly changing dimensions. This mapping also inserts both new and changed dimensions into the target, and changes are tracked by the effective date range of each version of each dimension.
January 04, 2006 05:31:39 #1 sithusithu
RE: What r the different types of Type2 dimension mapi...
=======================================
Type 2:
1. Version number
2. Flag
3. Date
Cheers
Sithu
=======================================
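A small worked example of the three Type 2 flavours above, using a hypothetical customer row whose city changes from Pune to Delhi (keys and column names are illustrative only):

    Version data:         (CUST_KEY 101, CITY Pune, VERSION 0) and (CUST_KEY 101-1, CITY Delhi, VERSION 1)
    Flag current:         (CUST_KEY 101, CITY Pune, CURRENT_FLAG 0) and (CUST_KEY 102, CITY Delhi, CURRENT_FLAG 1)
    Effective date range: (CUST_KEY 101, CITY Pune, BEGIN 2005-01-01, END 2006-03-15) and (CUST_KEY 102, CITY Delhi, BEGIN 2006-03-15, END NULL)

In every flavour both rows remain in the dimension table; only the mechanism that marks which row is current differs.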

53. Informatica - How can u recognise whether or not the newly added rows in the source get inserted in the target?
QUESTION #53 In the Type 2 mapping we have three options to recognise the newly added rows:
Version number
Flag value
Effective date range
December 14, 2005 10:43:31 #1 rishi
RE: How can u recognise whether or not the newly added...
=======================================
If it is a Type 2 dimension the above answer is fine, but if you want the details of all the insert statements and updates you need to use the session log file, configured to verbose tracing. You will get the complete set of data showing which records were inserted and which were not.
=======================================
Just use the Debugger to see how the data moves from source to target; it will show how many new rows get inserted or updated.
=======================================

54. Informatica - What r two types of processes that informatica runs the session?
QUESTION #54
Load Manager process: starts the session, creates the DTM process, and sends post-session email when the session completes.
The DTM process: creates threads to initialize the session, read, write, and transform data, and handle pre- and post-session operations.
September 17, 2007 08:17:02 #1 rasmi
RE: What r two types of processes that informatica run...
=======================================
When the workflow starts to run, the informatica server starts two processes: the Load Manager process and the DTM process.
The Load Manager process has the following tasks:
1. Lock the workflow and read the properties of the workflow.
2. Create the workflow log file.
3. Start all tasks in the workflow except sessions and worklets.
4. Start the DTM process.
5. Send the post-session email if the DTM terminates abnormally.
The DTM process is involved in the following tasks:
1. Read the session properties.
2. Create the session log file.
3. Create threads such as the master thread and the reader, writer and transformation threads.
4. Send post-session email.
5. Run the pre- and post-session shell commands.
6. Run the pre- and post-session stored procedures.
=======================================

55. Informatica - Can u generate reports in Informatica?
QUESTION #55 Yes. By using the Metadata Reporter we can generate reports in informatica.
January 19, 2006 05:05:46 #1 sithusithu
RE: Can u generate reports in Informatica?
=======================================
It is an ETL tool; you cannot build reports from it, but you can generate a metadata report, which is not meant to be used for business analysis.
Cheers
Sithu
=======================================
Can you please tell me how to generate metadata reports?
=======================================

56. Informatica - Define maping and sessions?
QUESTION #56
Mapping: it is a set of source and target definitions linked by transformation objects that define the rules for transformation.
Session: it is a set of instructions that describe how and when to move data from sources to targets.
December 04, 2006 15:07:09 #1 Pavani
RE: Define maping and sessions?
=======================================
Mapping: a set of source and target definitions linked by different transformations that define the rules for data transformation.
Session: a session identifies the mapping created within the mapping designer; the informatica server identifies the mapping to run with the help of the session.
- Pavani
=======================================

57. Informatica - Which tool U use to create and manage sessions and batches and to monitor and stop the informatica server?
QUESTION #57 Informatica Server Manager.
May 16, 2006 12:55:46 #1 Leninformatica
RE: Which tool U use to create and manage sessions and...
=======================================
Informatica Workflow Manager and Informatica Workflow Monitor.
=======================================

58. Informatica - Why we use partitioning the session in informatica?
QUESTION #58 Partitioning improves session performance by reducing the time taken to read the source and load the data into the target.
September 30, 2005 00:26:04 #1 khadarbasha
RE: Why we use partitioning the session in informatica...
=======================================
Performance can be improved by processing data in parallel in a single session by creating multiple partitions of the pipeline. The informatica server can achieve high performance by partitioning the pipeline and performing the extract, transformation and load for each partition in parallel.
=======================================

59. Informatica - How the informatica server increases the session performance through partitioning the source?
QUESTION #59 For relational sources the informatica server creates multiple connections, one for each partition of a single source, and extracts a separate range of data for each connection. The informatica server reads multiple partitions of a single source concurrently. Similarly, for loading, the informatica server creates multiple connections to the target and loads partitions of data concurrently.
For XML and file sources, the informatica server reads multiple files concurrently. For loading the data, the informatica server creates a separate file for each partition of a source file. You can choose to merge the target files.
February 13, 2006 08:00:53 #1 durga
RE: How the informatica server increases the session p...
=======================================
Fine explanation.
=======================================

60. Informatica - What r the tasks that Loadmanger process will do?
QUESTION #60
Manages session and batch scheduling: when you start the informatica server, the Load Manager launches and queries the repository for a list of sessions configured to run on the informatica server. When you configure a session, the Load Manager maintains the list of sessions and session start times. When you start a session, the Load Manager fetches the session information from the repository to perform validations and verifications prior to starting the DTM process.
Locking and reading the session: when the informatica server starts a session, the Load Manager locks the session in the repository. Locking prevents you from starting the session again while it is running.
Reading the parameter file: if the session uses a parameter file, the Load Manager reads the parameter file and verifies that the session-level parameters are declared in the file.
Verifying permissions and privileges: when the session starts, the Load Manager checks whether or not the user has the privileges to run the session.
Creating log files: the Load Manager creates a log file containing the status of the session.
August 17, 2005 02:08:34 #1 AnjiReddy
RE: What r the tasks that Loadmanger process will do?
=======================================
How can you determine whether the informatica server is running or not, without using the event viewer, by using a shell command? I would appreciate the solution for this one. Feel free to mail me at [email protected]
=======================================

61. Informatica - What r the different threads in DTM process?
QUESTION #61
Master thread: creates and manages all other threads.
Mapping thread: one mapping thread is created for each session; it fetches session and mapping information.
Pre- and post-session threads: created to perform pre- and post-session operations.
Reader thread: one thread is created for each partition of a source; it reads data from the source.
Writer thread: created to load data to the target.
Transformation thread: created to transform data.
October 12, 2006 00:56:46 #1 Killer
RE: What r the different threads in DTM process?
=======================================
Yup, this makes sense! :)
=======================================

62. Informatica - Can u copy the session to a different folder or repository?
QUESTION #62 Yes. By using the Copy Session Wizard you can copy a session into a different folder or repository, but that target folder or repository should contain the mapping of that session. If the target folder or repository does not have the mapping of the session being copied, you have to copy that mapping first before you copy the session.
February 03, 2006 03:56:14 #1 Prasanna
RE: Can u copy the session to a different folder or re...
=======================================
In addition, you can copy the workflow from the Repository Manager. This will automatically copy the mapping, associated sources, targets and session to the target folder.
=======================================

63. Informatica - What is batch and describe about types of batches?
QUESTION #63 A grouping of sessions is known as a batch. Batches are of two types:
Sequential: runs sessions one after the other.
Concurrent: runs sessions at the same time.
If you have sessions with source-target dependencies, you have to use a sequential batch to start the sessions one after another. If you have several independent sessions, you can use concurrent batches, which run all the sessions at the same time.
February 15, 2006 13:13:03 #1 sangroover
RE: What is batch and describe about types of batches?...
=======================================
A batch is a group of anything; different batches are different groups of different things.
=======================================

64. Informatica - Can u copy the batches?
QUESTION #64 No.
December 16, 2007 14:26:30 #1 dl_mstr
RE: Can u copy the batches?
=======================================
Yes, I think workflows can be copied from one folder/repository to another.
=======================================
It should definitely be yes. Without that:
1. The migration of workflows from dev to test and to production would not make any sense.
2. For similar logic we would have to repeat the same cumbersome job.
There might be some limitations while copying the batches; for example, we might not be able to copy the overridden properties of the workflow.
=======================================
There is a slight correction to the above answer: we might not be able to copy the overridden (written as "overwritten" in the above answer) properties.
=======================================

65. Informatica - When the informatica server marks that a batch is failed?
QUESTION #65 If one of the sessions is configured to "run if previous completes" and that previous session fails.
April 01, 2008 06:05:46 #1 Vani_AT
RE: When the informatica server marks that a batch is failed?
=======================================
A batch fails when the sessions in the workflow are checked with the property "fail parent if this task fails" and any of the sessions in the sequential batch fails.
=======================================

66. Informatica - What r the different options used to configure the sequential batches?
QUESTION #66 Two options:
Run the session only if the previous session completes successfully.
Always run the session.
August 16, 2007 13:17:03 #1 rasmi
RE: What r the different options used to configure the...
=======================================
Hi, where do we have to specify these options?
=======================================
You have to specify those options in the Workflow Designer. You can double-click the link that connects two sessions and there define a condition that the previous task's status equals SUCCEEDED; only then does the next session run. You can also edit the session and check "fail parent if this task fails", which means it will mark the workflow as failed. If the workflow fails, it won't run the remaining sessions.
=======================================
I would like to make a small correction to the above answer. Even if a session fails with the above property set, all the following sessions of the workflow still run and succeed, depending on the validity/correctness of the individual sessions. The only difference this property makes is that it marks the workflow as failed.
=======================================

67. Informatica - In a sequential batch can u run the session if previous session fails?
QUESTION #67 Yes, by setting the option "always runs the session".
June 26, 2008 01:03:25 #1 prade
RE: In a sequential batch can u run the session if previous session fails?
=======================================
Yes, you can.
Start ---L1-------> S1 --------L2-------> S2
Suppose S1 fails and we still want to run S2; then set the L2 link condition to:
$S1.Status = FAILED OR $S1.Status = SUCCEEDED
=======================================

68. Informatica - Can u start a batches with in a batch?
QUESTION #68 You cannot. If you want to start a batch that resides in a batch, create a new independent batch and copy the necessary sessions into the new batch.
February 15, 2006 13:13:47 #1 sangroover
RE: Can u start a batches with in a batch?
=======================================
Logically, yes.
=======================================
Logically, yes; we can create worklets and call them from the batch.
=======================================
Logically, yes. I have not worked with worklets, but just as we can start a single session within a workflow, we should similarly be able to start a worklet within a workflow.
=======================================

69. Informatica - Can u start a session inside a batch individually?
QUESTION #69 We can start the required session individually only in the case of a sequential batch; in the case of a concurrent batch we cannot do this.
April 01, 2008 06:28:12 #1 Vani_AT
RE: Can u start a session inside a batch individually?
=======================================
Yes, we can do this in any case; sequential or concurrent doesn't matter. There is no absolutely concurrent workflow: every workflow starts with a "Start" task, and hence the workflow is a hybrid.
=======================================
Yes, we can.
=======================================

70. Informatica - How can u stop a batch?
QUESTION #70 By using the Server Manager or pmcmd.
May 28, 2007 04:38:06 #1 VEMBURAJ.P
RE: How can u stop a batch?
=======================================
By using a menu command or pmcmd.
=======================================
In the Workflow Monitor:
1. Click on the workflow name.
2. Click Stop.
=======================================
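As a rough sketch of the pmcmd route mentioned above (the connection string, folder and workflow names mirror the style of the startworkflow example given later in this document and are assumptions, not values from the answers):

    pmcmd stopworkflow -uv USERNAME -pv PASSWORD -s SALES:6258 -f east -w wSalesAvg

stopworkflow asks the server to stop the named workflow; check the pmcmd help for the exact options and any abort variants available in your PowerCenter version.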

71.Informatica - What r the session parameters?QUESTION #71

Page 77: Informatica Interview QnA

Session parameters r like maping parameters,represent values U might want tochange betweensessions such as database connections or source files.Server manager also allows U to create userdefined session parameters.file:///C|/Perl/bin/result.html (77 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

Following r user definedsession parameters.Database connectionsSource file names: use this parameter when u want to change the name orlocation ofsession source file between session runsTarget file name : Use this parameter when u want to change the name orlocation ofsession target file between session runs.Reject file name : Use this parameter when u want to change the name orlocation ofsession reject files between session runs.Click Here to view complete documentNo best answer available. Please pick the good answer available or submit your answer.April 01, 2008 06:40:17 #1Vani_AT Member Since: December 2007 Contribution: 16RE: What r the session parameters?=======================================In addition to this we provide the lookup file name.The values for these variables are provided in the parameter file and the parameters start with a $$(double dollar symbol). There is a predefined format for specifying session parameter. For eg. If we


want to use a parameter for a source file, then it must be prefixed with $$SrcFile_<any optional string>.
=======================================A small correction: a session parameter starts with a single dollar symbol; the double dollar symbol is used for mapping parameters.
=======================================

72.Informatica - What is parameter file?
QUESTION #72 A parameter file defines the values for parameters and variables used in a session. A parameter file is a file created with a text editor such as WordPad or Notepad. You can define the following values in a parameter file:
Mapping parameters
Mapping variables

Session parameters
January 19, 2006 01:27:04 #1 sithusithu Member Since: December 2005 Contribution: 161
RE: What is parameter file?
=======================================When you start a workflow you can optionally enter the directory and name of a parameter file. The Informatica Server runs the workflow using the parameters in the file you specify.
For UNIX shell users, enclose the parameter file name in single quotes:
-paramfile '$PMRootDir/myfile.txt'
For Windows command prompt users, the parameter file name cannot have leading or trailing spaces. If the name includes spaces, enclose the file name in double quotes:
-paramfile "$PMRootDir\my file.txt"
Note: When you write a pmcmd command that includes a parameter file located on another machine, use the backslash (\) with the dollar sign ($). This ensures that the machine where the variable is defined


expands the server variable.
pmcmd startworkflow -uv USERNAME -pv PASSWORD -s SALES:6258 -f east -w wSalesAvg -paramfile '\$PMRootDir/myfile.txt'
Cheers
Sithu
=======================================
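
A minimal sketch of what such a parameter file might contain; the folder, workflow, session, connection and parameter names here are hypothetical, not taken from any real project:
[MyFolder.WF:wSalesAvg.ST:s_m_load_sales]
$DBConnection_Source=ORA_SRC
$DBConnection_Target=ORA_DWH
$InputFile_sales=/data/in/sales_20090401.dat
$$LoadDate=2009-04-01
The section header names the folder, workflow and session the values apply to; the single-dollar entries are session parameters and the double-dollar entry is a mapping parameter, matching the convention discussed above.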

73.Informatica - What is difference between partitioning of relational target and partitioning of file targets?

QUESTION #73 If you partition a session with a relational target, the Informatica Server creates multiple connections to the target database to write target data concurrently. If you partition a session with a file target, the Informatica Server creates one target file for each partition. You can configure session properties to merge these target files.
June 13, 2006 19:10:59 #1 UmaBojja
RE: What is difference between partioning of relatonal...
=======================================Partitioning can be done on both relational and flat file targets. Informatica supports the following partition types:
1. Database partitioning
2. Round-robin
3. Pass-through
4. Hash-key partitioning
5. Key-range partitioning


All of these are applicable for relational targets; for flat files only database partitioning is not applicable. Informatica supports N-way partitioning. You can just specify the name of the target file and create the partitions; the rest will be taken care of by the Informatica session.

=======================================Could you please tell me how can we partition the session and target?=======================================

74.Informatica - Performance tuning in Informatica?
QUESTION #74 The goal of performance tuning is to optimize session performance so that sessions run during the available load window for the Informatica Server. Increase session performance as follows.
The performance of the Informatica Server is related to network connections. Data generally moves across a network at less than 1 MB per second, whereas a local disk moves data five to twenty times faster. Network connections therefore often affect session performance, so avoid unnecessary network connections.
Flat files: if your flat files are stored on a machine other than the Informatica Server, move those files to the machine on which the Informatica Server runs.
Relational data sources: minimize the connections between sources, targets and the Informatica Server to improve session performance. Moving the target database onto the server machine may improve session performance.


Staging areas: if you use staging areas, you force the Informatica Server to perform multiple data passes. Removing staging areas may improve session performance.
You can run multiple Informatica Servers against the same repository. Distributing the session load across multiple Informatica Servers may improve session performance.
Running the Informatica Server in ASCII data movement mode improves session performance, because ASCII data movement mode stores a character value in one byte, whereas Unicode mode takes 2 bytes to store a character.
If a session joins multiple source tables in one Source Qualifier, optimizing the query may improve performance. Also, single-table SELECT statements with an ORDER BY or GROUP BY clause may benefit from optimization such as adding indexes.

We can improve session performance by configuring the network packet size, which controls how much data crosses the network at one time. To do this, go to Server Manager and choose Server Configure Database Connections.
If your target contains key constraints and indexes, they slow the loading of data. To improve session performance in this case, drop the constraints and indexes before


you run the session, and rebuild them after the session completes.
Running parallel sessions by using concurrent batches will also reduce the time taken to load the data, so concurrent batches may also increase session performance.
Partitioning the session improves session performance by creating multiple connections to sources and targets and loading data in parallel pipelines.
In some cases, if a session contains an Aggregator transformation, you can use incremental aggregation to improve session performance.
Avoid transformation errors to improve session performance.
If the session contains a Lookup transformation, you can improve session performance by enabling the lookup cache.
If your session contains a Filter transformation, create that Filter transformation near the sources, or use a filter condition in the Source Qualifier.
Aggregator, Rank and Joiner transformations often decrease session performance because they must group the data before processing it. To improve session performance in this case, use the sorted ports option.
January 05, 2007 10:20:40 #1 Infoseek Member Since: January 2007 Contribution: 4



RE: Performance tuning in Informatica?=======================================thanks for your above answer hey would like to know how we can partition a big datafile(flat) around1Gig..and what are the options to set for the same. in Power centre V7.X=======================================Hey Thank You so much for the information. That was one of the best answers I have read on thiswebsite. Descriptive yet to the point and highly useful in real world. I appreciate your effort.=======================================
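
As a concrete illustration of the "drop indexes before the load and rebuild them afterwards" advice above, the pre- and post-session SQL of a session could run statements along these lines (the table and index names are hypothetical, and the exact syntax depends on the target database):
-- pre-session SQL: remove the index so inserts are not slowed down
DROP INDEX idx_sales_fact_date;
-- post-session SQL: rebuild the index once the load has finished
CREATE INDEX idx_sales_fact_date ON sales_fact (date_key);
This is only a sketch; whether constraints also need to be disabled and re-enabled depends on the target schema.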

75.Informatica - What is difference between maplet and reusabletransformation?QUESTION #75 Maplet consists of set of transformations that is reusable.Areusable transformation is asingle transformation that can be reusable.If u create a variables or parameters in maplet that can not be used in anothermaping or maplet.Unlike the variables that r created in a reusabletransformation can be usefull in any other maping or maplet.We can not include source definitions in reusable transformations.But we canadd sources to a maplet.Whole transformation logic will be hided in case of maplet.But it is transparentin case of reusable transformation.We cant use COBOL source qualifier,joiner,normalizer transformations inmaplet.Where as we can make them as a reusable transformations.


Click Here to view complete documentNo best answer available. Please pick the good answer available or submit your answer.January 19, 2006 01:15:34 #1sithusithu Member Since: December 2005 Contribution: 161file:///C|/Perl/bin/result.html (83 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

RE: What is difference between maplet and reusable tra...=======================================Maplet: one or more transformationsReusable transformation: only one transformationCheersSithu=======================================Mapplet is a group of reusable transformation.The main purpose of using Mapplet is to hide the logicfrom end user point of view...It works like a function in C language.We can use it N number of times.Itsa reusable object.Reusable transformation is a single transformation.=======================================

76.Informatica - Define informatica repository?QUESTION #76 The Informatica repository is a relational database that storesinformation, or metadata, used by the Informatica Server and Client tools.Metadata can include information such as mappings describing how totransform source data, sessions indicating when you want the InformaticaServer to perform the transformations, and connect strings for sources andtargets.The repository also stores administrative information such as usernames and


passwords, permissions and privileges, and product version.Use repository manager to create the repository.The Repository Managerconnects to the repository database and runs the code needed to create therepository tables.Thsea tablesstores metadata in specific format the informatica server,client tools use.Click Here to view complete documentNo best answer available. Please pick the good answer available or submit your answer.file:///C|/Perl/bin/result.html (84 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

January 10, 2006 06:49:01 #1sithusithu Member Since: December 2005 Contribution: 161RE: Define informatica repository?=======================================Infromatica Repository:The informatica repository is at the center of the informatica suite. You createa set of metadata tables within the repository database that the informatica application and tools access.The informatica client and server access the repository to save and retrieve metadata.CheersSithu=======================================

77.Informatica - What r the types of metadata that stores in repository?
QUESTION #77 Following are the types of metadata stored in the repository:
Database connections
Global objects
Mappings
Mapplets


Multidimensional metadata
Reusable transformations
Sessions and batches
Shortcuts
Source definitions
Target definitions
Transformations

January 19, 2006 01:40:54 #1 sithusithu Member Since: December 2005 Contribution: 161
RE: What r the types of metadata that stores in reposi...
=======================================- Source definitions. Definitions of database objects (tables, views, synonyms) or files that provide source data.
- Target definitions. Definitions of database objects or files that contain the target data.
- Multi-dimensional metadata. Target definitions that are configured as cubes and dimensions.
- Mappings. A set of source and target definitions along with transformations containing business logic that you build into the transformation. These are the instructions that the Informatica Server uses to transform and move data.
- Reusable transformations. Transformations that you can use in multiple mappings.
- Mapplets. A set of transformations that you can use in multiple mappings.
- Sessions and workflows. Sessions and workflows store information about how and when the Informatica Server moves data. A workflow is a set of instructions that describes how and when to run tasks related to extracting, transforming and loading data. A session is a type of task that you can put in a workflow. Each session corresponds to a single mapping.
Cheers


Sithu=======================================

78.Informatica - What is power center repository?
QUESTION #78 The PowerCenter repository allows you to share metadata across repositories to create a data mart domain. In a data mart domain, you can create a single global repository to store metadata used across an enterprise, and a number of local repositories to share the global metadata as needed.
January 19, 2006 01:44:05 #1 sithusithu Member Since: December 2005 Contribution: 161

RE: What is power center repository?
=======================================- Standalone repository. A repository that functions individually, unrelated and unconnected to other repositories.
- Global repository. (PowerCenter only.) The centralized repository in a domain, a group of connected repositories. Each domain can contain one global repository. The global repository can contain common objects to be shared throughout the domain through global shortcuts.
- Local repository. (PowerCenter only.) A repository within a domain that is not the global repository. Each local repository in the domain can connect to the global repository and use objects in its shared folders.
Cheers
Sithu
=======================================


79.Informatica - How can u work with remote database ininformatica?did u work directly by using remote connections?QUESTION #79 To work with remote datasource u need to connect it withremote connections.But it is notpreferable to work with that remote source directly by using remote connections .Instead u bring that source into U r local machine where informatica serverresides.If u work directly with remote source the session performance willdecreases by passing less amount of data across the network in a particular time.Click Here to view complete documentNo best answer available. Please pick the good answer available or submit your answer.January 27, 2006 02:18:13 #1sithusithu Member Since: December 2005 Contribution: 161RE: How can u work with remote database in informatica...file:///C|/Perl/bin/result.html (87 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

=======================================You can work with a remote source, but you have to configure:
FTP connection details
IP address
User authentication
Cheers
Sithu
=======================================

80.Informatica - What is tracing level and what r the types oftracing level?


QUESTION #80 Tracing level represents the amount of information thatinformatcia server writes in a log file.Types of tracing levelNormalVerboseVerbose initVerbose dataClick Here to view complete documentNo best answer available. Please pick the good answer available or submit your answer.April 16, 2007 03:19:08 #1MinnuRE: What is tracing level and what r the types of trac...file:///C|/Perl/bin/result.html (88 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

=======================================Its1) Terse2) Normal3) Verbose Init4) Verbose data=======================================

81.Informatica - If a session fails after loading of 10,000 records into the target, how can u load the records from where it failed?
QUESTION #81 As explained above, the Informatica Server has 3 methods of recovering sessions. Use perform recovery to load the records from where the session failed.
April 08, 2007 11:17:03 #1 ggk.krishna Member Since: February 2007 Contribution: 12


RE: If a session fails after loading of 10,000 records...
=======================================Hi, we can restart the session using the session recovery option in the Workflow Manager and Workflow Monitor. The loading then starts from the 10,001st row. If you start the session normally, it starts from the 1st row. If you define the target load type as "Bulk", session recovery is not possible.
=======================================

82.Informatica - If i done any modifications for my table in backend does it reflect in informatca warehouse or mapiQUESTION #82 NO. Informatica is not at all concern with back end data base.Itdisplays u all the informationthat is to be stored in repository.If want to reflect back end changes toinformatica screens,file:///C|/Perl/bin/result.html (89 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

again u have to import from back end to informatica by valid connection.And uhave to replace the existing files with imported files.Click Here to view complete documentNo best answer available. Please pick the good answer available or submit your answer.August 04, 2006 05:35:19 #1vidyanandRE: If i done any modifications for my table in back e...=======================================Yes It will be reflected once u refresh the mapping once again.=======================================It does matter if you have SQL override - say in the SQ or in a Lookup you override the default sql.


Then, if you make a change to the underlying table in the database that makes the override SQL incorrect for the modified table, the session will fail. If you change a table - say, rename a column that is used in the SQL override statement - the session will fail. But if you added a column to the underlying table after the last column, then the SQL statement in the override will still be valid. If you change the size of columns the SQL will still be valid, although you may get truncation of data if the database column has a larger size (more characters) than the SQ or a subsequent transformation.
=======================================

83.Informatica - After draging the ports of three sources (SQL Server, Oracle, Informix) to a single source qualifier, can u map these ports directly to the target?
QUESTION #83 No. Unless and until you join those three ports in the source qualifier, you cannot map them directly.
December 14, 2005 10:37:10 #1 rishi

RE: After draging the ports of three sources(sql serve...
=======================================If you drag three heterogeneous sources and populate the target without any join, you are producing a Cartesian product. If you don't use a join, not only different sources but even homogeneous sources will show the same error. If you do not want to use joins at the source qualifier level, you can add joins separately.
=======================================Yes, it is possible...


=======================================I don't think dragging three heterogeneous sources into a single source qualifier is valid. Whenever we drag multiple sources into the same source qualifier:
1. There must be a joining key between the tables.
2. The SQL needs to be executed in the database to join the three tables.
To use a single source qualifier for multiple sources, the data source for all the sources should be the same. For a heterogeneous join, a Joiner transformation has to be used. So the first part of the question itself is not possible.
=======================================Sources from heterogeneous databases cannot be pulled into a single source qualifier. They can only be joined using a Joiner, and the result can then be written to the target.
=======================================

84.Informatica - What is Data cleansing..?
QUESTION #84 The process of finding and removing or correcting data that is incorrect, out-of-date, redundant, incomplete, or formatted incorrectly.
April 27, 2005 11:12:05 #1 neetha
RE: What is Data cleansing..?

=======================================Data cleansing is a two-step process involving DETECTION and then CORRECTION of errors in a data set.
=======================================This is nothing but polishing of data. For example, one sub-system may store the Gender as M and F while another may store it as MALE and FEMALE, so we need to polish and clean this data before it is added to the data warehouse. Another typical example is addresses: the way each sub-system maintains the


customer address can be different. We might need an address-cleansing tool to bring the customers' addresses into a clean and consistent form.
=======================================Data cleansing means removing the inconsistent data and transferring the data in the correct way and correct manner.
=======================================It means the process of removing data inconsistencies and reducing data inaccuracies.
=======================================
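
A minimal SQL sketch of the gender-standardization example above (the table and column names are made up for illustration):
SELECT customer_id,
       CASE
         WHEN UPPER(gender) IN ('M', 'MALE')   THEN 'M'
         WHEN UPPER(gender) IN ('F', 'FEMALE') THEN 'F'
         ELSE 'U'  -- unknown / not provided
       END AS gender_clean
FROM   stg_customer;
The same rule could equally be written as an expression in an Expression transformation; the point is simply that cleansing maps the many source encodings onto one agreed value.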

85.Informatica - how can we partition a session in Informatica?
QUESTION #85
July 08, 2005 18:12:42 #1 Kevin B
RE: how can we partition a session in Informatica?

The Informatica PowerCenter Partitioning option optimizes parallel processing on multiprocessor hardware by providing a thread-based architecture and built-in data partitioning. GUI-based tools reduce the development effort necessary to create data partitions and streamline ongoing troubleshooting and performance tuning tasks, while ensuring data integrity throughout the execution process. As the amount of data within an organization expands and real-time demand for information grows, the PowerCenter Partitioning option enables hardware and applications to provide outstanding performance and jointly scale to handle large volumes of data and users.
=======================================


86.Informatica - what is a time dimension? give an example.
QUESTION #86
August 04, 2005 07:48:36 #1 Sakthi
RE: what is a time dimension? give an example.
The time dimension is one of the important dimensions in a data warehouse. Whenever you generate a report, you access the data through the time dimension.
E.g. employee time dimension. Fields: date key, full date, day of week, day, month, quarter, fiscal year.
=======================================In a relational data model, for normalization purposes, year lookup, quarter lookup, month lookup and week lookup are not merged into a single table. In dimensional data modeling (star schema) these tables would be merged into a single table called the TIME DIMENSION, for performance and for slicing data. This dimension helps to find the sales done on a daily, weekly, monthly and yearly basis. We can do trend analysis by comparing this year's sales with the previous year's, or this week's sales with the previous week's.
=======================================A TIME DIMENSION is a table that contains the detail information of the time at which a particular

'transaction' or 'sale' (event) has taken place. The TIME DIMENSION has the details of DAY, WEEK, MONTH, QUARTER and YEAR.
=======================================
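
A minimal sketch of such a table in SQL (the table and column names are illustrative only):
CREATE TABLE dim_time (
    date_key     INTEGER PRIMARY KEY,  -- e.g. 20090401
    full_date    DATE,
    day_of_week  VARCHAR(10),
    day_of_month INTEGER,
    month_name   VARCHAR(10),
    quarter      INTEGER,
    fiscal_year  INTEGER
);
Fact tables then carry only date_key, and any report that slices by week, month, quarter or year joins to this one dimension.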

87.Informatica - Diff between informatica repositry server &


informatica serverQUESTION #87No best answer available. Please pick the good answer available or submit youranswer.August 11, 2005 02:05:13 #1Nagi R AnumandlaRE: Diff between informatica repositry server & informatica serverClick Here to view complete documentInformatica Repository Server:It's manages connections to the repository from client application.Informatica Server:It's extracts the source data performs the data transformation and loads thetransformed data into the target=======================================

88.Informatica - Explain the informatica Architecture in detail
QUESTION #88
January 13, 2006 07:47:31 #1 chiranth Member Since: December 2005 Contribution: 1
RE: Explain the informatica Architecture in detail...

The Informatica Server connects to source data and target data using native ODBC drivers. It also connects to the repository for running sessions and retrieving metadata information.
source ------> informatica server ------> target
                      |
                 REPOSITORY
=======================================Repository <----> Repository Server (admin control) <----> Informatica Server


source ------> informatica server ------> target
Client tools: Designer, Workflow Manager, Workflow Monitor
=======================================

89.Informatica - Discuss the advantages & Disadvantages of star & snowflake schema?
QUESTION #89

August 25, 2005 02:24:19 #1 prasad nallapati
RE: Discuss the advantages & Disadvantages of star & snowflake schema?
In a star schema every dimension will have a primary key. In a star schema a dimension table will not have any parent table, whereas in a snowflake schema a dimension table will have one or more parent tables. Hierarchies for the dimensions are stored in the dimension table itself in a star schema, whereas hierarchies are broken into separate tables in a snowflake schema. These hierarchies help to drill down the data from the topmost level to the lowermost level.
=======================================In a STAR schema there is no relation between any two dimension tables, whereas in a SNOWFLAKE schema there is a possible relation between the dimension tables.
=======================================A star schema consists of a single fact table surrounded by dimension tables. In a snowflake schema the dimension tables are connected to sub-dimension tables. In a star schema the dimension tables are denormalized; in a snowflake schema the dimension tables are normalized. A star schema is used for report generation; a snowflake schema is used for cubes.


The advantage of the snowflake schema is that the normalized tables are easier to maintain and it also saves storage space. The disadvantage of the snowflake schema is that it reduces the effectiveness of navigation across the tables because of the large number of joins between them.
=======================================It depends upon the client which schema they follow, whether snowflake or star.

=======================================Snowflakes are an addition to the Kimball dimensional system to enable that system to handle hierarchical data. When Kimball proposed the dimensional data warehouse it was not at first recognized that hierarchical data could not be stored. Commonly every attempt is made not to use snowflakes by flattening hierarchies, but when this is not possible or practical the snowflake design solves the problem. Snowflake tables are often called "outliers" by data modelers because they must share a key with a dimension that directly connects to a fact table. SD2 can have "outliers", but these are very difficult to instantiate.
=======================================
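
A small SQL sketch of the difference using hypothetical tables: in a star schema the product hierarchy is flattened into one dimension, while in a snowflake the category level is split out into its own table.
-- star: hierarchy flattened into the dimension
CREATE TABLE dim_product (
    product_key   INTEGER PRIMARY KEY,
    product_name  VARCHAR(50),
    category_name VARCHAR(50)      -- denormalized attribute
);
-- snowflake: category normalized into a separate parent table
CREATE TABLE dim_category (
    category_key  INTEGER PRIMARY KEY,
    category_name VARCHAR(50)
);
CREATE TABLE dim_product_sf (
    product_key   INTEGER PRIMARY KEY,
    product_name  VARCHAR(50),
    category_key  INTEGER REFERENCES dim_category (category_key)
);
Queries against the snowflake version need one extra join to reach category_name, which is exactly the navigation cost mentioned above.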

90.Informatica - Waht are main advantages and purposeofusingNormalizer Transformation in Informatica?QUESTION #90No best answer available. Please pick the good answer available or submit youranswer.August 25, 2005 02:27:10 #1prasad nallapatiRE: Waht are main advantages and purpose of using Normalizer Transformation in Informatica?Click Here to view complete document


Narmalizer Transformation is used mainly with COBOL sources where most of the time data is storedin de-normalized format. Also Normalizer transformation can be used to create multiple rows from asingle row of data=======================================hi By using Normalizer transformation we can conver rows into columns and columns into rows andalso we can collect multile rows from one row=======================================vamshidharHow do u convert rows to columns in Normalizer? could you explain us??Normally its used to convert columns to rows but for converting rows to columns we need anaggregator and expression and little effort is needed for coding. Denormalization is not possible with aNormalizer transformation.file:///C|/Perl/bin/result.html (97 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

=======================================

91.Informatica - How to read rejected data or bad data from badfile and reload it to target?QUESTION #91No best answer available. Please pick the good answer available or submit youranswer.October 04, 2005 23:16:28 #1ravi kumar guturiRE: How to read rejected data or bad data from bad fil...Click Here to view complete documentcorrection the rejected data and send to target relational tables using loadorder utility. Find out therejected data by using column indicatior and row indicator.=======================================Design a trap to a file or table by the use of a filter transformation or a router transformation. Routerworks well for this.=======================================


92.Informatica - How do youtransfert the data from datawarehouse to flatfile?QUESTION #92No best answer available. Please pick the good answer available or submit youranswer.November 09, 2005 17:02:07 #1paul luthraRE: How do you transfert the data from data wareh...file:///C|/Perl/bin/result.html (98 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

Click Here to view complete documentYou can write a mapping with the flat file as a target using a DUMMY_CONNECTION. A flat filetarget is built by pulling a source into target space using Warehouse Designer tool.=======================================

93.Informatica - At the max how many tranformations can be usin a mapping?QUESTION #93No best answer available. Please pick the good answer available or submit youranswer.September 27, 2005 12:32:26 #1sangeethaRE: At the max how many tranformations can be us in a ...Click Here to view complete documentn number of transformations=======================================22 transformation expression joiner aggregator router stored procedure etc. You can find on Informaticatransformation tool bar.=======================================In a mapping we can use any number of transformations depending on the project and the includedtransformations in the perticular related transformatons.


=======================================There is no such limitation to use this number of transformations. But in performance point of viewusing too many transformations will reduce the session performance.My idea is if needed more tranformations to use in a mapping its better to go for some stored procedure.=======================================Always remember when designing a mapping: less for moredesign with the least number of transformations that can do the most jobs.file:///C|/Perl/bin/result.html (99 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

=======================================

94.Informatica - What is the difference between Narmal load andBulk load?QUESTION #94No best answer available. Please pick the good answer available or submit youranswer.September 09, 2005 09:01:19 #1sureshRE: What is the difference between Narmal load and Bulk load?Click Here to view complete documentwhat is the difference between powermart and power center?=======================================when we go for unconnected lookup transformation?=======================================bulk load is faster than normal load. In case of bulk load informatica server by passes the data base logfile so we can not roll bac the transactions. Bulk load is also called direct loading.=======================================Normal Load: Normal load will write information to the database log file so that if any recorvery isneeded it is will be helpful. when the source file is a text file and loading data to a table in such caseswe should you normal load only else the session will be failed.Bulk Mode: Bulk load will not write information to the database log file so that if any recorvery isneeded we can't do any thing in such cases.


compartivly Bulk load is pretty faster than normal load.=======================================Rule of thumbFor small number of rows use Normal loadfile:///C|/Perl/bin/result.html (100 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

For volume of data use bulk load=======================================also remember that not all databases support bulk loading. and bulk loading fails a session if yourmapping has primary keys=======================================

95.Informatica - what is a junk dimensionQUESTION #95No best answer available. Please pick the good answer available or submit youranswer.October 17, 2005 06:22:53 #1prasad NallapatiRE: what is a junk dimensionClick Here to view complete documentA junk dimension is a collection of random transactional codes flags and/or text attributes that areunrelated to any particular dimension. The junk dimension is simply a structure that provides aconvenient place to store the junk attributes. A good example would be a trade fact in a company thatbrokers equity trades.=======================================A junk dimension is a used for constrain queary purpose based on text and flag values.Some times a few dimensions discarded in a major dimensions That time we kept in to one place the alldiscarded dimensions that is called junk dimensions.=======================================Junk dimensions are particularly useful in Snowflake schema and one of the reasons why snowfalke ispreferred over the star schema.


There are dimensions that are frequently updated. So from the base set of the dimensions that arealready existing we pull out the dimensions that are frequently updated and put them into a separatetable.This dimension table is called the junk dimension.=======================================

96.Informatica - can we lookup a table from a source qualiferfile:///C|/Perl/bin/result.html (101 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

transformation-unconnected lookupQUESTION #96No best answer available. Please pick the good answer available or submit youranswer.November 22, 2005 01:29:07 #1nandam Member Since: November 2005 Contribution: 1RE: can we lookup a table from a source qualifer trans...Click Here to view complete documentno we cant lookup data=======================================No. we can't do.I will explain you why.1) Unless you assign the output of the source qualifier to another transformation or to target no way itwill include the feild in the query.2) source qualifier don't have any variables feilds to utalize as expression.=======================================No it's not possible. source qualifier don't have any variables fields to utilize as expression.=======================================

97.Informatica - how to get the first 100 rows from the flat fileinto the target?QUESTION #97


October 04, 2005 01:07:33 #1 ravi
RE: how to get the first 100 rows from the flat file i...

By using a shell script.
=======================================Please check this one: task ----->(link) session (Workflow Manager). Double-click on the link and use the source success rows session variable with a value of 100; it should automatically stop the session.
=======================================I'd copy the first 100 records to a new file and load that. Just add this Unix command in the session properties --> Components --> Pre-session Command:
head -100 <source file path> > <new file name>
Mention the new file name and path in the Session --> Source properties.
=======================================1. Use the test load option if you want to use it for testing.
2. Put a counter/sequence generator in the mapping and use it.
Hope it helps.
=======================================Use a Sequence Generator with the reset property set, and then use a Filter with the condition NEXTVAL < 100.
=======================================

98.Informatica - can we modify the data in flat file?QUESTION #98No best answer available. Please pick the good answer available or submit youranswer.October 04, 2005 23:19:25 #1ravikumar guturifile:///C|/Perl/bin/result.html (103 of 363)4/1/2009 7:50:58 PM



RE: can we modify the data in flat file?
Can't.
=======================================Manually, yes.
=======================================Yes.
=======================================Yes.
=======================================Can you please explain how to do this without modifying the flat file manually?
=======================================Just open the text file with Notepad and change whatever you want (but the datatype should stay the same).
Cheers
sithu
=======================================Yes, by opening the text file and editing it.
=======================================You can generate a flat file with a program, meaning not manually.
=======================================Let's not discuss manually modifying the data of the flat file. Let's assume that the target is a flat file. I want to update the data in the flat file target based on the input source rows. Just as we use an Update Strategy / target properties in the case of relational targets for updates, do we have any option in the session or mapping to perform a similar task for a flat file target? I have heard about the append option in INFA 8.x; this may be helpful for incremental load of the flat file, but it is not a workaround for updating the rows. Please post your views.
=======================================You can modify the flat file using shell scripting in Unix (awk, grep, sed). Hope this helps.

=======================================


99.Informatica - difference between summary filter and details filter?
QUESTION #99
December 01, 2005 09:52:34 #1 renuka
RE: difference between summary filter and details filt...
Hi. A summary filter can be applied to records grouped by values they have in common; a detail filter can be applied to each and every record in the database.
=======================================Will it be correct to say that
Summary Filter ----> HAVING clause in SQL
Details Filter ------> WHERE clause in SQL
=======================================
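
The SQL analogy in the last answer can be made concrete with a small sketch (the table and column names are invented): a detail filter removes individual rows before aggregation, while a summary filter removes whole groups after aggregation.
SELECT   dept_id, SUM(salary) AS total_salary
FROM     employee
WHERE    status = 'ACTIVE'          -- detail filter: row by row
GROUP BY dept_id
HAVING   SUM(salary) > 100000;      -- summary filter: on the grouped result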

100.Informatica - what are the difference between view and materialized view?
QUESTION #100

September 30, 2005 00:34:20 #1 khadarbasha Member Since: September 2005 Contribution: 2
RE: what are the difference between view and materiali...
Materialized views are schema objects that can be used to summarize, precompute, replicate and distribute data, e.g. to construct a data warehouse.


A materialized view provides indirect access to table data by storing the results of a query in a separate schema object, unlike an ordinary view, which does not take up any storage space or contain any data.
=======================================A view is a tailored representation of data; it accesses data from existing tables, has a logical structure and occupies no space. A materialized view stores precalculated data; it has a physical structure and occupies space.
=======================================A view is a tailored representation of data, but a materialized view stores precalculated data. A view is a logical structure, a materialized view is a physical structure; a view does not occupy space, a materialized view does.
=======================================Difference between view and materialized view: if you change (update or insert through) a view, the corresponding table is affected, but such changes do not affect a materialized view.
=======================================Materialized views store copies of data or aggregations. Materialized views can be used to replicate all or part of a single table, or to replicate the result of a query against multiple tables. Refreshes of the replicated data can be done automatically by the database at set time intervals.

=======================================A view does not derive the changes made to its master table after the view is created. A materialized view immediately carries the changes done to its master table even after the materialized view is created.
=======================================View: the SELECT query is stored in the DB. Whenever you select from the view, the stored query is executed; effectively you are calling the stored query. When we want to use a query repeatedly, or for complex queries, we


store the queries in the DB using a view, whereas a materialized view stores the data as well, like a table; here storage parameters are required.
=======================================A view is just a stored query and has no physical part. Once a view is instantiated, performance can be quite good until it is aged out of the cache. A materialized view has a physical table associated with it; it doesn't have to resolve the query each time it is queried. Depending on how large the result set is and how complex the query, a materialized view should perform better.
=======================================On a materialized view we can't perform DML operations, but the reverse is true in the case of a simple view.
=======================================In the case of a materialized view we can perform DML, but the reverse is not true in the case of a simple view.
=======================================
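
A short sketch of the two objects using Oracle-style syntax (the table and object names are hypothetical):
-- ordinary view: only the query text is stored, no data
CREATE VIEW v_dept_sales AS
SELECT dept_id, SUM(amount) AS total_amount
FROM   sales
GROUP BY dept_id;
-- materialized view: the query result is stored and refreshed periodically
CREATE MATERIALIZED VIEW mv_dept_sales
REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1
AS
SELECT dept_id, SUM(amount) AS total_amount
FROM   sales
GROUP BY dept_id;
Querying v_dept_sales re-runs the aggregation every time, while mv_dept_sales reads the stored result, which is the performance difference described above.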

101.Informatica - What is the difference between summary filterand detail filterQUESTION #101No best answer available. Please pick the good answer available or submit youranswer.November 23, 2005 15:07:50 #1sirRE: what is the difference between summary filter and ...file:///C|/Perl/bin/result.html (107 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

Click Here to view complete documentsummary filter can be applieid on a group of rows that contain a common value.where as detail filterscan be applied on each and every rec of the data base.=======================================

102.Informatica - CompareData Warehousing Top-Down


approach withBottom-up approachQUESTION #102No best answer available. Please pick the good answer available or submit youranswer.October 04, 2005 12:22:13 #1ravi kumar guturiRE: in datawarehousing approach(top/down) or (bottom/u...Click Here to view complete documentbottom approach is the best because in 3 tier architecture datatier is the bottom one.=======================================Top/down approach is better in datawarehousing=======================================Rajesh: Bottom/Up approach is better=======================================At the time of software intragartion buttom/up is good but implimentatino time top/down is good.=======================================top downODS-->ETL-->Datawarehouse-->Datamart-->OLAPBottom upODS-->ETL-->Datamart-->Datawarehouse-->OLAPCheersfile:///C|/Perl/bin/result.html (108 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

Sithu=======================================in top down approch: first we have to build dataware house then we will build data marts. which willneed more crossfunctional skills and timetaking process also costly.in bottom up approach: first we will build data marts then data warehuse. the data mart that is firstbuild will remain as a proff of concept for the others. less time as compared to above and less cost.=======================================Nothing wrong with any of these approaches. It all depends on your business requirements and what isin place already at your company. Lot of folks have a hybrid approach. For more info read Kimball vs


Inmon..=======================================

103.Informatica - Discuss which is better among incremental load,Normal Load and Bulk loadQUESTION #103No best answer available. Please pick the good answer available or submit youranswer.October 20, 2005 03:06:53 #1ravi guturiRE: incremental loading ? normal load and bulk load?file:///C|/Perl/bin/result.html (109 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

Click Here to view complete documentnormal load is the best.=======================================normal is always preffered over bulk.=======================================It depends on the requirement. Otherwise Incremental load which can be better as it takes onle that datawhich is not available previously on the target.=======================================If the database supports bulk load option from Infromatica then using BULK LOAD for intial loadingthe tables is recommended.Depending upon the requirment we should choose between Normal and incremental loading strategies.=======================================Normal Loading is Better=======================================Rajesh:Normal load is Better=======================================RE: incremental loading ? normal load and bulk load?<b...=======================================if supported by the database bulk load can do the loading faster than normal load.(incremental loadconcept is differnt dont merge with bulk load mormal load)=======================================


104.Informatica - What is the difference between connected and unconnected stored procedures.
QUESTION #104
September 25, 2005 20:02:14 #1

sangeetha
RE: What is the difference between connected and uncon...
Unconnected: the unconnected Stored Procedure transformation is not connected directly to the flow of the mapping. It either runs before or after the session, or is called by an expression in another transformation in the mapping.
Connected: the flow of data through a mapping in connected mode also passes through the Stored Procedure transformation. All data entering the transformation through the input ports affects the stored procedure. You should use a connected Stored Procedure transformation when you need data from an input port sent as an input parameter to the stored procedure, or the results of a stored procedure sent as an output parameter to another transformation.
=======================================Typical use cases and the mode they require:
Run a stored procedure before or after your session. - Unconnected
Run a stored procedure once during your mapping, such as pre- or post-session. - Unconnected
Run a stored procedure every time a row passes through the Stored Procedure transformation. - Connected or Unconnected
Run a stored procedure based on data that passes through the mapping, such as when a specific port does not contain a null value. - Unconnected


Pass parameters to the stored procedure and receive a single output parameter. - Connected or Unconnected
Pass parameters to the stored procedure and receive multiple output parameters. - Connected or Unconnected
(Note: to get multiple output parameters from an unconnected Stored Procedure transformation you must create variables for each output parameter. For details see Calling a Stored Procedure From an Expression.)
Run nested stored procedures. - Unconnected

Call multiple times within a mapping. - Unconnected
Cheers
Sithu
=======================================

105.Informatica - Differences between Informatica 6.2 andInformatica 7.0Yours sincerely,Rushi.QUESTION #105No best answer available. Please pick the good answer available or submit youranswer.October 04, 2005 01:17:06 #1raviRE: Differences between Informatica 6.2 and Informati...Click Here to view complete documentin 7.0 intorduce custom transfermation and union transfermation and also flat file lookup condition.=======================================Features in 7.1 are :1.union and custom transformation2.lookup on flat file3.grid servers working on different operating systems can coexist on same server4.we can use pmcmdrep5.we can export independent and dependent rep objects6.we ca move mapping in any web application7.version controlling



8.data profilling=======================================Can someone guide me on1) Data profiling.2) Exporting independent and dependent objects.Thanks in advance.-Azhar=======================================

106.Informatica - whats the diff between Informatica powercenterserver, repositoryserver and repository?QUESTION #106 Powercenter server contains the sheduled runs at which timedata should load from source to targetRepository contains all the definitions of the mappings done in designer.Click Here to view complete documentNo best answer available. Please pick the good answer available or submit your answer.November 08, 2005 01:59:02 #1Gokulnath_J Member Since: November 2005 Contribution: 3RE: whats the diff between Informatica powercenter ser...file:///C|/Perl/bin/result.html (113 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

=======================================Repository is a database in which all informatica componets are stored in the form of tables. Thereposiitory server controls the repository and maintains the data integrity and Consistency across therepository when multiple users use Informatica. Powercenter Server/Infa Server is responsible forexecution of the components (sessions) stored in the repository.=======================================hiRepository is nothing but a set of tables created in a DB.it stores all metadata of the infa objects.


Repository server is one which communicates with the repository i.e DB. all the metadata is retrivedfrom the DB through Rep server.All the client tools communicate with the DB through Rep server.Infa server is one which is responsible for running the WF tasks etc... Infa server also communicateswith the DB through Rep server.=======================================power center server-power center server does the extraction from the source and loaded it to the target.repository server-it takes care of the connection between the power center client and repository.repository-it is a place where all the metadata information is stored.repository server and power centerserver access the repository for managing the data.=======================================

107.Informatica - how to create the staging area in your databaseQUESTION #107 client having database throught that data base u get allsources Click Here to view complete documentNo best answer available. Please pick the good answer available or submit your answer.November 02, 2005 11:56:42 #1Chandranfile:///C|/Perl/bin/result.html (114 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

RE: how to create the staging area in your database=======================================If You have defined all the staging tables as tragets use option Targets--> Generate sql in warehousedesigner=======================================A Staging area in a DW is used as a temporary space to hold all the records from the source system. Somore or less it should be exact replica of the source systems except for the laod startegy where we usetruncate and reload options.


So create using the same layout as in your source tables or using the Generate SQL option in theWarehouse Designer tab.=======================================creating of staging tables/area is the work of data modellor/dba.just like create table <tablename>......the tables will be created. they will have some name to identified as staging like dwc_tmp_asset_eval.tmp-----> indicate temparary tables nothing but staging=======================================

108.Informatica - what does the expression n filtertransformations do in Informatica Slowly growing target wizard?QUESTION #108No best answer available. Please pick the good answer available or submit youranswer.November 02, 2005 23:10:06 #1sivapreddy Member Since: November 2005 Contribution: 1RE: what does the expression n filter transformations ...file:///C|/Perl/bin/result.html (115 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

Click Here to view complete documentdontknow=======================================EXPESSION transformation detects and flags the rows from source.Filter transformation filters the rows that are not flagged and passes the flagged rows to the Updatestrategy transformation=======================================Expression finds the Primary key is or not and calculates new flagBased on that New Flag filter transformation filters the DataCheersSithu=======================================You can use the Expression transformation to calculate values in a single row before you write to the


target. For example you might need to adjust employee salaries concatenate first and last names orconvert strings to numbers.=======================================

109.Informatica - Briefly explian the Versioning Concept inPower Center 7.1.QUESTION #109No best answer available. Please pick the good answer available or submit youranswer.November 29, 2005 11:29:11 #1Manoj Kumar PanigrahiRE: Briefly explian the Versioning Concept in Power Ce...file:///C|/Perl/bin/result.html (116 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

Click Here to view complete documentIn power center 7.1 use 9 Tem server i.e add in Look up. But in power center 6.x use only 8 tem server.and add 5 transformation . in 6.x anly 17 transformation but 7.x use 22 transformation.=======================================Hi manojI appreciate ur response But can u be a bit clearthankssri=======================================When you create a version of a folder referenced by shortcuts all shortcuts continue to reference theiroriginal object in the original version. They do not automatically update to the current folder version.For example if you have a shortcut to a source definition in the Marketing folder version 1.0.0 then youcreate a new folder version 1.5.0 the shortcut continues to point to the source definition in version 1.0.0.Maintaining versions of shared folders can result in shortcuts pointing to different versions of thefolder. Though shortcuts to different versions do not affect the server they might prove more difficult to


maintain. To avoid this you can recreate shortcuts pointing to earlier versions but this solution is notpractical for much-used objects. Therefore when possible do not version folders referenced by shortcuts.CheersSithu=======================================

110.Informatica - How to join two tables without using the Joiner Transformation.
QUESTION #110
December 01, 2005 07:49:58 #1

thiyagarajanc Member Since: November 2005 Contribution: 4RE: How to join two tables without using the Joiner Tr...Click Here to view complete documentItz possible to join the two or more tables by using source qualifier.But provided the tables should haverelationship.When u drag n drop the tables u will getting the source qualifier for each table.Delete all the sourcequalifiers.Add a common source qualifier for all.Right click on the source qualifier u will find EDITclick on it.Click on the properties tab u will find sql query in that u can write ur sqls=======================================The Joiner transformation requires two input transformations from two separate pipelines. An inputtransformation is any transformation connected to the input ports of the current transformation.CheersSithu=======================================can do using source qualifer but some limitations are there.Cheers

Page 117: Informatica Interview QnA

Sithu=======================================joiner transformation is used to join n (n>1) tables from same or different databases but source qualifiertransformation is used to join only n tables from same database .=======================================simpleIn the session property user defined options is there by using this we can join with out joinerfile:///C|/Perl/bin/result.html (118 of 363)4/1/2009 7:50:58 PMfile:///C|/Perl/bin/result.html

=======================================use Source Qualifier transformation to join tables on the SAME database. Under its properties tab youcan specify the user-defined join. Any select statement you can run on a database.. you can do also inSource Qualifier.Note: you can only join 2 tables with Joiner Transformation but you can join two tables from differentdatabases.CheersRay Anthony=======================================hiu can join 2 RDBMS sources of same database using a SQ by specifying user defined joins.u can also join two tables of same kind using a lookup.=======================================
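As a minimal sketch of the Source Qualifier approach described above (the EMP and DEPT tables and the DEPTNO column are hypothetical and used only for illustration):

-- Full SQL override entered in the Source Qualifier's SQL Query property
SELECT EMP.EMPNO, EMP.ENAME, EMP.SAL, DEPT.DNAME
FROM EMP, DEPT
WHERE EMP.DEPTNO = DEPT.DEPTNO

-- or, equivalently, only the join condition in the User Defined Join property:
-- EMP.DEPTNO = DEPT.DEPTNO

Either way, the join is pushed to the source database instead of being done in a Joiner transformation.
=======================================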

111.Informatica - Identifying bottlenecks in various components of Informatica and resolving them.
QUESTION #111
No best answer available. Please pick the good answer available or submit your answer.
December 20, 2005 08:13:47 #1
kalyan
RE: Identifying bottlenecks in various components of I...
=======================================
Hi, the best way to find out bottlenecks is to write the output to a flat file and see where the bottleneck is: if the session runs much faster against a flat file target, the target is the bottleneck; otherwise look at the source, the transformations and the session settings.
kalyan
=======================================

112.Informatica - How do you decide whether you need to do aggregations at database level or at Informatica level?
QUESTION #112
No best answer available. Please pick the good answer available or submit your answer.
December 05, 2005 04:45:35 #1
Rishi
RE: How do you decide whether you need ti do aggregati...
=======================================
It depends on our requirement only. If you have a good processing database you can create an aggregation table or view at database level, else it is better to use Informatica. Here I am explaining why we need to use Informatica: whatever it may be, Informatica is a third-party tool, so it will take more time to process aggregation compared to the database. But Informatica has an option called incremental aggregation which will help you to update the current values with current values + new values, so there is no need to process the entire set of values again and again, as long as nobody has deleted the cache files. If that happens, total aggregation needs to be executed in Informatica as well. In the database we do not have an incremental aggregation facility.
=======================================
Hi,
Informatica is basically an integration tool. It all depends on the source you have and your requirement. If you have an EMS queue, a flat file or any source other than an RDBMS, you need Informatica to do any kind of aggregate functions. If your source is an RDBMS you are not only doing the aggregation using Informatica, right? There will be business logic behind it, and you need to do other things like looking up against some table or joining the aggregate result with the actual source, etc. If you are asking whether to do it at the mapping level or at DB level, then it is always better to do the aggregation at the DB level by using a SQL override in the Source Qualifier if aggregation is the main purpose of your mapping; it definitely improves the performance.
=======================================
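To make the last point concrete, here is a small sketch of pushing an aggregation into the Source Qualifier's SQL override instead of using an Aggregator transformation; the SALES table and its columns are hypothetical:

-- SQL Query override in the Source Qualifier
SELECT CUSTOMER_ID, SUM(AMOUNT) AS TOTAL_AMOUNT
FROM SALES
GROUP BY CUSTOMER_ID

The database then returns pre-aggregated rows, so the mapping only has to load them.
=======================================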

113.Informatica - Source table has 1000 rows. In session configuration --- target Load option-- \
QUESTION #113
No best answer available. Please pick the good answer available or submit your answer.
April 10, 2007 23:51:51 #1
Surendra Kumar
RE: Source table has 1000 rows. In session configurati...
=======================================
If you use bulk mode then it will be fast to load...
=======================================
If your database supports the bulk option, then choose the bulk option; otherwise go for the normal option.
=======================================

114.Informatica - What is the procedure to write the query to list the highest salary of three employees?
QUESTION #114
No best answer available. Please pick the good answer available or submit your answer.
December 01, 2005 07:31:41 #1
thiyagarajanc Member Since: November 2005 Contribution: 4
RE: what is the procedure to write the query to list t...
=======================================
select sal
from (select sal from emp order by sal desc)
where rownum <= 3;
=======================================
SELECT sal
FROM (SELECT sal FROM my_table ORDER BY sal DESC)
WHERE ROWNUM < 4;
=======================================
Hi, there is a MAX function in Informatica; use it.
kalyan
=======================================
Since this is Informatica, you might as well use the Rank transformation. Check out the help file on how to use it.
Cheers
Ray Anthony
=======================================
select max(sal) from emp;
=======================================
The following is the query to find out the top three salaries.
In ORACLE (using the emp table):
select * from emp e where 3 > (select count(*) from emp where e.sal > emp.sal) order by sal desc;
In SQL Server (using the emp table):
select top 3 sal from emp order by sal desc;
=======================================
You can write the query as follows:
SQL> select * from
  2  (select ename, sal from emp order by sal desc)
  3  where rownum <= 3;
=======================================
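An alternative, sketched here on the same hypothetical emp table in Oracle-style SQL, is to use an analytic function so that salary ties are handled explicitly:

-- DENSE_RANK() assigns the same rank to equal salaries without leaving gaps
SELECT ename, sal
FROM (
    SELECT ename, sal,
           DENSE_RANK() OVER (ORDER BY sal DESC) AS sal_rank
    FROM emp
)
WHERE sal_rank <= 3;
=======================================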

115.Informatica - Which objects are required by the debugger to create a valid debug session?
QUESTION #115
No best answer available. Please pick the good answer available or submit your answer.
December 05, 2005 03:15:32 #1
Rishi
RE: which objects are required by the debugger to crea...
=======================================
Initially the session should be a valid session. Sources, targets, lookups and expressions should be available, and at least one breakpoint should be set for the debugger to debug your session.
=======================================
Hi, we can create a valid debug session even without a single breakpoint. But we have to give valid database connection details for the sources, targets and lookups used in the mapping, and it should contain valid mapplets (if any are in the mapping).
=======================================
The Informatica server must be running.
=======================================
An Informatica Server object is a must.
Cheers
Sithu
=======================================

116.Informatica - What is the limit to the number of sources and targets you can have in a mapping?
QUESTION #116
No best answer available. Please pick the good answer available or submit your answer.
December 05, 2005 03:21:24 #1
Rishi
RE: What is the limit to the number of sources and tar...
=======================================
As per my knowledge there is no such restriction on the number of sources or targets inside a mapping. The question is, if you make N tables participate at a time in processing, what is the position of your database? From an organization point of view it is never encouraged to use N tables at a time, as it reduces database and Informatica server performance.
=======================================
The restriction is only on the database side: how many concurrent threads are you allowed to run on the DB server?
=======================================
There is one formula:
(number of sources + number of targets) * 2 <= 0.9 * (DTM buffer size / buffer block size) * number of partitions
i.e. the available buffer blocks, 0.9 * (DTM buffer size / block size) * number of partitions, must cover (sources + targets) * 2.
=======================================
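As a rough worked example (assuming the default 12 MB DTM buffer size and 64 KB buffer block size, with a single partition): 0.9 * (12,582,912 / 65,536) * 1 ≈ 172 buffer blocks, and since each source and target needs about two blocks, that allows on the order of 86 sources and targets combined before the buffer settings have to be increased.
=======================================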

117.Informatica - What is the difference between the IIF and DECODE functions?
QUESTION #117
No best answer available. Please pick the good answer available or submit your answer.
December 16, 2005 10:27:07 #1
VJ
RE: What is difference between IIF and DECODE function...
=======================================
You can use nested IIF statements to test multiple conditions. The following example tests for various conditions and returns 0 if sales is zero or negative:
IIF( SALES > 0, IIF( SALES < 50, SALARY1, IIF( SALES < 100, SALARY2, IIF( SALES < 200, SALARY3, BONUS))), 0 )
You can use DECODE instead of IIF in many cases. DECODE may improve readability. The following shows how you can use DECODE instead of IIF:
DECODE( TRUE,
  SALES > 0 and SALES < 50, SALARY1,
  SALES > 49 AND SALES < 100, SALARY2,
  SALES > 99 AND SALES < 200, SALARY3,
  SALES > 199, BONUS)
=======================================
You can use DECODE in conditioning columns as well, while we cannot use IIF there (you can use CASE instead); and by using DECODE, retrieving data is quicker.
=======================================
DECODE can be used in a SELECT statement, whereas IIF cannot be used there.
=======================================

118.Informatica - What are variable ports? List two situations when they can be used.
QUESTION #118
No best answer available. Please pick the good answer available or submit your answer.
December 19, 2005 20:41:34 #1
Rajesh
RE: What are variable ports and list two situations wh...
=======================================
We have mainly three kinds of ports: input port, output port and variable port. An input port means data is flowing into the transformation. An output port is used when data is mapped to the next transformation. A variable port is used when mathematical calculations are required. If there is anything to add, I will be more than happy if you can share it.
=======================================
For example, consider price and quantity, and total as a variable: we can make a sum on the total amount by giving sum(total_amt).
=======================================
For example, if you are trying to calculate a bonus from the emp table:
Bonus = sal * 0.2
Totalsal = sal + comm + bonus
=======================================
A variable port is used to break a complex expression into simpler parts, and it is also used to store intermediate values.
=======================================
Variable ports usually carry intermediate data (values) and can be used in the Expression transformation.
=======================================

119.Informatica - How does the server recognise the source and target databases?
QUESTION #119
No best answer available. Please pick the good answer available or submit your answer.
January 01, 2006 00:53:24 #1
reddeppa
RE: How does the server recognise the source and targe...
=======================================
By using an ODBC connection if it is relational, and an FTP connection if it is a flat file. We make sure of this with the connections in the session properties for both sources and targets.
=======================================

120.Informatica - How to retrieve the records from a rejected file. Explain with syntax or example.
QUESTION #120
No best answer available. Please pick the good answer available or submit your answer.
January 01, 2006 00:51:13 #1
reddeppa
RE: How to retrive the records from a rejected file. e...
=======================================
There is one utility called the reject loader where we can find the rejected records and are able to refine and reload them.
=======================================
Yes. Every time you run the session a reject file will be created and all the rejected rows will be there in the reject file. You can modify the records, correct the things in them, and load them to the target directly from the reject file using the reject loader. If I am wrong please correct me.
kumar.V
=======================================
Can you explain how to load rejected rows through Informatica?
=======================================
During the execution of the workflow, all the rejected rows will be stored in bad files (where your Informatica server is installed, e.g. C:\Program Files\Informatica PowerCenter 7.1\Server). These bad files can be imported as a flat file source, and then through a direct mapping we can load these records in the desired format.
=======================================

121.Informatica - How to lookup the data on multiple tables.
QUESTION #121
No best answer available. Please pick the good answer available or submit your answer.
January 05, 2006 12:05:06 #1
reddeppa
How to lookup the data on multiple tabels.
=======================================
By using a SQL override we can look up the data on multiple tables. See the Properties tab.
=======================================
Hi, thanks for your response, but my question is: I have two source or target tables and I want to look up those two source or target tables. How can I? Is it possible with a SQL override?
=======================================
Just check with the import option.
=======================================
How to lookup the data on multiple tables.
=======================================
If you want to look up data on multiple tables at a time, you can do one thing: join the tables you want, then look up that joined table. Informatica provides lookups on joined tables - hats off to Informatica.
=======================================
Hi, you can do it. When you create the Lookup transformation, Informatica asks for a table name, and you can choose source, target, import or skip. Click skip and then use the SQL override property in the Properties tab to join two tables for the lookup.
=======================================
Join the two sources by using a Joiner transformation and then apply a lookup on the resulting table.
=======================================
Hi, whatever my friends answered earlier is correct. To be more specific: if the two tables are relational, then you can use the lookup SQL override option to join the two tables in the lookup properties. You cannot join a flat file and a relational table. For example, the lookup default query will be "select <lookup table column names> from <lookup table>"; you can continue this query by adding the column names of the second table with the qualifier and a WHERE clause. If you want to use an ORDER BY, then put -- at the end of the ORDER BY. Hope this is clearer.
=======================================
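As a minimal sketch of the lookup SQL override approach described above (the CUSTOMER and ADDRESS tables, their columns, and the alias names are hypothetical; the column aliases are assumed to match the lookup port names):

-- Lookup SQL override joining two tables; the trailing -- comments out
-- the ORDER BY that Informatica appends to the default lookup query
SELECT C.CUST_ID AS CUST_ID,
       C.CUST_NAME AS CUST_NAME,
       A.CITY AS CITY
FROM CUSTOMER C, ADDRESS A
WHERE C.CUST_ID = A.CUST_ID
ORDER BY C.CUST_ID --
=======================================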

122.Informatica - What is the procedure to load the fact table? Give in detail.
QUESTION #122
No best answer available. Please pick the good answer available or submit your answer.
January 19, 2006 14:26:22 #1
Guest
RE: What is the procedure to load the fact table.Give ...
=======================================
Based on the requirement for your fact table, choose the sources and data and transform them based on your business needs. For the fact table you need a primary key, so use a Sequence Generator transformation to generate a unique key and pipe it to the target (fact) table along with the foreign keys from the source tables. Please correct me if I am wrong.
=======================================
We use the two wizards (i.e. the Getting Started wizard and the Slowly Changing Dimension wizard) to load the fact and dimension tables. By using these two wizards we can create different types of mappings according to the business requirements and load into the star schema (fact and dimension tables).
=======================================
First the dimension tables need to be loaded, then according to the specifications the fact tables should be loaded. Do not think that fact tables are different in the case of loading; it is a general mapping as we do for other tables. The specifications play an important role in loading the fact.
=======================================
Hi, usually source records are looked up against the records in the dimension tables. DIM tables are called lookup or reference tables; all the possible values are stored in the DIM table, e.g. for product, all the existing prod_ids will be in the DIM table. When data from the source is looked up against the DIM table, the corresponding keys are sent to the fact table. This is not a fixed rule to be followed; it may vary as per your requirements and the methods you follow. Sometimes only an existence check will be done and the prod_id itself will be sent to the fact.
=======================================

123.Informatica - What is the use of incremental aggregation? Explain briefly with an example.
QUESTION #123
No best answer available. Please pick the good answer available or submit your answer.
January 29, 2006 11:58:51 #1
gazulas Member Since: January 2006 Contribution: 17
RE: What is the use of incremental aggregation? Explai...
=======================================
It is a session option. When the Informatica server performs incremental aggregation, it passes new source data through the mapping and uses the historical cache data to perform the new aggregation calculations incrementally. We use it for performance.
=======================================
Incremental aggregation is in the session properties. Say I have 500 records in my source and then I get 300 more records. If you are not using incremental aggregation, whatever calculation you ran on the 500 records will be done again on all 500 + 300 records. If you are using incremental aggregation, the calculation will be done only on the new records (300); due to this, performance increases.
=======================================

124.Informatica - How to delete duplicate rows in a flat file source? Is there any option in Informatica?
QUESTION #124 Submitted by: gazulas
Use a Sorter transformation; in that you will have a "distinct" option - make use of it.
Above answer was rated as good by the following members:
sn3508
=======================================
Use a Sorter transformation; in that you will have a distinct option - make use of it.
=======================================
Hi, you can use a dynamic lookup or an aggregator or a sorter for doing this.
=======================================
Instead we can use a 'select distinct' query in the Source Qualifier of the source flat file. Correct me if I am wrong.
=======================================
You cannot write a SQL override for a flat file.
=======================================

125.Informatica - How to use mapping parameters and what is their use?
QUESTION #125
No best answer available. Please pick the good answer available or submit your answer.
January 29, 2006 11:47:14 #1
gazulas Member Since: January 2006 Contribution: 17
RE: how to use mapping parameters and what is their us...
=======================================
In the Designer you will find the mapping parameters and variables options, and you can assign a value to them in the Designer. Coming to their uses: suppose you are doing incremental extractions daily, and your source system contains a day column. Without parameters, every day you would have to go into that mapping and change the day so that the particular data is extracted; doing that would be a layman's work. That is where the concept of mapping parameters and variables comes in. Once you assign a value to a mapping variable, it can change between sessions.
=======================================
Mapping parameters and variables make the use of mappings more flexible, and they also avoid creating multiple mappings; they help in adding incremental data. Mapping parameters and variables have to be created in the Mapping Designer by choosing the menu option Mappings ----> Parameters and Variables, then entering a name for the variable or parameter (it has to be preceded by $$) and choosing the type (parameter/variable) and datatype. Once defined, the variable/parameter can be used in any expression, for example in the SQ transformation in the source filter properties tab - just enter the filter condition. Finally, create a parameter file to assign the value for the variable/parameter and configure the session properties; however, the final step is optional. If the parameter is not present it uses the initial value which is assigned at the time of creating the variable.
=======================================
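As an illustrative sketch of the incremental-extraction pattern described above (the ORDERS table, its columns and the $$LAST_RUN_DATE parameter are hypothetical), the Source Qualifier's source filter or SQL override can reference a mapping parameter directly:

-- SQL override in the Source Qualifier using a mapping parameter
SELECT ORDER_ID, ORDER_DATE, AMOUNT
FROM ORDERS
WHERE ORDER_DATE > TO_DATE('$$LAST_RUN_DATE', 'YYYY-MM-DD')

The value would then be supplied in the parameter file, e.g. a line such as $$LAST_RUN_DATE=2006-01-01 under the relevant session heading, so only the parameter file changes between runs.
=======================================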


126.Informatica - Can we use an aggregator/active transformation after an Update Strategy transformation?
QUESTION #126
No best answer available. Please pick the good answer available or submit your answer.
January 30, 2006 05:06:08 #1
jawahar
RE: Can we use aggregator/active transformation after ...
=======================================
We can use it, but the update flag will not remain; we can use a passive transformation.
=======================================
I guess no; the update should be placed just before the target, as per my knowledge.
=======================================
You can use an aggregator after an update strategy. The problem is that once you perform the update strategy, say you had flagged some rows to be deleted and you had performed an aggregator transformation on all rows, say using the SUM function, then the deleted rows will be subtracted in this aggregator transformation.
=======================================

127.Informatica - Why are dimension tables denormalized in nature?
QUESTION #127
No best answer available. Please pick the good answer available or submit your answer.
January 31, 2006 04:05:43 #1
Rahman
RE: why dimenstion tables are denormalized in nature ?...
=======================================
Because in data warehousing historical data should be maintained. Maintaining historical data means, for example, that one employee's details (where he previously worked and where he is working now) should all be maintained in one table; if you maintain a primary key it won't allow duplicate records with the same employee id. So to maintain historical data in data warehousing we go for surrogate keys, through which we can achieve the historical data (using an Oracle sequence for the critical column). So all the dimensions maintaining historical data are denormalized because of such "duplicate" entries - not exactly a duplicate record, but another record with the same employee number maintained in the table.
=======================================
Dear Rahman, thanks for your response. First of all I want to tell one thing to all users who are using this site: please give answers only if you are confident about them, and refer to the manual again if unsure. If we give wrong answers, a lot of people who did not know the answer will take it as correct and may fail in the interview. The site must be helpful to others; please keep that in mind.
Regarding why dimension tables are denormalized in nature, I discussed this with my project manager. What he told me is: the attributes in a dimension table are used over and over again in queries. For efficient query performance it is best if the query picks up an attribute from the dimension table and goes directly to the fact table, not through intermediate tables. If we normalized the dimension table we would create such intermediate tables, and that would not be efficient.
=======================================
Yes, what your manager told you is correct. Apart from this, we maintain hierarchies in these tables, and maintaining a hierarchy is pretty important in the DWH environment. For example, if there is a child table and a parent table, and both are kept in different tables, one has to join or query both tables every time to get the parent-child relation; if we have both child and parent in the same table we can always refer to them immediately. Similarly, if we have a hierarchy something like county > city > state > territory > division > region > nation and kept a different table for each level, it would be a waste of database space and we would also need to query all these tables every time. That is why we maintain hierarchies in dimension tables, and based on the business we decide whether to maintain them in the same table or in different tables.
=======================================
Hello everyone, I don't know the answer to this question, but I have to ask: how can we say that the dimension table is denormalized, when in a snowflake schema we normalize all the dimension tables? What would be your comment on this?
=======================================
I am a beginner to DW, but as I know, fact tables are denormalized and dimension tables are normalized. If I am wrong please correct me.
=======================================
De-normalization is basically the concept of keeping all the dimension hierarchies in a single dimension table. This causes fewer joins while retrieving data from dimensions and hence faster data retrieval. This is why dimensions in OLAP systems are de-normalized.
=======================================

128.Informatica - In a sequential batch, how can we stop a single session?
QUESTION #128 Submitted by: prasadns26
Hi, we can stop it using the pmcmd command, or in the monitor right-click on that particular session and select Stop. This will stop the current session and the sessions next to it.
Above answer was rated as good by the following members:
sn3508
=======================================
We have a task called event-wait; using that we can stop, and we start again using event-raise. This is as per my knowledge.
=======================================
Hi, we can stop it using the pmcmd command, or in the monitor right-click on that particular session and select Stop. This will stop the current session and the sessions next to it.
=======================================

129.Informatica - How do you handle decimal places while importing a flat file into Informatica?
QUESTION #129
No best answer available. Please pick the good answer available or submit your answer.
February 11, 2006 20:44:03 #1
rajendar
RE: How do you handle decimal places while importing a...
=======================================
While getting the data from a flat file in Informatica (i.e. importing data from a flat file), it will ask for the precision; just enter that.
=======================================
While importing the flat file, the flat file wizard helps in configuring the properties of the file: select the numeric column and just enter the precision value and the scale. Precision includes the scale. For example, if the number is 98888.654, enter precision as 8, scale as 3 and width as 10 for a fixed-width flat file.
=======================================
You can handle that by simply using the Source Analyzer window, going to the ports of that flat file definition and changing the precision and scale.
=======================================
Hi, while importing the flat file definition, just specify the scale for a numeric datatype. In the mapping, the flat file source supports only the number datatype (no decimal or integer). The SQ associated with that source will have a decimal datatype for that number port of the source: source -> number datatype port -> SQ -> decimal datatype. Integer is not supported, hence decimal takes care of it.
=======================================

130.Informatica - If you have four lookup tables in the workflow, how do you troubleshoot to improve performance?
QUESTION #130
No best answer available. Please pick the good answer available or submit your answer.
February 10, 2006 15:51:01 #1
swapna
Use shared cache
=======================================
When a workflow has multiple lookup tables, use a shared cache.
=======================================
There are many ways to improve a mapping which has multiple lookups:
1) We can create an index on the lookup table if we have permissions (staging area).
2) Divide the lookup mapping into two: (a) dedicate one to inserts, meaning source - target for new rows; only the new rows will come into the mapping and the process will be fast; (b) dedicate the second one to updates: source - target for existing rows; only the rows which already exist will come into the mapping.
3) We can increase the cache size of the lookup.
=======================================
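As a small sketch of point 1 above (the lookup table and column names are hypothetical), indexing the columns used in the lookup condition or in the lookup ORDER BY can speed up the lookup query considerably:

-- Index on the columns used in the lookup condition / ORDER BY
CREATE INDEX idx_cust_lkp ON customer_dim (cust_id, cust_name);
=======================================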

131.Informatica - How do I import VSAM files from source to target? Do I need a special plugin?
QUESTION #131
No best answer available. Please pick the good answer available or submit your answer.
February 13, 2006 08:52:18 #1
swati
RE: How do I import VSAM files from source to target. ...
=======================================
As far as my knowledge goes, by using the PowerExchange tool you convert the VSAM file to Oracle tables, then do the mapping as usual to the target table.
=======================================
Hi, in the Mapping Designer we have a direct option to import files from VSAM. Navigation: Sources > Import from File > File from COBOL.
Thanks
Farid khan Pathan.
=======================================
Yes, you will need PowerExchange. With the product you can read from and write to VSAM. I have used it to read VSAM from one mainframe platform and write to a different platform, and have worked on KSDS and ESDS file types. You will need the PowerExchange client on your platform and a PowerExchange listener on each of the mainframe platform(s) that you wish to work on.
=======================================
PowerExchange does not need to copy your VSAM file to Oracle unless you want to do that. It can do a direct read/write to VSAM.
=======================================

132.Informatica - Differences between Normalizer and normalization.
QUESTION #132
No best answer available. Please pick the good answer available or submit your answer.
March 08, 2006 06:03:58 #1
ravi kumar guturi
RE: Differences between Normalizer and Normalizer tran...
=======================================
Normalizer: it is a transformation mainly used for COBOL sources; it changes rows into columns and columns into rows.
Normalization: to remove redundancy and inconsistency.
=======================================

133.Informatica - What is an IQD file?
QUESTION #133
No best answer available. Please pick the good answer available or submit your answer.
February 27, 2006 05:41:52 #1
sathyanath.gopi Member Since: February 2006 Contribution: 2
RE: What is IQD file?
=======================================
An IQD file is nothing but an Impromptu Query Definition. This file is mainly used in the Cognos Impromptu tool: after creating an IMR (report), we save the IMR as an IQD file, which is then used while creating a cube in PowerPlay Transformer. In the data source type we select Impromptu Query Definition.
=======================================
An IQD file is nothing but an Impromptu Query Definition. This file is used for creating cubes in Cognos PowerPlay Transformer.
=======================================

134.Informatica - What is data merging, data cleansing, sampling?
QUESTION #134
No best answer available. Please pick the good answer available or submit your answer.
March 08, 2006 06:01:26 #1
ravi kumar guturi
RE: What is data merging, data cleansing, sampling?
=======================================
Simply:
Cleansing: to identify and remove redundancy and inconsistency.
Sampling: just sample the data by sending a subset of the data from source to target.
=======================================

135.Informatica - How to import an Oracle sequence into Informatica.
QUESTION #135
No best answer available. Please pick the good answer available or submit your answer.
February 15, 2006 05:39:04 #1
sunil kumar
RE: How to import oracle sequence into Informatica.
=======================================
Create one procedure and declare the sequence inside the procedure; finally call the procedure in Informatica with the help of the Stored Procedure transformation. If there is still any problem please contact me.
Thanks, good luck!
=======================================
Hi Sunil, I got a problem with this. Can you just tell me a procedure to generate sequence numbers in SQL? For example, if I give n employees it should generate the sequence, and I want to use the values in Informatica via a stored procedure and load them into the target. Can you help me with this?
Thanks
=======================================
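A minimal sketch of the stored-procedure approach described above, assuming an existing Oracle sequence named emp_seq (the names are hypothetical); the function can then be called from a Stored Procedure transformation:

-- Oracle PL/SQL wrapper around a sequence, callable from a
-- Stored Procedure transformation
CREATE OR REPLACE FUNCTION get_emp_seq RETURN NUMBER
IS
  v_next NUMBER;
BEGIN
  SELECT emp_seq.NEXTVAL INTO v_next FROM dual;
  RETURN v_next;
END;
/
=======================================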

136.Informatica - Without using the Update Strategy and session options, how can we update our target table?
QUESTION #136
No best answer available. Please pick the good answer available or submit your answer.
February 14, 2006 23:48:35 #1
Saritha
RE: With out using Updatestretagy and sessons options,...
=======================================
You can do this by using the update override in the target properties.
=======================================
Using the update override in the target option.
=======================================
In the session properties there is an option: insert, update, insert as update, update as update, and so on. By using this we can easily solve it.
=======================================
By default all the rows in the session are set with the insert flag. You can change this in the session general properties -- Treat source rows as: update -- so all the incoming rows will be set with the update flag; now you can update the rows in the target table.
=======================================
Hi, if your database is Teradata we can do it with a TPump or MLoad external loader. The update override in the target properties is used basically for updating the target table based on a non-key column, e.g. update by ename, which is not a key column in the EMP table. But if you use an Update Strategy or session-level properties, the target necessarily should have a primary key.
=======================================
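As a small sketch of the target update override mentioned above (the EMP target and its ports are hypothetical; :TU refers to the ports of the target definition):

-- Update override entered in the target definition's properties
UPDATE EMP
SET SAL = :TU.SAL,
    COMM = :TU.COMM
WHERE ENAME = :TU.ENAME
=======================================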

137.Informatica - Two relational tables are connected to a SQ transformation; what possible errors can be thrown?
QUESTION #137
No best answer available. Please pick the good answer available or submit your answer.
February 18, 2006 08:45:49 #1
geek_78 Member Since: February 2006 Contribution: 1
RE: Two relational tables are connected to SQ Trans,wh...
=======================================
We can connect two relational tables in one SQ transformation; no errors will occur.
Regards
R.Karthikeyan
=======================================
The only two requirements as far as I know are:
1. Both tables should have a primary key/foreign key relationship.
2. Both tables should be available in the same schema or same database.
=======================================

138.Informatica - What are partition points?
QUESTION #138 Submitted by: saritha
Partition points mark the thread boundaries in a source pipeline and divide the pipeline into stages.
Above answer was rated as good by the following members:
sn3508
=======================================
Partition points mark the thread boundaries in a source pipeline and divide the pipeline into stages.
=======================================

139.Informatica - What are cost-based and rule-based approaches and what is the difference?
QUESTION #139
No best answer available. Please pick the good answer available or submit your answer.
March 02, 2006 17:18:19 #1
Gayathri
RE: what are cost based and rule based approaches and ...
=======================================
Cost-based and rule-based approaches are optimization techniques used in databases when we need to optimize a SQL query. Basically Oracle provides two types of optimizers (indeed three, but we use only these two techniques because the third has some disadvantages). Whenever you process any SQL query in Oracle, what the Oracle engine internally does is read the query and decide the best possible way of executing it. In this process Oracle follows these optimization techniques:
1. Cost-based optimizer (CBO): if a SQL query can be executed in two different ways (say it has path 1 and path 2 for the same query), the CBO basically calculates the cost of each path, analyses which path has the lower execution cost, and then executes that path so that it can optimize the query execution.
2. Rule-based optimizer (RBO): this basically follows a fixed set of rules for executing a query; depending on the rules that apply, the optimizer runs the query.
Use: if the table you are trying to query has already been analysed, then Oracle will go with the CBO; if the table is not analysed, Oracle follows the RBO. For the first time, if the table is not analysed, Oracle will go with a full table scan.
=======================================
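For reference, a table is typically "analysed" by gathering optimizer statistics, for example with the classic statement below (the table name is hypothetical; on newer Oracle releases the DBMS_STATS package is preferred):

-- Gather statistics so the cost-based optimizer can be used
ANALYZE TABLE emp COMPUTE STATISTICS;
=======================================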

140.Informatica - What is a mystery dimension?
QUESTION #140
No best answer available. Please pick the good answer available or submit your answer.
March 05, 2006 23:55:59 #1
Reddy
RE: what is mystery dimention?
=======================================
Using a mystery dimension you maintain the mystery (miscellaneous) data in your project.
=======================================
Please explain clearly what is meant by a mystery dimension.
=======================================
Also known as junk dimensions: making sense of the rogue fields in your fact table. Please read the article http://www.intelligententerprise.com/000320/webhouse.jhtml
=======================================

141.Informatica - What is the difference between Informatica 7.1 and Ab Initio?
QUESTION #141
No best answer available. Please pick the good answer available or submit your answer.
February 24, 2006 01:25:58 #1
Niraj Kumar
RE: what is difference b/w Informatica 7.1 and Abiniti...
=======================================
In Ab Initio there is the concept of the co-operating system, which runs the mappings in a parallel fashion; this is not in Informatica.
=======================================
There is a lot of difference between Informatica and Ab Initio:
In Ab Initio we are using 3 kinds of parallelism, but Informatica uses 1 kind of parallelism.
In Ab Initio there is no scheduling option; we can schedule manually or with PL/SQL scripts, but Informatica contains 4 scheduling options.
Ab Initio contains the co-operating system, but Informatica does not.
Ramp-up time is very quick in Ab Initio compared with Informatica.
Ab Initio is more user-friendly than Informatica.
=======================================

142.Informatica - What is MicroStrategy? What is it used for? Can anyone explain it in detail?
QUESTION #142
No best answer available. Please pick the good answer available or submit your answer.
March 11, 2006 03:28:52 #1
kundan
RE: What is Micro Strategy? Why is it used for? Can an...
=======================================
MicroStrategy is again a BI tool, which is HOLAP. You can create two-dimensional reports and also cubes in it; it is basically a reporting tool. It has a full range of reporting on the web as well as on Windows.
=======================================

143.Informatica - Can I start and stop a single session in a concurrent batch?
QUESTION #143
No best answer available. Please pick the good answer available or submit your answer.
March 08, 2006 05:50:15 #1
ravi kumar guturi
RE: Can i start and stop single session in concurent b...
=======================================
Yes, sure. Just right-click on the particular session and go to the recovery option, or use event-wait and event-raise.
=======================================

144.Informatica - What is gap analysis?
QUESTION #144
No best answer available. Please pick the good answer available or submit your answer.
April 11, 2007 10:55:26 #1
vizaik Member Since: March 2007 Contribution: 30
RE: what is the gap analysis?
=======================================
For a project there will be:
1. BRD (Business Requirement Document) - BA
2. SSSD (Source System Study Document) - BA
The BRD consists of the requirements of the client; the SSSD consists of the source system study. Where the source given in the SSSD does not meet the requirements specified in the BRD, that is treated as gap analysis; in one word, the difference between 1 and 2 is called gap analysis.
=======================================

145.Informatica - What is the difference between stop and abort?
QUESTION #145
No best answer available. Please pick the good answer available or submit your answer.
March 02, 2006 15:17:45 #1
Sirisha
RE: what is the difference between stop and abort
=======================================
The PowerCenter Server handles the abort command for the Session task like the stop command, except it has a timeout period of 60 seconds. If the PowerCenter Server cannot finish processing and committing data within the timeout period, it kills the DTM process and terminates the session.
=======================================
Stop: if the session you want to stop is part of a batch, you must stop the batch; if the batch is part of a nested batch, stop the outermost batch.
Abort: you can issue the abort command; it is similar to the stop command except it has a 60-second timeout. If the server cannot finish processing and committing data within 60 seconds, it kills the DTM process and terminates the session.
=======================================
Stop: in this case the data query from the source databases is stopped immediately, but whatever data has already been loaded into the buffers continues through the transformations and loading. Abort: same as stop, but in this case the maximum time allowed for the buffered data is 60 seconds.
=======================================

146.Informatica - Can we run a group of sessions without using the Workflow Manager?
QUESTION #146
No best answer available. Please pick the good answer available or submit your answer.
March 05, 2006 23:48:38 #1
Reddy
RE: can we run a group of sessions without using workf...
=======================================
Yes, it is possible: using the pmcmd command, and without using the Workflow Manager, you can run the group of sessions. I give this answer as per my knowledge.
=======================================

147.Informatica - What is meant by complex mapping?
QUESTION #147
No best answer available. Please pick the good answer available or submit your answer.
March 13, 2006 02:46:29 #1
satyam_un Member Since: March 2006 Contribution: 5
RE: what is meant by complex mapping,
=======================================
Complex mapping means having more business rules.
=======================================
Complex mapping means it involves more logic and more business rules. Actually, in my project a complex mapping was this: in my bank project I was involved in constructing a data warehouse; there are many customers in the bank, and after taking loans they relocate to another place. I found it difficult to maintain both the previous and current addresses, so I used SCD type 2. This is a simple example of a complex mapping.
=======================================

148.Informatica - Explain the use of the Update Strategy transformation.
QUESTION #148
No best answer available. Please pick the good answer available or submit your answer.
March 22, 2006 00:00:52 #1
satyambabu
RE: explain use of update strategy transformation
=======================================
To maintain the history data and to maintain the most recent changed data.
=======================================
To flag source records as INSERT, DELETE, UPDATE or REJECT for the target database. The default flag is Insert. This is a must for incremental data loading.
=======================================

149.Informatica - What are mapping parameters and variables, and in which situations can we use them?
QUESTION #149
No best answer available. Please pick the good answer available or submit your answer.
March 16, 2006 06:27:48 #1
Girish
RE: what are mapping parameters and varibles in which ...
=======================================
Mapping parameters have a constant value throughout the session, whereas a mapping variable's value can change; the Informatica server saves the value in the repository and uses it the next time you run the session.
=======================================
If we need to change certain attributes of a mapping every time after the session is run, it will be very difficult to edit the mapping and then change the attribute. So we use mapping parameters and variables and define the values in a parameter file; then we can edit the parameter file to change the attribute values. This makes the process simple. Mapping parameter values remain constant; if we need to change the parameter value, then we need to edit the parameter file. But the value of mapping variables can be changed by using variable functions: if we need to increment the attribute value by 1 after every session run, then we can use mapping variables. With a mapping parameter we would need to manually edit the attribute value in the parameter file after every session run.
=======================================
How can you edit the parameter file? Once you set up a mapping variable, how can you define it in a parameter file?
=======================================

150.Informatica - What is a worklet, what is the use of a worklet, and in which situations can we use it?
QUESTION #150 Submitted by: SSekar
A worklet is a set of tasks. If a certain set of tasks has to be reused in many workflows, then we use worklets. To execute a worklet, it has to be placed inside a workflow.
The use of a worklet in a workflow is similar to the use of a mapplet in a mapping.
Above answer was rated as good by the following members:
sn3508
=======================================
A set of workflow tasks is called a worklet. Workflow tasks means: 1) timer 2) decision 3) command 4) event-wait 5) event-raise 6) mail, etc. We use it in different situations.
=======================================
A worklet is a set of tasks. If a certain set of tasks has to be reused in many workflows, then we use worklets. To execute a worklet, it has to be placed inside a workflow. The use of a worklet in a workflow is similar to the use of a mapplet in a mapping.
=======================================
A worklet is a reusable workflow. It might contain more than one task in it. We can use these worklets in other workflows.
=======================================
Besides the reusability of a worklet as mentioned above, we can also use a worklet to group related sessions together in a very big workflow. Suppose we have to extract a file and then load a fact table in the workflow; we can use one worklet to load/update the dimensions.
=======================================

151.Informatica - How do you configure a mapping in Informatica?
QUESTION #151
No best answer available. Please pick the good answer available or submit your answer.
March 17, 2006 05:34:39 #1
suresh
RE: How do you configure mapping in informatica
=======================================
You should configure the mapping with the least number of transformations and expressions to do the most amount of work possible. You should minimize the amount of data moved by deleting unnecessary links between transformations.
For transformations that use a data cache (such as Aggregator, Joiner, Rank and Lookup transformations), limit connected input/output or output ports. Limiting the number of connected input/output or output ports reduces the amount of data the transformations store in the data cache.
You can also perform the following tasks to optimize the mapping:
- Configure single-pass reading.
- Optimize datatype conversions.
- Eliminate transformation errors.
- Optimize transformations.
- Optimize expressions.
=======================================

152.Informatica - If I use the session bulk loading option, can I then perform recovery on the session?
QUESTION #152 Submitted by: SSekar
If the session is configured to use bulk mode, it will not write recovery information to the recovery tables, so bulk loading will not perform the recovery as required.
Above answer was rated as good by the following members:
sn3508
=======================================
It's not possible.
=======================================
If the session is configured to use bulk mode, it will not write recovery information to the recovery tables, so bulk loading will not perform the recovery as required.
=======================================
No, because in a bulk load you won't create redo log entries, whereas in a normal load we do; but with a bulk load, session performance increases.
=======================================

153.Informatica - What is the difference between COM and DCOM?
QUESTION #153
No best answer available. Please pick the good answer available or submit your answer.
October 05, 2006 01:23:48 #1
balaji
RE: what is difference between COM & DCOM?
=======================================
Hi, COM is a technology developed by Microsoft which is based on OO design. COM components expose their interfaces at an interface pointer, where the client accesses the component's interface. DCOM is the protocol which enables software components on different machines to communicate with each other through a network.
=======================================


154.Informatica - What are the enhancements made in Informatica version 7.1.1 compared to version 6.2.2?
QUESTION #154
No best answer available. Please pick the good answer available or submit your answer.
April 04, 2006 01:07:29 #1
sn3508 Member Since: April 2006 Contribution: 20
RE: what are the enhancements made to Informatica 7.1....
=======================================
I'm a newbie; correct me if I'm wrong. In 7+ versions:
- we can look up a flat file
- Union and Custom transformations
- there is a propagate option, i.e. if we change the datatype of a field, all the linked columns will reflect that change
- we can write to an XML target
- we can use up to 64 partitions
=======================================
1. Union and Custom transformations
2. Lookup on flat file
3. We can use the pmcmd command
4. We can export independent and dependent repository objects
5. Version control
6. Data profiling
7. Support for 64-bit architecture
8. LDAP authentication
=======================================

155.Informatica - How do you create a mapping using multiple Lookup transformations?
QUESTION #155
No best answer available. Please pick the good answer available or submit your answer.
March 30, 2006 16:26:57 #1
Sri
RE: how do you create a mapping using multiple lookup ...
=======================================
Use multiple lookups in the mapping.
=======================================
Use an unconnected lookup if the same lookup repeats multiple times.
=======================================
Do you mean only Lookup transformations and not any other transformations? Please be clear. If so, one scenario is that we can use multiple connected Lookup transformations depending upon the target: if you have different warehouse keys in your target table and your source table only has its own columns but not the warehouse keys, then in order to connect a column to a warehouse-key column (e.g. sales_branch -> wh_sales_branch) we use multiple lookups, depending upon the targets.
=======================================

156.Informatica - What is the exact meaning of domain?
QUESTION #156
No best answer available. Please pick the good answer available or submit your answer.
May 01, 2006 16:51:12 #1
kalyan
RE: what is the exact meaning of domain?
=======================================
A particular environment, or a name that identifies one or more IP addresses. Examples: gov - government agencies, edu - educational institutions, org - organizations (nonprofit), mil - military, com - commercial business, net - network organizations, ca - Canada, th - Thailand, in - India.
=======================================
A domain is nothing but complete information on a particular subject area, like the sales domain, telecom domain, etc.
=======================================
Domain in Informatica means a central Global repository (GDR) along with the Local repositories (LDR) registered to this GDR. This is possible only in PowerCenter and not in PowerMart.
=======================================
In database parlance, you can define a domain as the set of all possible permissible values for any attribute. For example, the domain for the attribute Credit Card No# consists of all possible valid 16-digit numbers.
Thanks.
=======================================

157.Informatica - what are the hierarchies in DWH? QUESTION #157
No best answer available. Please pick the good answer available or submit your answer.
May 01, 2006 16:37:54 #1
kalyan
RE: what is the hierarchies in DWH
Data sources ---> Data acquisition ---> Warehouse ---> Front-end tools ---> Metadata management ---> Data warehouse operation management
=======================================
A hierarchy in a DWH is nothing but an ordered series of related dimension levels grouped together to perform multidimensional analysis.
=======================================

158.Informatica - Difference between Rank and Dense Rank? QUESTION #158 Submitted by: sm1506
Rank:


1
2 <-- 2nd position
2 <-- 3rd position
4
5
The same rank is assigned to the same totals/numbers; the next rank follows the position. Golf games usually rank this way. This is usually a gold ranking.
Dense Rank:
1
2 <-- 2nd position
2 <-- 3rd position

3
4
The same ranks are assigned to the same totals/numbers/names; the next rank follows the serial number.
Above answer was rated as good by the following members: sn3508


=======================================
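For reference, the same behaviour can be seen with the Oracle analytic functions. This is a minimal sketch; the table and column names are made up for illustration and are not part of the original answer.

-- RANK() leaves gaps after ties (1, 2, 2, 4, 5);
-- DENSE_RANK() does not (1, 2, 2, 3, 4)
SELECT player,
       score,
       RANK()       OVER (ORDER BY score DESC) AS rnk,
       DENSE_RANK() OVER (ORDER BY score DESC) AS dense_rnk
  FROM golf_scores;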

159.Informatica - can anyone explain about incremental aggregation with an example? QUESTION #159
No best answer available. Please pick the good answer available or submit your answer.
April 09, 2006 21:15:24 #1

maverickwild Member Since: November 2005 Contribution: 3
RE: can anyone explain about incremental aggregation w...
When you use an Aggregator transformation, it creates index and data caches to store the data: 1. of the group-by columns, 2. of the aggregate columns. Incremental aggregation is used when we have historical data in place that will be used in aggregation. Incremental aggregation uses the cache, which contains the historical data; for each group-by column value already present in the cache it adds the incoming data value to its corresponding data cache value and outputs the row. If an incoming value has no match in the index cache, the new values for the group-by and output ports are inserted into the cache.
=======================================
Incremental aggregation is used specifically to tune the performance of the aggregator. It captures the changes each time (incrementally) you run the session and then applies the aggregation function only to the changed rows and not to the entire source. This improves performance because you are not reading the entire source each time you run the session.
=======================================
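A rough SQL analogue of the idea described above (the table names are illustrative; inside Informatica the running totals are kept in the session's aggregate cache rather than in a summary table):

-- add the newly arrived rows to the stored totals instead of re-reading the whole source
MERGE INTO sales_summary s
USING (SELECT product_id, SUM(amount) AS new_amount
         FROM sales_new          -- only the rows loaded since the last run
        GROUP BY product_id) n
   ON (s.product_id = n.product_id)
 WHEN MATCHED THEN UPDATE SET s.total_amount = s.total_amount + n.new_amount
 WHEN NOT MATCHED THEN INSERT (product_id, total_amount)
                       VALUES (n.product_id, n.new_amount);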


160.Informatica - What is meant by Junk Attribute in Informatica? QUESTION #160
No best answer available. Please pick the good answer available or submit your answer.
April 17, 2006 10:10:23 #1
raghavendra
RE: What is meant by Junk Attribute in Informatica?

Junk Dimension: a dimension is called a junk dimension if it contains attributes that are rarely changed or modified. For example, in the banking domain we can group four attributes into a junk dimension, such as tput flag, tcmp flag, del flag and advance flag from the Overall_Transaction_master table; all these attributes can be part of a junk dimension (see the sketch after these answers).
Thanks and regards, raghavendra
=======================================
Hi,
In the requirement collection phase all the attributes that are likely to be used in any dimension are gathered. While creating a dimension we use all the related attributes of that dimension from the gathered list. At the end a dimension is created with all the left-over attributes, which is usually called a JUNK dimension, and its attributes are called junk attributes.
=======================================
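Following the banking example above, a minimal sketch of what such a junk dimension table might look like (the table name and key column are illustrative assumptions; the flag columns are the ones named in the answer):

-- one surrogate key plus the low-cardinality flags grouped into a single junk dimension
CREATE TABLE junk_dim (
  junk_key     NUMBER PRIMARY KEY,
  tput_flag    CHAR(1),
  tcmp_flag    CHAR(1),
  del_flag     CHAR(1),
  advance_flag CHAR(1)
);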

161.Informatica - Informatica Live Interview Questions QUESTION #161
Here are some of the interview questions I could not answer; anybody can help by giving answers for the others also. Thanks in advance.
Explain grouped cross tab?
Explain reference cursor


What are parallel queries and query hints
What is metadata and system catalog
What is a factless fact schema
What is a conformed dimension
Which kind of index is preferred in DWH
Why do we use a DSS database for OLAP tools
No best answer available. Please pick the good answer available or submit your answer.
April 17, 2006 07:27:07 #1
binoy_pa Member Since: April 2006 Contribution: 5
RE: Informatica Live Interview Questions

=======================================
A conformed dimension is a dimension that is shared by two (or more) fact tables.
Factless means a fact table without measures; it contains only foreign keys. There are two types of factless fact tables: one is event tracking and the other is a coverage table.
Bitmap indexes are preferred in data warehousing.
Metadata is data about data; everything is stored here, for example mappings, sessions, privileges and other data. In Informatica we can see the metadata in the repository.
The system catalog is what we use in Cognos; it also contains data tables, privileges, predefined filters etc. Using this catalog we generate reports.
A grouped cross tab is a type of report in Cognos where we have to assign 3 measures to get the result.
=======================================
Hi Bin,
I doubt your answer about the grouped cross tab where you said 3 measures are to be specified, which I feel is wrong. I think a grouped cross tab has only one measure, but the side and row headers are grouped, like:

            India          China
        Mah  | Goa      XYZ | PQR
2000    20K  | 30K      45K | 55K
2001    39K  | 60K      34K | 66K

Here the cross tab is grouped on Country and then State.


Similarly we can go further and drill year down to quarters. This is known as a grouped cross tab.

=======================================
A reference cursor is a cursor that is not declared in the declaration section but in the executable section, where we can supply the table name dynamically so that the cursor can fetch the data from that table.
=======================================
A grouped cross tab is a single report that contains a number of cross-tab reports based on the grouped items. For example, here the countries are the grouped items:

INDIA
              M1     M2
Bangalore     542    542
Hyderabad     255    458
Chennai       45     254

USA
              M1     M2
LA            578    5876
Chicago       4785   546
Washington DC 548    556

PAKISTAN
              M1     M2
Lahore        457    875
Karachi       458    687
Islamabad     7894   64

Thanks

=======================================
Hi,
The rest of the answers were given by friends earlier.
DSS -> Decision Support System. The purpose of a DWH is to provide users data through which they can make their critical business decisions. A DSS database is nothing but a DWH. OLAP tools obviously use data from a DWH, which is transformed to generate reports. These reports are used by the users/analysts to extract strategic information which helps in decision making.
=======================================
The default index type for a DWH is a bitmap (non-unique) index.
=======================================

162.Informatica - how do we remove the staging area? QUESTION #162
No best answer available. Please pick the good answer available or submit your answer.
June 08, 2006 12:39:29 #1
Hanu
RE: how do we remove the staging area

Are you talking about the DW or Informatica? If you don't want any staging area, don't create a staging DB; load data directly into the target.
Hanu.
=======================================
Hi,
This question is logically not correct. A staging area is just a set of intermediate tables. You can create or maintain these tables in the same database as your DWH or in a different DB. These tables are used to store data from the source, which will be cleaned, transformed and put through some business logic. Once the source data has gone through the above process, data from staging is populated to the final fact table through a simple one-to-one mapping.
=======================================
Hi,
1) At the DB level we can simply DROP the staging table.
2) Then create the target table at the DB level.
3) Directly LOAD the records into the target table.
NOTE: It is recommended to use a staging area.
=======================================


163.Informatica - what is polling? QUESTION #163 It displays update information about the session in the monitor window.
No best answer available. Please pick the good answer available or submit your answer.
May 01, 2006 16:32:38 #1
kalyan
RE: what is polling?

=======================================
It displays the updated information about the session in the monitor window. The monitor window displays the status of each session when you poll the Informatica server.
=======================================

164.Informatica - What is Transaction? QUESTION #164
No best answer available. Please pick the good answer available or submit your answer.
April 14, 2006 09:08:18 #1
vishali
RE: What is Transaction?
A transaction can be defined as a DML operation, meaning it can be an insertion, modification or deletion of data performed by users/analysts/applications.
=======================================
A transaction is nothing but changing from one window to another window during a process.
=======================================
A transaction is a set of rows bound by a commit or rollback.
=======================================
Hi,
A transaction is any event that indicates some action. In DB terms, any committed change that occurred in the database is said to be a transaction.
=======================================


165.Informatica - what happens if you try to create a shortcut to a non-shared folder? QUESTION #165
No best answer available. Please pick the good answer available or submit your answer.
April 14, 2006 14:59:17 #1
sunil_reddy Member Since: April 2006 Contribution: 2
RE: what happens if you try to create a shortcut to a ...
It only creates a copy of it.
=======================================

166.Informatica - In a joiner transformation, you should specify the source with fewer rows as the master source. Why? QUESTION #166
No best answer available. Please pick the good answer available or submit your answer.
April 14, 2006 09:02:38 #1
vishali
RE: In a joiner trasformation, you should specify the ...
In a Joiner transformation the Informatica server reads all the records from the master source and builds index and data caches based on the master table rows. After building the caches the Joiner transformation reads records from the detail source and performs the joins.
=======================================
The Joiner transformation compares each row of the master source against the detail source. The fewer unique rows in the master, the fewer iterations of the join comparison occur, which speeds up the join process.
=======================================

167.Informatica - Where is the cache stored in informatica? QUESTION #167
No best answer available. Please pick the good answer available or submit your answer.
April 15, 2006 02:14:01 #1

shekar25_g Member Since: April 2006 Contribution: 2
RE: Where is the cache stored in informatica?
The cache in Informatica is stored on the Informatica server.
=======================================
Hi,
The cache is stored in the Informatica server memory, and overflowed data is stored on disk in file format; these files are automatically deleted after the successful completion of the session run. If you want to keep that data you have to use a persistent cache.
=======================================

168.Informatica - can batches be copied/stopped from server manager? QUESTION #168
No best answer available. Please pick the good answer available or submit your answer.
May 08, 2006 05:24:58 #1
MOOTATI RAGHAVENDROA REDDY
RE: can batches be copied/stopped from server manager?...
Yes, we can stop the batches using the server manager or the pmcmd command.


=======================================

169.Informatica - what is rank transformation? where can we use this transformation? QUESTION #169
No best answer available. Please pick the good answer available or submit your answer.
April 18, 2006 01:56:38 #1
madhan

RE: what is rank transformation? where can we use this ...
The Rank transformation is used to find the top or bottom ranked rows. For example, if we have a sales table where many employees sell the same product and we need to find the first 5 or 10 employees who sell the most products, we can go for the Rank transformation.
=======================================
It is used to filter the data from the top/from the bottom according to a condition.
=======================================
To arrange records in hierarchical order and to select the TOP or BOTTOM records. It is similar to the START WITH and CONNECT BY PRIOR clauses.
=======================================
It is an active transformation which is used to identify the top and bottom values based on numerics. By default it creates a RANKINDEX port to calculate the rank.
=======================================

170.Informatica - Can Informatica load heterogeneous targets from heterogeneous sources? QUESTION #170
No best answer available. Please pick the good answer available or submit your answer.


April 26, 2006 01:22:19 #1
Anant
RE: Can Informatica load heterogeneous targets from he...

Yes it can. For example, flat file and relational sources can be joined in the mapping, and later flat file and relational targets can be loaded.
=======================================
Yes, Informatica can load data from heterogeneous sources to heterogeneous targets.
=======================================

171.Informatica - how do you load the time dimension? QUESTION #171
No best answer available. Please pick the good answer available or submit your answer.
April 25, 2006 08:32:33 #1
Appadu Dora P
RE: how do you load the time dimension.
A time dimension is generally loaded manually, using PL/SQL, shell scripts, Pro*C etc.
=======================================
Create a procedure to load data into the time dimension. The procedure needs to run only once to populate all the data. For example, the code below fills it up till 2015. You can modify the code to suit the fields in your table.

create or replace procedure QISODS.Insert_W_DAY_D_PR as
  LastSeqID number default 0;
  loaddate  Date   default to_date('12/31/1979', 'mm/dd/yyyy');
begin
  Loop
    LastSeqID := LastSeqID + 1;
    loaddate  := loaddate + 1;
    INSERT into QISODS.W_DAY_D values(
      LastSeqID,
      Trunc(loaddate),
      -- half of the year (1 or 2)
      Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
      TO_NUMBER(TO_CHAR(loaddate, 'MM')),
      TO_NUMBER(TO_CHAR(loaddate, 'Q')),
      -- week of the year
      trunc((ROUND(TO_NUMBER(to_char(loaddate, 'DDD'))) +
             ROUND(TO_NUMBER(to_char(trunc(loaddate, 'YYYY'), 'D'))) + 5) / 7),
      TO_NUMBER(TO_CHAR(loaddate, 'YYYY')),
      TO_NUMBER(TO_CHAR(loaddate, 'DD')),
      TO_NUMBER(TO_CHAR(loaddate, 'D')),
      TO_NUMBER(TO_CHAR(loaddate, 'DDD')),
      1, 1, 1, 1, 1,  -- five constant flag columns in the original
      TO_NUMBER(TO_CHAR(loaddate, 'J')),
      ((TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713) * 12) + TO_NUMBER(TO_CHAR(loaddate, 'MM')),
      ((TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713) * 4) + TO_NUMBER(TO_CHAR(loaddate, 'Q')),
      TO_NUMBER(TO_CHAR(loaddate, 'J')) / 7,
      TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713,
      TO_CHAR(loaddate, 'Day'),
      TO_CHAR(loaddate, 'Month'),
      Decode(To_Char(loaddate, 'D'), '7', 'weekend', '6', 'weekend', 'weekday'),
      Trunc(loaddate, 'DAY') + 1,
      Decode(Last_Day(loaddate), loaddate, 'y', 'n'),
      to_char(loaddate, 'YYYYMM'),
      to_char(loaddate, 'YYYY') || ' Half' ||
        Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
      TO_CHAR(loaddate, 'YYYY / MM'),
      TO_CHAR(loaddate, 'YYYY') || ' Q ' || TRUNC(TO_NUMBER(TO_CHAR(loaddate, 'Q'))),
      TO_CHAR(loaddate, 'YYYY') || ' Week' || TRUNC(TO_NUMBER(TO_CHAR(loaddate, 'WW'))),
      TO_CHAR(loaddate, 'YYYY'));
    If loaddate = to_Date('12/31/2015', 'mm/dd/yyyy') Then
      Exit;
    End If;
  End Loop;
  commit;
end Insert_W_DAY_D_PR;
=======================================
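Once the procedure above compiles, it only needs to be run once; a minimal call from any SQL client (assuming the QISODS schema used in the answer) would be:

-- one-time population of the time dimension
BEGIN
  QISODS.Insert_W_DAY_D_PR;
END;
/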

172.Informatica - what is hash table informatica? QUESTION #172
No best answer available. Please pick the good answer available or submit your answer.
May 03, 2006 15:13:30 #1
uma bojja
RE: what is hash table informatica?
Hash partitioning is a type of partitioning supported by Informatica where hash user keys are specified. Can you please explain more on this?
=======================================
I do not know the exact answer, uma, but I am telling what I know. A hash table is used to extract the data through the Java Virtual Machine. If you know more about this please send it to me.
=======================================
Hash partitions are somewhat similar to database partitions. This allows the user to partition the data, by range, that is fetched from the source. This is handy while handling partitioned tables.
--Kr
=======================================
Use it when you want the Informatica Server to distribute rows to the partitions by group.
=======================================


In hash partitioning the Informatica Server uses a hash function to group rows of data among partitions.

The Informatica Server groups the data based on a partition key. Use hash partitioning when you want the Informatica Server to distribute rows to the partitions by group, for example when you need to sort items by item ID but you do not know how many items have a particular ID number.
cheers karthik
=======================================
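As a rough database-level analogy to the grouping described above (this is plain Oracle DDL, not Informatica session configuration; the table and column names are made up for illustration):

-- rows are distributed across 4 partitions by applying a hash function to ITEM_ID,
-- so all rows with the same ITEM_ID land in the same partition
CREATE TABLE items_stg (
  item_id   NUMBER,
  item_name VARCHAR2(100),
  qty       NUMBER
)
PARTITION BY HASH (item_id)
PARTITIONS 4;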

173.Informatica - What is meant by EDW? QUESTION #173
No best answer available. Please pick the good answer available or submit your answer.
May 04, 2006 10:06:21 #1
Uma Bojja Member Since: May 2006 Contribution: 7
RE: What is meant by EDW?
EDW: it is a big data warehouse, or centralized data warehousing, or the old style of warehouse. It is a single enterprise data warehouse (EDW) with no associated data marts or operational data store (ODS) systems.
=======================================
If the warehouse is built across a particular vertical of the company it is called an enterprise data warehouse. It is limited to that particular vertical. For example, if the warehouse is built across the sales vertical then it is termed the EDW for the sales hierarchy.

=======================================
EDW is Enterprise Data Warehouse, which means it is a centralised DW for the whole organization. This approach is the Inmon approach, which relies on having a single centralised warehouse, whereas the Kimball approach says to have separate data marts for each vertical/department.
Advantages of having an EDW:
1. Global view of the data
2. Same source of data for all the users across the organization
3. Able to perform consistent analysis on a single data warehouse
The drawbacks to overcome are the time it takes to develop and the management that is required to build a centralised database.
Thanks
Yugandhar
=======================================

174.Informatica - how to load the data from PeopleSoft HRM to PeopleSoft ERM using Informatica? QUESTION #174
No best answer available. Please pick the good answer available or submit your answer.
May 08, 2006 14:00:35 #1
Uma Bojja Member Since: May 2006 Contribution: 7
RE: how to load the data from people soft hrm to peopl...

The following are necessary:
1. PowerConnect license
2. Import the source and target from PeopleSoft using ODBC connections
3. Define a connection under the Application Connection Browser for the PeopleSoft source/target in the Workflow Manager, select the proper connection (PeopleSoft with Oracle, Sybase, DB2 or Informix) and execute it like a normal session.
=======================================

175.Informatica - what are the measure objects? QUESTION #175


No best answer available. Please pick the good answer available or submit your answer.
May 15, 2006 00:50:56 #1
karthikeyan
RE: what are the measure objects
Aggregate calculations like sum, avg, max and min - these are the measure objects.
=======================================

176.Informatica - what is the diff b/w STOP & ABORT in INFORMATICA sess level? QUESTION #176
No best answer available. Please pick the good answer available or submit your answer.

May 18, 2006 05:13:07 #1
rkarthikeyan
RE: what is the diff b/w STOP & ABORT in INFORMATICA s...
Stop: we can restart the session.
Abort: we can't restart the session; we should truncate the whole pipeline and after that start the session.
=======================================
Stop: after issuing stop, the PowerCenter Server processes all those records which it got from the source qualifier and writes them to the target.
Abort: it works in the same way as stop, but there is a time-out period of 60 seconds.
=======================================

177.Informatica - what is a surrogate key? In your project in which situation have you used it? Explain with an example. QUESTION #177


No best answer available. Please pick the good answer available or submit your answer.
May 22, 2006 09:14:55 #1
afzal
RE: what is surrogatekey ? In ur project in which situ...

A surrogate key is a system-generated/artificial key/sequence number; in other words, a surrogate key is a substitute for the natural primary key. It is just a unique identifier or number for each row that can be used as the primary key of the table. The only requirement for a surrogate primary key is that it is unique for each row in the table. It is useful because the natural primary key (i.e. Customer Number in the Customer table) can change, and this makes updates more difficult. In my project I felt that the primary reason for surrogate keys was to record the changing context of the dimension attributes (particularly for SCDs). Another reason for them being integers is that integer joins are faster. Unlike other...
=======================================
A surrogate key is a unique identifier for each row; it can be used as a primary key for the DWH. The DWH does not depend on primary keys generated by OLTP systems for internally identifying the records. When a new record is inserted into the DWH, primary keys are automatically generated; such keys are called surrogate keys.
Advantages:
1. A flexible mechanism for handling SCDs.
2. We can save substantial storage space with integer-valued surrogate keys.
=======================================
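A minimal Oracle sketch of generating surrogate keys with a sequence (the object names are illustrative assumptions; inside Informatica the same job is usually done with a Sequence Generator transformation):

-- surrogate key generated from a sequence, independent of the OLTP natural key
CREATE SEQUENCE customer_dim_seq START WITH 1 INCREMENT BY 1;

INSERT INTO customer_dim (customer_key, customer_number, customer_name, state)
VALUES (customer_dim_seq.NEXTVAL, 'C-1001', 'Christina', 'Illinois');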

178.Informatica - what is Partitioning? Where can we use partitions? What are the advantages? Is it necessary? QUESTION #178


No best answer available. Please pick the good answer available or submit your answer.
May 22, 2006 08:51:38 #1
afzal
RE: what is Partitioning ? where we can use Partition?...
The Partitioning Option increases PowerCenter's performance through parallel data processing; this option provides a thread-based architecture and automatic data partitioning that optimizes parallel processing on multiprocessor and grid-based hardware environments.
=======================================
Partitions are used to optimize session performance. We can select the partition type in the session properties.
Types:
default ---- pass-through partition

key range partition
round robin partition
hash partition
=======================================
Hi,
In Informatica we can tune performance at 5 different levels: at the source level, target level, mapping level, session level and at the network level.
To tune the performance at the session level we go for partitioning, and again we have 4 types of partitioning: pass-through, hash, round robin and key range. Pass-through is the default one.
In hash we again have 2 types, that is user-defined and automatic.
Round robin cannot be applied at the source level; it can be used at some transformation level.
Key range can be applied at both source and target levels.
If you want me to explain each partitioning type in detail, I can.
=======================================
Hi nimmi, please explain complete partitioning to me. I need a clear picture: what transmission it will restrict, how it will restrict it, and where we have to specify it.
Thanks
madhu.
=======================================

179.Informatica - how can we eliminate duplicate rows from a flat file? QUESTION #179
No best answer available. Please pick the good answer available or submit your answer.
May 22, 2006 04:26:30 #1
Karthikeya
RE: hwo can we eliminate duplicate rows from flat file...
Keep an Aggregator between the source qualifier and the target and choose the key field as group-by; it will eliminate the duplicate records.
=======================================
Hi, before loading to the target use an Aggregator transformation and make use of the group-by function to eliminate the duplicates on columns.
Nanda
=======================================
Use a Sorter transformation. When you configure the Sorter transformation to treat output rows as distinct, it configures all ports as part of the sort key. It therefore discards duplicate rows compared during the sort operation.
=======================================
Hi, before loading to the target use an Aggregator transformation and use the group-by clause to eliminate the duplicates in columns.
Nanda
=======================================
Use a Sorter transformation and select the distinct option; duplicate rows will be eliminated.
=======================================
If you want to delete the duplicate rows in flat files then we go for a Rank transformation or an Oracle external procedure transformation: select all the group-by ports and select one field for rank; the duplicates are easily removed now.
=======================================

Hi,
Using a Sorter transformation we can eliminate the duplicate rows from a flat file.
Thanks
N.Sai
=======================================
To eliminate the duplicates in flat files we have the distinct property in the Sorter transformation. If we enable that property it will automatically remove duplicate rows in flat files.
=======================================

180.Informatica - How to Generate the Metadata Reports in Informatica? QUESTION #180
No best answer available. Please pick the good answer available or submit your answer.
June 01, 2006 07:27:14 #1
balanagdara Member Since: April 2006 Contribution: 4
RE: How to Generate the Metadata Reports in Informatic...
Hi Venkatesan,
You can run PowerCenter Metadata Reporter from a browser on any workstation.
Bala Dara
=======================================
Hi,
You can run PowerCenter Metadata Reporter from a browser on any workstation, even a workstation that does not have the PowerCenter tools installed.

Bala Dara
=======================================


Hey Bala, can you be more specific about how to generate the metadata report in Informatica?
=======================================
Yes, we can generate reports using Metadata Reporter. It is a web-based application used only for creating metadata reports in Informatica. Using Metadata Reporter we can connect to the repository and get the metadata without knowledge of SQL or other technical skills.
I think this answers it; if it does, reply to me.
kumar
=======================================

181.Informatica - Can you tell me how to go for SCDs and their types? Where do we use them mostly? QUESTION #181
No best answer available. Please pick the good answer available or submit your answer.
June 08, 2006 08:53:46 #1
priyamayee Member Since: June 2006 Contribution: 3
RE: Can u tell me how to go for SCD's and its types.Wh...

Hi, it depends on the business requirement you have. We use SCDs to maintain history (changes/updates) in the dimensions. Each SCD type has its own way of storing/updating/maintaining the history. For example, a customer dimension is an SCD because the customer can change his/her address, contact name or anything. These types of changes in dimensions are known as SCDs, and we use the 3 different SCD types to handle these changes and the history.
Bye,
Mayee
=======================================
The Slowly Changing Dimension problem is a common one particular to data warehousing. In a nutshell, this applies to cases where the attribute for a record varies over time. We give an example below: Christina is a customer with ABC Inc. She first lived in Chicago, Illinois. So the original entry in the customer lookup table has the following record:

Customer Key | Name      | State
1001         | Christina | Illinois

At a later date she moved to Los Angeles, California in January 2003. How should ABC Inc. now modify its customer table to reflect this change? This is the Slowly Changing Dimension problem. There are in general three ways to solve this type of problem, and they are categorized as follows:

In Type 1 Slowly Changing Dimension the new information simply overwrites the original information. In other words, no history is kept. In our example, recall we originally have the following table:

Customer Key | Name      | State
1001         | Christina | Illinois

After Christina moved from Illinois to California, the new information replaces the old record and we have the following table:

Customer Key | Name      | State
1001         | Christina | California

Advantages: - This is the easiest way to handle the Slowly Changing Dimension problem, since there is no need to keep track of the old information.
Disadvantages: - All history is lost. By applying this methodology it is not possible to trace back in history. For example, in this case the company would not be able to know that Christina lived in Illinois before.
Usage: About 50% of the time.
When to use Type 1: Type 1 slowly changing dimension should be used when it is not necessary for the data warehouse to keep track of historical changes.

In Type 2 Slowly Changing Dimension a new record is added to the table to represent the new information. Therefore both the original and the new record will be present. The new record gets its own primary key. In our example, recall we originally have the following table:

Customer Key | Name      | State
1001         | Christina | Illinois

After Christina moved from Illinois to California, we add the new information as a new row into the table:

Customer Key | Name      | State
1001         | Christina | Illinois
1005         | Christina | California

Advantages: - This allows us to accurately keep all historical information.
Disadvantages: - This will cause the size of the table to grow fast. In cases where the number of rows for the table is very high to start with, storage and performance can become a concern. - This necessarily complicates the ETL process.
Usage: About 50% of the time.
When to use Type 2: Type 2 slowly changing dimension should be used when it is necessary for the data warehouse to track historical changes.

In Type 3 Slowly Changing Dimension there will be two columns to indicate the particular attribute of interest, one indicating the original value and one indicating the current value. There will also be a column that indicates when the current value becomes active. In our example, recall we originally have the following table:

Customer Key | Name      | State
1001         | Christina | Illinois

To accommodate Type 3 Slowly Changing Dimension we will now have the following columns: Customer Key, Name, Original State, Current State, Effective Date. After Christina moved from Illinois to California, the original information gets updated and we have the following table (assuming the effective date of change is January 15, 2003):

Customer Key | Name      | Original State | Current State | Effective Date
1001         | Christina | Illinois       | California    | 15-JAN-2003

Advantages: - This does not increase the size of the table, since the new information is updated in place. - This allows us to keep some part of history.

Disadvantages: - Type 3 will not be able to keep all history where an attribute is changed more than once. For example, if Christina later moves to Texas on December 15, 2003, the California information will be lost.
Usage: Type 3 is rarely used in actual practice.
When to use Type 3: Type III slowly changing dimension should only be used when it is necessary for the data warehouse to track historical changes and when such changes will only occur a finite number of times.
=======================================
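A minimal SQL sketch of the three approaches using the customer example above (the column names follow the example; inside Informatica this logic is typically built with Lookup and Update Strategy transformations or the SCD mapping wizards):

-- Type 1: overwrite, no history kept
UPDATE customer_dim
   SET state = 'California'
 WHERE customer_key = 1001;

-- Type 2: add a new row with a new surrogate key; both versions remain
INSERT INTO customer_dim (customer_key, name, state)
VALUES (1005, 'Christina', 'California');

-- Type 3: keep original and current values in separate columns
UPDATE customer_dim
   SET current_state  = 'California',
       effective_date = DATE '2003-01-15'
 WHERE customer_key = 1001;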


182.Informatica - How to export mappings to the production environment? QUESTION #182
No best answer available. Please pick the good answer available or submit your answer.
June 13, 2006 19:15:18 #1
UmaBojja
RE: How to export mappings to the production environme...
In the Designer, go to the main menu and you can see the export/import options. Import the exported mapping into the production repository with the replace option.
Thanks
Uma
=======================================

183.Informatica - how will you create a header and footer in the target using Informatica? QUESTION #183
No best answer available. Please pick the good answer available or submit your answer.
June 13, 2006 19:05:25 #1
UmaBojja

RE: how u will create header and footer in target usin...
If your focus is on flat files, then you can set it in the file properties while creating a mapping, or at the session level in the session properties.
Thanks
Uma
=======================================
hi uma


Thanks for the answer. I want the complete explanation for this question: how to create a header and footer in the target?
=======================================
You can always create a header and a trailer in the target file using an Aggregator transformation.
Take the number of records as a count in the Aggregator transformation.
Create three separate files in a single pipeline: one will be your header and another will be your trailer, both coming from the Aggregator; the third will be your main file.
Concatenate the header, the main file and the trailer in a post-session command using a shell script.
=======================================

184.Informatica - what is the difference between constraint-based load ordering and target load plan? QUESTION #184

No best answer available. Please pick the good answer available or submit your answer.
June 16, 2006 14:16:55 #1
Uma Bojja Member Since: May 2006 Contribution: 7
RE: what is the difference between constraind base loa...
Constraint-based load ordering example:
Table 1 --- Master
Table 2 --- Detail
If the data in table1 is dependent on the data in table2, then table2 should be loaded first. In such cases, to control the load order of the tables we need some conditional loading, which is nothing but constraint-based load. In Informatica this feature is enabled by just one check box at the session level.
Thanks
Uma


=======================================
Target load order is a Designer property. Click the Mappings tab in the Designer and then Target Load Plan. It will show all the target load groups in the particular mapping. You specify the order there and the server will load the targets accordingly. A target load group is a set of source - source qualifier - transformations - target.
Whereas constraint-based loading is a session property. Here the multiple targets must be generated from one source qualifier. The target tables must possess primary/foreign key relationships, so that the server loads according to the key relations irrespective of the target load order plan.

=======================================
If you have only one source and it is loading into multiple targets, you have to use constraint-based loading, but the target tables should have key relationships between them.
If you have multiple source qualifiers loading into multiple targets, you have to use target load order.
=======================================
Constraint-based loading: if your mapping contains a single pipeline (flow) with more than one target (and the target tables have a master-child relationship), you need to use constraint-based load at the session level.
Target load plan: if your mapping contains multiple pipelines (flows), you specify the execution order one by one (for example, pipeline 1 needs to execute first, then pipeline 2, then pipeline 3); this is purely based on pipeline dependency.
=======================================
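For reference, a sketch of the kind of primary/foreign-key relationship between master and detail targets that constraint-based loading relies on (the table names are illustrative, not from the original answers):

-- master/detail targets with the key relationship constraint-based loading needs
CREATE TABLE orders_master (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE
);

CREATE TABLE order_detail (
  order_line_id NUMBER PRIMARY KEY,
  order_id      NUMBER REFERENCES orders_master (order_id),
  item_id       NUMBER,
  qty           NUMBER
);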

185.Informatica - How do we analyse the data at database level? QUESTION #185
No best answer available. Please pick the good answer available or submit your answer.


June 16, 2006 14:20:55 #1
Uma Bojja Member Since: May 2006 Contribution: 7
RE: How do we analyse the data at database level?

Data can be viewed using Informatica's Designer tool.
If you want to view the data on the source/target, we can preview the data, but with some limitations.
We can use data profiling too.
Thanks
Uma
=======================================

186.Informatica - why is the sorter transformation an active transformation? QUESTION #186
No best answer available. Please pick the good answer available or submit your answer.
June 16, 2006 15:02:41 #1
Kiran Kumar Cholleti
RE: why sorter transformation is an active transformat...
It allows you to sort data either in ascending or descending order according to a specified field. It can also be configured for case-sensitive sorting and to specify whether the output rows should be distinct; in that case it will not return all the rows.
=======================================
Because it will change the row IDs of the records transformed. Active transformation: the number of records and their row IDs that pass through the transformation will differ.
=======================================

This is a type of active transformation which is responsible for sorting the data either in ascending or descending order according to the key specified. The port on which the sorting takes place is called the sort key port.
Properties:
- if you select distinct, it eliminates duplicates
- case sensitive: valid for strings, to sort the data
- null treated low: null values are given the least priority
=======================================
If any transformation has the distinct option then it will be an active one, because an active transformation is nothing but a transformation which can change the number of output records. Distinct always filters the duplicate rows, which in turn decreases the number of output records compared to input records.
One more thing: an active transformation can also behave like a passive one.
=======================================

187.Informatica - how is the union transformation an active transformation? QUESTION #187
No best answer available. Please pick the good answer available or submit your answer.
June 18, 2006 09:53:25 #1
zafar
RE: how is the union transformation active transformat...

An active transformation is one which can change the number of rows, i.e. input rows and output rows might not match. The number of rows coming out of a Union transformation might not match the incoming rows.
Zafar
=======================================
Active Transformation: a transformation that changes the number of rows in the target.
Source (100 rows) ---> Active Transformation ---> Target (< or > 100 rows)


Passive Transformation: a transformation that does not change the number of rows in the target.
Source (100 rows) ---> Passive Transformation ---> Target (100 rows)
Union Transformation: in the Union transformation we may combine the data from two (or more) sources. Assume Table-1 contains 10 rows and Table-2 contains 20 rows. If we combine the rows of Table-1 and Table-2 we will get a total of 30 rows in the target. So it is definitely an active transformation.
=======================================
Thank you very much Sai Venkatesh for your answer, but in that case the Lookup transformation should be an active transformation, yet it is a passive transformation.
=======================================
Active transformation: the number of records passing through the transformation and their row IDs will be different; it depends on the row ID also.
=======================================
This is a type of passive transformation which is responsible for merging the data coming from different sources. The Union transformation functions very similarly to a UNION ALL statement in Oracle.
=======================================
Hi Saivenkatesh, your answer is very nice, thank you.
=======================================
Yes, since the Union transformation may lead to a change in the number of incoming rows, it is definitely an active type.

In the other case, a Lookup in no way can change the number of rows that are passing through it. The transformation just looks at the referenced table. The number of records increases or decreases only by the transformations that follow the Lookup transformation.
=======================================
Are you sure that Lookup is a passive transformation?
=======================================
Yes, surely Lookup is a passive one.
=======================================


Hi, you are saying source rows 10 + 20 = 30 and it passes all 30 rows to the target; according to the active definition, the number of rows it passes to the next transformation should be different, but it passes all 30 rows. I am confused here, can anyone clarify this in detail? Thanks in advance.
=======================================
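For what the Union transformation does, a rough SQL equivalent (the table and column names are illustrative) is a UNION ALL, which is where the 10 + 20 = 30 row counts above come from:

-- combines all rows from both sources without removing duplicates;
-- if table_1 has 10 rows and table_2 has 20, the result has 30 rows
SELECT item_id, qty FROM table_1
UNION ALL
SELECT item_id, qty FROM table_2;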

188.Informatica - what is tracing level? QUESTION #188
No best answer available. Please pick the good answer available or submit your answer.
June 21, 2006 10:47:28 #1
rajesh
RE: what is tracing level?
Tracing level determines the amount of information that the Informatica server writes to the session log.
=======================================
Yes, it is the level of information stored in the session log. The option comes in the Properties tab of transformations. By default it remains Normal. It can be:
Verbose Initialisation
Verbose Data
Normal
or Terse.
=======================================

189.Informatica - How can we join 3 sources like Flat File, Oracle, DB2 in Informatica? Thanks in advance. QUESTION #189
No best answer available. Please pick the good answer available or submit your answer.
June 24, 2006 18:28:27 #1
sandeep
RE: How can we join 3 database like Flat File, Oracle,...
Hi,
Using the Joiner transformation.
=======================================
You have to use two Joiner transformations. The first one will join two tables and the next one will join the third with the result of the first Joiner.
=======================================

190.Informatica - Is a fact table normalized or de-normalized? QUESTION #190 Submitted by: lakshmi
Hi,
Dimension tables can be normalized or de-normalized. Facts are always normalized.
Regards

Above answer was rated as good by the following members: Vamshidhar
A fact table is always normalised, since there are no redundancies!
=======================================
Well, a fact table is always a DENORMALISED table. It consists of data from the dimension tables (primary keys), and the fact table has foreign keys and measures.
Thanks!
=======================================
The main funda of a DW is de-normalizing the data for faster access by the reporting tool... so if you are building a DW, 90% of the time it has to be de-normalized, and of course the fact table has to be de-normalized...
Hope that answers the question...
=======================================
The fact table is always DE-NORMALIZED. Somebody answered it as normalized. See, if you don't know the answers, please don't post them. Just don't make a lottery by posting wrong answers.
=======================================
Hi,
I read the above comments and I am confused, so we should ask Kimball, you know. Here is the comment:
Fable (August 3, 2005): Dimensional models are fully denormalized.
Fact: Dimensional models combine normalized and denormalized table structures. The dimension tables of descriptive information are highly denormalized, with detailed and hierarchical roll-up attributes in the same table. Meanwhile, the fact tables with performance metrics are typically normalized. While we advise against a fully normalized design with snowflaked dimension attributes in separate tables (creating blizzard-like conditions for the business user), a single denormalized big wide table containing both metrics and descriptions in the same table is also ill-advised.
Ref: http://www.kimballgroup.com/html/commentarysub2.html

=======================================
Hi,
Dimension tables can be normalized or de-normalized. Facts are always normalized.
Regards
=======================================
Hi,
A dimension table can be normalized or de-normalized. But a fact table is always normalized.
=======================================
A dimension table may be normalized or denormalized according to your schema, but the fact table will always be denormalized.
=======================================
Hi all,
A dimension table may be normalized or denormalized according to your schema, but the fact table will always be denormalized.


Regards
Umesh
BSIL (Mumbai)
=======================================
Hi,
Please see the following site:
http://72.14.253.104/search?q cache:lkFjt6EmsxMJ:www.kimballgroup.com/html/commentarysub2.html+fact+table+normalized+or+denormalized&hl en&gl us&ct clnk&cd 1
I am highlighting what Kimball says here: "Dimensional models combine normalized and denormalized table structures. The dimension tables of descriptive information are highly denormalized, with detailed and hierarchical roll-up attributes in the same table. Meanwhile, the fact tables with performance metrics are typically normalized. While we advise against a fully normalized design with snowflaked dimension attributes in separate tables (creating blizzard-like conditions for the business user), a single denormalized big wide table containing both metrics and descriptions in the same table is also ill-advised."
Regards
lakshmi
=======================================

191.Informatica - What is the difference between PowerCenter 7 and PowerCenter 8? QUESTION #191
No best answer available. Please pick the good answer available or submit your answer.
August 03, 2006 11:31:21 #1
satish
RE: What is the difference between PowerCenter 7 and P...


The major difference is that in version 8 some new transformations are added; apart from these, the remaining features are the same.
=======================================

192.Informatica - What is the difference between PowerCenter 6 and PowerCenter 7? QUESTION #192
No best answer available. Please pick the good answer available or submit your answer.
June 30, 2006 10:28:09 #1
Gopal. P

RE: What is the difference between PowerCenter 6 and p...
1) We can look up flat files in Informatica 7.x, but we can't look up flat files in Informatica 6.x.
2) The External Stored Procedure transformation is not available in Informatica 7.x, but this transformation is included in Informatica 6.x.
=======================================
Also, the Union transformation is not there in 6.x whereas it is there in 7.x.
Pradeep
=======================================
- Also, the Custom transformation is not available in 6.x
- The main difference is the version control available in 7.x
- Session-level error handling is available in 7.x
- XML enhancements for data integration in 7.x
=======================================
Hi,
Tell me some architectural differences between the two.
=======================================
7.x has more features compared to 6.x, like:
Transaction Control transformation
Union transformation
Midstream XML transformation
flat file lookup
Custom transformation
functions like soundex and metaphone
=======================================

193.Informatica - How to move the mapping from one database to another? QUESTION #193
No best answer available. Please pick the good answer available or submit your answer.
July 06, 2006 21:12:49 #1
martin
RE: How to move the mapping from one database to anoth...
Do you mean migration between repositories? There are 2 ways of doing this.
1. Open the mapping you want to migrate. Go to the File menu - select 'Export Objects' and give a name - an XML file will be generated. Connect to the repository where you want to migrate and then select File menu - 'Import Objects' and select the XML file name.
2. Connect to both the repositories. Go to the source folder, select the mapping name from the object navigator and select 'Copy' from the 'Edit' menu. Now go to the target folder and select 'Paste' from the 'Edit' menu. Be sure you open the target folder.
=======================================
You can also do it this way: connect to both the repositories and open the respective folders. Keep the destination repository active. From the navigator panel just drag and drop the mapping to the work area. It will ask whether to copy the mapping - say YES. It's done.
=======================================

If we go by the direct meaning of your question... there is no need for a new mapping for a new database; you just need to change the connections in the Workflow Manager to run the mapping on another database.
=======================================

194.Informatica - How do we do complex mapping by using flat files / relational database? QUESTION #194
No best answer available. Please pick the good answer available or submit your answer.
September 28, 2006 05:56:56 #1
srinivas
RE: How do we do complex mapping by using flatfiles / ...
If we are using more business rules or more transformations then it is called a complex mapping. If we have flat files then we use the flat file as a source, or we can take relational sources, depending on the availability.
=======================================

195.Informatica - How to define Informatica server? QUESTION #195
No best answer available. Please pick the good answer available or submit your answer.
July 06, 2006 02:45:14 #1
deeprekha
RE: How to define Informatica server?

The Informatica server is the main server component in the Informatica product family. It is responsible for reading the data from various source systems, transforming the data according to business rules, and loading the data into the target table.
=======================================

196.Informatica - How to call a stored procedure from Workflow Monitor in Informatica 7.1 version? QUESTION #196
No best answer available. Please pick the good answer available or submit your answer.
October 19, 2006 16:57:37 #1
Prasanth
RE: How to call stored Procedure from Workflow monitor...
Call the stored procedure using a shell script. Invoke that shell script using a Command task with pmcmd.
=======================================

197.Informatica - how can we store previous session logs? QUESTION #197
No best answer available. Please pick the good answer available or submit your answer.
July 12, 2006 02:54:55 #1
Hareesh
RE: how can we store previous session logs

Just run the session in timestamp mode; then the session log will automatically not overwrite the current session log.
Hareesh
=======================================
Hi,
We can also do it this way: using $PMSessionLogCount (specify the number of runs of the session log to save).


=======================================
Hi,
Go to Session --> right click --> select Edit Task, then go to --> Config Object, then set the properties:
Save Session Log By --> Runs
Save Session Log for These Runs ---> the number of historical session logs you want
=======================================

198.Informatica - how can we use the pmcmd command in a workflow or to run a session? QUESTION #198
No best answer available. Please pick the good answer available or submit your answer.
July 14, 2006 02:31:34 #1
abc
RE: how can we use pmcmd command in a workflow or to r...

By using a Command task.
=======================================

199.Informatica - What is change data capture? QUESTION #199
No best answer available. Please pick the good answer available or submit your answer.
July 14, 2006 06:26:43 #1
Ajay
RE: What is change data capture?
Change data capture (CDC) is a set of software design patterns used to determine the data that has changed in a database so that action can be taken using the changed data.
=======================================


Can you elaborate on how you do this, please?
=======================================
Changed Data Capture (CDC) helps identify the data in the source system that has changed since the last extraction. With CDC, data extraction takes place at the same time the insert, update or delete operations occur in the source tables, and the change data is stored inside the database in change tables. The change data thus captured is then made available to the target systems in a controlled manner.
=======================================
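A very simple, timestamp-based sketch of the idea (illustrative only; the table and column names are assumptions, and real CDC implementations that read logs or change tables do not rely on a last-updated column):

-- pull only the rows changed since the previous extraction,
-- assuming the source table carries a LAST_UPDATED audit column
SELECT customer_id, name, state, last_updated
  FROM src_customer
 WHERE last_updated > TO_DATE('2006-07-13 00:00:00', 'YYYY-MM-DD HH24:MI:SS');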

200.Informatica - What is QTP in Data Warehousing?
QUESTION #200
No best answer available. Please pick the good answer available or submit your answer.
November 15, 2006 00:26:44 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: Wat is QTP in Data Warehousing?

I think this question belongs to Cognos.
=======================================

201.Informatica - How can I get distinct values while mapping in Informatica in insertion?
QUESTION #201
No best answer available. Please pick the good answer available or submit your answer.
July 17, 2006 13:03:40 #1
Manisha
RE: How can I get distinct values while mapping in Inf...


You can add an Aggregator before the insert and group by the fields that need to be distinct.
=======================================
In the Source Qualifier, write the query with DISTINCT on the key column.
=======================================
Well, there are two methods to get distinct values:
If the sources are databases, we can use a SQL override in the Source Qualifier by changing the default SQL query, or simply enable the Select Distinct check box.
If the sources are heterogeneous, i.e. from different file systems, we can use a Sorter transformation and enable the Select Distinct check box in its properties; the same as in the Source Qualifier, we get distinct values.
=======================================
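For the relational case above, the Source Qualifier override is just a SELECT DISTINCT; a hedged sketch against a hypothetical EMPLOYEES source follows:

-- Source Qualifier SQL override: return each department only once.
SELECT DISTINCT dept_id,
                dept_name
FROM   employees;

Enabling the Select Distinct check box on the Source Qualifier generates an equivalent query without hand-editing the override.
=======================================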

202.Informatica - what transformation you can use in place of lookup?
QUESTION #202
No best answer available. Please pick the good answer available or submit your answer.
July 17, 2006 05:52:56 #1
venkat
RE: what transformation you can use inplace of lookup?...
You can use the Joiner transformation, setting it as an outer join on either the master or the detail.
=======================================
Hi
A Lookup transformation can serve in many situations, so if you can be a bit more particular about the scenario you are talking about, it will be easier to interpret.
=======================================
Hi


With lookups we can use either the first or the last matching value. If a lookup has more than one matching record and we need all the matching records, in that situation we can use a master or detail outer join instead of a lookup (according to the logic).
=======================================
You can use a Joiner in place of a lookup.
=======================================
You can join the table you wanted to use in the lookup in the Source Qualifier using a SQL override, to avoid using a Lookup transformation; see the sketch below.
=======================================
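The last suggestion (joining the would-be lookup table inside the Source Qualifier) might look like the sketch below; SALES and PRODUCT are hypothetical tables, and the LEFT OUTER JOIN keeps source rows that have no match, just as an unmatched lookup would have returned NULL:

-- Source Qualifier SQL override replacing a Product lookup.
SELECT s.sale_id,
       s.product_id,
       s.sale_amount,
       p.product_name   -- value that would otherwise come from the lookup
FROM   sales s
       LEFT OUTER JOIN product p
         ON p.product_id = s.product_id;
=======================================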

203.Informatica - Why and where we are using factless fact table?
QUESTION #203
No best answer available. Please pick the good answer available or submit your answer.
July 18, 2006 05:08:53 #1
kumar
RE: Why and where we are using factless fact table?
Hi
I am not sure, but you can confirm with other people: a factless fact is nothing but non-additive measures. Example: temperature in a fact table noted as Moderate, Low, or High. These kinds of things are called non-additive measures.
Cheers
Kumar.
=======================================
Factless fact tables are fact tables with no facts or measures (numerical data). They contain only the foreign keys of the corresponding dimensions.
=======================================
Such fact tables are required to avoid flaking of levels within a dimension and to define them as a separate cube connected to the main cube.
=======================================
If a transaction can occur without a measure, then it is a factless fact table or coverage table; for example, product samples.
=======================================
A fact table contains metrics and foreign keys corresponding to the dimension tables, but a factless fact table contains only the foreign keys of the corresponding dimensions, without any metrics.
regards rma
=======================================
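A hedged sketch of a factless (coverage) fact table for the product-sample example above; the table and column names are purely illustrative:

-- Only foreign keys, no numeric measures.
CREATE TABLE fact_product_sample (
    date_key     NUMBER NOT NULL REFERENCES dim_date (date_key),
    product_key  NUMBER NOT NULL REFERENCES dim_product (product_key),
    store_key    NUMBER NOT NULL REFERENCES dim_store (store_key)
);

A question such as "how many samples were handed out last month?" then becomes a simple COUNT(*) over this table joined to the date dimension.
=======================================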

204.Informatica - tell me one complicated mapping
QUESTION #204
No best answer available. Please pick the good answer available or submit your answer.
September 14, 2006 02:44:20 #1
srinivas
RE: tell me one complecated mapping
If we are using many business rules or many transformations, then it is a complex mapping, for example SCD Type 2 (version number, effective date range, flag, current date).
=======================================
A mapping is nothing but the flow of data from source to target; we are giving instructions to the PowerCenter server to move data from sources to targets according to our business rules. If there are many business rules in our mapping, then it is a complex mapping.
regards rma
=======================================

205.Informatica - how do we do unit testing in informatica? how do we load data in informatica?
QUESTION #205
No best answer available. Please pick the good answer available or submit your answer.
July 22, 2006 04:25:39 #1
Praveen kumar
Testing
Unit testing is of two types: 1. Quantitative testing 2. Qualitative testing.
Steps:
1. First validate the mapping.
2. Create a session on the mapping and then run the workflow.
Once the session has succeeded, right-click on the session and go to the statistics tab. There you can see how many source rows were applied, how many rows were loaded into the targets, and how many rows were rejected. This is called quantitative testing.
If the rows are loaded successfully, then we go for qualitative testing.
Steps:
1. Take the DATM (the document where all business rules are mentioned against the corresponding source columns) and check whether the data is loaded into the target table according to the DATM. If any data is not loaded according to the DATM, then go and check the code and rectify it. This is called qualitative testing.
This is what a developer will do in unit testing.
=======================================

206.Informatica - how do we load data by using period dimension?
QUESTION #206
No best answer available. Please pick the good answer available or submit your answer.
September 23, 2006 07:46:32 #1
calltomadhu Member Since: September 2006 Contribution: 34
RE: how do we load data by using period dimension?
hi
It is very simple: through scheduling.
thanks
madhu
=======================================

207.Informatica - How many types of facts and what are they?
QUESTION #207
No best answer available. Please pick the good answer available or submit your answer.
July 21, 2006 14:54:14 #1
Bala
RE: How many types of facts and what are they?
I know of these: Additive facts, Semi-Additive, Non-Additive, Accumulating facts, Factless facts, Periodic fact tables, Transaction fact tables.
Thanks
Bala
=======================================
There are
Factless Facts: facts without any measures.
Additive Facts: fact data that can be added/aggregated.
Non-Additive Facts: facts that are the result of non-addition.
Semi-Additive Facts: only a few columns of data can be added.
Periodic Facts: store only one row per transaction that happened over a period of time.
Accumulating Facts: store a row for the entire lifetime of an event.
There must be some more; if someone knows, please add them.
=======================================
hi
There are 3 types:
additive
semi-additive
non-additive
thanks
madhu
=======================================
hai
There are three types of facts:
Additive Fact - a fact which is used across all dimensions.
Semi-Additive Fact - a fact which is used with some dimensions and not with others.
Non-Additive Fact - a fact which is not used with any of the dimensions.
=======================================
1. Regular Fact - with numeric values
2. Factless Fact - without numeric values

=======================================

208.Informatica - How can you say that union Transformation is an Active transformation?
QUESTION #208
No best answer available. Please pick the good answer available or submit your answer.
July 22, 2006 07:26:42 #1
kirankumarvema
RE: How can you say that union Transormation is Active...
By using the Union transformation we can eliminate some rows, so it is an active transformation.
bye
=======================================
Please explain more. We are doing the union; how is the union eliminating some rows?
=======================================
By definition, an active transformation is a transformation that changes the number of rows that pass through it. In the Union transformation, the number of rows resulting from the union can be different from the actual number of input rows.
=======================================
Union is active because it eliminates duplicates from the sources.
=======================================
Hello Msr, your answer is wrong. Union does not eliminate duplicate rows. If anybody knows the reason, please give it; the answers above do not really support the question. Don't just give the active transformation definition, I want the exact reason.
=======================================
hi
We can merge records from multiple Source Qualifier queries in the Union transformation at the same time; it is not like an Expression transformation (row by row). So we can say it is active.
=======================================
Hi
The Union transformation is an active transformation because it changes the number of rows through the pipeline. It normally has multiple input groups compared to other transformations. Before the Union transformation was implemented, the "number of rows" rule was the benchmark (i.e. before 7.0), but now it is not the exact benchmark to determine an active transformation.
Thanks
Uma
=======================================
Hi
Union is also an active transformation because it eliminates duplicate rows when you select the distinct option in the Union properties tab.
=======================================
Hi all,
Some people are saying that a Union transformation eliminates duplicates, but that is wrong. As of now it will not eliminate duplicates.
The Union transformation is active or passive depending upon the "is active" property present on the Union transformation properties tab. It specifies whether the transformation is active or passive. When you enable this option the transformation can generate 0, 1, or more output rows for each input row; otherwise it can generate only 0 or 1 output row for each input row. But this property is disabled in Informatica 7.1.1; I think it may be developed in the future.
regards
kiran
=======================================

209.Informatica - how many types of dimensions are available in informatica?
QUESTION #209
No best answer available. Please pick the good answer available or submit your answer.
July 31, 2006 00:18:27 #1
hello
RE: how many types of dimensions are available in info...
Three types of dimensions are available.
=======================================
What are they? Please explain. What is a rapidly changing dimension?
=======================================
no
=======================================
hi, there are 3 types of dimensions: 1. star schema 2. snowflake schema 3. galaxy schema
=======================================
I think there are 3 types of dimension table: 1. stand-alone 2. local 3. global
=======================================
There are 3 types of dimensions available according to my knowledge. They are:
1. General dimensions
2. Conformed dimensions
3. Junk dimensions
=======================================
Using the Slowly Changing Dimensions Wizard
The Slowly Changing Dimensions Wizard creates mappings to load slowly changing dimension tables:
- Type 1 Dimension mapping. Loads a slowly changing dimension table by inserting new dimensions and overwriting existing dimensions. Use this mapping when you do not want a history of previous dimension data.
- Type 2 Dimension/Version Data mapping. Loads a slowly changing dimension table by inserting new and changed dimensions using a version number and an incremented primary key to track changes. Use this mapping when you want to keep a full history of dimension data and to track the progression of changes.
- Type 2 Dimension/Flag Current mapping. Loads a slowly changing dimension table by inserting new and changed dimensions using a flag to mark current dimension data and an incremented primary key to track changes. Use this mapping when you want to keep a full history of dimension data, tracking the progression of changes while flagging only the current dimension.
- Type 2 Dimension/Effective Date Range mapping. Loads a slowly changing dimension table by inserting new and changed dimensions using a date range to define current dimension data. Use this mapping when you want to keep a full history of dimension data, tracking changes with an exact effective date range.
- Type 3 Dimension mapping. Loads a slowly changing dimension table by inserting new dimensions and updating values in existing dimensions. Use this mapping when you want to keep the current and previous dimension values in your dimension table.
=======================================
I want each and every one of you who is answering: please don't make fun of this. Someone gave the answer "no"; someone gave the answer star schema, snowflake schema, etc. How can a schema come under a type of dimension?
ANSWER:
One major classification we use in our real-time modelling is Slowly Changing Dimensions:
Type 1 SCD: if you load an updated row for a previously existing row, the previous data is replaced, so we lose historical data.
Type 2 SCD: here we add a new row for the updated data, so we have both current and past records, which agrees with the data warehousing concept of maintaining historical data (see the sketch below).
Type 3 SCD: here we add new columns.
The mostly used one is Type 2 SCD.
We have one more type of dimension:
CONFORMED DIMENSION: the dimension which gives the same meaning across different star schemas is called a conformed dimension, e.g. the Time dimension, which gives the same meaning wherever it is used.
=======================================
What are those three dimensions that are available in Informatica? Here we get multiple answers; could anyone tell me the exact ones?
thq
hari krishna
=======================================
casual dimension, conformed dimension, degenerate dimension, junk dimension, ragged dimension, dirty dimension
=======================================
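As a rough illustration of the Type 2 behaviour described above (outside the wizard), the equivalent SQL against a hypothetical DIM_CUSTOMER table with version numbers could look like this sketch:

-- New customers get version 1; changed customers get the next version number.
INSERT INTO dim_customer (customer_key, customer_id, city, version_no)
SELECT dim_customer_seq.NEXTVAL,
       s.customer_id,
       s.city,
       NVL(d.max_version, 0) + 1
FROM   stg_customer s
       LEFT JOIN (SELECT customer_id, MAX(version_no) AS max_version
                  FROM   dim_customer
                  GROUP  BY customer_id) d
         ON d.customer_id = s.customer_id;
-- Change detection (insert only when a tracked attribute actually differs) is omitted for brevity.
=======================================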

210.Informatica - How can you improve the performance of Aggregate transformation?
QUESTION #210
No best answer available. Please pick the good answer available or submit your answer.
August 06, 2006 06:20:23 #1
shanthi
RE: How can you improve the performance of Aggregate t...
By using a Sorter transformation before the Aggregator transformation.
=======================================
by using sorted input
=======================================
hi
We can improve the aggregator performance in the following ways:
1. Send sorted input.
2. Increase the aggregator cache size, i.e. index cache and data cache.
3. Give the transformation only the input/output you need, i.e. reduce the number of input and output ports.
=======================================
In the Aggregator transformation, select the Sorted Input check box on the properties tab and write the SQL query in the Source Qualifier; this improves the performance.
=======================================
Hi
We can improve the aggregation performance by doing the following:
create the group-by condition only on numeric columns;
use a Sorter transformation before the Aggregator and give sorted input to the Aggregator;
increase the cache size of the Aggregator.
=======================================
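When the Sorted Input option is used, the Source Qualifier override only has to order the rows by the group-by ports. A sketch with hypothetical names:

-- Source Qualifier override feeding an Aggregator that groups by dept_id.
SELECT dept_id,
       salary
FROM   employees
ORDER  BY dept_id;   -- must match the Aggregator's group-by ports, in the same order
=======================================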

211.Informatica - Why did you use stored procedure in your ETL Application?
QUESTION #211
No best answer available. Please pick the good answer available or submit your answer.
August 11, 2006 13:16:09 #1
sudha
RE: Why did you use stored procedure in your ETL Appli...
hi
Usage of a stored procedure has the following advantages:
1. checks the status of the target database
2. drops and recreates indexes
3. determines if enough space exists in the database
4. performs a specialized calculation
=======================================
A stored procedure in Informatica is useful for imposing complex business rules.
=======================================
to execute database procedures
=======================================
Hi
Use of stored procedures plays an important role. Suppose you are using an Oracle database where you are doing some ETL changes; if you use Informatica, every row of the table has to pass through Informatica and undergo the ETL changes specified in the transformations. If you use a stored procedure, i.e. an Oracle PL/SQL package, it runs on the Oracle database (which is the database where we need to make the changes) and it will be faster compared to Informatica, because it runs on the Oracle database itself. Some things which we cannot do using the tool we can do using packages. Some jobs may take hours to run; in order to save time and database usage we can go for stored procedures.
=======================================
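As a rough example of the "drops and recreates indexes" use mentioned above, a pre-/post-load procedure could look like the following sketch (the index name is hypothetical); it could be invoked from a Stored Procedure transformation or a post-session task:

CREATE OR REPLACE PROCEDURE rebuild_sales_indexes AS
BEGIN
   -- Rebuild the index after a large load instead of maintaining it row by row.
   EXECUTE IMMEDIATE 'ALTER INDEX idx_sales_fact_date REBUILD';
END rebuild_sales_indexes;
/
=======================================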

212.Informatica - why did u use update strategy in your application?
QUESTION #212
No best answer available. Please pick the good answer available or submit your answer.
August 08, 2006 12:39:03 #1
angeletteeye
RE: why did u use update stategy in your application?
Update Strategy is used to drive the data to be inserted, updated, or deleted depending upon some condition. You can do this at the session level too, but there you cannot define any condition. For example, if you want to do an update and an insert in one mapping, you will create two flows and make one insert and one update, depending upon some condition. Refer to Update Strategy in the Transformation Guide for more information.
=======================================
Update Strategy is the most important transformation of all the Informatica transformations.
The basic thing one should understand about it is that it is the essential transformation for performing DML operations on targets that are already populated with data (i.e. targets which contain some records before this mapping loads data). It is used to perform the DML operations insertion, updation, deletion, and rejection.
When records come to this transformation, depending on our requirement we can decide whether to insert, update, or reject the rows flowing in the mapping. For example, take an input row: if it is already there in the target (we find this with a Lookup transformation) update it, otherwise insert it.
We can also specify conditions from which we derive which update strategy to use, e.g. IIF(condition, DD_INSERT, DD_UPDATE): if the condition is satisfied, do DD_INSERT, otherwise do DD_UPDATE.
DD_INSERT, DD_UPDATE, DD_DELETE, and DD_REJECT are called decode options, and they perform the respective DML operations. There is a function called DECODE to which we can pass the arguments 0, 1, 2, 3: DECODE(0), DECODE(1), DECODE(2), DECODE(3) for insertion, updation, deletion, and rejection.
=======================================
to perform dml operations
thanks
madhu
=======================================


213.Informatica - How do you create single lookup transformation using multiple tables?
QUESTION #213
No best answer available. Please pick the good answer available or submit your answer.
August 10, 2006 16:46:28 #1
Srinivas
RE: How do you create single lookup transformation usi...
Write an override SQL query and adjust the ports as per the SQL query.
=======================================
No, it is not possible to create a single lookup on multiple tables, because a lookup is created upon a target table.
=======================================
For a connected lookup transformation: 1) create the lookup transformation; 2) go for Skip; 3) manually enter the port names that you want to look up; 4) connect them with the input ports from the source table; 5) give the condition; 6) generate the SQL and then modify it according to your requirement, and validate. It will work.
=======================================
We can just create a view using the two tables and then take that view as the lookup table (see the sketch below).
=======================================
If you want single lookup values to be used in multiple target tables, this can be done: for this we can use an unconnected lookup and use the looked-up values in any target table depending upon the business rule.
=======================================
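The view-based suggestion above could be sketched as follows (object names are illustrative); the view is then imported into the repository and used as an ordinary lookup table:

-- One lookup object built over two tables.
CREATE OR REPLACE VIEW v_customer_account AS
SELECT c.customer_id,
       c.customer_name,
       a.account_no,
       a.account_status
FROM   customer c
       JOIN account a
         ON a.customer_id = c.customer_id;
=======================================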

214.Informatica - In update strategy target table or flat file which gives more performance? why?
QUESTION #214
No best answer available. Please pick the good answer available or submit your answer.
August 10, 2006 04:10:37 #1
prasad.yandapalli
stored procedure
We use stored procedures for populating and maintaining databases in our mapping.
=======================================
Pros: loading, sorting, and merging operations will be faster as there is no index concept and the data will be in ASCII mode.
Cons: there is no concept of updating existing records in a flat file, and as there are no indexes, lookups will be slower.
=======================================
hi, the flat file gives more performance, because there is no index concept and there are no constraints.
=======================================

215.Informatica - How to load time dimension?
QUESTION #215
No best answer available. Please pick the good answer available or submit your answer.
August 15, 2006 04:08:18 #1
Mahesh
RE: How to load time dimension?
We can use SCD Type 1/2/3 to load any dimension based on the requirement.
=======================================
We can use SCD Type 1/2/3 to load data into any dimension tables as per the requirement.
=======================================


You can load the time dimension manually by writing scripts in PL/SQL to load the time dimension table with values for a period.
Ex: if my business data covers 5 years from 2000 to 2004, then load all the dates from 1-1-2000 to 31-12-2004; that is around 1825 records, which you can do quickly by writing a script.
Bhargav
=======================================
hi
Through Type 1, Type 2, or Type 3, depending upon the condition.
thanks
madhu
=======================================
For loading data into other dimensions we have the respective tables in the OLTP systems, but for the time dimension we have only one base date in the OLTP database. Based on that we have to load the time dimension. We can load the time dimension using ETL procedures which call a procedure or function created in the database. If there are many columns in the time dimension, we can prepare the data manually using an Excel sheet.
=======================================
Create a procedure to load data into the time dimension. The procedure needs to run only once to populate all the data. For example, the code below fills it up till 2015; you can modify the code to suit the fields in your table.

create or replace procedure QISODS.Insert_W_DAY_D_PR as
  LastSeqID number default 0;
  loaddate Date default to_date('12/31/1979', 'mm/dd/yyyy');
begin
  Loop
    LastSeqID := LastSeqID + 1;
    loaddate := loaddate + 1;
    INSERT into QISODS.W_DAY_D values(
      LastSeqID,
      Trunc(loaddate),
      Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
      TO_FLOAT(TO_CHAR(loaddate, 'MM')),
      TO_FLOAT(TO_CHAR(loaddate, 'Q')),
      trunc((ROUND(TO_DECIMAL(to_char(loaddate, 'DDD'))) +
             ROUND(TO_DECIMAL(to_char(trunc(loaddate, 'YYYY'), 'D'))) + 5) / 7),
      TO_FLOAT(TO_CHAR(loaddate, 'YYYY')),
      TO_FLOAT(TO_CHAR(loaddate, 'DD')),
      TO_FLOAT(TO_CHAR(loaddate, 'D')),
      TO_FLOAT(TO_CHAR(loaddate, 'DDD')),
      1, 1, 1, 1, 1,
      TO_FLOAT(TO_CHAR(loaddate, 'J')),
      ((TO_FLOAT(TO_CHAR(loaddate, 'YYYY')) + 4713) * 12) + TO_number(TO_CHAR(loaddate, 'MM')),
      ((TO_FLOAT(TO_CHAR(loaddate, 'YYYY')) + 4713) * 4) + TO_number(TO_CHAR(loaddate, 'Q')),
      TO_FLOAT(TO_CHAR(loaddate, 'J')) / 7,
      TO_FLOAT(TO_CHAR(loaddate, 'YYYY')) + 4713,
      TO_CHAR(loaddate, 'Day'),
      TO_CHAR(loaddate, 'Month'),
      Decode(To_Char(loaddate, 'D'), '7', 'weekend', '6', 'weekend', 'weekday'),
      Trunc(loaddate, 'DAY') + 1,
      Decode(Last_Day(loaddate), loaddate, 'y', 'n'),
      to_char(loaddate, 'YYYYMM'),
      to_char(loaddate, 'YYYY') || ' Half' ||
        Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
      TO_CHAR(loaddate, 'YYYY / MM'),
      TO_CHAR(loaddate, 'YYYY') || ' Q ' || TRUNC(TO_number(TO_CHAR(loaddate, 'Q'))),
      TO_CHAR(loaddate, 'YYYY') || ' Week' || TRUNC(TO_number(TO_CHAR(loaddate, 'WW'))),
      TO_CHAR(loaddate, 'YYYY'));
    If loaddate = to_Date('12/31/2015', 'mm/dd/yyyy') Then
      Exit;
    End If;
  End Loop;
  commit;
end Insert_W_DAY_D_PR;
=======================================

216.Informatica - what is the architecture of any Data warehousing project? what is the flow?
QUESTION #216
No best answer available. Please pick the good answer available or submit your answer.
August 21, 2006 06:52:22 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
RE: what is the architecture of any Data warehousing p...
1) The basic step of data warehousing starts with data modelling, i.e. creation of dimensions and facts.
2) A data warehouse starts with the collection of data from source systems such as OLTP, CRM, ERPs, etc.
3) The cleansing and transformation process is done with an ETL (Extraction, Transformation, Loading) tool.
4) By the end of the ETL process the target databases (dimensions, facts) are ready with data which satisfies the business rules.
5) Finally, with the use of reporting (OLAP) tools we can get the information which is used for decision support.
=======================================

Very nice answer, thanks.
=======================================
Nice answer; I have more doubts. Can you give me your mail id?
=======================================


217.Informatica - What are the questions asked in PDM Round (Final HR round)
QUESTION #217
No best answer available. Please pick the good answer available or submit your answer.
November 14, 2006 02:08:00 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: What are the questions asked in PDM Round(Final Hr...
We can't say which questions will be asked in which round; that depends on the interviewer. In the HR round they will ask about your current company: tell me something about your company, why do you prefer this company, tell me something about my company; they can ask any type of question. These are just sample questions.
=======================================

218.Informatica - What is the difference between materialized view and a data mart? Are they same?
QUESTION #218
No best answer available. Please pick the good answer available or submit your answer.
August 28, 2006 09:46:52 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
RE: What is the difference between materialized view a...
Hi friend,
please elaborate what you mean by materialized view; then I think I can help you clear your doubt. Mail me at [email protected]
=======================================


A materialized view provides indirect access to table data by storing the results of a query in a separate schema object, unlike an ordinary view, which does not take up any storage space or contain data. Materialized views are schema objects that can be used to summarize, precompute, replicate, and distribute data, e.g. to construct a data warehouse.
The definition of a materialized view is very near to the concept of a cube, where we keep summarized data, and cubes occupy space.
A data mart is a completely different concept. A data warehouse contains the overall view of the organization, but a data mart is specific to a subject area like Finance etc. We can combine the different data marts of a company to form a data warehouse, or we can split a data warehouse into different data marts.
=======================================
hi
A view connects directly to the table; it does not contain data, and whatever we ask for, it fires the query on the table and returns the data, e.g. sum(sal).
A materialized view connects to the data indirectly and is stored in a separate schema. It is just like a cube in a data warehouse: it holds data, and whatever summarized information we ask for, it computes and stores it, so the next time we ask it answers directly from there. Performance-wise these views are faster than normal views (see the sketch below).
=======================================
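A minimal Oracle-style sketch of the difference described above; the ordinary view stores only the query text, while the materialized view stores (and periodically refreshes) the result set. Object names are illustrative:

-- Ordinary view: no storage, the query runs against SALES every time.
CREATE OR REPLACE VIEW v_sales_by_region AS
SELECT region_id, SUM(sale_amount) AS total_sales
FROM   sales
GROUP  BY region_id;

-- Materialized view: the result set is stored and refreshed on a schedule.
CREATE MATERIALIZED VIEW mv_sales_by_region
BUILD IMMEDIATE
REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1
AS
SELECT region_id, SUM(sale_amount) AS total_sales
FROM   sales
GROUP  BY region_id;
=======================================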

219.Informatica - In workflow can we send multiple email?
QUESTION #219
No best answer available. Please pick the good answer available or submit your answer.
August 28, 2006 15:13:20 #1
prudhvi
RE: In workflow can we send multiple email ?
Yes, we can send multiple e-mails in a workflow.
=======================================
Yes, but only on the UNIX version of Workflow and not the Windows-based version.
=======================================

220.Informatica - How do we load from PL/SQL script into Informatica mapping?
QUESTION #220
No best answer available. Please pick the good answer available or submit your answer.
August 28, 2006 09:43:04 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
RE: How do we load from PL/SQL script into Informati...
You can use a Stored Procedure transformation. There you can specify the PL/SQL procedure name; when we run the session containing this transformation, the PL/SQL procedure gets executed. If you want more clarification, either elaborate your question or mail me at [email protected]
=======================================
hi
For a database procedure we have the Stored Procedure transformation; we can use that one.
thanks
madhu
=======================================
You can also create a view and import it as a source in the mapping.
=======================================

221.Informatica - can any one tell why we are populating time dimension only with scripts not with mapping?
QUESTION #221
No best answer available. Please pick the good answer available or submit your answer.
September 23, 2006 07:07:31 #1
calltomadhu Member Since: September 2006 Contribution: 34
RE: can any one tell why we are populating time dimens...
hi
Because the time dimension is a rapidly changing dimension; if you use a mapping it is a very big piece of work and a big problem performance-wise.
thanks
madhu
=======================================
How can a time dimension be a rapidly changing dimension? The time dimension is one table where you load date- and time-related information so that its key can be used in the facts. This way you don't have to store the entire date in the fact and can use the time key instead; there are a number of advantages in performance and simplicity of design with this strategy.
You use a script to load the time dimension because you load it one time. As said earlier, all it contains are dates starting from one point in time, say 01/01/1800, to some date in the future, say 01/01/3001.
=======================================

222.Informatica - What about rapidly changing dimensions? Can u analyze with an example?
QUESTION #222
No best answer available. Please pick the good answer available or submit your answer.
September 04, 2006 09:05:10 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
RE: What about rapidly changing dimensions?Can u analy...
hi
Rapidly changing dimensions are those in which values are changing continuously, causing a lot of difficulty in maintaining them.
I am giving one of the best real-world examples, which I found on some website while browsing; go through it, I am sure you will like it. Description of a rapidly changing dimension by that person:
"I'm trying to model a retailing case. I have a SKU dimension of around 150 000 unique products which is already an SCD Type 2 for some attributes. In addition I want to track changes of the sales and purchase price. However, these prices change almost daily for quite a lot of these products, leading to a huge dimension table and requiring continuous updates."
So a better option would be to shift those attributes into a fact table as facts, which solves the problem.
=======================================
hi, if you don't mind, please tell me how to create rapidly changing dimensions. And one more question: please tell me what the use of a custom transformation is. Thanking you. bye.
=======================================
A rapidly changing dimension is one where the dimension changes quickly. The best example is ATM transactions (banks): the data changes continuously and concurrently every second, so it is very difficult to capture this dimension.
=======================================
hi
The question itself says the data is changing quite frequently; changing means it can be modified or added. The best example of this is a Sales table.

=======================================

223.Informatica - What are Data driven Sessions?


QUESTION #223 The Informatica server follows the instructions coded into the Update Strategy transformations within the session mapping to determine how to flag records for insert, update, delete, or reject. If you do not choose the Data Driven option setting, the Informatica server ignores all Update Strategy transformations in the mapping.
No best answer available. Please pick the good answer available or submit your answer.
September 07, 2006 07:38:33 #1
fazal
RE: What are Data driven Sessions?
=======================================
Once you load the data in your DW you can update the new data with the following options in your session properties: 1. update, 2. insert, 3. delete, and 4. data driven; all these options are present in your session properties. If you select the data-driven option, Informatica takes the logic to update, delete, or reject data from the Update Strategy transformations in your Designer mapping. It will look something like this:
IIF( JOB = 'MANAGER', DD_DELETE, DD_INSERT )
This expression marks jobs with an ID of manager for deletion and all other items for insertion.
Hope that answers the question.
=======================================

224.Informatica - what are the transformations that restrict the partitioning of sessions?
QUESTION #224
* Advanced External Procedure transformation and External Procedure transformation: this transformation contains a check box on the properties tab to allow partitioning.
* Aggregator Transformation: if you use sorted ports you cannot partition the associated source.
* Joiner Transformation: you cannot partition the master source for a Joiner transformation.
* Normalizer Transformation
* XML targets.
No best answer available. Please pick the good answer available or submit your answer.
September 07, 2006 14:32:12 #1
Manasa
RE: what are the transformations that restrict the par...
=======================================
1) Source definition 2) Sequence Generator 3) Unconnected transformation 4) XML target definition
=======================================
Advanced External Procedure transformation and External Procedure transformation: this transformation contains a check box on the properties tab to allow partitioning.
Aggregator Transformation: if you use sorted ports you cannot partition the associated source.
Joiner Transformation: you cannot partition the master source for a Joiner transformation.
Normalizer Transformation
XML targets.
=======================================

225.Informatica - What is incremental loading? What is versioning in 7.1?
QUESTION #225
No best answer available. Please pick the good answer available or submit your answer.
September 11, 2006 11:06:34 #1
Anuj Agarwal
RE: Wht is incremental loading?Wht is versioning in 7....
Incremental loading in a DWH means loading only the changed and new records, i.e. not loading the as-is records which already exist.
Versioning in Informatica 7.1 is like a configuration management system where you have every version of the mapping you ever worked upon. Whenever you have checked an object out and created a lock, no one else can work on the same mapping version. This is very helpful in an environment where you have several users working on a single feature.
=======================================
Hi
The Type 2 Dimension/Version Data mapping filters source rows based on user-defined comparisons and inserts both new and changed dimensions into the target. Changes are tracked in the target table by versioning the primary key and creating a version number for each dimension in the table. In the Type 2 Dimension/Version Data target, the current version of a dimension has the highest version number and the highest incremented primary key of the dimension.
Use the Type 2 Dimension/Version Data mapping to update a slowly changing dimension table when you want to keep a full history of dimension data in the table. Version numbers and versioned primary keys track the order of changes to each dimension.
Shivaji Thaneru
=======================================


Hi
Please see the incremental loading answer under CDC. Now I will tell you about versioning.
Simply, if anybody has worked with a programming language, it is like SOS (source of site), meaning the place where the history of the code is available. If anybody doesn't know this concept, read the following.
Say I developed some software and stored it in one area. After that, some enhancement happens, so I download the code, make the modification, and keep it in the same area but with another file name. If another developer wants it, he simply downloads it, modifies or adds to it, and again keeps that source code in the same place, but with another file name. In this way the history is maintained. If we find there is a bug in the previous version, we simply revert the changes by downloading that source.
thanks
madhu
=======================================
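For the incremental-loading half of the question, a common pattern is to filter the Source Qualifier on a "last run" value. The sketch below assumes a hypothetical mapping parameter $$LAST_EXTRACT_DATE that is supplied from a parameter file and advanced after every successful run:

-- Source Qualifier override: pick up only rows changed since the previous run.
SELECT order_id,
       order_amount,
       last_update_date
FROM   orders
WHERE  last_update_date > TO_DATE('$$LAST_EXTRACT_DATE', 'YYYY-MM-DD HH24:MI:SS');
=======================================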

226.Informatica - What is ODS? what data loaded from it? What is DW architecture?
QUESTION #226
No best answer available. Please pick the good answer available or submit your answer.
September 11, 2006 10:58:25 #1
A Agarwal
RE: Wht is ODS ?wht data loaded from it ?Wht is DW ar...
ODS -- Operational Data Store, normally in 3NF form. Data is stored with the least redundancy.
General architecture of a DWH:
OLTP System --> ODS --> DWH (denormalised star or snowflake, varying case to case)
=======================================
This concept is really good. I will tell you clearly.
Assume that I have a 24/7 company whose peak hours are 9 to 9, and around 40000 records are added or modified per day. At 9 o'clock I take a backup and leave. From 9 p.m. to 9 a.m., instead of storing the data on the same server, I store it separately. Assume that 10000 records are added in this time; then the next day morning, when I am dumping the data, there is no need to take 40000 + 10000, which would be very slow performance-wise, so I can take the 10000 records directly. This concept is what we call an ODS, Operational Data Store.
The architecture can be in two ways:
ODS --> WH
ODS --> Staging Area --> WH
Thanks & regards
madhu
=======================================
ODS is an integrated view of the operational sources (OLTP).
=======================================
thq. From your answer I can come to one conclusion: that the ODS is used to store the current data. I assume that by default it will add those 40000 records to this current data of 10000 records and give the result 50000. Could you please reply to this? thq...
=======================================

227.Informatica - what are the type costing functions in informatica
QUESTION #227
No best answer available. Please pick the good answer available or submit your answer.
September 22, 2006 10:02:20 #1
calltomadhu Member Since: September 2006 Contribution: 34
RE: what are the type costing functions in informatica...
The question is not clear; can you repeat the question with a full description? There are no specific costing functions in Informatica.
thanks
madhu.
=======================================

228.Informatica - what is the repository agent?
QUESTION #228
No best answer available. Please pick the good answer available or submit your answer.
September 12, 2006 11:07:42 #1
Shivat Member Since: September 2006 Contribution: 9
RE: what is the repository agent?
Hi
The Repository Agent is a multi-threaded process that fetches, inserts, and updates metadata in the repository database tables. The Repository Agent uses object locking to ensure the consistency of metadata in the repository.
Shivaji Thaneru
=======================================
Hi
The Repository Server uses a process called the Repository Agent to access the tables of the repository database. The Repository Server uses multiple Repository Agent processes to manage multiple repositories on different machines on the network, using native drivers.
chsrgeekI
=======================================
Hi
The name itself says it: agent means a mediator between the Repository Server and the repository database tables. Simply put, the Repository Agent is what talks to the repository.
thanks
madhu
=======================================

229.Informatica - what is the basic language of informatica?
QUESTION #229
No best answer available. Please pick the good answer available or submit your answer.
September 15, 2006 14:38:58 #1
vick
RE: what is the basic language of informatica?
SQL Plus
=======================================
Hi
The basic language is Latin, and it was developed in VC++.
Thanks & Regards
Madhu D.
=======================================
The basic language of Informatica is SQL Plus; only then will it understand the database language.
=======================================

230.Informatica - What is CDC?
QUESTION #230
No best answer available. Please pick the good answer available or submit your answer.
September 18, 2006 08:42:07 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
RE: Wht is CDC?
Changed Data Capture (CDC) helps identify the data in the source system that has changed since the last extraction. With CDC, data extraction takes place at the same time the insert, update, or delete operations occur in the source tables, and the change data is stored inside the database in change tables. The change data thus captured is then made available to the target systems in a controlled manner.
Mail me to discuss anything related to Informatica; it's my pleasure: [email protected]
=======================================
CDC: Changed Data Capture. The name itself says that if any data is changed, it is about how to get those values. For this we have Type 1, Type 2, and Type 3 CDC; depending upon our requirement we can follow one of them.
thanks
madhu
=======================================
Whenever any source data is changed we need to capture it in the target system also. This can basically be done in 3 ways:
the target record is completely replaced with the new record (Type 1);
complete changes can be captured as different records and stored in the target table (Type 2);
only the last change and the present data can be captured (Type 3).
CDC can generally be done by using a timestamp or a version key.
=======================================


231.Informatica - what r the mapping specifications? how versionzing of repository objects?
QUESTION #231
No best answer available. Please pick the good answer available or submit your answer.
September 19, 2006 02:45:04 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
RE: what r the mapping specifications? how versionzing...
Mapping Specification
It is a metadata document of a mapping. A typical mapping specification contains:
1. Mapping Name
2. Business Requirement Information
3. Source System: Initial Rows, Short Description, Refresh Frequency, Preprocessing, Post Processing, Error Strategy, Reload Strategy, Unique Source Fields
4. Target System: Rows/Load
5. Sources: Tables (Table Name, Schema/Owner, Selection/Filter); Files (File Name, File Owner, Unique Key)
6. Targets: information about the different targets, along the same lines as above
7. Information about lookups
8. Source To Target Field Matrix: Target Table, Target Column, Data-type, Source Table, Source Column, Data-type, Expression, Default Value if Null, Data Issues/Quality/Comments
Coming to versioning of objects in the repository, in this we have two things:
1. Checkout: when a user is modifying an object (source, target, mapping) he can check it out, that is, lock it, so that until he releases it nobody else can access it.
2. Checkin: when you want to commit an object you use the checkin feature.
=======================================
Please let me know the answer for this question.
=======================================
hi
A mapping is nothing but a flow of work: where the data is coming from and where the data is going. For this we need a mapping name, a source table, a target table, and a session.
thanks
madhu
=======================================

232.Informatica - what is bottleneck in informatica?
QUESTION #232
No best answer available. Please pick the good answer available or submit your answer.
September 26, 2006 09:46:17 #1
opbang Member Since: March 2006 Contribution: 46
RE: what is bottleneck in informatica?
Bottleneck in Informatica
A bottleneck in ETL processing is the point at which the performance of the ETL process slows down. When an ETL process is in progress, first log in to the Workflow Monitor and observe the performance statistics, i.e. observe the rows processed per second. (In SSIS and DataStage, when you run a job you can see at every level how many rows per second are processed by the server.)
Mostly, bottlenecks occur at the Source Qualifier while fetching data from the source, at Joiners, at Aggregators, during lookup cache building, and at the session level. Removing bottlenecks is performance tuning.
=======================================

233.Informatica - What is the differance between Local and Global repositary?
QUESTION #233
No best answer available. Please pick the good answer available or submit your answer.
September 26, 2006 09:55:03 #1
opbang Member Since: March 2006 Contribution: 46
RE: What is the differance between Local and Global re...
You can develop global and local repositories to share metadata:
- Global repository. The global repository is the hub of the domain. Use the global repository to store common objects that multiple developers can use through shortcuts. These objects may include operational or application source definitions, reusable transformations, mapplets, and mappings.
- Local repositories. A local repository is any repository within the domain that is not the global repository. Use local repositories for development. From a local repository you can create shortcuts to objects in shared folders in the global repository. These objects typically include source definitions, common dimensions and lookups, and enterprise-standard transformations. You can also create copies of objects in non-shared folders.
=======================================


234.Informatica - Explain in detail about Key Range & Round Robin partition with an example.
QUESTION #234
No best answer available. Please pick the good answer available or submit your answer.
October 11, 2006 02:03:38 #1
srinivas vadlakonda
RE: Explain in detail about Key Range & Round Robin pa...
Key range: the Informatica server distributes the rows of data based on the set of ports that you specify as the partition key.
Round robin: the Informatica server distributes an equal number of rows to each partition.
=======================================

235.Informatica - COMMITS: What is the use of Source-based commits? Please tell with an example?
QUESTION #235
No best answer available. Please pick the good answer available or submit your answer.
September 29, 2006 02:20:42 #1
srinivas vadlakonda
RE: COMMITS: What is the use of Source-based commits ?...
It commits the data based on the source records.
=======================================
If the selected commit type is target, then once the cache is holding some 10000 records the server will commit them; here the server is least bothered about the number of source records processed.
If the selected commit type is source, then once 10000 records have been queried from the source the server will commit immediately; here the server is least bothered about how many records have been inserted into the target.
=======================================

in case of bulk for group of records a dml statement will created and executedbut in the case of normal for every recorda a dml statement will created and executedif u selecting bulk performance will be increasing=======================================Bulk mode is used for Oracle/SQLserver/Sybase. This mode improves performance by not writing to

Page 232: Informatica Interview QnA

the database log. As a result when using this mode recovery is unavailable. Further this mode doesn'twork when update transformation is used and there shouldn't be any indexes or constraints on the table.Ofcourse one can use the pre-session and post-session SQLs to drop and rebuild indexes/constraints.=======================================
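As a rough illustration of that pre-/post-session SQL idea (the table and index names below are made up for the example; the exact DDL depends on your database):
-- Pre-session SQL: drop the index so the bulk load does not have to maintain it
DROP INDEX idx_sales_cust_id;
-- Post-session SQL: rebuild the index after the load completes
CREATE INDEX idx_sales_cust_id ON sales_fact (customer_id);
=======================================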

237. Informatica - Which transformation has the most complexity, Lookup or Joiner?
QUESTION #237
No best answer available. Please pick the good answer available or submit your answer.
October 11, 2006 01:51:17 #1
srinivas vadlakonda
RE: Which transformation has the most complexity Looku...
The Lookup transformation checks a condition like the Joiner, but it has more features, such as getting a related value and updating slowly changing dimensions.
=======================================
Lookup - but it reduces the complexity of the solution and improves the performance of the workflows.
=======================================

238. Informatica - Where do we use a Star Schema & where a Snowflake?
QUESTION #238
No best answer available. Please pick the good answer available or submit your answer.
October 05, 2006 01:20:33 #1
Raj
RE: Where we use Star Schema & where Snowflake?
It depends on the client's requirements. Initially we implement the high-level design, so the client decides whether they want normalized data (snowflake schema) or de-normalized data (star schema) for their analysis, and we implement whatever their requirements are.
=======================================

239. Informatica - Can we create duplicate rows in a star schema?
QUESTION #239
No best answer available. Please pick the good answer available or submit your answer.
October 25, 2006 09:02:00 #1
Ashwani
RE: Can we create duplicate rows in star schema?
Duplicate rows have nothing to do with the star schema. The star schema is a methodology, and duplicate rows are part of the actual implementation in the database. If you look at the surrogate key concept, then no: apart from the surrogate key, the other columns can be duplicated, and there is another special case if you have implemented a unique index.
=======================================

240. Informatica - Where will a persistent cache be stored?
QUESTION #240
No best answer available. Please pick the good answer available or submit your answer.
October 01, 2006 10:04:07 #1
Vamsi Krishna.K
RE: Where persistent cache will be stored?
The Informatica server saves the cache files for the session and reuses them for the next session; because of that the queries against the lookup table are reduced, so there will be some performance improvement.
=======================================
A persistent cache is stored in the server's cache folder.
=======================================

241. Informatica - What is SQL override? In which transformations do we use override?
QUESTION #241
No best answer available. Please pick the good answer available or submit your answer.
October 01, 2006 10:01:32 #1
Vamsi Krishna.K
RE: What is SQL override? In which transformation we u...
Hello,
By default the Informatica server generates an SQL query for every action. If that query is not able to perform the exact task, we can modify the query or generate a new one with new conditions and new constraints. Overrides are available in:
1. Source Qualifier
2. Lookup
3. Target
=======================================

242. Informatica - Can you update the Target table?
QUESTION #242
No best answer available. Please pick the good answer available or submit your answer.
October 01, 2006 09:55:26 #1
Vamsi Krishna.K
RE: Can you update the Target table?
Hello,
We do have to update the target table. If you are loading Type-1 or Type-2 dimension data at the target you surely have to. This update can be done in two ways:
1. using an Update Strategy
2. target update override.
Thanks
vamsi.
=======================================

243. Informatica - How frequently do you load the data?
QUESTION #243
No best answer available. Please pick the good answer available or submit your answer.
October 04, 2006 01:39:58 #1
opbang Member Since: March 2006 Contribution: 46
RE: At what frequent u load the data?
Loading frequency depends on the requirements of the business users. It could be daily during midnight, or weekly. Depending on the frequency, the ETL process should take care of how the updated transaction data will replace data in the fact tables. Other factors: how fast the OLTP is updating, the data volume, and the available time window for extracting data.
=======================================

244. Informatica - What is a Materialized view? Difference between Materialized view and view.
QUESTION #244
No best answer available. Please pick the good answer available or submit your answer.
October 11, 2006 01:45:10 #1
srinivas.vadlakonda
RE: What is a Materialized view? Diff. Between Materia...
Materialized views are used in data warehousing to precompute and store aggregated data, such as the sum of sales, and they are used to increase the speed of queries against large databases. A view is just a stored query that meets our criteria; it does not occupy space.
=======================================
A view doesn't occupy any storage space in the tablespace, but a materialized view will occupy space.
=======================================
A materialized view stores the result set of a query, but a normal view does not. We can refresh the materialized view when any changes are made in the master table. A normal view is only for viewing the records. We can perform DML operations and direct-path insert operations on a materialized view.
=======================================
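A minimal sketch of the difference in Oracle syntax, assuming a hypothetical sales table:
-- A view stores only the query text; rows are fetched from the base table at query time
CREATE VIEW v_sales_summary AS
  SELECT product_id, SUM(amount) AS total_amount
  FROM sales
  GROUP BY product_id;
-- A materialized view stores the result set itself and can be refreshed periodically
CREATE MATERIALIZED VIEW mv_sales_summary
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
  SELECT product_id, SUM(amount) AS total_amount
  FROM sales
  GROUP BY product_id;
=======================================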

245. Informatica - Is it possible to refresh the Materialized view?
QUESTION #245
No best answer available. Please pick the good answer available or submit your answer.
October 05, 2006 05:37:16 #1
lingesh
RE: Is it possible to refresh the Materialized view?
Yes, we can refresh a materialized view. While creating materialized views we can give options such as REFRESH FAST and we can mention a time so that it refreshes automatically and fetches new data, i.e. updated and inserted data. For an active data warehouse, i.e. in order to have a real-time data warehouse, we can use materialized views.
EX:- see this
CREATE MATERIALIZED VIEW mv_emp_pk
REFRESH FAST START WITH SYSDATE
NEXT SYSDATE + 1/48
WITH PRIMARY KEY
AS SELECT * FROM emp@remote_db;
=======================================

246. Informatica - What are the common errors that you face daily?
QUESTION #246
No best answer available. Please pick the good answer available or submit your answer.
October 30, 2006 04:44:35 #1
shiva
RE: What are the common errors that you face daily?
Mostly we get an Oracle fatal error, when the server is not able to connect to the Oracle server.
=======================================

247. Informatica - What is a Shortcut? What is the use of it?
QUESTION #247
No best answer available. Please pick the good answer available or submit your answer.
October 03, 2006 04:17:18 #1
Vamsi Krishna K.
RE: What is Shortcut? What is use of it?
A shortcut is a facility provided by Informatica to share metadata objects across folders without copying the objects to every folder. We can create shortcuts for source definitions, reusable transformations, mapplets, mappings, target definitions and business components.
There are two different types of shortcuts:
1. local shortcut
2. global shortcut
=======================================

248. Informatica - What is the use of a Factless Fact table?
QUESTION #248
No best answer available. Please pick the good answer available or submit your answer.
October 04, 2006 01:24:36 #1
opbang Member Since: March 2006 Contribution: 46
RE: what is the use of Factless Facttable?
A factless fact table is a fact table that does not have any measures. For example, you want to store the attendance information of students. This table will tell you, date-wise, whether a student attended the class or not, but there are no measures because items such as fees paid are not daily.
=======================================
A transaction can occur without a measure, for example a victim id.
=======================================
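A minimal sketch of such a factless fact table for the attendance example (all names are hypothetical): it holds only foreign keys to the dimensions, and counting rows answers questions such as "how many students attended on a given day".
-- Factless fact table: records that an event happened, with no measure columns
CREATE TABLE attendance_fact (
    date_key     INTEGER NOT NULL,  -- FK to the date dimension
    student_key  INTEGER NOT NULL,  -- FK to the student dimension
    class_key    INTEGER NOT NULL,  -- FK to the class dimension
    PRIMARY KEY (date_key, student_key, class_key)
);
-- Counting rows gives the attendance per day
SELECT date_key, COUNT(*) AS attendance_count
FROM attendance_fact
GROUP BY date_key;
=======================================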

249. Informatica - While running a session, what are the two files it will create?
QUESTION #249
No best answer available. Please pick the good answer available or submit your answer.
October 04, 2006 01:20:21 #1
opbang Member Since: March 2006 Contribution: 46
RE: while Running a Session, what are the two files it...
Session log file and session detail file.
=======================================
Besides the session log, it also creates the following files if applicable: reject files, target output file, incremental aggregation file, cache file.
=======================================

250. Informatica - Give me a scenario where flat files are used.
QUESTION #250
No best answer available. Please pick the good answer available or submit your answer.
October 05, 2006 00:26:01 #1
opbang Member Since: March 2006 Contribution: 46
RE: give me an scenario where flat files are used?
Loading data from flat files to a database is faster. You are receiving data from a remote location; at the remote location the required data can be converted into a flat file, and the same file can be used at the target location for loading. This minimizes the bandwidth requirement and gives faster transmission.
=======================================
Hi. Flat files have some advantages which normal tables do not have. The first one is explained in the first post. Secondly, a flat file can handle case-sensitivity issues in the data easily, where a normal table has errors.
Kapil Goyal
=======================================

251. Informatica - Architectural difference between Informatica 7.1 and 5.1?
QUESTION #251
No best answer available. Please pick the good answer available or submit your answer.
October 12, 2006 14:57:59 #1
zskhan Member Since: June 2006 Contribution: 3
RE: Architectural diff b/w informatica 7.1 and 5.1?
1. v7 has a repository server & pmserver; v5 had pmserver only. pmserver does not talk directly to the repository database; it talks to the repository server, which in turn talks to the database.
=======================================

252. Informatica - What are the types of data flows in Workflow Manager?
QUESTION #252
No best answer available. Please pick the good answer available or submit your answer.
October 10, 2006 13:03:29 #1
sravangopisetty
RE: what r the types of data flows in workflow manager...
Types of data flows:
1. sequential execution
2. parallel execution
3. control flow execution
=======================================

253. Informatica - What are the types of target loads?
QUESTION #253
No best answer available. Please pick the good answer available or submit your answer.
October 13, 2006 15:03:04 #1
Anonymous
RE: what r the types of target loads
Hi,
The target load plan is the process through which you can decide the precedence of the target load.
Let's say you have three Source Qualifiers and three instances of targets. By default Informatica will load the data into the first target, but using the target load plan you can change this sequence by selecting which target you want to be loaded first.
=======================================
We define target load plans in 2 ways:
1. in the mapping
2. in the session
=======================================
There are two types of target load types:
1. bulk
2. normal
=======================================

254. Informatica - Can we use a lookup instead of a join? Reason.
QUESTION #254
No best answer available. Please pick the good answer available or submit your answer.
October 11, 2006 01:28:11 #1
srinivas vadlakonda
RE: Can we use lookup instead of join? reason
Yes, we can use the Lookup transformation instead of a Joiner, but we can take homogeneous sources only. If you take a Joiner we can join heterogeneous sources.
=======================================
If the relationship to the other table is a one-to-one or many-to-one join, then we can use a lookup to get the required fields. In case the relationship is an outer join, joining both the tables will give correct results, as a lookup returns only one row if multiple rows satisfy the join condition.
=======================================

255. Informatica - What is SQL override, where do we use it and in which transformations?
QUESTION #255
No best answer available. Please pick the good answer available or submit your answer.
October 10, 2006 12:33:10 #1
sravangopisetty
RE: what is sql override where do we use and which tra...
The default SQL returned by the Source Qualifier can be overwritten by using the Source Qualifier. The transformation is the Source Qualifier transformation.
=======================================
2. Lookup transformation
3. Target
=======================================
Besides the answers above: in the session properties -> Transformations tab -> Source Qualifier -> SQL query, by default it will be the query from the mapping. If you overwrite it, this is also called 'SQL override'.
=======================================
We use SQL override for 3 transformations when the source is homogeneous:
1. Joiner transformation
2. Filter transformation
3. Sorter transformation
=======================================

256. Informatica - In a flat file will SQL override work or not? What is the extension of a flat file?
QUESTION #256
No best answer available. Please pick the good answer available or submit your answer.
October 06, 2006 09:39:08 #1
suri
RE: In a flat file sql override will work r not? what ...
Nope. .out is the extension.
=======================================
In a flat file SQL override will not work. We have different properties to set for a flat file. If you are talking about the flat file as a source, it can be of any extension like .dat, .doc etc. If it is a target file it will have the extension .out, which can be altered in the target properties.
Regards
Rajesh
=======================================
Using flat files, SQL override will not work. The extensions of flat files are .txt, .doc, .dat, and the output or target flat file extension is .out.
=======================================

257. Informatica - Which is the more cost-effective transformation between Lookup and Joiner?
QUESTION #257
No best answer available. Please pick the good answer available or submit your answer.
October 11, 2006 11:14:23 #1
Myk
RE: what is cost effective transformation b/w lookup a...
Are you looking up a flat file or a database table? Generally a sorted Joiner is more effective on flat files than a Lookup, because the sorted Joiner uses a merge join and caches fewer rows, while a Lookup always caches the whole file. If the file is not sorted they can be comparable. Lookups into a database table can be effective if the database can return sorted data fast and the amount of data is small, because the lookup can create the whole cache in memory. If the database responds slowly or a big amount of data is processed, lookup cache initialization can be really slow (the lookup waits for the database and stores cached data on disk). Then it can be better to use a sorted Joiner, which sends data to the output as it reads it on the input.
=======================================

258. Informatica - WHAT IS THE DIFFERENCE BETWEEN LOGICAL DESIGN AND PHYSICAL DESIGN IN A DATAWAREHOUSE?
QUESTION #258
No best answer available. Please pick the good answer available or submit your answer.
October 13, 2006 15:13:45 #1
Anonymous
RE: WHAT IS THE DIFFERENCE BETWEEN LOGICAL DESIG...
During the logical design phase you define a model for your data warehouse consisting of entities, attributes and relationships. The entities are linked together using relationships.
During the physical design process you translate the expected schemas into actual database structures. At this time you have to map:
- Entities to tables
- Relationships to foreign key constraints
- Attributes to columns
- Primary unique identifiers to primary key constraints
- Unique identifiers to unique key constraints
=======================================
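A small sketch of that logical-to-physical mapping in SQL, using hypothetical customer/order entities (the names are illustrative only):
-- Entity CUSTOMER becomes a table; its attributes become columns,
-- and its primary unique identifier becomes a primary key constraint
CREATE TABLE customer (
    customer_id   INTEGER      NOT NULL,
    customer_name VARCHAR(100),
    CONSTRAINT pk_customer PRIMARY KEY (customer_id)
);
-- The CUSTOMER-places-ORDER relationship becomes a foreign key constraint
CREATE TABLE orders (
    order_id    INTEGER NOT NULL,
    customer_id INTEGER NOT NULL,
    order_date  DATE,
    CONSTRAINT pk_orders PRIMARY KEY (order_id),
    CONSTRAINT fk_orders_customer FOREIGN KEY (customer_id)
        REFERENCES customer (customer_id)
);
=======================================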

259. Informatica - What is a surrogate key? How many surrogate keys are used in your dimensions?
QUESTION #259
No best answer available. Please pick the good answer available or submit your answer.
October 12, 2006 01:10:52 #1
srinivas vadlakonda
RE: what is surrogate key ? how many surrogate key use...
A surrogate key is used as a replacement for the primary key. The DWH does not depend upon the primary key; the surrogate key is used to identify the internal records, and each dimension should have at least one surrogate key.
=======================================
A surrogate key or warehouse key acts like a composite primary key. If the target doesn't have a unique key, the surrogate key helps to address a particular row. It is like a primary key in the target.
=======================================
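A minimal sketch of how a surrogate key is often generated on the database side (Oracle syntax, with a hypothetical customer dimension); inside Informatica the same role is usually played by a Sequence Generator transformation:
-- Sequence that supplies the surrogate key values
CREATE SEQUENCE customer_dim_seq START WITH 1 INCREMENT BY 1;
-- The dimension keeps the surrogate key separate from the natural (source) key
CREATE TABLE customer_dim (
    customer_sk   INTEGER      NOT NULL,  -- surrogate key
    customer_id   VARCHAR(20)  NOT NULL,  -- natural key from the source system
    customer_name VARCHAR(100),
    CONSTRAINT pk_customer_dim PRIMARY KEY (customer_sk)
);
INSERT INTO customer_dim (customer_sk, customer_id, customer_name)
VALUES (customer_dim_seq.NEXTVAL, 'C1001', 'John Smith');
=======================================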

260. Informatica - What are the advantages and disadvantages of a star schema and a snowflake schema? Thanks in advance.
QUESTION #260
No best answer available. Please pick the good answer available or submit your answer.
October 16, 2006 02:14:54 #1
srinivas.vadlakonda
RE: what r the advantages and disadvantagesof a ...
Schemas are of two types: star schema and snowflake schema. In a star schema the fact table is in normalized format and the dimension tables are in denormalized format. In a snowflake schema both fact and dimension tables are in normalized format. If you take a snowflake it requires more dimension tables and more foreign keys, so it will reduce query performance, but it will reduce redundancy.
=======================================
Main advantages of a star schema:
1) it supports drilling and drill-down options
2) fewer tables
3) less database space
In a snowflake schema:
1) query performance degrades because more joins are needed
2) it saves a small amount of storage space
=======================================
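As a rough illustration of why the extra joins matter, assuming a hypothetical sales fact with a product dimension (in the snowflake variant the product category is normalized into its own table):
-- Star schema: one join from the fact to the denormalized product dimension
SELECT d.category_name, SUM(f.sales_amount)
FROM sales_fact f
JOIN product_dim d ON f.product_key = d.product_key
GROUP BY d.category_name;
-- Snowflake schema: the same question needs an extra join to the category table
SELECT c.category_name, SUM(f.sales_amount)
FROM sales_fact f
JOIN product_dim d  ON f.product_key = d.product_key
JOIN category_dim c ON d.category_key = c.category_key
GROUP BY c.category_name;
=======================================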

261. Informatica - Why do we use a reusable Sequence Generator transformation only in a mapplet?
QUESTION #261
No best answer available. Please pick the good answer available or submit your answer.
November 16, 2006 06:05:48 #1
Sarada
RE: why do we use reusable sequencegenerator transform...
The relation between a reusable Sequence Generator and a mapplet is indirect.
Reusable Sequence Generators are preferably used when we want the same sequence (that is, the next value of the sequence) to be used in more than one mapping (maybe because this next value is loading the same field of the same table in different mappings, and to maintain the continuity it is required).
Suppose there are two mappings using a reusable Sequence Generator and we run the two mappings one by one. Here, if the last value of the sequence generator for the mapping1 run is 999, then the sequence generator value for the second mapping will start from 1000.
This is how we relate the reusable Sequence Generator and the mapplet.
=======================================
Hi,
My question is: are you sure it's going to start with 1000 after 999? By default a reusable SEQUENCE has a cache value set to 1000; that's why it takes 1000. Even if you only have 595 records for the 1st session, the 2nd session will automatically start with 1000 as the sequence number because the cache value is set to it.
I have another question. Is it possible to change Number of Cached Values always to 1 instead of changing it after each time the session/sessions is/are run?
Thanks
Philip
=======================================
Hi,
The solution provided is correct. I would like to add more information to it.
A reusable Sequence Generator is a must for a mapplet: a mapplet is basically meant to reuse a mapping as such, so if a non-reusable sequence generator is used in a mapplet then the sequence of numbers it generates mismatches and creates problems. Thus it is made a must.
=======================================

262. Informatica - In which particular situation do we use the unconnected lookup transformation?
QUESTION #262 Submitted by: sridhar39
Hi, both unconnected and connected lookups will provide a single output. If it is the case that we can use either unconnected or connected, I prefer unconnected. Why? Because the unconnected lookup does not participate in the data flow, so the Informatica server creates a separate cache for it and processing takes place in parallel, so performance increases.
Above answer was rated as good by the following members: Vamshidhar
We can use the unconnected lookup transformation when we need to return only one port; in that case I will use the unconnected lookup transformation instead of the connected one. We can also use the connected lookup to return one port, but if you take the unconnected lookup transformation it is not connected to the other transformations and it is not a part of the data flow, which is why performance will increase.
=======================================
The major advantage of the unconnected lookup is its reusability. We can call an unconnected lookup multiple times in the mapping, unlike a connected lookup.
=======================================
We can use the unconnected lookup transformation when we need to return the output from a single port. If we want the output from multiple ports, then we have to use the connected lookup transformation.
=======================================
Use of a connected or unconnected lookup is completely based on the logic which we need. However, I just wanted to clarify that we can get data from multiple columns from an unconnected lookup as well: just concatenate all the values which you want, get the result from the return port of the unconnected lookup, and then split it further in an Expression transformation. However, using an unconnected lookup takes more time, as it breaks the flow and goes to the unconnected lookup to fetch the results.
=======================================

263. Informatica - In which particular situation do we use a dynamic lookup?
QUESTION #263
No best answer available. Please pick the good answer available or submit your answer.
October 17, 2006 01:06:34 #1
srinivas vadlakonda
RE: in which particular situation we use dynamic looku...
There is no one specific situation in which to use the dynamic lookup, but if you use it, it will increase performance and also minimize other transformations such as the Sequence Generator.
=======================================
If the number of records is in the hundreds, one doesn't see much difference whether a static cache or a dynamic cache is used. If there are thousands of records, a dynamic cache kills time because it commits to the database for each insert or update it makes.
=======================================

264. Informatica - Is there any relationship between Java & Informatica?
QUESTION #264
No best answer available. Please pick the good answer available or submit your answer.
October 18, 2006 07:04:27 #1
phanimv Member Since: July 2006 Contribution: 41
RE: is there any relationship between java & inforemat...
Like Java, Informatica is platform-independent, portable and architecturally neutral.
=======================================

265. Informatica - How many types of flat files are available in Informatica?
QUESTION #265
No best answer available. Please pick the good answer available or submit your answer.
October 16, 2006 04:08:09 #1
subbarao.g
RE: How many types of flatfiles available in Informati...
There are two types of flat files:
1. Delimited
2. Fixed-width
=======================================
Thank you very much Subbarao.
=======================================

266. Informatica - What is event-based scheduling?
QUESTION #266
No best answer available. Please pick the good answer available or submit your answer.
October 25, 2006 15:42:32 #1
sn3508 Member Since: April 2006 Contribution: 20
RE: what is the event-based scheduling?
In time-based scheduling the jobs run at the specified time. In some situations we have to run a job based on some event, for example only when a file arrives should the job run, whatever the time is. In such cases event-based scheduling is used.
=======================================
Event-based scheduling uses an indicator file. When you don't know when the source data will arrive, you use a shell command, script or batch file to send the indicator file to the local directory of the Informatica server; the server waits for the indicator file before running the session.
=======================================

267. Informatica - What is the new lookup port in the Lookup transformation?
QUESTION #267
No best answer available. Please pick the good answer available or submit your answer.
October 25, 2006 15:37:38 #1
sn3508 Member Since: April 2006 Contribution: 20
RE: what is the new lookup port in look-up transformat...
I hope you're asking about the 'add a new port' button in the Lookup transformation. If the answer is yes, this button creates a port where we can enter the name, datatype etc. of a port. This is mainly used with an unconnected lookup; it reflects the datatype of the input port.
=======================================
It seems you are talking about NewLookupRow: when you configure your lookup with a dynamic lookup cache, by default it generates NewLookupRow, which tells Informatica whether the row you got is an existing or a new row; if it is a new row it passes the data, or else it discards it.
=======================================
This port is added by the PowerCenter Client (Designer) to a Lookup transformation whenever a dynamic cache is used. This port is an indicator to the Informatica server of whether it inserts or updates the dynamic cache, through a numeric value [0 1 2].
=======================================
The new port in the lookup transformation is the Associated port.
Cheers
Bobby
=======================================

268. Informatica - What is dynamic insert?
QUESTION #268
No best answer available. Please pick the good answer available or submit your answer.
November 15, 2006 00:22:38 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: what is dynamic insert?
When we select the dynamic cache in the Lookup transformation, the Informatica server creates the NewLookupRow port, which indicates with a numeric value whether the Informatica server inserts, updates or makes no changes to the lookup cache. If you associate a sequence ID, the Informatica server creates a sequence ID for newly inserted records.
=======================================
Hi srinu,
Can you explain how to associate a sequence?
Thanks
Sri.
=======================================

269. Informatica - How did you handle errors? (ETL-Row-Errors)
QUESTION #269
No best answer available. Please pick the good answer available or submit your answer.
November 29, 2006 02:02:06 #1
phani
RE: how did you handle errors?(ETL-Row-Errors)
Hi friend,
If an error occurs it is stored in the target_table.bad file.
The errors are of two types:
1. row-based errors
2. column-based errors
Column-based errors are identified by indicators:
D - GOOD DATA, N - NULL DATA, O - OVERFLOW DATA, R - REJECTED DATA
The data stored in the .bad file looks like:
D1232234O877NDDDN23
If there is any doubt, send a message to my mail id.
=======================================

270. Informatica - How do you set up a schedule for data loading from scratch?
QUESTION #270
No best answer available. Please pick the good answer available or submit your answer.
December 12, 2006 17:08:30 #1
hanug Member Since: June 2006 Contribution: 24
RE: HOw do u setup a schedule for data loading from sc...
Whether you are loading data from scratch (for the first time) or doing subsequent loads, there are no changes to the scheduling. The change is in how to pick up the delta data.
Hanu.
=======================================

271. Informatica - How do you select duplicate rows using Informatica?
QUESTION #271
No best answer available. Please pick the good answer available or submit your answer.
October 20, 2006 08:52:55 #1
Sharmila
RE: HOw do u select duplicate rows using informatica?
I thought we could identify duplicates by using the Rank transformation.
=======================================
Can you explain what the steps are for identifying the duplicates with the help of the Rank transformation?
=======================================
Can you explain in detail?
=======================================
Hi,
You can write a SQL override in the Source Qualifier (to eliminate duplicates). For that we can use the DISTINCT keyword.
For example, consider a table dept_test(deptno, deptname) having duplicate records in it. Then write one of the following queries in the Source Qualifier SQL override:
1) select distinct deptno, deptname from dept_test;
2) select avg(deptno), deptname from dept_test group by deptname;
If you want to have only the duplicate records, then write the following query in the Source Qualifier SQL override:
select distinct deptno, deptname from dept_test a
where deptno in (select deptno from dept_test b group by deptno having count(1) > 1);
=======================================
I think we can't select duplicates with the Rank transformation; if it is possible, please explain how to do it.
=======================================
We can get the duplicate records by using the Rank transformation.
=======================================
We can also use the Sorter transformation and select the Distinct check box.
=======================================

272. Informatica - How to load data to a target where the source and targets are XMLs?
QUESTION #272
No best answer available. Please pick the good answer available or submit your answer.
October 25, 2006 15:23:31 #1
sn3508 Member Since: April 2006 Contribution: 20
RE: How to load data to target where the source and ta...
- If you don't have the structures, create the source or target structure of the XML file by going to the Sources or Targets menu, selecting 'Import XML' and following the steps.
- Follow the regular steps you would use to create an ordinary mapping/session.
- In the session you have to mention the location and name of the source/target.
- Once the session succeeds, the XML file will be generated in the specified location.
=======================================

273. Informatica - What is TOAD and for what purpose is it used?
QUESTION #273
No best answer available. Please pick the good answer available or submit your answer.
October 24, 2006 04:54:29 #1
srinivas vadlakonda
RE: What TOAD and for what purpose it will be used?
After creating the mapping we can execute the session for that mapping, and after that we can use unit testing to test the data. We can use Oracle, or TOAD, to test how many records were loaded into the target and whether the loaded records are correct.
=======================================
TOAD is a user-friendly interface for relational databases like Oracle, SQL Server and DB2. While unit testing the development, or querying tables, one can use TOAD with basic knowledge of database commands.
=======================================
Toad is an application development tool built around an advanced SQL and PL/SQL editor. Using Toad you can build and test PL/SQL packages, procedures, triggers and functions. You can create and edit database tables, views, indexes, constraints and users. The Schema Browser and Project Manager provide quick access to database objects.
Toad's SQL Editor provides an easy and efficient way to write and test scripts and queries, and its powerful data grids provide an easy way to view and edit Oracle data.
With Toad you can:
- View the Oracle Dictionary
- Create, browse or alter objects
- Graphically build, execute and tune queries
- Edit PL/SQL and profile stored procedures
- Manage your common DB tasks from one central window
- Find and fix database problems with constraints, triggers, extents, indexes and grants
- Create code from shortcuts and templates
- Create custom code templates
- Control code access and development (with or without a third-party version control product) using Toad's cooperative source control feature.
=======================================

274. Informatica - What is target load order?
QUESTION #274
No best answer available. Please pick the good answer available or submit your answer.
October 24, 2006 04:47:53 #1
ram gopal
RE: What is target load order ?
In a mapping, if there is more than one target table then we need to specify the order in which the target tables should be loaded.
Example: suppose in our mapping there are 2 target tables:
1. customer
2. audit table
First the customer table should be populated, then the audit table; for that we use target load order.
Hope you understood.
=======================================
The target load plan specifies the order in which the data is extracted from the source qualifiers.
=======================================

275. Informatica - How to extract 10 records out of 100 records in a flat file
QUESTION #275
No best answer available. Please pick the good answer available or submit your answer.
October 31, 2006 16:52:24 #1
sridhar
RE: How to extract 10 records out of 100 records in a ...
1. create an external directory
2. store the file in this external directory
3. create an external table corresponding to the file
4. query the external table to access records like you would a normal table
=======================================
Hi,
For a flat file source:
source -- SQ -- Sequence Generator transformation on a new field id -- Filter transformation with id <= 10 -- target
=======================================
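A hedged sketch of the external-table approach in Oracle syntax (the directory, file and column names below are hypothetical):
-- 1-2. Create the directory object that points to where the flat file is stored
CREATE DIRECTORY ext_dir AS '/data/flatfiles';
-- 3. Create an external table that maps onto the flat file
CREATE TABLE emp_ext (
    empno   NUMBER,
    ename   VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY ext_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
    )
    LOCATION ('emp.txt')
);
-- 4. Query it like a normal table, limiting the result to 10 records
SELECT * FROM emp_ext WHERE ROWNUM <= 10;
=======================================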

276. Informatica - How many types of TASKS do we have in Workflow Manager? What are they?
QUESTION #276
No best answer available. Please pick the good answer available or submit your answer.
October 31, 2006 00:08:06 #1
kumar
RE: How many types of TASKS we have in Workflomanager?...
Session, command, email notification.
=======================================
Workflow tasks: 1) session 2) command 3) email 4) control 5) pre-session 6) post-session 7) assignment
=======================================
1) session 2) command 3) email 4) event-wait 5) event-raise 6) assignment 7) control 8) decision 9) timer 10) worklet.
3), 8), 9) are self-explanatory. 1) runs mappings. 2) runs OS commands/scripts. 4 + 5) raise user-defined or pre-defined events and wait for the event to be raised. 6) assigns values to workflow variables. 10) runs worklets.
=======================================
The following tasks are available in Workflow Manager: Assignment, Control, Command, Decision, E-mail, Session, Event-Wait, Event-Raise and Timer. The tasks developed in the Task Developer are reusable tasks, and tasks which are developed in a workflow or worklet are non-reusable. Among these tasks only Session, Command and E-mail are reusable; the remaining tasks are non-reusable.
Regards rma
=======================================
1. Session
=======================================
We have Session, Command and Email tasks.
Cheers
Thana
=======================================

277. Informatica - What is a user-defined Transformation?
QUESTION #277
No best answer available. Please pick the good answer available or submit your answer.
November 02, 2006 15:51:39 #1
n k rajkumar
RE: What is user defined Transformation?
User-defined transformations are Stored Procedure and Source Qualifier.
=======================================

278. Informatica - What is the difference between a connected stored procedure and an unconnected stored procedure?
QUESTION #278
No best answer available. Please pick the good answer available or submit your answer.
November 15, 2006 00:11:33 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: what is the difference between connected stored pr...
A connected stored procedure executes as each row passes through the mapping; an unconnected stored procedure executes when it is called by another transformation. A connected stored procedure is part of the data flow in a pipeline, but an unconnected one is not.
=======================================

279. Informatica - What are semi-additive measures and fully additive measures?
QUESTION #279
No best answer available. Please pick the good answer available or submit your answer.
November 03, 2006 06:17:52 #1
rameshr
RE: what is semi additve measuresand fully addi...
Hi, this is Ramesh. (If anyone feels there is some conceptual problem with my solution, please let me know.)
There are three types of facts:
1. additive
2. semi-additive
3. non-additive
Additive means that when any measure is queried from the fact table, the result relates to all the dimension tables which are linked to the fact.
Semi-additive means that when any measure is queried from the fact table, the result relates to only some of the dimension tables.
Non-additive means that when any measure is queried from the fact table, it doesn't relate to any of the dimensions and the result comes directly from the measures of the same fact table. Ex: to calculate the total percentage of a loan, we just take the value from the fact measure (loan) and divide it by 100; we get it without the dimensions.
=======================================
Hi,
Additive means it can be summarized by any other column. Semi-additive means it can be summarized by some columns only.
=======================================
Hi, can you give one example of additive and semi-additive facts? It will be better for understanding.
Akhi
=======================================
The average monthly balance of your bank account is a semi-additive fact.
=======================================
A measurable value on which simple addition can be performed is called fully additive; such measures don't need two or more dimensions combined for their meaning.
Ex: product-wise total sales, branch-wise total sales.
A measurable value on which simple addition can't be performed is called semi-additive; such measures need two or more dimensions combined for their meaning.
Ex: customer-wise total sales amount ---------> has no meaning; customer-wise product total sales amount does.
If anyone has a doubt about this, let me know.
=======================================
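A rough SQL illustration of the distinction, assuming hypothetical sales and account-balance fact tables: a truly additive measure can be summed across any dimension, whereas a semi-additive measure such as a balance is summed across accounts but averaged (or the last value taken) across time, never summed:
-- Fully additive: sales amount can be summed across any dimension
SELECT product_id, SUM(sales_amount) FROM sales_fact GROUP BY product_id;
-- Semi-additive: a balance can be summed across accounts for one snapshot date
SELECT snapshot_date, SUM(balance) FROM account_balance_fact GROUP BY snapshot_date;
-- ...but across time it is typically averaged rather than summed
SELECT account_id, AVG(balance) FROM account_balance_fact GROUP BY account_id;
=======================================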

280. Informatica - What is the DTM process?
QUESTION #280
No best answer available. Please pick the good answer available or submit your answer.
November 06, 2006 04:40:39 #1
prasanna alur
RE: what is DTM process?
DTM means Data Transformation Manager. In Informatica this is the main background process; it runs after completion of the load manager. In this process the Informatica server looks up the source and target connections in the repository, and if they are correct the Informatica server fetches the data from the source and loads it into the target.
=======================================
DTM means Data Transformation Manager. It is one of the components of the Informatica architecture; it collects the data and loads the data.
=======================================
Load Manager process: starts the session, creates the DTM process and sends post-session email when the session completes.
The DTM process: creates threads to initialize the session, read, write and transform data, and handle pre- and post-session operations.
=======================================

281. Informatica - What are PowerMart and PowerCenter?
QUESTION #281
No best answer available. Please pick the good answer available or submit your answer.
November 06, 2006 04:34:18 #1
prasanna alur
RE: what is Powermart and Power Center?
In PowerCenter we can register multiple servers, but in PowerMart that's not possible. Another difference is that in PowerCenter we can create a global repository, but not in PowerMart.
=======================================
PowerCenter we use in the production environment; PowerMart we use in the development environment.
=======================================
Hi,
PowerCenter supports global and local repositories and it also supports ERP packages, but PowerMart supports local repositories only and it doesn't support ERP packages.
PowerCenter has a high cost, whereas PowerMart's is low.
PowerCenter is normally used for enterprise data warehouses, whereas PowerMart is used for low/mid-range data warehouses.
Thanks and Regards
Siva Prasad.
=======================================
PowerCenter supports the partitioning process, whereas PowerMart only does a simple pass-through.
=======================================

282. Informatica - What are the differences between Informatica 6.1 and Informatica 7.1?
QUESTION #282
No best answer available. Please pick the good answer available or submit your answer.
November 06, 2006 16:09:38 #1
calltomadhu Member Since: September 2006 Contribution: 34
RE: what are the differences between informatica6.1 a...
The load manager is a manager that takes care of the loading process from source to target.
=======================================
In Informatica 7.1:
1. we can take a flat file as a target
2. flat file as a lookup table
3. data profiling & versioning
4. Union transformation; it works like UNION ALL.
Thanks and regards
sivaprasad
=======================================

283. Informatica - How do we validate all the mappings in the repository at once?
QUESTION #283
No best answer available. Please pick the good answer available or submit your answer.
November 07, 2006 00:03:46 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: hi, how we validate all the mappings in the reposi...
You are not able to validate all mappings at a time; each mapping has to be validated on its own.
=======================================
Hi,
You cannot validate all the mappings in one go, but you can validate all the mappings in a folder in one go and continue the process for all the folders. For doing this, log on to the Repository Manager, open the folder, then the mappings sub-folder, then select all or some of the mappings (by pressing the Shift or Control key; Ctrl+A does not work), then right-click and validate.
=======================================
Hi,
You can't validate all mappings at a time; you should go one by one.
Thanks
madhu
=======================================
Still we don't have such a facility in Informatica.
=======================================
Yes. We can validate all mappings using the Repository Manager.
=======================================

284. Informatica - How to work with pmcmd on the Windows platform
QUESTION #284
No best answer available. Please pick the good answer available or submit your answer.
November 08, 2006 15:31:22 #1
calltomadhu Member Since: September 2006 Contribution: 34
RE: hw to work with pmcmd on windows platform
Hi,
In Workflow Manager take the pmcmd command and establish a link between the session and pmcmd; if the session executes successfully the pmcmd command executes.
Thanks
madhu
=======================================
Hi friend, can you please tell me where the pmcmd option is in Workflow Manager?
Thanks
Swati.
=======================================
Hi Swathi,
Only on the command line can we execute the pmcmd and pmrep commands. In Workflow Manager we execute the session task directly.
=======================================
C:\Program Files\Informatica PowerCenter 7.1.3\Server\bin\pmcmd.exe
=======================================
C: --> Program Files --> Informatica PowerCenter 7.1.3 --> Server --> bin --> pmcmd.exe
=======================================

285. Informatica - Interview question: why do you use a reusable Sequence Generator transformation in mapplets?
QUESTION #285
No best answer available. Please pick the good answer available or submit your answer.
November 14, 2006 02:03:43 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: inteview questionwhy do u use a reusa...
If you are not using a reusable Sequence Generator transformation, duplicate values will occur; in order to avoid this we use a reusable Sequence Generator transformation.
=======================================
You can use a non-reusable sequence generator, but when it is used in the mapping it does not create unique values.
=======================================
If we use the same sequence generator multiple times to load the data at the same time, a deadlock occurs. To avoid deadlocks we don't use the same sequence generator.
=======================================

286. Informatica - Interview question: tell me, what was the size of your warehouse project?
QUESTION #286
No best answer available. Please pick the good answer available or submit your answer.
November 13, 2006 00:09:10 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: interview questiontell me what would t...
You can say 900 MB.
=======================================
The size of an EDW will be in terabytes. The server will run on either Unix or Linux with a SAN box.
=======================================
You can say 600-900 GB including your marts. It varies depending on your project structure and how many data marts and EDWHs there are.
=======================================
Mr Srinuv, by saying 900 MB are you kidding with the folks over here? It's a data warehouse's size, not some client-server software.
Thanks Vivek
=======================================
You can answer this question with the number of facts and dimensions in your warehouse.
For example: Insurance data warehouse
2 facts: Claims and Policy
8 dimensions: Coverage, Customer, Claim, etc.
=======================================
I think the interviewer wants to know the number of facts and dimensions in the warehouse and not the size in GBs or TBs of the actual database.
=======================================

287. Informatica - What is a grouped cross tab?
QUESTION #287
No best answer available. Please pick the good answer available or submit your answer.
November 09, 2006 14:46:05 #1
calltomadhu Member Since: September 2006 Contribution: 34
RE: what is grouped cross tab?
It's one kind of report; generally we use this one in Cognos.
=======================================

288. Informatica - What is aggregate awareness?
QUESTION #288
No best answer available. Please pick the good answer available or submit your answer.
November 16, 2006 08:28:39 #1
satya
RE: what is aggregate awareness?
It is the ability to dynamically re-write SQL to the level of granularity needed to answer a business question.
=======================================

289. Informatica - Can we revert back a reusable transformation to a normal transformation?
QUESTION #289
No best answer available. Please pick the good answer available or submit your answer.
November 10, 2006 03:48:16 #1
satish
RE: Can we revert back reusable transformation to norm...
yes
=======================================
no
=======================================
No, it is not reversible. When you open a transformation in edit mode there is a check box named REUSABLE; if you tick it, it will give you a message saying that making it reusable is not reversible.
=======================================
No. Once we have declared a transformation as reusable we can't revert it.
=======================================
Reverting to the original reusable transformation: if you change the properties of a reusable transformation in a mapping, you can revert to the original reusable transformation properties by clicking the Revert button.
=======================================
No, we CANNOT revert back the reusable transformation. There is a Revert button that can revert the last changes made in the transformation.
=======================================
No, we can't revert a reusable transformation to a normal transformation. Once we select reusable, the column will be enabled.
=======================================
YES... we can.
1) Drag the reusable transformation from the Repository Navigator into the Mapping Designer by pressing the left button of the mouse,
2) then press the Ctrl key before releasing the left button of the mouse,
3) release the left button of the mouse,
4) enjoy. :-)
Thanks
Santu
=======================================
The last answer, though correct in a way, is not completely correct: by using the Ctrl key we are making a copy of the original transformation, not changing the original transformation into a non-reusable one.
=======================================
I think if the transformation is created in the Mapping Designer and made reusable, then we can revert that one back with the option "Revert Back". But if we create a transformation in a mapplet we can make it a non-reusable one.
=======================================

290. Informatica - How do we delete the staging area in our project?
QUESTION #290
No best answer available. Please pick the good answer available or submit your answer.
November 15, 2006 03:59:08 #1
narayana
RE: How do we delete staging area in our project?
If your database is Oracle then we can apply CDC (change data capture) and load only the data which has changed since the previous data load.
=======================================
If the staging area is storing only incremental data (changed or new data with respect to the previous load) then you can truncate the staging area. But if you maintain historical information in the staging area then you cannot truncate your staging area.
=======================================
If we use Type 2 slowly changing dimensions we can delete the staging area, because SCD Type 2 stores previous data with a version number and time stamp.
=======================================

291. Informatica - What is a referential integrity error? How will you rectify it?
QUESTION #291
No best answer available. Please pick the good answer available or submit your answer.
November 27, 2006 06:48:12 #1
Sravan
RE: what is referential Intigrity error? how ll u rect...
Referential integrity is all about the foreign key relationship between tables. You need to check the primary and foreign key relationship and the existing data, if any (see if the child table has any records pointing to master table records that are no longer in the master table).
=======================================
You have set the session for constraint-based loading, but the PowerCenter Server is unable to determine dependencies between target tables, possibly due to errors such as circular key relationships.
Action: Ensure the validity of dependencies between target tables.
=======================================
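A small illustration of the kind of violation involved, using hypothetical dept/emp tables; the last insert fails with a referential integrity error because the child row points to a parent that does not exist:
CREATE TABLE dept (
    deptno   INTEGER PRIMARY KEY,
    dname    VARCHAR(30)
);
CREATE TABLE emp (
    empno    INTEGER PRIMARY KEY,
    ename    VARCHAR(30),
    deptno   INTEGER REFERENCES dept (deptno)
);
-- Fails: there is no parent row with deptno = 99 in dept
INSERT INTO emp (empno, ename, deptno) VALUES (1, 'SMITH', 99);
=======================================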

292. Informatica - What is a constraint-based error? How will you clarify it?
QUESTION #292
No best answer available. Please pick the good answer available or submit your answer.
November 13, 2006 00:07:44 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: what is constraint based error? how ll u clarify i...
A constraint-based error will occur if the primary key is duplicated in a table.
=======================================
It is a primary and foreign key relationship.
=======================================
Hi, when data from a single source needs to be loaded into multiple targets, in that situation we use constraint-based load ordering.
Note: the target tables must have primary - foreign key relationships.
Regards Phani
=======================================
1. when data from a single source needs to be loaded into multiple targets
=======================================

293.Informatica - Why exactly the dynamic lookup? Please, can anybody clarify it?
QUESTION #293
No best answer available. Please pick the good answer available or submit your answer.
November 13, 2006 00:03:24 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: why exactly the dynamic lookup?plz can any bady ca...
Dynamic lookup means the changes are applied to the lookup cache; then we can call it a dynamic lookup.
=======================================
Hi, why dynamic lookup: suppose you are looking up a table that is changing frequently, i.e. you want to look up recent data; then you have to go for a dynamic lookup. Example: online transaction data (ATM).
=======================================
Dynamic lookup is generally used with a connected lookup transformation: when the data has changed it updates or inserts the row, or leaves it without changing.

=======================================
A dynamic lookup cache is used in the case of connected lookups only. It is also called a read-and-write cache: when a new record is inserted into the target table, the cache is also updated with the new record;

Page 272: Informatica Interview QnA

it is saved in the cache for faster lookup of data from the target. Generally we use this in the case of slowly changing dimensions.
Pallavi
=======================================

294.Informatica - How many mappings have you done in your project (in banking)?
QUESTION #294
No best answer available. Please pick the good answer available or submit your answer.
November 13, 2006 00:01:56 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: How many mappings you have done in your project(in...
It depends on the dimensions. For example, if you are taking a banking project it requires dimensions like time, primary account holder, and branch; we can take any number of dimensions depending on the project, and we can also create mappings for cleansing or scrubbing the data, so we cannot say exactly how many.
=======================================
It depends upon the user requirement. According to the user requirement we can configure the modelling; on that modelling basis we can identify the number of mappings, which ranges from simple mappings to complex mappings.
=======================================
Exactly 93.133
=======================================

295.Informatica - What are the UTPs?
QUESTION #295
No best answer available. Please pick the good answer available or submit your answer.

Page 273: Informatica Interview QnA

November 21, 2006 09:42:20 #1

raja147 Member Since: November 2006 Contribution: 3
RE: what are the UTP'S
Hi, UTPs are done to check that the mappings are built according to the given business rules. A UTP (unit test plan) is done by the developer.
=======================================
After creating the mapping, each mapping can be tested by the developer individually.
=======================================

296.Informatica - How can we delete duplicate rows from flat files?
QUESTION #296
No best answer available. Please pick the good answer available or submit your answer.
December 02, 2006 07:42:09 #1
amarnath
RE: how can we delete duplicate rows from flat files ?...
By using an Aggregator.
=======================================
We can delete duplicate rows from flat files by using the Sorter transformation.
=======================================
The Sorter transformation puts the records in sorted order (for better performance). I am asking how we can delete duplicate rows.
=======================================
Use a lookup by primary key.
=======================================
In the mapping, read the flat file through a Source Definition and Source Qualifier. Apply a Sorter transformation and in the property tab select Distinct. The output will give sorted, distinct data, hence you get rid of duplicates.

Page 274: Informatica Interview QnA

You can also use an Aggregator transformation and group by the PK. It gives the same result.
=======================================
Use the Sorter transformation and check the Distinct option. It will remove the duplicates.

=======================================

297.Informatica - 1. Can you look up a flat file? How? 2. What is test load?
QUESTION #297
No best answer available. Please pick the good answer available or submit your answer.
December 03, 2006 22:03:36 #1
sravan kumar
RE: 1.can u look up a flat file ? how ?2.what i...
By using the Lookup transformation we can look up a flat file. When you click the Lookup transformation it shows you a message; follow that.
Test load is nothing but checking whether the data is moving correctly to the target or not.
=======================================
Test load is a property we can set at the session level by which Informatica performs all pre- and post-session tasks but does not save target data (for an RDBMS target table it writes the data to check the constraints but then rolls it back). If the target is a flat file it does not write anything to the file. We can specify the number of source rows to test-load the mapping. This is another way of debugging the mapping without loading the target.
=======================================

298.Informatica - What is an auxiliary mapping?
QUESTION #298

Page 275: Informatica Interview QnA

No best answer available. Please pick the good answer available or submit your answer.
December 26, 2006 03:27:18 #1
Kuldeep Kumar Verma
RE: what is auxiliary mapping ?

An auxiliary mapping is used to reflect a change in one table whenever there is a change in another table.
Example: in Siebel we have the S_SRV_REQ and S_EVT_ACT tables. Let's say we have an image table defined for S_SRV_REQ from which our mappings read data. Now if there is any change in S_EVT_ACT it won't be captured in S_SRV_REQ if our mappings are using the image table for S_SRV_REQ. To overcome this we define a mapping between S_SRV_REQ and S_EVT_ACT such that if there is any change in the second it will be reflected as an update in the first table.
=======================================

299.Informatica - What is an authenticator?
QUESTION #299
No best answer available. Please pick the good answer available or submit your answer.
December 18, 2006 02:38:20 #1
Reddappa C. Reddy
RE: what is authenticator ?
An authenticator is either a token of authentication (authentication is the act of establishing or confirming something or someone) or one who authenticates.
=======================================
Authentication requests validate user names and passwords to access the PowerCenter repository. You can use the following authentication requests to access the PowerCenter repositories: Login and Logout. The

Page 276: Informatica Interview QnA

Login function authenticates the user name and password for a specified repository. This is the first function a client application should call before calling any other functions. The Logout function disconnects you from the repository and its PowerCenter Server connections. You can call this function once you are done calling Metadata and Batch Web Services functions to release resources at the Web Services Hub.
=======================================

300.Informatica - How can we populate the data into a time dimension?
QUESTION #300
No best answer available. Please pick the good answer available or submit your

answer.
December 03, 2006 22:04:48 #1
sravan kumar
RE: how can we populate the data into a time dimension...
By using a Stored Procedure transformation.
=======================================
How? Can you explain?
=======================================
Can you please explain to me about the time dimension?
=======================================

301.Informatica - How to create a primary key only on odd numbers?
QUESTION #301
No best answer available. Please pick the good answer available or submit your answer.
December 07, 2006 02:50:14 #1
Pavan.m

Page 277: Informatica Interview QnA

RE: how to create primary key only on odd numbers?
Use a Sequence Generator to generate the warehouse keys and set the 'Increment By' property of the Sequence Generator to 2.
=======================================
Use a Sequence Generator and set its 'Increment By' property to 2.
=======================================
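If the key were generated on the database side instead of with the Sequence Generator transformation, the same idea could look like the sketch below; the sequence and table names are purely illustrative.

-- Illustrative only: odd surrogate keys 1, 3, 5, ... via an Oracle sequence.
CREATE SEQUENCE dim_customer_seq
  START WITH 1
  INCREMENT BY 2;

INSERT INTO dim_customer (customer_key, customer_name)
VALUES (dim_customer_seq.NEXTVAL, 'Test customer');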

302.Informatica - How to load a fact table?
QUESTION #302
No best answer available. Please pick the good answer available or submit your answer.
December 11, 2006 16:59:11 #1
hanug Member Since: June 2006 Contribution: 24

RE: How to load fact table ?
Hi: there are two ways you have to load the fact table.
1. First-time load
2. Incremental load or delta load
In both cases you have to aggregate, get the keys of the dimension tables, and load them into the fact. In the case of increments you use a date value to pick up only the delta data while loading into the fact.
Hanu.
=======================================
Fact tables always maintain the history records and mostly consist of keys and measures, so after all the dimension tables are populated the fact tables can be loaded. The load is always going to be an incremental load except for the first time, which is a history load.
=======================================
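A rough SQL sketch of the idea described above, with all table and column names invented for the example: the delta is selected by date and the dimension surrogate keys are picked up by joining on the natural keys.

-- Illustrative delta load of a sales fact: look up surrogate keys from the
-- dimensions and restrict the source rows to those changed since the last load.
INSERT INTO fact_sales (date_key, customer_key, product_key, sale_amount)
SELECT d.date_key,
       c.customer_key,
       p.product_key,
       s.sale_amount
FROM   stg_sales     s
JOIN   dim_date      d ON d.calendar_date = s.sale_date
JOIN   dim_customer  c ON c.customer_id   = s.customer_id
JOIN   dim_product   p ON p.product_id    = s.product_id
WHERE  s.last_updated > (SELECT MAX(load_date) FROM etl_load_control);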

303.Informatica - How to load the time dimension using Informatica?

Page 278: Informatica Interview QnA

QUESTION #303
No best answer available. Please pick the good answer available or submit your answer.
December 22, 2006 00:32:54 #1
srinivas
RE: How to load the time dimension using Informatica ?...

Hi, use SCD Type 2 with an effective date range.
=======================================
Hi, use a Stored Procedure transformation to load the time dimension.
=======================================
Hi Kiran, can you please tell me in detail how we do that? I am a fresher in Informatica... please help. Thanks.
=======================================

304.Informatica - What is the process of loading the time dimension?
QUESTION #304
No best answer available. Please pick the good answer available or submit your answer.
December 29, 2006 08:20:34 #1
manisha.sinha Member Since: December 2006 Contribution: 30
RE: What is the process of loading the time dimension?...

Create a procedure to load data into the time dimension. The procedure needs to run only once to populate all the data. For example, the code below fills dates up to 2015. You can modify the code to suit the fields in your table.

Page 279: Informatica Interview QnA

create or replace procedure QISODS.Insert_W_DAY_D_PR as
  LastSeqID number default 0;
  loaddate  Date   default to_date('12/31/1979', 'mm/dd/yyyy');
begin
  Loop
    LastSeqID := LastSeqID + 1;
    loaddate  := loaddate + 1;
    INSERT into QISODS.W_DAY_D values (
      LastSeqID,
      Trunc(loaddate),
      Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
      TO_NUMBER(TO_CHAR(loaddate, 'MM')),
      TO_NUMBER(TO_CHAR(loaddate, 'Q')),
      trunc((ROUND(TO_NUMBER(to_char(loaddate, 'DDD'))) +
             ROUND(TO_NUMBER(to_char(trunc(loaddate, 'YYYY'), 'D'))) + 5) / 7),
      TO_NUMBER(TO_CHAR(loaddate, 'YYYY')),
      TO_NUMBER(TO_CHAR(loaddate, 'DD')),
      TO_NUMBER(TO_CHAR(loaddate, 'D')),
      TO_NUMBER(TO_CHAR(loaddate, 'DDD')),
      1, 1, 1, 1, 1,
      TO_NUMBER(TO_CHAR(loaddate, 'J')),
      ((TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713) * 12) +
        TO_NUMBER(TO_CHAR(loaddate, 'MM')),
      ((TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713) * 4) +
        TO_NUMBER(TO_CHAR(loaddate, 'Q')),
      TO_NUMBER(TO_CHAR(loaddate, 'J')) / 7,
      TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713,
      TO_CHAR(loaddate, 'Day'),
      TO_CHAR(loaddate, 'Month'),
      Decode(To_Char(loaddate, 'D'), '7', 'weekend', '6', 'weekend', 'weekday'),
      Trunc(loaddate, 'DAY') + 1,
      Decode(Last_Day(loaddate), loaddate, 'y', 'n'),
      to_char(loaddate, 'YYYYMM'),
      to_char(loaddate, 'YYYY') || ' Half' ||
        Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
      TO_CHAR(loaddate, 'YYYY / MM'),
      TO_CHAR(loaddate, 'YYYY') || ' Q ' || TRUNC(TO_NUMBER(TO_CHAR(loaddate, 'Q'))),
      TO_CHAR(loaddate, 'YYYY') || ' Week' || TRUNC(TO_NUMBER(TO_CHAR(loaddate, 'WW'))),
      TO_CHAR(loaddate, 'YYYY'));
    If loaddate = to_Date('12/31/2015', 'mm/dd/yyyy') Then
      Exit;
    End If;
  End Loop;
  commit;
end Insert_W_DAY_D_PR;
=======================================

305.Informatica - How can we remove/optimize source bottlenecks using "query hints"?
QUESTION #305
No best answer available. Please pick the good answer available or submit your answer.
January 08, 2007 16:29:36 #1
creativehuang Member Since: January 2007 Contribution: 5
RE: how can we remove/optmize source bottlenecks using...

Is it a SQL Server question or an Informatica question?
=======================================
Create indexes for the source table columns.
=======================================
First you must have proper indexes and the table must be analyzed to gather statistics so that the cost-based optimizer is used. You can get free documentation from Oracle Technet. Use the hints after that; hints are powerful, so be careful with them.
=======================================

Page 281: Informatica Interview QnA

306.Informatica - How can we eliminate a source bottleneck using a query hint?
QUESTION #306
No best answer available. Please pick the good answer available or submit your answer.
March 12, 2007 06:28:50 #1
sreedhark26 Member Since: January 2007 Contribution: 25
RE: how can we eliminate source bottleneck using query...
You can identify source bottlenecks by executing the read query directly against the source database. Copy the read query directly from the session log. Execute the query against the source database with a query tool such as isql. On Windows you can load the result of the query into a file. On UNIX systems you can load the result of the query into /dev/null. Measure the query execution time and the time it takes for the query to return the first row. If there is a long delay between the two time measurements you can use an optimizer hint to eliminate the source bottleneck.
=======================================
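As a small hedged illustration of the kind of optimizer hint that can be added through a Source Qualifier SQL override, the table, index, and column names below are invented for the example.

-- Illustrative SQL override with an Oracle index hint on a hypothetical table.
SELECT /*+ INDEX(o orders_order_date_idx) */
       o.order_id,
       o.customer_id,
       o.order_date,
       o.amount
FROM   orders o
WHERE  o.order_date >= TO_DATE('01/01/2007', 'mm/dd/yyyy');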

307.Informatica - Where do we get the source data from, or how do we access the source data?
QUESTION #307
No best answer available. Please pick the good answer available or submit your

answer.
January 03, 2007 00:58:49 #1
phani
RE: where from we get the source data or how we access...

Page 282: Informatica Interview QnA

Hi, we get source data in the form of Excel files, flat files, etc. By using the Source Analyzer we can access the source data.
=======================================
Hi, source data exists in OLTP systems in any form (flat file, relational database, XML definitions). You can access the source data with any of the source qualifier transformations and the Normalizer transformation.
=======================================

308.Informatica - What are all the new features of Informatica 8.1?
QUESTION #308
No best answer available. Please pick the good answer available or submit your answer.
February 03, 2007 01:57:27 #1
Sonjoy
RE: What are all the new features of informatica 8.1?

1. Java Custom Transformation support
2. HTTP transformation support
3. New name of SuperGlue is Metadata Manager
4. New name of PowerAnalyzer is Data Analyzer
5. Support for grid computing
6. Pushdown optimization
=======================================
1. The PowerCenter 8 release has the "Append to Target file" feature.
2. The Java transformation is introduced.
3. User-defined functions.
4. The Midstream SQL transformation has been added in 8.1.1, not in 8.1.
5. Informatica has added a new web-based administrative console.
6. Management is centralized, which means services can be started and stopped on nodes via a central web interface.
=======================================

Page 283: Informatica Interview QnA

309.Informatica - Explain the pipeline partition with a real-time example?
QUESTION #309
No best answer available. Please pick the good answer available or submit your answer.
January 11, 2007 14:56:16 #1
saibabu Member Since: January 2007 Contribution: 14
RE: Explain the pipeline partition with real time exam...
A pipeline specifies the flow of data from source to target. Pipeline partitioning means partitioning the data based on some key values and loading the data to the target in concurrent mode, which improves session performance, i.e. the data loading time is reduced. In real time we have thousands of records arriving every day to load into the targets, so pipeline partitioning definitely reduces the data loading time.
=======================================

310.Informatica - How to FTP a file to a remote server?
QUESTION #310
No best answer available. Please pick the good answer available or submit your answer.
January 08, 2007 16:04:23 #1

creativehuang Member Since: January 2007 Contribution: 5
RE: How to FTP a file to a remote server?
ftp targetaddress
go to your target directory
ftp> ascii or bin

Page 284: Informatica Interview QnA

ftp> put or get
=======================================
Hi, you can transfer a file from one server to another. In Unix there is a utility, XCOMTCP, which transfers files from one server to another, but there are a lot of constraints on this: you need to mention the target server name and the directory name where you need to send the file, and the server directory should have write permission. Check the details in Unix by typing the MAN XCOMTCP command, which should guide you.
=======================================

311.Informatica - What's the difference between source and target object definitions in Informatica?
QUESTION #311
No best answer available. Please pick the good answer available or submit your answer.
January 05, 2007 09:06:50 #1
Sravan Kumar
RE: What's the difference between source and target ob...

The source system is the system which provides the business data. The target system is the system into which the data is loaded.
=======================================
A source definition is the structure of the source database existing in the OLTP system. Using the source definition you can extract the transactional data from OLTP systems. A target definition is the structure given by the DBAs to populate the data from the source definition according to business rules, for the purpose of making effective decisions for the enterprise.
=======================================

Page 285: Informatica Interview QnA

Hi, what Saibabu wrote is correct. Source definition means defining the structure of the source from which we have to extract the data to transform and then load to the target. Target definition means defining the structure of the target (relational table or flat file).
=======================================

312.Informatica - How many types of sessions are there in Informatica? Please explain them.
QUESTION #312
No best answer available. Please pick the good answer available or submit your answer.
January 08, 2007 15:56:28 #1
creativehuang Member Since: January 2007 Contribution: 5
RE: how many types of sessions are there in informatic...
Reusable and non-reusable sessions.
=======================================
Total 10 task types:
1. SESSION: for mapping execution

2. EMAIL: to send emails
3. COMMAND: to execute OS commands
4. CONTROL: fail, stop, abort
5. EVENT WAIT: for pre-defined or user-defined events
6. EVENT RAISE: to raise an active user-defined event
7. DECISION: a condition to be evaluated for controlling the flow or process
8. TIMER: to halt the process for a specific time
9. WORKLET: a reusable task
10. ASSIGNMENT: to assign values to worklet or workflow variables
=======================================
A session is a type of workflow task and a set of instructions that describe how to move data from sources to targets using a mapping.
There are two sessions in Informatica:

Page 286: Informatica Interview QnA

1. Sequential: when data moves one after another from source to target, it is sequential.
2. Concurrent: when the whole data moves simultaneously from source to target, it is concurrent.
=======================================
Hi Vidya, the above question is how many sessions; your answer is about batches?
=======================================

Hi Sreedhark26, your answer is wrong; don't misguide the members. What you wrote is the types of tasks used by the Workflow Manager. There are two types of sessions:
1. Non-reusable session
2. Reusable session
=======================================

313.Informatica - What is meant by "the source is changing incrementally"? Explain with an example.
QUESTION #313
No best answer available. Please pick the good answer available or submit your answer.
January 09, 2007 07:30:59 #1
Sravan Kumar
RE: What is meant by source is changing incrementally?...
"The source is changing incrementally" means the data in the source keeps on changing, and you capture those changes with time stamps, key ranges, triggers, or SCDs, and then capture these changes to load the data incrementally. If we cannot capture this source data incrementally the data loading process will be very difficult. For example, we have a source where the data is changing. On 09/01/07 we loaded all the data into the target. On 10/01/07 the source is updated with some new rows. It is very difficult to load all the rows again into the target, so what we have to do is capture the data which is not yet loaded and load

Page 287: Informatica Interview QnA

only the changed rows into the target.
=======================================
I have a good example to explain this. Think about HR data for a very big company. The data keeps on changing every minute. Now we have built a downstream system to capture a chunk of data for a specific purpose, say new hires. Every record in the source has a time stamp; when we load data today we check the records which were updated/inserted today and load them (this avoids reprocessing all the data). We used the incremental refresh method to process such data. In fact most of the sources in OLTP are incrementally/constantly changing.
=======================================
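A hedged sketch of how such incremental capture is often expressed in a Source Qualifier SQL override, using a mapping variable as the high-water mark; the table, columns, and the variable name $$LAST_EXTRACT_DATE are illustrative, not from the question.

-- $$LAST_EXTRACT_DATE is a hypothetical mapping variable holding the previous run's timestamp.
SELECT emp_id,
       emp_name,
       dept_id,
       last_updated
FROM   hr_employees
WHERE  last_updated > TO_DATE('$$LAST_EXTRACT_DATE', 'mm/dd/yyyy hh24:mi:ss')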

314.Informatica - What is the difference between SCD and incremental aggregation?
QUESTION #314
No best answer available. Please pick the good answer available or submit your answer.
January 09, 2007 23:19:44 #1

srinivas
RE: what is the diffrence between SCD and INCREMENTAL ...
Hi, here dimensional data is stored; there are no aggregate calculations. SCD is used in three ways:
1. Type 1: it maintains current data only.
2. Type 2: it maintains current data plus complete history records. There are three ways to do this: 1. flag data, 2. version number mapping, 3. effective date range.
3. Type 3: it maintains current data plus one-time history.

Page 288: Informatica Interview QnA

In incremental aggregation, aggregate values are stored according to the user requirements.
=======================================
Hi, SCD means 'slowly changing dimensions'. Since a dimension table maintains master data, the column values occasionally change; so dimension tables are called SCD tables and the fields in the SCD tables are called slowly changing dimensions. In order to maintain those changes we follow three types of methods:
1. SCD Type 1: this method maintains only current data.
2. SCD Type 2: this method maintains the whole history of the dimensions; there are three methods to identify which record is the current one: 1) flag current data, 2) version number mapping, 3) effective date range.
3. SCD Type 3: this method maintains current data and one-time historical data.
Incremental aggregation: some requirements (daily, weekly, every 15 days, quarterly...) need to aggregate the values of certain columns. Here you have to do the same job every time (according to the requirement) and add the aggregate value to the previous aggregate value (previous run value) of those columns. This process is called incremental aggregation.
=======================================

315.Informatica - When was Informatica version 7.1.1 introduced into the market?
QUESTION #315
No best answer available. Please pick the good answer available or submit your answer.
March 12, 2007 06:19:29 #1
sreedhark26 Member Since: January 2007 Contribution: 25
RE: when informatica 7.1.1 version was introduced into...

Page 289: Informatica Interview QnA

2004=======================================

316.Informatica - What are the advantages of converting stored procedures into Informatica mappings?
QUESTION #316
No best answer available. Please pick the good answer available or submit your answer.
January 18, 2007 12:41:55 #1
Gayathri84 Member Since: January 2007 Contribution: 2
RE: what is the advantages of converting stored proced...
Stored procedures are hard to maintain and debug; maintenance is simpler in Informatica, and it is a user-friendly tool. Given the logic, it is easier to create a mapping in Informatica than to write a stored procedure.
Thanks, Gayathri
=======================================

A stored procedure call is made through an ODBC connection over a network (sometimes the Informatica server resides on the same box as the database); since there is an overhead in making the call, it is inherently slower.
=======================================

317.Informatica - How is a LOOKUP passive?
QUESTION #317
No best answer available. Please pick the good answer available or submit your answer.
January 19, 2007 15:54:06 #1
monicageller Member Since: January 2007 Contribution: 3
RE: How a LOOKUP is passive?

Page 290: Informatica Interview QnA

Hi, an unconnected lookup is used for updating slowly changing dimensions, so it is used to determine whether the rows are already in the target or not, but it doesn't change the number of rows, so it is passive. Connected lookup transformations are used to get a related value based on some value or to perform a calculation; in either case it may or may not increase the number of columns, but it doesn't change the row count, so it is passive.
In the lookup SQL override property we can add a WHERE clause to the default SQL statement, but that doesn't change the number of rows passing through it; it just reduces the number of rows included in the cache.
Cheers, Monica.
=======================================
The fact that a failed lookup does not erase the row makes it passive.
=======================================
Lookup is used to get a related value and perform calculations. Lookup is used to search for a value from a relational table.
=======================================

318.Informatica - How to partition the Session? (Interview

question of CTS)
QUESTION #318
No best answer available. Please pick the good answer available or submit your answer.
January 23, 2007 15:53:48 #1
kirangvr Member Since: January 2007 Contribution: 5
RE: How to partition the Session?(Interview question o...
o Round-robin: the PowerCenter server distributes rows of data evenly to all partitions. @ filter
o Hash keys:

Page 291: Informatica Interview QnA

distributes rows to the partitions by group. @ rank, sorter, joiner, and unsorted aggregator.
o Key range: distributes rows based on a port or set of ports that you specify as the partition key. @ source and target.
o Pass-through: processes data without redistributing rows among partitions. @ any valid partition point.
Hope it helps you.
=======================================
When you create or edit a session you can change the partitioning information for each pipeline in a mapping. If the mapping contains multiple pipelines you can specify multiple partitions in some pipelines and single partitions in others. You update partitioning information using the Partitions view on the Mapping tab in the session properties. You can configure the following information in the Partitions view on the Mapping tab:
l Add and delete partition points.
l Enter a description for each partition.
l Specify the partition type at each partition point.
l Add a partition key and key ranges for certain partition types.
=======================================
By default, when we create the session, the Workflow Manager creates pass-through partition points at Source Qualifier transformations and target instances.
=======================================

319.Informatica - Which one is better performance-wise, joiner or lookup?

QUESTION #319
No best answer available. Please pick the good answer available or submit your answer.
January 25, 2007 13:41:43 #1
anu
RE: which one is better performance wise joiner or loo...

Page 292: Informatica Interview QnA

Are you looking up a flat file or a database table? Generally a sorted joiner is more effective on flat files than a lookup, because the sorted joiner uses a merge join and caches fewer rows, while a lookup always caches the whole file. If the file is not sorted they can be comparable. Lookups into a database table can be effective if the database can return sorted data fast and the amount of data is small, because the lookup can create the whole cache in memory. If the database responds slowly or a big amount of data is processed, lookup cache initialization can be really slow (the lookup waits for the database and stores cached data on disk). Then it can be better to use a sorted joiner, which sends data to the output as it reads it on the input.
=======================================

320.Informatica - What is the associated port in a lookup?
QUESTION #320
No best answer available. Please pick the good answer available or submit your answer.
February 01, 2007 16:08:08 #1
kirangvr Member Since: January 2007 Contribution: 5
RE: what is associated port in look up.

When you use a dynamic lookup cache you must associate each lookup/output port with an input/output port or a sequence ID. The PowerCenter Server uses the data in the associated port to insert or update rows in the lookup cache. The Designer associates the input/output ports with the lookup/output ports used in the lookup condition.
=======================================
Whenever you are using SCD Type 2 you have to use a dynamic cache; then one port must be specified for updating the cache, and that port is called the associated port.
=======================================

Page 293: Informatica Interview QnA

321.Informatica - What is session recovery?
QUESTION #321
No best answer available. Please pick the good answer available or submit your answer.
February 09, 2007 16:16:53 #1
pal
RE: what is session recovery?
Session recovery is used when you want the session to continue loading from the point where it stopped the last time it ran. For example, if the session failed after loading 10,000 records and you want record 10,001 to be loaded next, then you can use session recovery to load the data from 10,001.
=======================================
Whenever a session fails we have to follow these steps:
If no commit is performed by the Informatica server, run the session again.
If at least one commit is performed by the session, recover the session.
If recovery is not possible, truncate the target table and load again.
The recovery option is set in the Informatica server. I have just forgotten where to set recovery; I will tell you next time, or you can search for it.
=======================================
If the recover option is not set and 10,000 records are committed, then delete the records from the target table using audit fields like update_date.
=======================================
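A small hedged sketch of the clean-up step mentioned in the last answer; the target table, audit column, and the session-start timestamp are invented for the example.

-- Illustrative clean-up of a partially loaded target before re-running the session.
-- The timestamp would be the time at which the failed run began.
DELETE FROM tgt_sales
WHERE  update_date >= TO_DATE('03/15/2007 02:00:00', 'mm/dd/yyyy hh24:mi:ss');

COMMIT;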

322.Informatica - What are the steps followed in performance tuning?
QUESTION #322
No best answer available. Please pick the good answer available or submit your answer.

Page 294: Informatica Interview QnA

February 09, 2007 16:12:47 #1
pal
RE: what are the steps follow in performance tuning ?
Steps for performance tuning: 1) target bottleneck, 2) source bottleneck, 3) mapping bottleneck, etc. As a developer we can only clear the bottlenecks at the mapping level and at the session level, for example:
1) Removing transformation errors.
2) Filtering the records as early as possible.
3) Using sorted data before an aggregator.
4) Using fewer of those transformations which use a cache.
5) Using an external loader like SQL*Loader, etc., to load the data faster.
6) Fewer conversions like numeric to char and char to numeric, etc.
7) Writing an override instead of using a filter, etc.
8) Increasing the network packet size.
9) Keeping all the source systems on the server machine to make it run faster, etc. pal.
=======================================
Hi friends, performance tuning means techniques for improving the performance: 1. identify the bottlenecks (issues that reduce performance), 2. fix the bottlenecks. The hierarchy we have to follow in performance tuning is a) target, b) source, c) mapping, d) session, e) system. If anything is wrong in this please tell me, because I am still in the learning stage.
=======================================
Hi friends, please clarify for me what is meant by "bottleneck". For example:

1) target bottleneck, 2) source bottleneck, 3) mapping bottleneck. I am not aware of what this bottleneck means. Thanks for your help in advance.
Thanks,

Page 295: Informatica Interview QnA

Thana
=======================================
Bottleneck means drawbacks or problems.
=======================================
Target-based commit intervals and source-based commit intervals.
=======================================

323.Informatica - How to use incremental aggregation in real time?
QUESTION #323
No best answer available. Please pick the good answer available or submit your answer.
February 21, 2007 22:48:09 #1
Hanu Ch Rao
RE: How to use incremental aggregation in real time?
The first time you run a session with incremental aggregation enabled, the server processes the entire source.
=======================================

324.Informatica - Which transformation replaces the lookup transformation?
QUESTION #324
No best answer available. Please pick the good answer available or submit your answer.
February 21, 2007 08:11:18 #1

hegde
RE: Which transformation replaces the look up transfor...
Can you please clarify your question?
=======================================

Page 296: Informatica Interview QnA

Is there any transformation that could do the function of a lookup transformation, or is there any transformation that we can use instead of a lookup transformation?
=======================================
The question is misleading... do you mean the lookup transformation will become obsolete since a new transformation was created to do the same thing in a better way? Which version?
=======================================
Yes, a Joiner with a source outer join and a Stored Procedure can replace the lookup transformation.
=======================================
You can use a Joiner transformation instead of a lookup. A lookup is only for relational tables which are not sources in your Source Analyzer, but by using a joiner we can join relational and flat-file sources, or relational sources on different platforms.
=======================================
Vicky, can you please validate your answer with some example?
=======================================
Hi, actually with a lookup transformation we are looking up the required fields in the target and comparing them with the source ports. Without a lookup we can do this with a joiner. Master outer join means the matched rows of both sources plus the unmatched rows from the master table, so you can get all the changes. If there is any doubt reply to me and I will give another general example.
=======================================
Please don't mislead... a master outer join keeps all rows of data from the detail source and the matching rows from the master source. It discards the unmatched rows from the master source.
=======================================

I believe the Joiner transformation is only for joining heterogeneous sources, e.g. when we use one source from a flat file and another one from a relational source. I have a question: "How does the Joiner

Page 297: Informatica Interview QnA

transformation override the Lookup?", since both transformations perform different actions.
Cheers, Thana
=======================================
Lookup is nothing but an outer join, so a lookup transformation can easily be replaced by a joiner in any tool.
=======================================

325.Informatica - What is a data warehouse key?
QUESTION #325
No best answer available. Please pick the good answer available or submit your answer.
February 21, 2007 23:26:44 #1
Ravichandra
RE: What is Dataware house key?
A data warehouse key is the warehouse key or surrogate key, i.e. generated by a sequence generator. This key is used when loading data into the fact table from the dimensions.
=======================================
Is it really called a data warehouse key? Is there such a term?
=======================================
I have never heard of this type of key in DWH terminology!
=======================================
How is the data warehouse key called a surrogate key?
=======================================
There is a data warehouse key... please go through the Kimball book. Every data warehouse key should be a surrogate key, because the data warehouse DBA must have the flexibility to respond to changing descriptions and abnormal conditions in the raw data.
=======================================

A surrogate key is known as a data warehouse key. The surrogate key is used as a unique key in the warehouse, where duplicate keys may exist due to Type 2 changes.

Page 298: Informatica Interview QnA

=======================================

326.Informatica - What is an inline view?
QUESTION #326
No best answer available. Please pick the good answer available or submit your answer.
February 21, 2007 22:40:51 #1
Hanu Ch Rao
RE: What is inline view?
The inline view is a construct in Oracle SQL where you can place a query in the SQL FROM clause just as if the query were a table name. A common use for inline views in Oracle SQL is to simplify complex queries by removing join operations and condensing several separate queries into a single query.
=======================================
Used in Oracle to simplify queries, e.g.:
select ename from (select ename from emp order by sal desc) where rownum < 2
=======================================
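A slightly fuller sketch of the "condensing a join" use mentioned above, using the familiar emp/dept sample tables; it is only an illustration, not from the original answer.

-- Illustrative inline view: aggregate once in the FROM clause, then join to it.
SELECT d.dname,
       t.total_sal
FROM   dept d,
       (SELECT deptno, SUM(sal) AS total_sal
        FROM   emp
        GROUP  BY deptno) t
WHERE  d.deptno = t.deptno;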

327.Informatica - What is the exact difference between the joiner and lookup transformations?
QUESTION #327
No best answer available. Please pick the good answer available or submit your answer.
February 20, 2007 12:21:06 #1
pal
RE: What is the exact difference between joiner and lo...

A joiner is used to join data from different sources, and a lookup is used to get related values from another table or to check for updates, etc., in the target table. For a lookup to work the table need not exist in the mapping, but for a joiner to work the table has to exist

Page 299: Informatica Interview QnA

in the mapping. pal.
=======================================
A lookup may be unconnected, while a joiner may not.
=======================================
The lookup table need not participate in the mapping, and a lookup can also do a non-equi join; the joiner tables must participate in the mapping, and a joiner supports only equi-join conditions (normal and outer joins).
=======================================

328.Informatica - Explain about scheduling in real time in Informatica.
QUESTION #328
No best answer available. Please pick the good answer available or submit your answer.
March 01, 2007 06:45:31 #1
sanghala Member Since: April 2006 Contribution: 111
RE: Explain about scheduling real time in informatica
Scheduling of Informatica jobs can be done in the following ways:
l Informatica Workflow Manager
l Using cron in Unix
l Using the OpCon scheduler
=======================================

329.Informatica - How do you handle two sessions in Informatica?
QUESTION #329 Can anyone tell me the option between the two sessions: if the

previous session executes successfully, then run the next session...
No best answer available. Please pick the good answer available or submit your answer.

Page 300: Informatica Interview QnA

February 28, 2007 02:09:40 #1
Divya Ramanathan
RE: How do you handle two sessions in Informatica
You can handle two sessions by using a link condition (e.g. $PrevSession.Status = SUCCEEDED) or you can have a Decision task between them. I feel that since only one session depends on the other, a link condition is enough.
=======================================
By giving a link condition like $PrevSession.Status = SUCCEEDED.
=======================================
Where exactly do we need to use this link condition ($PrevSession.Status = SUCCEEDED)?
=======================================
You can drag and drop more than one session into a workflow. The linking is different: there is sequential linking and concurrent linking. In sequential linking you can run whichever session you require, or the workflow runs all the sessions sequentially. In concurrent linking you can't run just any session you want.
=======================================

330.Informatica - How do you change a column to a row in Informatica?
QUESTION #330
No best answer available. Please pick the good answer available or submit your answer.
March 02, 2007 11:38:19 #1

Hanu Ch Rao
RE: How do you change change column to row in Informat...
Hi, we can achieve this with the Normalizer transformation.

Page 301: Informatica Interview QnA

=======================================
First you can ask what type of data is to be changed from columns to rows in Informatica. Then we can use Expression and Aggregator transformations; the Aggregator is used to group the duplicate rows.
=======================================

331.Informatica - Write a query to retrieve the latest records from the table sorted by version (SCD).
QUESTION #331
No best answer available. Please pick the good answer available or submit your answer.
February 27, 2007 06:05:37 #1
sunil
RE: write a query to retrieve the latest records from ...
You can write a query using an inline view clause: you compare the previous version to the new highest version, then you get your result.
=======================================
Hi Sunil, can you please explain your answer in somewhat more detail?
=======================================
Hi, assume you put the surrogate key in the target (Dept table) as p_key, and

the version, dno, and loc fields are there. Then:
select a.p_key, a.dno, a.loc, a.version
from t_dept a
where a.version = (select max(b.version) from t_dept b where a.dno = b.dno)
If you write this query in the lookup it retrieves the latest (max) version from the target; in this way performance increases.
=======================================
select *
from (select Acct.*, rank() over (partition by ch_key_id order by version desc) as rnk
      from Acct)
where rnk = 1
=======================================
select business_key, max(version) from tablename group by business_key

Page 302: Informatica Interview QnA

=======================================

332.Informatica - Explain how the Informatica server process works as it relates to mapping variables?
QUESTION #332
No best answer available. Please pick the good answer available or submit your answer.
March 09, 2007 09:52:01 #1
hemasundarnalco Member Since: December 2006 Contribution: 2
RE: Explain about Informatica server process that how ...

Informatica primarily uses the Load Manager and the Data Transformation Manager (DTM) to perform extraction, transformation, and loading. The Load Manager reads parameters and variables related to the session, mapping, and server, and passes the mapping parameter and variable information to the DTM. The DTM uses this information to perform the data movement from source to target.
=======================================
The PowerCenter Server holds two different values for a mapping variable during a session run:
l Start value of a mapping variable
l Current value of a mapping variable
Start Value
The start value is the value of the variable at the start of the session. The start value could be a value defined in the parameter file for the variable, a value saved in the repository from the previous run of the session, a user-defined initial value for the variable, or the default value based on the variable datatype. The PowerCenter Server looks for the start value in the following order:
1. Value in the parameter file
2. Value saved in the repository
3. Initial value

Page 303: Informatica Interview QnA

4. Default value
Current Value
The current value is the value of the variable as the session progresses. When a session starts, the current value of a variable is the same as the start value. As the session progresses the PowerCenter Server calculates the current value using a variable function that you set for the variable. Unlike the start value of a mapping variable, the current value can change, as the PowerCenter Server evaluates the current value of a variable as each row passes through the mapping.
=======================================
First the Load Manager starts the session; it performs verifications and validations on variables and manages post-session tasks such as mail. Then it creates the DTM process. This DTM in turn creates a master thread which creates the remaining threads. The master thread creates:
read thread

write thread
transformation thread
pre- and post-session threads, etc.
Finally the DTM hands over to the Load Manager after writing into the target.
=======================================

333.Informatica - What are the types of loading in Informatica?
QUESTION #333
No best answer available. Please pick the good answer available or submit your answer.
March 02, 2007 11:29:17 #1
Hanu Ch Rao
RE: What are the types of loading in Informatica?
Hi, in Informatica there are mainly two types of loading:
1. Normal

Page 304: Informatica Interview QnA

2. Bulk
You could say there is one more: incremental loading.
Normal means it loads record by record and writes database logs for that, so it takes time. Bulk load means it loads a number of records at a time to the target; it bypasses the database log and ignores the tracing level, so it takes less time to load data to the target.
=======================================
Two types of loading:
1. Normal
2. Bulk
Normal loading creates a database log; it is very slow. Bulk loading bypasses the database log; it is very fast, but it requires the constraints to be disabled.
=======================================

Hanu, I agree with you. You mentioned incremental loading; can you please let me know in detail about this type of loading? Thanks in advance.
=======================================
Loadings are of three types:
1. One-time data loading
2. Complete data loading
3. Incremental loading
=======================================
Two types:
Normal: data retrieval is very easy; it creates an index.
Bulk: data retrieval is not possible; it stores the data in an improper way.
=======================================

334.Informatica - What are the steps involved in getting the source from the OLTP system to the staging area?
QUESTION #334
No best answer available. Please pick the good answer available or submit your answer.

Page 305: Informatica Interview QnA

March 13, 2007 16:56:07 #1
reddy
RE: What are the steps involved in to get source from ...

Data profiling and data cleansing are used to verify the data types.
=======================================
Hi Reddy, could you tell me what you mean by data profile? Thanks in advance.
=======================================
Go to the Source Analyzer and import the databases; go to the Warehouse Designer and import the databases (target definitions), or else create them using Generate SQL; go to the Mapping Designer, drag and drop the source and target definitions, link the ports properly, and save to the repository. Thank you.
=======================================

335.Informatica - What is the use of the event waiter?
QUESTION #335
No best answer available. Please pick the good answer available or submit your answer.
March 05, 2007 12:38:14 #1
sreedhark26 Member Since: January 2007 Contribution: 25
RE: What is use of event waiter?

l Event-Wait task. The Event-Wait task waits for an event to occur. Once the event triggers, the PowerCenter Server continues executing the rest of the workflow.
=======================================
Event wait is of two types:
1> Predefined event: this type of event waits for an indicator file to trigger it.

Page 306: Informatica Interview QnA

2> User-defined event: this type of event waits for an Event-Raise task to trigger the event.
Thanks, Ravinder
=======================================
The Event-Wait task is a file watcher: whenever a trigger file is touched/created, this task will kick off the rest of the sessions in the batch to execute. I have used only the user-defined Event-Wait task.
=======================================
Event wait: it will hold the workflow until it gets another instruction or the delay mentioned by the user.
=======================================

336.Informatica - Which transformation can perform the non-equi join?
QUESTION #336
No best answer available. Please pick the good answer available or submit your answer.
March 12, 2007 01:47:43 #1
kasireddy
RE: which transformation can perform the non equi join...

Hi, the Lookup transformation.
=======================================
Here the lookup does not support an outer join but does support a non-equi join; the joiner supports outer joins but not non-equi joins.
=======================================
Only the lookup can do a non-equi join; the joiner does only normal, detail outer, master outer, and full outer joins.
=======================================

Page 307: Informatica Interview QnA

It is the Lookup transformation.
=======================================

337.Informatica - Which objects cannot be used in a mapplet?
QUESTION #337
No best answer available. Please pick the good answer available or submit your answer.
March 07, 2007 08:35:11 #1
tirumalesh
RE: Which objects cannot be used in a mapplet transfor...

A non-reusable Sequence Generator cannot be used in a mapplet.
=======================================
l You cannot include the following objects in a mapplet:
l Normalizer transformations
l COBOL sources
l XML Source Qualifier transformations
l XML sources
l Target definitions
l Pre- and post-session stored procedures
l Other mapplets
=======================================
When you add transformations to a mapplet, keep the following restrictions in mind:
l If you use a Sequence Generator transformation, you must use a reusable Sequence Generator transformation.
l If you use a Stored Procedure transformation, you must configure the Stored Procedure Type to be Normal.
l You cannot include PowerMart 3.5-style LOOKUP functions in a mapplet.
l You cannot include the following objects in a mapplet:

Page 308: Informatica Interview QnA

Normalizer transformations
COBOL sources
XML Source Qualifier transformations
XML sources
Target definitions
Other mapplets
=======================================
Joiner transformation
Normalizer transformations
COBOL sources
XML Source Qualifier transformations
XML sources
Target definitions

Pre- and post-session stored procedures
Other mapplets
=======================================
We cannot use the following objects/transformations in a mapplet:
=======================================

338.Informatica - How to do aggregation without using the AGGREGATOR transformation?
QUESTION #338
No best answer available. Please pick the good answer available or submit your answer.
March 09, 2007 09:37:56 #1
hemasundarnalco Member Since: December 2006 Contribution: 2
RE: How to do aggregation with out using AGGREGAROR Tr...
Write a SQL query to perform the aggregation in the Source Qualifier transformation.
=======================================
Override the SQL query in the Source Qualifier transformation.
=======================================
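A small hedged illustration of the SQL-override approach just described; the table and columns are invented for the example.

-- Illustrative Source Qualifier SQL override doing the aggregation in the database.
SELECT   dept_id,
         COUNT(*)    AS emp_count,
         SUM(salary) AS total_salary
FROM     employees
GROUP BY dept_id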

339.Informatica - How do you test a mapping, and what is the associated

Page 309: Informatica Interview QnA

port?
QUESTION #339
No best answer available. Please pick the good answer available or submit your answer.
March 26, 2007 11:00:14 #1
sreedhark26 Member Since: January 2007 Contribution: 25
RE: how do you test mapping and what is associate port...

Hi, you can test a mapping in the Mapping Designer by using the Debugger. In the Debugger you can test the first instance and the next instance. The associated port is an output port.
sreedhar
=======================================
By specifying the number of test load rows in the session properties.
=======================================

340.Informatica - Why can't we use the Normalizer transformation in a mapplet?
QUESTION #340
No best answer available. Please pick the good answer available or submit your answer.
March 26, 2007 10:56:14 #1
sreedhark26 Member Since: January 2007 Contribution: 25
RE: Why can't we use normalizer transformation in mapp...
Hi, the Normalizer transformation is used for COBOL sources; when you bring such a source into the Source Analyzer it automatically displays a Normalizer transformation. You cannot use it in a mapplet.
sreedhar
=======================================

Page 310: Informatica Interview QnA

341.Informatica - Strategy Transformation
QUESTION #341 By using the Update Strategy transformation we maintain historical data using Type 2 & Type 3. Of these two, which is better to use, and why?
No best answer available. Please pick the good answer available or submit your answer.
March 26, 2007 09:12:45 #1

sai
RE: Strategy Transformation
=======================================
Using Type 2 you can maintain the total historical data along with the current data; using Type 3 you can maintain only one-time historical data and the current data. Based upon the requirement we have to choose the most suitable one.
=======================================

342.Informatica - Source Qualifier in Informatica QUESTION #342 What is the technical reason for having a Source Qualifier in Informatica? Can a mapping be implemented without it? (Please don't mention the functionality of the SQ, but the main reason why a mapping can't do without it.)
March 27, 2007 06:32:28 #1
chowdary
RE: Source Qualifier in Informatica
=======================================
In Informatica the Source Qualifier reads data from the sources; for reading data from relational sources SQL is mandatory.
=======================================
The SQ reads data from the sources when the Informatica server runs the session. Data cannot be passed to any other transformation without this SQ transformation. It has other qualities as well: it can be used as a filter, it can be used as a sorter, it can be used to select distinct values, and it can be used as a joiner if the data is coming from the same source.
=======================================
In Informatica the Source Qualifier acts as a staging area and it creates its own data types, which are related to the source data types.
=======================================

343.Informatica - What is the difference between a mapplet and a reusable transformation? QUESTION #343
March 30, 2007 02:38:57 #1
veera_kk Member Since: March 2007 Contribution: 3
RE: what is the difference between mapplet and reusabl...
A set of reusable transformations is called a mapplet, whereas a single reusable transformation is called a reusable transformation.
=======================================

344.Informatica - What is the difference between SQL overriding in a Source Qualifier and in a Lookup transformation? QUESTION #344
April 08, 2007 11:10:38 #1
ggk.krishna Member Since: February 2007 Contribution: 12
RE: What is the difference between SQL Overriding in ...
Hi,
1. In a LOOKUP SQL override, if you add or subtract ports from the SELECT statement the session fails.
2. In a LOOKUP, if you override the ORDER BY statement the session fails if the ORDER BY statement does not contain the condition ports in the same order they appear in the lookup condition.
=======================================
You can use a SQL override in a lookup if you have:
1. More than one lookup table.
2. A WHERE condition to reduce the records in the cache.
=======================================
If you write a query in the Source Qualifier (overriding it using the SQL editor) and press Validate, you can recognise whether the query is right or wrong. But in a lookup override, if the query is wrong and you press the Validate button you cannot recognise it; only when you run the session do you get an error message and the session fails.
=======================================
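As a sketch of the second point above, a lookup SQL override has to keep its SELECT list aligned with the lookup ports and its ORDER BY aligned with the condition ports; the table and column names here are assumptions, not from the original answer:

SELECT EMP.EMPNO AS EMPNO, EMP.ENAME AS ENAME, EMP.DEPTNO AS DEPTNO
FROM EMP
ORDER BY EMP.DEPTNO, EMP.EMPNO --

Here DEPTNO and EMPNO are assumed to be the lookup condition ports, in that order, and the trailing "--" is commonly used to comment out the ORDER BY clause that the server would otherwise append.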

345.Informatica - How can you call a trigger in a Stored Procedure transformation? QUESTION #345
May 07, 2007 22:54:43 #1
hanug Member Since: June 2006 Contribution: 24
RE: How can you call trigger in stored procedure trans...
Hi:
A trigger cannot be called from a stored procedure. A trigger executes implicitly when you perform a DML operation on the table or view (an INSTEAD OF trigger in the case of views). You can find the differences between a trigger and a stored procedure anywhere in the documentation.
Hanu.
=======================================
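For context, a database trigger is never called explicitly; it fires on DML against its table. A minimal Oracle-style sketch (the table, audit table and columns are hypothetical) would be:

CREATE OR REPLACE TRIGGER EMP_AUDIT_TRG
AFTER INSERT OR UPDATE ON EMP
FOR EACH ROW
BEGIN
  -- record which employee row changed and when
  INSERT INTO EMP_AUDIT (EMPNO, CHANGED_ON) VALUES (:NEW.EMPNO, SYSDATE);
END;
/

Any INSERT or UPDATE issued by an Informatica session against EMP would fire this trigger implicitly.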

346.Informatica - How to assign a workflow to multiple servers? QUESTION #346 I have multiple servers; I want to assign a workflow to multiple servers.
October 19, 2007 15:08:36 #1
krishna
RE: How to assign a work flow to multiple servers?
=======================================
The Informatica server uses the Load Manager process to run the workflow; the Load Manager assigns the workflow process to the multiple servers.
=======================================

347.Informatica - What types of errors occur when you run a session? Can you describe them with a real-time example? QUESTION #347
January 24, 2008 19:12:31 #1
kasisarath Member Since: January 2008 Contribution: 6
RE: what types of Errors occur when you run a session, can you describe them with real time example
There are several errors you can get. A couple of them are:
1. Informatica failed to connect to the database.
2. The source file does not exist in the specified location.
3. Parameters are not initialized in the parameter file.
4. Incompatible piped data types, etc.
=======================================

348.Informatica - How do you add and delete header and footer records from a flat file during a load to Oracle? QUESTION #348
September 21, 2007 10:10:41 #1
Abhishek
RE: how do you add and delete header , footer records ...
We can add header and footer records in two ways:
1) Within the Informatica session we can sequence the data so that the header flows in first and the footer flows in last. This only holds true when the header and footer have the same format as the detail record.
2) As soon as the session that generates the detail record file finishes, we can call a UNIX script or UNIX command through a Command task, which concatenates the header file, detail file and footer file and generates the required file.
=======================================

349.Informatica - Can we update a target table without using an Update Strategy transformation? Why? QUESTION #349
May 22, 2007 07:00:56 #1
rahul
RE: Can we update target table without using update s...
Yes, we can update the target table by using the session properties; there are options in the session properties (for example, Treat source rows as).
=======================================
By using the target update override, which references ports as :TU.port_name.
=======================================
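A minimal sketch of a target update override, assuming a target table T_EMP with key port EMPNO and a non-key port SAL (both names are illustrative only):

UPDATE T_EMP
SET SAL = :TU.SAL
WHERE EMPNO = :TU.EMPNO

The :TU qualifier refers to the ports of the target instance, so the session can update the target even though the mapping has no Update Strategy transformation (with the session configured to treat source rows as Update).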

350.Informatica - How to create a slowly changing dimension in Informatica? QUESTION #350
May 07, 2007 23:02:45 #1
hanug Member Since: June 2006 Contribution: 24
RE: How to create slowly changing dimension in informa...
We can create them manually or use the Slowly Changing Dimension wizard to avoid the hassles. Use the Slowly Changing Dimension wizard and make the necessary changes as per your logic.
Hanu.
=======================================

351.Informatica - What are the general reasons for session failure with a lookup having a dynamic cache? QUESTION #351
April 24, 2007 15:52:27 #1
shanthi1 Member Since: March 2007 Contribution: 6
RE: What are the general reasons of session failure wi...
Hi,
If you are using a dynamic lookup and it tries to return more than one value, the session fails.
=======================================

352.Informatica - What is the difference between a Source Qualifier transformation and a Filter transformation? QUESTION #352
April 24, 2007 15:59:43 #1
shanthi1 Member Since: March 2007 Contribution: 6
RE: what is the difference between source qualifier tr...
There is no SQL override in a FILTER, whereas in a Source Qualifier we have a SQL override, and in the SQ transformation we also have options like SELECT DISTINCT, JOIN, FILTER CONDITIONS, SORTED PORTS, etc.
=======================================
In a Source Qualifier we filter records coming from the source system. In a Filter transformation we filter those records which we need to update or process further. In short, before a Filter transformation the data from the source system may or may not already have been processed (by an Expression transformation, etc.).
=======================================
By using a Source Qualifier transformation we can filter out records only for relational sources, but by using a Filter transformation we can filter out records from any source.
=======================================
By using a Source Qualifier we can filter out records only from relational sources. But by using a Filter transformation we can filter out records from any source. In a Filter transformation we can use any expression to prepare a filter condition which evaluates to TRUE or FALSE; the same cannot be done using a Source Qualifier.
=======================================
A Source Qualifier transformation is the starting point in the mapping; the incoming source data is extracted from this transformation after connecting to the source database. A Filter transformation is placed in the mapping pipeline in order to pass on only the records that satisfy some specific condition. Of course, the same purpose can be served by the Source Qualifier transformation if it is extracting data from a relational source, whereas if the data is going to be extracted from a flat file then we cannot do it using the Source Qualifier.
=======================================
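To make the contrast concrete, a Source Qualifier can push the filter into the database as SQL, for example (EMP and DEPTNO are placeholder names):

SELECT EMP.EMPNO, EMP.ENAME, EMP.DEPTNO
FROM EMP
WHERE EMP.DEPTNO = 10

whereas a Filter transformation evaluates an expression such as DEPTNO = 10 inside the pipeline, after the rows have already been read, which is why it works for flat files and any other source type.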

353.Informatica - How do you recover a session or folder if you accidentally dropped them? QUESTION #353
May 07, 2007 22:58:05 #1
hanug Member Since: June 2006 Contribution: 24
RE: How do you recover a session or folder if you acci...
You can find your backup and restore from the backup. If you don't have a backup you have lost everything; you can't get it back. That's why we should always take a backup of the objects that we create.
Hanu.
=======================================

354.Informatica - How do you automatically execute a batch or session? QUESTION #354
May 07, 2007 22:51:18 #1
hanug Member Since: June 2006 Contribution: 24
RE: How do you automatically execute a batch or sessio...
You can use scripting (either UNIX shell or Windows DOS scripting) and then schedule the script.
Hanu.
=======================================

355.Informatica - What is the best way to modify a mapping if the target table name is changed? QUESTION #355
June 20, 2007 03:12:31 #1
yuva010
RE: What is the best way to modify a mapping if the ta...
There is no single best way as such, but you have to incorporate the following steps:
1. Change the name in the Mapping Designer.
2. Refresh the mapping with a right-click on the session.
3. Reset the target connections.
4. Save.
=======================================

356.Informatica - What is a homogeneous transformation? QUESTION #356
May 11, 2007 18:57:56 #1
vinod madala
RE: what is homogeneous transformation?
Pulling the data from the same type of sources using the same database link is called homogeneous. The key phrase is "same database link" (that is the system DSN we use).
=======================================

357.Informatica - If a mapping is running slow, what steps will you take to correct it? QUESTION #357
May 11, 2007 19:05:32 #1
vinodh259 Member Since: May 2007 Contribution: 2
RE: If a Mapping is running slow, What steps you will ...
Provide optimizer hints to Informatica, such as forcing it to use indexes, synonyms, etc. The best technique is to find which transformation is making the mapping slow, build a query with the same logic in the database, and see how the database executes it. To see the execution plan, the EXPLAIN PLAN command in Oracle helps. For more information, try the help topics in the Designer; in the help, look for optimizing hints.
=======================================

358.Informatica - How can you join two tables without using Joiner and SQL override transformations? QUESTION #358
May 07, 2007 22:48:49 #1
hanug Member Since: June 2006 Contribution: 24
RE: How can you join two tables without using joiner a...
You can use the Lookup transformation to perform the join. You may not get the same results as a SQL override (data coming from the same source) or a Joiner (same source or different sources), because a lookup takes a single record in the case of a multiple match.
If one of the tables contains a single record you don't need to use a SQL override or a Joiner to join records; let it perform a Cartesian product. This way you don't need to use either (SQL override or Joiner).
Hanu.
=======================================
You can join two tables within the same database by using a lookup query override.
=======================================
If the sources are homogeneous we use the Source Qualifier; if the sources have the same structure we can use a Union transformation.
=======================================
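As a sketch of the lookup-override approach, the override can join a second table from the same database before the cache is built; ORDERS, CUSTOMERS and their columns are assumed names:

SELECT O.ORDER_ID AS ORDER_ID, C.CUST_NAME AS CUST_NAME
FROM ORDERS O, CUSTOMERS C
WHERE O.CUST_ID = C.CUST_ID

The lookup then returns the joined values to the pipeline as ordinary lookup ports.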

359.Informatica - What do the Check-In and Check-Out options refer to in the Mapping Designer? QUESTION #359
May 15, 2007 11:21:21 #1
ramgan_tryst Member Since: May 2007 Contribution: 2
RE: What does Check-In and Check-Out option refer to i...
Check-in and check-out refer to versioning your mapping. It is a way of maintaining the changes you have made, similar to using VSS or CVS. When you right-click your mapping you have an option called Versioning, if you have that facility enabled.
=======================================

360.Informatica - Where and why do we use the joiner cache? QUESTION #360
June 28, 2007 04:57:02 #1
mohammed haneef
RE: Where and Why do we use Joiner Cache?
Hi,
The joiner cache is used in the Joiner transformation to improve performance. When using the joiner cache, the Informatica server first reads the data from the master source and builds the index and data caches from the master rows. After building the cache, the Joiner transformation reads records from the detail source to perform the joins.
=======================================

361.Informatica - Where do the records go which do not satisfy the condition in a Filter transformation? QUESTION #361
July 06, 2007 05:14:44 #1
pkonakalla Member Since: May 2007 Contribution: 2
RE: where does the records goes which does not satisfy...
They go to the default group. If you connect the default group to an output, PowerCenter processes the data; otherwise it does not process the default group.
=======================================
The rows which do not satisfy the filter condition are discarded. They do not appear in the session log file or reject files.
=======================================
There is no default group in a Filter transformation. The records which do not satisfy the filter condition are discarded and are not written to the reject file or session log file.
=======================================

362.Informatica - How can we access MAINFRAME tables in Informatica as a source? QUESTION #362 Example: suppose a table EMP is on a mainframe; how can we access this table as a source table in Informatica?
May 26, 2007 14:32:55 #1
vishnukirank Member Since: May 2007 Contribution: 1
RE: How can we access MAINFRAME tables in INFORMATICA...
=======================================
Use Informatica PowerConnect to connect to external systems like mainframes and import the source tables.
=======================================
Use the Normalizer transformation to handle the mainframe (COBOL) sources.
=======================================

363.Informatica - How to run a workflow without using a GUI, i.e., Workflow Manager, Workflow Monitor and pmcmd? QUESTION #363
August 03, 2007 03:56:21 #1
balaetl Member Since: November 2005 Contribution: 3
RE: How to run a workflow without using GUI i.e, Worlf...
pmcmd is not a GUI. It is a command you can use within a UNIX script to run the workflow.
=======================================
Unless the job is scheduled, you cannot manually run a workflow without using a GUI.
=======================================

364.Informatica - How to implement the de-normalization concept in Informatica mappings? QUESTION #364
January 24, 2008 19:04:25 #1
kasisarath Member Since: January 2008 Contribution: 6
RE: How to implement de-normalization concept in Informatica Mappings?
Use the Normalizer. This transformation is used to normalize data.
=======================================

365.Informatica - What are the data cleansing tools used in the DWH? What are the data profiling tools used for the DWH? QUESTION #365
June 21, 2007 04:23:27 #1
Shashikumar
RE: What are the Data Cleansing Tools used in the DWH?...
Data cleansing tool: Trillium.
Profiling tool: Informatica Profiler.
=======================================

366.Informatica - How do you use the Normalizer to convert columns into rows? QUESTION #366
February 18, 2008 03:47:43 #1
Deepak Rajkumar Member Since: June 2007 Contribution: 4
RE: how do you use Normalizer to convert columns into rows ?
In the Normalizer properties we can add columns and, for a column, we can specify the occurrence level, which converts the columns into rows.
=======================================

367.Informatica - How to use the shared cache feature in a Lookup transformation? QUESTION #367
September 28, 2007 11:26:00 #1
chandrarekha
RE: how to use the shared cache feature in look up tra...
If you are using a single lookup and you are using it only for reading the data, you can use a static cache.
=======================================
Instead of creating multiple cache files, the Informatica server creates only one cache file for all the lookups used within the mapping that have the shared cache option selected.
=======================================

368.Informatica - What is the repository size? What are its minimum and maximum sizes? QUESTION #368
September 28, 2007 11:24:21 #1
chandrarekha
RE: What is Repository size, What is its min and max ...
10 GB.
=======================================

369.Informatica - What will be the way to send only duplicate records to the target? QUESTION #369
July 23, 2007 08:23:22 #1
Samir Desai
RE: What will be the way to send only duplicate record...
You can use the following query in the SQ to get only duplicate records into the target:
SELECT Col_1, Col_2, ..., Col_n FROM source GROUP BY Col_1, Col_2, ..., Col_n HAVING COUNT(*) > 1
Best Regards,
Samir Desai.
=======================================
If you take, for example, an EMP table having duplicate records, the query is:
SELECT * FROM EMP WHERE EMPNO IN (SELECT EMPNO FROM EMP GROUP BY EMPNO HAVING COUNT(*) > 1);
Sanjeeva Reddy
=======================================

370.Informatica - What are the commit and commit intervals? QUESTION #370
July 31, 2007 13:05:53 #1
rasmi Member Since: June 2007 Contribution: 20
RE: What are the Commit & Commit Intervals?
The commit interval is the interval (number of rows) at which the Informatica server commits the data loaded into the target.
=======================================

371.Informatica - Explain the session recovery process? QUESTION #371
September 28, 2007 11:21:50 #1
chandrarekha
RE: Explain Session Recovery Process?
You have three cases in session recovery:
If the Informatica server has performed no commit, run the session again.
If at least one commit was performed, perform recovery.
If performing recovery is not possible, truncate the target table and run the session again.
Rekha
=======================================
Hi,
When the Informatica server starts a recovery session it reads the OPB_SRVR_RECOVERY table and notes the row ID of the last row committed to the target database. When it starts the recovery process it starts from the next row ID. For session recovery to take place, at least one commit must have been executed.
=======================================

372.Informatica - Explain pmcmd? QUESTION #372
July 31, 2007 11:54:03 #1
rasmi Member Since: June 2007 Contribution: 20
RE: Explain pmcmd?
pmcmd performs the following tasks:
1. Start and stop sessions and batches.
2. Process session recovery.
3. Stop the Informatica server.
4. Check whether the Informatica server is working or not.
=======================================
pmcmd means PowerMart command prompt, which is used to perform tasks from the command prompt and not from the Informatica GUI.
=======================================
It is a command-line program. It performs the following tasks:
Start and stop sessions and batches.
Stop the Informatica server.
Check whether the server is working or not.
=======================================
Hi,
PMCMD is a command-line utility used to communicate with the Informatica server. PMCMD performs the following tasks:
1) Start and stop batches and sessions.
2) Recover sessions.
3) Stop the Informatica server.
4) Schedule the sessions by shell scripting.
5) Schedule the sessions by using operating system scheduling tools like cron.
=======================================

373.Informatica - What does the first column of the bad file (rejected rows) indicate? Explain. QUESTION #373
September 24, 2007 14:24:17 #1
chandrarekha
RE: What does the first column of bad file (rejected r...
The first column of the bad file is the row indicator and the second column is the column indicator.
Row indicator: the row indicator tells what the writer was trying to do with the row of rejected data.
Row indicator / Meaning / Rejected by:
0 - Insert - target/writer
1 - Update - target/writer
2 - Delete - target/writer
3 - Reject - writer
If the row indicator is 3, the writer rejects the row because the update strategy expression marked it as reject.
=======================================

374.Informatica - What is the size of a data mart? QUESTION #374
August 22, 2007 05:24:31 #1
hpadala
RE: What is the size of the data mart?
A data mart is around 1 GB, but it is based on your project.
=======================================
Hi,
A data mart is a part of the data warehouse. For example, in an organization we can manage employee personal information as one data mart and project information as another data mart; one data warehouse may have any number of data marts. The size of the data mart depends on your business needs; it varies from business to business. Sometimes an OLTP database may act as a data mart for the warehouse.
Cheers
Thana
=======================================

375.Informatica - What is meant by a named cache? In what situation can we use it? QUESTION #375
August 24, 2007 13:54:43 #1
sn3508 Member Since: April 2006 Contribution: 20
RE: What is meant by named cache?At what situati...
By default there is no name for the cache in a Lookup transformation, and every time you run the session the cache is rebuilt. If you give it a name it is called a persistent cache. In this case, the first time you run the session the cache is built and the same cache is used for any number of runs. This means the cache does not reflect any changes even if the lookup source is changed. You can rebuild it by deleting the cache.
=======================================

376.Informatica - How do you define a factless fact table in Informatica? QUESTION #376
September 18, 2007 00:30:12 #1
ramakrishna
RE: How do you define fact less Fact Table in Informat...
A fact table without measures is called a factless fact table.
=======================================
A fact table without measures is called a factless fact table; factless fact tables are used to capture date/transaction events.
=======================================

377.Informatica - What are the main issues while working with flat files as sources and as targets? QUESTION #377
February 06, 2008 09:25:16 #1
rasmi Member Since: June 2007 Contribution: 20
RE: what are the main issues while working with flat files as source and as targets ?
We need to specify the correct path in the session and mention whether the file is 'direct' or 'indirect'. Keep the file in the exact path which you have specified in the session.
- regards, rasmi
=======================================
1. We cannot use a SQL override; we have to use transformations for all our requirements.
2. Testing the flat files is a very tedious job.
3. The file format (source/target definition) should match exactly the format of the data file. Most of the time erroneous results come when the data file layout is not in sync with the actual file:
(i) Your data file may be fixed-width but the definition is delimited ---> truncated data.
(ii) Your data file as well as the definition is delimited, but a wrong delimiter is specified, either (a) a delimiter other than the one present in the actual file, or (b) a delimiter that also appears as a character in some field of the file ---> wrong data again.
(iii) Not specifying the NULL character properly may result in wrong data.
(iv) There are other settings/attributes while creating the file definition about which one should be very careful.
4. If you miss the link to any column of the target then all the data will be placed in the wrong fields; the missed column won't exist in the target data file.
Please keep adding to this list. There are tremendous challenges which can be overcome by being a bit careful.
=======================================

378.Informatica - When do you use normal loading and when bulk loading? Tell the difference. QUESTION #378
September 19, 2007 11:35:05 #1
rama krishna
RE: When do you use Normal Loading and the Bulk Loadin...
If we use SQL*Loader connections then it is better to go for bulk loading, and if we use ODBC connections for the source and target definitions then it is better to go for normal loading. If we use bulk loading the session performance will be increased, because with bulk loading the data bypasses the database logs, so performance automatically increases.
=======================================
Normal load: it loads the records one by one and the server writes a log entry for each record, so it takes more time to load the data.
Bulk load: it loads a number of records at a time and does not write the same logging, so it takes less time.
=======================================
You would use normal loading when the target table is indexed and bulk loading when the target table is not indexed. Running a bulk load into an indexed table will cause the session to fail.
=======================================

379.Informatica - What is the key factor in a BRD? QUESTION #379
January 24, 2008 19:05:37 #1
kasisarath Member Since: January 2008 Contribution: 6
RE: What is key factor in BRD?
Could you please elaborate on what a BRD is?
=======================================

380.Informatica - How do you measure slowly changing dimensions using a lookup table? QUESTION #380
September 24, 2007 13:41:04 #1
vemurisasidhar Member Since: August 2007 Contribution: 10
RE: how do you measure slowly changing dimensions usin...
The lookup table is used to split the data by comparing the source and target data; the data is branched accordingly to either update or insert.
=======================================

381.Informatica - Have you implemented a Lookup in your mapping? If yes, give some example. QUESTION #381
September 26, 2007 13:39:09 #1
vemurisasidhar Member Since: August 2007 Contribution: 10
RE: Have you implmented Lookup in your mapping, If yes...
We do. We have to update or insert a row in the target depending upon the data from the sources. So, in order to split the rows to either update or insert into the target table, we use a Lookup transformation referencing the target table and compare it with the source table.
=======================================

382.Informatica - Which SDLC suits best for a data warehousing project? QUESTION #382
September 12, 2007 17:28:25 #1
anjanroy Member Since: September 2005 Contribution: 4
RE: Which SDLC suits best for the datawarehousing proj...
Data warehousing projects are different from traditional OLTP projects. First of all, they are ongoing; a data warehouse project is never "complete". Most of the time the business users will say, "give us the data and then we will tell you what we want", so a traditional waterfall model is not the optimal SDLC approach. The best approach here is a phased iteration, where you implement and deliver in small manageable chunks (90 days, roughly one quarter) and keep maturing your data warehouse.
=======================================

383.Informatica - What is the difference between the source definition database and the Source Qualifier? QUESTION #383
September 24, 2007 13:26:52 #1
vemurisasidhar Member Since: August 2007 Contribution: 10
RE: What is the difference between source definition d...
A source definition contains the data types that are used in the original database from which the source is extracted, whereas the Source Qualifier is used to convert the source definition data types to the Informatica data types, which are easier to work with.
=======================================

384.Informatica - What logic will you implement to load data into a fact table from n dimension tables? QUESTION #384
September 24, 2007 13:24:01 #1
vemurisasidhar Member Since: August 2007 Contribution: 10
RE: What is the logic will you implement to load data ...
We can do this by using the mapping wizards. There are basically two types:
1) Getting Started wizard
2) SCD (slowly changing dimensions) wizard
The Getting Started wizard is used when there is no need to change the previous data; the SCD wizard can hold the historical data.
=======================================
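In practice, loading a fact table from n dimensions usually means resolving each incoming natural key to the dimension's surrogate key; a rough SQL equivalent of that lookup logic, with made-up staging and dimension names, is:

SELECT d_cust.CUST_KEY, d_prod.PROD_KEY, s.SALE_AMOUNT
FROM STG_SALES s
JOIN DIM_CUSTOMER d_cust ON s.CUST_ID = d_cust.CUST_ID
JOIN DIM_PRODUCT d_prod ON s.PROD_ID = d_prod.PROD_ID

In an Informatica mapping the same joins are normally done with one Lookup transformation per dimension before writing to the fact target.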

385.Informatica - What is a shortcut, and what is the difference between a shortcut and a reusable transformation? QUESTION #385
January 29, 2008 23:32:01 #1
Sant_parkash Member Since: October 2007 Contribution: 22
RE: What is a Shortcut and What is the difference between a Shortcut and a Reusable Transformation?
A reusable transformation can only be used within the folder, but a shortcut can be used anywhere in the repository and will point to the actual transformation.
=======================================
To add to this, the shortcut points to objects of a shared folder only.
=======================================
A shortcut is a reference (link) to an object in a shared folder; these are commonly used for sources and targets that are to be shared between different environments or projects. A shortcut is created by assigning 'Shared' status to a folder within the Repository Manager and then dragging objects from this folder into another open folder; this provides a single point of control/reference for the object, so multiple projects don't all have to import sources and targets into their local folders. A reusable transformation is usually something that is kept local to a folder; an example would be a reusable Sequence Generator for allocating warehouse customer IDs, which would be useful if you were loading customer details from multiple source systems and allocating unique IDs to each new source key. Many mappings could use the same sequence, and the sessions would all draw from the same continuous pool of sequence numbers generated.
=======================================

386.Informatica - Which transformations can't be used in a mapplet in Informatica? QUESTION #386
October 06, 2007 17:50:21 #1
naina
RE: In what all transformations the mapplets cant be u...
Mapplets can't use the following transformations:
- XML Source Qualifier
- Normalizer
- Non-reusable Sequence Generator (a mapplet can use only a reusable Sequence Generator)
=======================================

387.Informatica - Eliminate Duplicate Records QUESTION #387 Hi, I have 10,000 records in a flat file and 100 of them are duplicate records, so we want to eliminate those records. Which is the best method to follow?
Regards, Mahesh Reddy
Submitted by: vivek1708
In order to be able to delete those entries, I think you'll have to write SQL queries against the database table using the rownum/rowid concept.
Or, by using the Sorter with the distinct option, load the unique rows into a temp table, followed by a truncate on the original table and moving the data back into it from the temp table.
Hope it helps.
Above answer was rated as good by the following members: ayappan.a
=======================================
You can put a Sorter transformation after the Source Qualifier transformation and, in the Sorter transformation's properties, enable the Distinct property.
Thanks
kumar
=======================================
Use an Aggregator on the primary keys.
=======================================
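As a sketch of the rowid approach mentioned above (Oracle syntax, with EMP and EMPNO only as assumed names), the duplicates can be removed directly in the database while keeping one row per key:

DELETE FROM EMP
WHERE ROWID NOT IN (
  SELECT MIN(ROWID)
  FROM EMP
  GROUP BY EMPNO
);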

388.Informatica - What is the difference between a reusable transformation and a mapplet? QUESTION #388
November 20, 2007 03:19:32 #1
ramesh raju
RE: what is the difference between reusable transforma...
A reusable transformation is a single transformation which can be reused, whereas a mapplet is a set of transformations which can be reused.
=======================================

389.Informatica - What is the functionality of the Lookup transformation (connected and unconnected)? QUESTION #389
November 17, 2007 06:59:15 #1
Thananjayan Member Since: November 2007 Contribution: 15
RE: What is the functionality of Lookup Transformation...
=======================================
A Lookup transformation compares the source with the specified lookup (target) table and forwards only the matched records to the next transformation. If there is no match, it returns NULL.
=======================================

390.Informatica - How do you maintain historical data and how do you retrieve the historical data? QUESTION #390
October 23, 2007 12:02:59 #1
ravi
RE: How do you maintain Historical data and how to ret...
By using Update Strategy transformations.
=======================================
You can maintain historical data by designing the mapping using the slowly changing dimension types. If you need to insert new and update old data, it is best to go for an update strategy. If you need to maintain the history of the data, for example the cost of a product changes frequently but you would like to maintain all the rate history, go for SCD Type 2. The design changes as per your requirement. If you make your question clearer I can provide you with more information.
Cheers
Thana
=======================================

391.Informatica - What is the difference between CBL (constraint-based commit) and target-based commit? When do we use CBL? QUESTION #391
January 16, 2008 01:30:51 #1
say2joshi Member Since: April 2007 Contribution: 3
RE: What is difference between cbl (constaint based commit) and target based commit?When we use cbl?
Could you please clarify your question: does CBL stand for constraint-based loading?
Thanks
=======================================
CBL means constraint-based loading: the data is loaded into the target tables based on the constraints, i.e. if we want to load the EMP and DEPT data, it first loads the data of DEPT and then EMP, because DEPT is the parent table and EMP is the child table. Put simply, it loads the parent table first and then the child table.
=======================================
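For reference, constraint-based loading follows the primary key/foreign key relationships defined on the targets; a minimal pair of target tables (names and types assumed, not from the original answer) that would force DEPT to load before EMP looks like this:

CREATE TABLE DEPT (
  DEPTNO NUMBER PRIMARY KEY,
  DNAME VARCHAR2(30)
);

CREATE TABLE EMP (
  EMPNO NUMBER PRIMARY KEY,
  ENAME VARCHAR2(30),
  DEPTNO NUMBER REFERENCES DEPT (DEPTNO)
);

With the constraint-based load ordering option enabled in the session, rows for the parent (DEPT) are written before rows for the child (EMP) within the same target load order group.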

392.Informatica - Repository deletion QUESTION #392 What happens when a repository is deleted? If it is deleted for some time, how do we delete it permanently? Where is it stored (the address of the file)?
November 28, 2007 22:36:19 #1
chandrarekha Member Since: May 2007 Contribution: 14
RE: Repository deletion
=======================================
I too want to know the answer to this question. I tried to delete the repository; at that time it was deleted, but when I then tried to create a repository with the same name as the one I had deleted earlier, it showed that the repository exists in the target. We have to delete the repository in the target database.
=======================================
The repository is stored in a database. Go to the Repository Admin Console, right-click on the repository and choose Delete. A dialog box is displayed; fill in the user name (the database user where your repository resides), give the password and fill in all the fields; thereafter you can delete it. If you have any queries please mail me at [email protected].
=======================================

393.Informatica - What are pre-session and post-session? QUESTION #393
November 13, 2007 23:55:12 #1
vizaik Member Since: March 2007 Contribution: 30
RE: What is pre-session and post-session?
Pre-session:
=======================================

394.Informatica - What is the Informatica basic data flow? QUESTION #394
November 20, 2007 07:04:44 #1
Nick
RE: What is Informatica basic data flow?
The Informatica basic data flow means the extraction, transformation and loading of data from the source to the target.
Cheers
Nick
=======================================

395.Informatica - Which activities can be performed using the Repository Manager? QUESTION #395
October 27, 2007 00:47:25 #1
karuna
RE: which activities can be performed using the reposi...
Using the Repository Manager we can create new folders in the existing repositories and manage the repository from it.
=======================================
Using the Repository Manager:
* We can create folders under the repository
* Create sub-folders and perform folder management
* Create users and user groups
* Set security/access privileges for the users
and many more...
=======================================
You can use the Repository Manager to perform the following tasks:
=======================================

396.Informatica - What is the economic comparison of all the Informatica versions? QUESTION #396
April 10, 2008 04:59:08 #1
sri.kal Member Since: January 2008 Contribution: 6
RE: what is the economic comparision of all the Informatica versions?
Version controlling.
=======================================
The economic comparison is nothing but the price tag of the available Informatica versions.
=======================================

397.Informatica - Why do we need a lookup SQL override? Do we write a SQL override in a lookup with a special aim? QUESTION #397
November 13, 2007 23:47:42 #1
vizaik Member Since: March 2007 Contribution: 30
RE: why do we need lookup sql override? Do we write sq...
Yes, a SQL override in a lookup is used to look up more than one value from more than one table.
=======================================
You can join the data from multiple tables in the same database by using a lookup override. You can use a SQL override:
1. To use more than one lookup table in the mapping.
2. To filter records in the cache to remove unwanted data.
=======================================
A lookup override can be used to get some specific records (using filters in the WHERE clause) from the lookup table. The advantage is that the whole table need not be looked up.
=======================================
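For instance, a lookup SQL override that caches only the rows of interest might look like this sketch (the CUSTOMERS table, its columns and the status filter are assumptions):

SELECT C.CUST_ID AS CUST_ID, C.CUST_NAME AS CUST_NAME
FROM CUSTOMERS C
WHERE C.STATUS = 'ACTIVE'

Only the filtered rows are read into the lookup cache, which is the advantage described above.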

398.Informatica - What are tracing levels in a transformation? QUESTION #398
November 19, 2007 00:18:10 #1
Abhishek Shukla
RE: What are tracing levels in transformation?
The tracing level controls the information logged about your mapping and transformations. There are four kinds of tracing levels, each giving more or less information based on its characteristics.
Thanks
Abhishek Shukla
=======================================
The tracing level in Informatica specifies the level of detail recorded in the session log file while executing the workflow. Four types of tracing levels are supported:
1. Normal: logs initialization and status information, a summarization of the successful rows and target rows, and information about rows skipped due to transformation errors.
2. Terse: logs initialization information, error messages and notification of rejected data only.
3. Verbose Initialization: in addition to Normal tracing, logs the location of the data cache and index cache files that are created, and detailed transformation statistics for each transformation within the mapping.
4. Verbose Data: along with Verbose Initialization, records each and every record processed by the Informatica server.
For better performance of mapping execution the tracing level should be set to Terse. Verbose Initialization and Verbose Data are used for debugging purposes.
=======================================