Sharing Knowledge With The World…

Month: May 2014

Source Target Level Commit in Informatica

In this post we will discuss how source-level and target-level commits work in Informatica.

First of all, what is a commit to a database? A commit is an instruction to the database server that tells it to save the state of the database session at that point in time.

Informatica supports two commit categories / levels:

1) Source Level Commit

2) Target Level Commit

Source Level Commit: In this commit type the Integration Service issues a commit to the underlying database (Oracle, SQL Server, etc.) when a particular number of rows has been read from the source database, irrespective of the number of rows inserted into the target.

Let’s take an example:

Suppose an Oracle source has read 505 rows so far, 300 rows have already been inserted into the target, and the remaining rows are still being processed within the mapping. In the session properties you have selected a source-based commit with a commit interval of 510. As soon as the source reads the 510th row, the Integration Service issues a commit to the underlying database; since only 300 rows have reached the target at that point, only those 300 rows are committed. The remaining rows are committed either after another 510 source rows have been read or when the session completes.

The number of source rows read is what matters in this case.
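The behaviour is easier to picture as pseudo-logic. Below is a minimal Python sketch (not real PowerCenter code; the row counts and lag factor are made up) that mimics a source-based commit interval of 510:

```python
# Toy simulation of a source-based commit (illustration only, not PowerCenter code).
# The commit point is driven purely by how many rows have been READ from the source;
# whatever has reached the target at that moment is all that gets committed.

SOURCE_ROWS = 1200        # hypothetical total rows in the Oracle source
COMMIT_INTERVAL = 510     # "Commit Interval" session property

for rows_read in range(1, SOURCE_ROWS + 1):
    rows_in_target = int(rows_read * 0.6)   # pretend the target lags behind the reader
    if rows_read % COMMIT_INTERVAL == 0:
        # Integration Service issues COMMIT here, even though fewer rows are in the target
        print(f"read {rows_read} source rows -> committed {rows_in_target} target rows")

print("end of session: a final commit flushes whatever rows remain")
```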

Target Level Commit: In this commit type the Integration Service issues a commit to the database once a particular number of rows has been inserted into the target.

Let’s take an example:

Suppose your source has read 500 rows so far, the target holds 250 rows, and in the session properties you selected a target-based commit with a commit interval of 300 rows. Later, when the source has read 800 rows and the 300th row lands in the target, a commit is immediately fired on the target database.

The number of target rows affected is what matters in this case.
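And a matching sketch for the target-based case, again with made-up numbers:

```python
# Toy simulation of a target-based commit (illustration only, not PowerCenter code).
# Here the trigger is the number of rows WRITTEN to the target, not rows read.

TOTAL_TARGET_ROWS = 800   # hypothetical rows that eventually reach the target
COMMIT_INTERVAL = 300     # "Commit Interval" session property

for rows_in_target in range(1, TOTAL_TARGET_ROWS + 1):
    if rows_in_target % COMMIT_INTERVAL == 0:
        print(f"commit fired after {rows_in_target} target rows")  # 300th, 600th row

print("end of session: a final commit covers the last 200 rows")
```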

Author: Ankit Kansal & Nayan Naik

 

 


Simple Informatica Scenarios – Part 1

Informatica Interview Questions

We have decided to come up with some common scenarios in Informatica. This will be an ongoing post in which we keep adding common issues faced by an Informatica developer, and we will introduce more complex scenarios as we move ahead. Please feel free to discuss these scenarios in the comments section below.

1) How to concatenate row data through Informatica?

Source:

Ename EmpNo
stev 100
methew 100
john 101
tom 101

Target:

Ename EmpNo
Stev methew 100
John tom 101

Ans:

Using Dynamic Lookup on Target table:

If the record doesn’t exist, insert it into the target. If it already exists, get the corresponding Ename value from the lookup, concatenate it with the current Ename value in an Expression, and then update the target Ename column using an Update Strategy.

Using the Variable Port Approach:

Sort the data in the Source Qualifier on the EmpNo column, then use an Expression with variable ports to store the previous record’s information. After that, use a Router: if the EmpNo appears for the first time, insert the record; if it has already been inserted, update Ename with the concatenation of the previous and current name values and update the target.
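To make the variable-port idea concrete, here is a minimal plain-Python sketch of the same logic using the sample data above; the variable names are only illustrative:

```python
# Plain-Python equivalent of the sort + variable-port logic: sort by EmpNo,
# remember the previous row, and build the concatenated Ename per EmpNo.
# (Mirrors variable ports such as v_prev_empno / v_concat in the Expression.)

rows = [("stev", 100), ("methew", 100), ("john", 101), ("tom", 101)]
rows.sort(key=lambda r: r[1])             # Sorter / sorted Source Qualifier

target = {}                               # EmpNo -> concatenated Ename
prev_empno, concat = None, ""
for ename, empno in rows:
    if empno == prev_empno:
        concat = concat + " " + ename     # same key: append (update path)
    else:
        concat = ename                    # new key: start fresh (insert path)
    target[empno] = concat                # Router + insert / update strategy
    prev_empno = empno

for empno, names in target.items():
    print(names, empno)                   # stev methew 100 / john tom 101
```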

2) How to send unique (distinct) records to one target and duplicates to another target?

Source:

Ename EmpNo
stev 100
Stev 100
john 101
Mathew 102

Output:

Target_1:

Ename EmpNo
Stev 100
John 101
Mathew 102

Target_2:

Ename EmpNo
Stev 100

Ans:

Using Dynamic Lookup on Target table:

If the record doesn’t exist, insert it into Target_1. If it already exists, send it to Target_2 using a Router.

Using the Variable Port Approach:

Sort the data in the Source Qualifier on the EmpNo column, then use an Expression with variable ports to store the previous record’s information. After that, use a Router to route the data into the targets: if the EmpNo appears for the first time, send it to Target_1; if it has already been inserted, send it to Target_2.
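Again, a small plain-Python sketch of the routing logic, using the sample rows above (the set below only stands in for the lookup cache / variable-port comparison):

```python
# Plain-Python sketch of the routing: the first occurrence of an EmpNo goes to
# Target_1, any repeat goes to Target_2 (mirrors the dynamic lookup's
# NewLookupRow flag or the variable-port comparison described above).

rows = [("stev", 100), ("Stev", 100), ("john", 101), ("Mathew", 102)]

seen = set()                  # stands in for the dynamic lookup cache
target_1, target_2 = [], []
for ename, empno in rows:
    if empno not in seen:     # not cached yet -> unique record
        seen.add(empno)
        target_1.append((ename, empno))
    else:                     # already cached -> duplicate record
        target_2.append((ename, empno))

print("Target_1:", target_1)  # [('stev', 100), ('john', 101), ('Mathew', 102)]
print("Target_2:", target_2)  # [('Stev', 100)]
```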

………………………………………………………………………………………..

Below is the solution for processing multiple flat files into a single table using Informatica.

3) How to process multiple flat files into a single target table through Informatica if all files have the same structure?

We can process all the flat files through one mapping and one session using a list file.

First we need to create a list file for all the flat files using a unix script; the extension of the list file is .LST.

This list file will contain only the flat file names.

At the session level we need to set:

the source file directory to the list file’s path,

the source file name to the list file’s name,

and the source filetype to Indirect.
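As a rough illustration, here is a small Python sketch that builds such a list file; the directory and file names are made-up examples, and in practice the unix script mentioned above does the same job:

```python
# Sketch of building the .LST (indirect) file the session reads. The directory
# and file pattern below are made-up examples; in practice this is usually a
# one-line shell command such as:  ls /data/incoming/emp_*.dat > emp_files.lst

import glob

src_dir = "/data/incoming/"                          # hypothetical source file directory
flat_files = sorted(glob.glob(src_dir + "emp_*.dat"))

with open(src_dir + "emp_files.lst", "w") as lst:    # the list file itself
    for name in flat_files:
        lst.write(name + "\n")                       # one flat file name per line
```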

……………………………………………………………………………………………………..

This is also one of the advanced features in Informatica 8.

4) How to populate the file name into the target while loading multiple files using the list file concept?

In Informatica 8.6, after importing the source file definition in the Source Analyzer, select the “Add Currently Processed Flat File Name” option in the Properties tab of the source definition. This adds a new column containing the currently processed file name, and we can map this column to the target to populate the file name.

……………………………………………………………………………………………..

5) How to load unique or distinct records from a flat file to the target?

Ans: One approach is to use an Aggregator and load unique records by setting group-by on all columns.

Another method is to use a Sorter after the Source Qualifier and enable its Distinct option.

………………………………………………………………………………………………..

6) How to load the first record and the last record from a file into the target using Informatica?

Solution:

Step 1.

Create a mapping variable such as $$Record_Count and a Sequence Generator transformation with the reset option enabled, then use a Filter transformation.

In the Filter transformation put a condition like the one below:

SEQ.NEXTVAL = 1 OR SEQ.NEXTVAL = $$Record_Count

Step 2. Use a unix script to create/update the parameter file with the file record count (wc -l). This parameter file supplies the value to the mapping variable $$Record_Count.

Below is the order of the tasks.

Workflow → Command task → Main session

Command task: executes the unix script.
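As a rough sketch of what that command-task script does (the file, folder, workflow and session names below are placeholders, not real objects):

```python
# Sketch of what the command task's script does: count the file's records
# (the wc -l step) and write the value into the parameter file read by the
# main session. File, folder, workflow and session names are placeholders.

src_file = "/data/incoming/emp.dat"          # hypothetical source flat file

with open(src_file) as f:
    record_count = sum(1 for _ in f)         # equivalent of `wc -l emp.dat`

with open("/data/params/wf_load_emp.param", "w") as p:
    p.write("[MyFolder.WF:wf_load_emp.ST:s_load_emp]\n")   # placeholder section header
    p.write(f"$$Record_Count={record_count}\n")

# Inside the mapping, the Filter then keeps only:
#   SEQ.NEXTVAL = 1  OR  SEQ.NEXTVAL = $$Record_Count
```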

 

7) How to add a lengthy query in the Source Qualifier if the query length exceeds 32K characters?

If you are trying to use a very lengthy query in the SQ override, there seems to be a character limit of around 32K characters. This is mainly due to a limitation on saving the query as metadata in the underlying repository database. The issue can be solved by writing the query as a parameter value in the parameter file; since the query is fetched dynamically at run time, the limitation no longer applies.

Note: Ensure that the query is written on one single line in the parameter file.
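A small illustrative sketch of that approach; the parameter name $$SRC_QUERY, the paths, and the folder/workflow/session names are hypothetical:

```python
# Sketch of pushing a long SQ override into the parameter file as ONE line,
# so the ~32K limit on the stored override never applies. $$SRC_QUERY and the
# folder/workflow/session names are hypothetical.

long_query = """
SELECT e.empno, e.ename, d.dname
FROM   emp e JOIN dept d ON d.deptno = e.deptno
WHERE  e.hiredate >= DATE '2014-01-01'
"""   # imagine this running to tens of thousands of characters

one_line = " ".join(long_query.split())      # collapse to a single line (see Note above)

with open("/data/params/wf_big_query.param", "w") as p:
    p.write("[MyFolder.WF:wf_big_query.ST:s_big_query]\n")
    p.write(f"$$SRC_QUERY={one_line}\n")     # the SQ query override is set to $$SRC_QUERY
```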

 

 


Talend Big Data adds SaaS Analytics to its Arsenal

Talend Blue Yonder partnership

 

Talend, the open source Big Data vendor, has extended its partner ecosystem through a new collaboration with Blue Yonder. Together, the companies will apply software-as-a-service (SaaS)-based predictive analytics to Big Data that exists in a variety of forms.

The partnership centers on Blue Yonder’s Forward Demand, a SaaS data analytics platform designed to help companies extract value from data. The agreement between the two companies will add new “connectors” to the Talend Big Data platform that will make it possible to extract data from a range of sources and analyze it in Forward Demand.

That is not all, however. The companies are also working together in other areas, “on projects in various sectors including retail, banking, finance, utilities and a range of other vertical markets,” according to an announcement. They will showcase some of their joint results at upcoming events and webinars available online.

To a large extent, the partnership is about streamlining Big Data analysis by allowing customers to analyze the data they need, regardless of its source or current state, using the tools they want. That outcome, the companies believe, will empower customers by saving them money and time.

“Working with Talend will bring broad benefits both to ourselves and to our customers,” said Ralf Werneth, senior director, Alliances at Blue Yonder. “Traditionally, we connected to our customers’ data sources using custom interfaces. Data integration was time-consuming but necessary to enable access to additional sources of data,” he added. “Thanks to Talend’s state-of-the-art integration capability, our customers can significantly reduce custom development. For the customer, this translates directly into reduced development costs, simpler support and improved time-to-value.”

François Chiche, VP, Alliances at Talend, said: “Our partnership with Blue Yonder once again highlights the power of Big Data. Talend gathers the data Blue Yonder and its customers need and makes it accessible to them. Blue Yonder’s strength is supporting data-driven decision management, and its SaaS solution Forward Demand delivers accurate forecasts from huge volumes of data in real time. It’s a perfect fit.”

 

 
