
Triggers in Oracle

A trigger is a PL/SQL block or a PL/SQL procedure associated with a table, view, schema, or the database. It executes implicitly whenever a particular event takes place.
Types of Triggers:
1. DML on Table: fires if a DML operation (INSERT, UPDATE, or DELETE) is issued on the table.
2. DML on View: if a DML operation (INSERT, UPDATE, or DELETE) is issued on the view, an INSTEAD OF trigger is fired.
3. DDL Operations: fires if a DDL operation (such as CREATE or ALTER) is issued, no matter which user is connected or which application is used.
4. User/Schema-Level Triggers: like LOGON and LOGOFF.
5. Database-Level Triggers: like STARTUP and SHUTDOWN.
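
For instance, a schema-level LOGON trigger can record who connects and when. Below is a minimal sketch, assuming a hypothetical audit table named login_audit:

CREATE TABLE login_audit (
    username   VARCHAR2(30),
    login_time DATE
);

CREATE OR REPLACE TRIGGER log_user_logon
    AFTER LOGON ON SCHEMA
BEGIN
    -- Record every connection made as the owning schema
    INSERT INTO login_audit (username, login_time)
    VALUES (USER, SYSDATE);
END;
/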


GUIDELINES FOR USING TRIGGERS:

1.  Use triggers to guarantee that when a particular operation is performed, the related action is also performed.
2.  Excessive use of triggers can result in complex interdependencies that are very difficult to maintain in large applications. Use triggers only when necessary, and beware of recursive and cascading effects.
3.  If the logic for the trigger is very lengthy, create stored procedures containing the logic and invoke them in the trigger body (see the sketch below).
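
As a sketch of guideline 3, the business-hours check used later in this article could live in a stored procedure (the names secure_emp_proc and check_business_hours here are hypothetical):

CREATE OR REPLACE PROCEDURE check_business_hours IS
BEGIN
    IF (TO_CHAR(SYSDATE, 'DY') IN ('SAT', 'SUN'))
       OR (TO_CHAR(SYSDATE, 'HH24:MI') NOT BETWEEN '08:00' AND '18:00') THEN
        RAISE_APPLICATION_ERROR(-20510, 'Operation allowed only during business hours.');
    END IF;
END;
/

CREATE OR REPLACE TRIGGER secure_emp_proc
    BEFORE INSERT ON EMPLOYEES
BEGIN
    check_business_hours;   -- the trigger body stays short; the logic lives in the procedure
END;
/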

CREATING DML TRIGGERS

A triggering statement consists of:
a. Trigger Timing : when the trigger fires in relation to the triggering event
                For a table: BEFORE, AFTER
                For a view: INSTEAD OF
b. Triggering Event : INSERT, UPDATE, or DELETE (the DML operation on the table or view that makes the trigger fire)
c. Table Name : the table or view the trigger is on
d. Trigger Type : row or statement (how many times the trigger body executes)
e. WHEN Clause : a restricting condition
f. Trigger Body : a PL/SQL block (the action the trigger performs)

If multiple triggers are defined for a table, be aware that the order in which multiple triggers of the same type fire is arbitrary. To ensure that triggers of the same type fire in a particular order, use the FOLLOWS clause (available from 11g) to specify the order of execution, as shown below.
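
A minimal sketch of FOLLOWS, assuming two hypothetical AFTER INSERT triggers on EMPLOYEES:

CREATE OR REPLACE TRIGGER emp_audit_first
    AFTER INSERT ON EMPLOYEES
BEGIN
    NULL;   -- first action here
END;
/

CREATE OR REPLACE TRIGGER emp_audit_second
    AFTER INSERT ON EMPLOYEES
    FOLLOWS emp_audit_first   -- guarantees emp_audit_first fires before this trigger
BEGIN
    NULL;   -- second action here
END;
/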

TRIGGER TIMINGS:

  1. BEFORE Trigger: used to determine whether the triggering statement should be allowed to complete. (This enables you to eliminate unnecessary processing of the triggering statement and its eventual rollback in cases where an exception is raised in the triggering action.) It is also used to derive column values before completing a triggering INSERT or UPDATE statement.
  2. AFTER Trigger: the triggering statement completes before the trigger action executes; mainly used for auditing/history purposes.

    INSTEAD OF Trigger:

  • It provides a transparent way of modifying views that cannot be modified directly through SQL DML statements because they are not inherently modifiable.
  • You can write INSERT, UPDATE, and DELETE statements against the view. The INSTEAD OF trigger works invisibly in the background, performing the action coded in the trigger body directly on the underlying tables (see the sketch below).
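
As a minimal sketch, consider a join view over the sample EMPLOYEES and DEPARTMENTS tables (the view and trigger names here are hypothetical); a DELETE against the view is redirected to the base table:

CREATE OR REPLACE VIEW emp_dept_v AS
    SELECT e.employee_id, e.last_name, d.department_name
    FROM   employees e JOIN departments d
           ON e.department_id = d.department_id;

CREATE OR REPLACE TRIGGER emp_dept_v_delete
    INSTEAD OF DELETE ON emp_dept_v
    FOR EACH ROW
BEGIN
    -- The join makes the view non-modifiable, so delete directly from the base table
    DELETE FROM employees
    WHERE  employee_id = :OLD.employee_id;
END;
/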

TRIGGERING EVENT:
The triggering event can be an INSERT, UPDATE, or DELETE statement on a table.

  • When the triggering event is an UPDATE statement, you can include a column list to identify which columns must be changed to fire the trigger. You cannot specify a column list for an INSERT or DELETE statement, because they always affect the entire row.
  • The triggering event can contain one, two, or all three DML operations:

              INSERT OR UPDATE OR DELETE

TRIGGERING TYPE:

  1. Statement Trigger: a statement trigger fires once on behalf of the triggering event, even if no rows are affected at all. For example: a trigger that performs a complex security check on the current user.
  2. Row Trigger: a row trigger fires once for each row affected by the triggering event. If the triggering event affects no rows, a row trigger is not executed. Row triggers are useful when the trigger action depends on data from the affected rows, or on data provided by the triggering event itself.

TRIGGER BODY:

The trigger body defines what needs to be done when the triggering event is issued. It consists of a PL/SQL block, or a call to a PL/SQL or Java procedure. The PL/SQL block can contain SQL and PL/SQL statements as well as PL/SQL constructs such as variables, cursors, exceptions, and so on.

Note: The size of the trigger body cannot exceed 32K.

Firing Sequence
    BEFORE statement trigger
    BEFORE row trigger
    Actual DML
    AFTER row trigger
    AFTER statement trigger
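
To observe this sequence, here is a small sketch, assuming a throwaway table t and SERVEROUTPUT enabled:

CREATE TABLE t (x NUMBER);

CREATE OR REPLACE TRIGGER t_before_stmt
    BEFORE INSERT ON t
BEGIN
    DBMS_OUTPUT.PUT_LINE('BEFORE statement');
END;
/
CREATE OR REPLACE TRIGGER t_before_row
    BEFORE INSERT ON t
    FOR EACH ROW
BEGIN
    DBMS_OUTPUT.PUT_LINE('BEFORE row');
END;
/
CREATE OR REPLACE TRIGGER t_after_row
    AFTER INSERT ON t
    FOR EACH ROW
BEGIN
    DBMS_OUTPUT.PUT_LINE('AFTER row');
END;
/
CREATE OR REPLACE TRIGGER t_after_stmt
    AFTER INSERT ON t
BEGIN
    DBMS_OUTPUT.PUT_LINE('AFTER statement');
END;
/

SET SERVEROUTPUT ON
INSERT INTO t VALUES (1);
-- prints: BEFORE statement, BEFORE row, AFTER row, AFTER statement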

Syntax:
CREATE [OR REPLACE] TRIGGER trigger_name
    timing
    event1 [OR event2 OR event3]
    ON table_name
trigger_body

Note: Trigger names must be unique with respect to other triggers within the same schema. Trigger names do not need to be unique with respect to other schema objects such as tables, views, and procedures.
Using a column list with the UPDATE clause in the trigger improves performance, because the trigger then fires only when that particular column is updated, avoiding unintended firing when other columns are updated.

Example 1: (To restrict inserts into the EMPLOYEES table to certain business hours, Monday to Friday)

CREATE OR REPLACE TRIGGER secure_emp
    BEFORE INSERT ON EMPLOYEES
BEGIN
    IF (TO_CHAR(SYSDATE, 'DY') IN ('SAT', 'SUN'))
       OR (TO_CHAR(SYSDATE, 'HH24:MI') NOT BETWEEN '08:00' AND '18:00') THEN
        RAISE_APPLICATION_ERROR(-20500, 'You may insert into EMPLOYEES table only during business hours.');
    END IF;
END;
/

Trigger created.
INSERT INTO EMPLOYEES(employee_id, last_name, first_name, email, hire_date, job_id, salary, department_id)
VALUES(300, 'Smith', 'Rob', 'RSmith', SYSDATE, 'IT_PROG', 4500, 60);

In the above case, if the user tries to insert into the EMPLOYEES table on a Saturday, the user sees the message:
ORA-20500: You may insert into EMPLOYEES table only during business hours.
The trigger fails, and the triggering statement is automatically rolled back by the Oracle server.
RAISE_APPLICATION_ERROR is a server-side built-in procedure that returns the error to the user and causes the PL/SQL block to fail.

Example 2:

Create one trigger to restrict all data manipulation events on the EMPLOYEES table to certain business hours, Monday to Friday.

(This can be done using the conditional predicates INSERTING, UPDATING, and DELETING within the trigger body.)

CREATE OR REPLACE TRIGGER secure_emp
    BEFORE INSERT OR UPDATE OR DELETE ON EMPLOYEES
BEGIN
    IF (TO_CHAR(SYSDATE, 'DY') IN ('SAT', 'SUN'))
       OR (TO_CHAR(SYSDATE, 'HH24:MI') NOT BETWEEN '08:00' AND '18:00')
    THEN
        IF DELETING THEN
            RAISE_APPLICATION_ERROR(-20502, 'You may delete from EMPLOYEES table only during business hours.');
        ELSIF INSERTING THEN
            RAISE_APPLICATION_ERROR(-20500, 'You may insert into EMPLOYEES table only during business hours.');
        ELSIF UPDATING('SALARY') THEN
            RAISE_APPLICATION_ERROR(-20503, 'You may update SALARY only during business hours.');
        ELSE
            RAISE_APPLICATION_ERROR(-20504, 'You may update EMPLOYEES table only during business hours.');
        END IF;
    END IF;
END;
/
   

Syntax of DML Row Triggers
CREATE [OR REPLACE] TRIGGER trigger_name
    timing
    event1 [OR event2 OR event3]
    ON table_name
    [REFERENCING OLD AS old | NEW AS new]
    FOR EACH ROW
    [WHEN (condition)]
trigger_body

REFERENCING – specifies the correlation names for the old and new values of the current row (the default values are OLD and NEW).
FOR EACH ROW – designates that the trigger is a row trigger.
WHEN – specifies the trigger restriction (this conditional predicate must be enclosed in parentheses and is evaluated for each row to determine whether or not the trigger body executes).
Example:
Create a trigger that allows only certain employees (the president and vice presidents) to earn a salary of more than 15,000.
If the user tries to allocate a salary of more than 15,000 to an employee other than the president or a vice president, the trigger raises an error.

CREATE OR REPLACE TRIGGER restrict_salary
    BEFORE INSERT OR UPDATE OF SALARY ON EMPLOYEES
    FOR EACH ROW
BEGIN
    IF NOT (:NEW.job_id IN ('AD_PRES', 'AD_VP'))
       AND :NEW.salary > 15000
    THEN
        RAISE_APPLICATION_ERROR(-20202, 'Employee cannot earn this amount');
    END IF;
END;
/

Trigger created.

If I try to do:

UPDATE EMPLOYEES
SET salary = 15500
WHERE last_name = 'Russell'; -- he has job_id SA_MAN

the application throws the error:
ORA-20202: Employee cannot earn this amount

Example 3: (Audit trigger) USING OLD AND NEW QUALIFIERS

Here, whenever a user inserts into, deletes from, or updates the EMPLOYEES table, the trigger references the value of each column before and after the data change by prefixing it with the OLD and NEW qualifiers.

CREATE TABLE employees_audit
AS SELECT * FROM employees;     -- creates the table where audited data is stored

ALTER TABLE employees_audit
  ADD(
    old_last_name VARCHAR2(30),
    old_job_id    VARCHAR2(30),
    old_salary    NUMBER(8,2)
    );
   
CREATE OR REPLACE TRIGGER audit_emp_values
    AFTER DELETE OR INSERT OR UPDATE ON EMPLOYEES
    REFERENCING OLD AS old NEW AS new
    FOR EACH ROW
BEGIN
    INSERT INTO employees_audit
        (first_name, hire_date, employee_id, old_last_name, last_name, old_job_id, job_id, old_salary, salary)
    VALUES
        (USER, SYSDATE, :OLD.employee_id, :OLD.last_name, :NEW.last_name, :OLD.job_id, :NEW.job_id, :OLD.salary, :NEW.salary);
END;
/

Trigger created.

The OLD and NEW qualifiers are available only with row triggers. They are default qualifiers (pseudorecords) defined by Oracle.
Note: Row triggers can decrease performance if you do a lot of updates on large tables.

Restriction on Triggers

To restrict the trigger action to those rows that satisfy a certain condition, we provide a WHEN clause.

Example:
Create a trigger on the EMPLOYEES table to calculate an employee's commission when a row is added to the EMPLOYEES table or when a salary is modified.

CREATE OR REPLACE TRIGGER derive_commission_pct
    BEFORE INSERT OR UPDATE OF salary ON EMPLOYEES
    REFERENCING OLD AS OLD NEW AS NEW
    FOR EACH ROW
    WHEN (NEW.job_id = 'SA_REP')
BEGIN
    IF INSERTING THEN
        :NEW.commission_pct := 0;
    ELSIF :OLD.commission_pct IS NULL THEN
        :NEW.commission_pct := 0;
    ELSE
        :NEW.commission_pct := :OLD.commission_pct + 0.05;
    END IF;
END;
/

Trigger created.

Here, in the above case, the NEW qualifier is not prefixed with a colon in the WHEN clause, because the WHEN clause is outside the PL/SQL block.

Autonomous Transactions

Autonomous transactions allow you to leave the context of the calling transaction, perform an independent transaction, and return to the calling transaction without affecting its state. The autonomous transaction has no link to the calling transaction, so only committed data can be shared by both transactions.

The following types of PL/SQL blocks can be defined as autonomous transactions:

  • Stored procedures and functions
  • Local procedures and functions defined in a PL/SQL declaration block
  • Packaged procedures and functions
  • Triggers
The easiest way to understand autonomous transactions is to see them in action.
CREATE TABLE t1 (x int);
CREATE TABLE t2 (x int);

INSERT INTO t1 VALUES (1);

SELECT * FROM t1;
SELECT * FROM t2;

CREATE OR REPLACE TRIGGER t_trigger
AFTER INSERT
ON t1
FOR EACH ROW

DECLARE
 i PLS_INTEGER;
BEGIN
  SELECT COUNT(*)
  INTO i
  FROM t1;

  INSERT INTO t2
  VALUES
  (i);
END;
/

INSERT INTO t1 VALUES (1);

SQL Error: ORA-04091: table HR.T1 is mutating, trigger/function may not see it

The row trigger queries the very table (T1) that the triggering statement is modifying, so Oracle raises the mutating-table error. Declaring the trigger body as an autonomous transaction avoids this, because its query then runs in an independent transaction and sees only committed data:

CREATE OR REPLACE TRIGGER t_trigger
AFTER INSERT
ON t1
FOR EACH ROW

DECLARE
  i PLS_INTEGER;
  PRAGMA AUTONOMOUS_TRANSACTION;  -- run the trigger body as an independent transaction
BEGIN
  SELECT COUNT(*)
  INTO i
  FROM t1;

  INSERT INTO t2
  VALUES
  (i);
 
  COMMIT;  -- an autonomous transaction must commit or roll back before returning control
END;
/

INSERT INTO t1 VALUES (1);

SELECT * FROM t1;
SELECT * FROM t2;

Dropping a Trigger

We can drop a particular trigger by using the syntax below:

DROP TRIGGER audit_emp_values;

Disabling/Enabling a Trigger
    The syntax for disabling a trigger is:
        ALTER TRIGGER trigger_name DISABLE;
    The syntax for disabling all triggers on a table is:
        ALTER TABLE table_name DISABLE ALL TRIGGERS;
    The syntax for enabling a trigger is:
        ALTER TRIGGER trigger_name ENABLE;
    The syntax for enabling all triggers on a table is:
        ALTER TABLE table_name ENABLE ALL TRIGGERS;
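
To check whether a trigger is currently enabled or disabled, you can query the USER_TRIGGERS data dictionary view, for example:

SELECT trigger_name, status
FROM   user_triggers
WHERE  table_name = 'EMPLOYEES';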


Informatica Tutorial – The Definitive Guide


Informatica is the most important (and popular) tool in the data integration industry. Actually, it is a culmination of several different client tools: you need to master the Mapping Designer, the Workflow Manager, and good old Workflow Monitor if you want to master Informatica.

Informatica is one of the best ETL tools out there, and we have the perfect Informatica tutorial guide book with the best resources around the web.


Abstraction in object-oriented programming

Abstraction comes from the Latin abs, meaning 'away', and trahere, meaning 'to draw'. So we can define abstraction in an object-oriented programming language as the process of removing or taking away characteristics from something (an object) in order to reduce it to a set of essential characteristics.
Through abstraction in object-oriented programming, a programmer shows only the relevant data of an object and omits all unwanted details in order to reduce complexity and increase efficiency.
In the process of abstraction, the programmer tries to ensure that the entity is named in a manner that makes sense and that it has all the relevant aspects included and none of the extraneous ones.
If we try to describe the process of abstraction in a real-world scenario, it might work like this:

You (the object) are going to receive your father's friend from the railway station. You two have never met each other, so you would take his phone number from your father and call him when the train arrives.
On the phone you tell him, "I am wearing a white T-shirt and blue jeans and standing near the exit gate." That is, you tell him the colour of your clothes and your location so he can identify and locate you. This is all the data that helps the procedure (finding you) work smoothly.

You should include all of that information. On the other hand, there are a lot of bits of information about you that aren't relevant to this situation, like your age, your PAN card number, or your driving licence number, which might be essential information in some other scenario (like opening a bank account). However, since entities may have any number of abstractions, you may get to use them in another procedure in the future.


Encapsulation in object-oriented language

Encapsulation

Encapsulation in an object-oriented language such as Java is the packing of data and functions into a single component, protecting variables and functions from code outside the class, in order to better manage that piece of code and to ensure that changes to the protected code have little or no impact on other parts of the program.
Encapsulation can also be described as a protective barrier that prevents the code and data from being randomly accessed by other code defined outside the class. Access to the data and code is tightly controlled by an interface (through functions, which are exposed to the outer world).
The main benefit of encapsulation is the ability to modify our implemented code without breaking the code of others who use it. This gives our code maintainability, flexibility, and extensibility.
Example:
public class UserPin {
    private int pin;

    public void setPin(int pin) {
        // Saving the pin to the database
        this.pin = pin;
    }

    public int getPin() {
        // Fetching the pin from the database and returning it
        return pin;
    }
}

Encapsulation makes sure that the user of the class is unaware of how the class stores its data. It also makes sure that users of the class do not need to change any of their code when the internals of the class change.
In the code example above we store the user's PIN as an integer. Say that, for security reasons, we have to encrypt the PIN and then store the encrypted PIN, and the algorithm that we use for encryption requires the PIN as a String.
public class UserPin {
    private int pin;

    public void setPin(int pin) {
        // Converting the pin from int to String
        // Encrypting the PIN
        // Saving the encrypted pin to the database
        this.pin = pin;
    }

    public int getPin() {
        // Fetching the encrypted pin from the database
        // Decrypting it and converting it back to int
        return pin;
    }
}
As we saw, there is no change in the signatures of the functions, so users of the class do not have to change their code.
We can also implement a security layer, since users access the field only through the functions (known as getters and setters):
public class UserPin {
    private int pin;

    public void setPin(int pin) {
        // Validating the value of the PIN
        // Converting the pin from int to String
        // Encrypting the PIN
        // Saving the encrypted pin to the database
        this.pin = pin;
    }

    public int getPin() {
        // Fetching the encrypted pin from the database
        // Decrypting it and converting it back to int
        return pin;
    }
}
The fields can be made read-only (if we don't define setter methods in the class) or write-only (if we don't define getter methods in the class).

The whole idea behind encapsulation is to hide implementation details from users. That is why encapsulation is also known as data hiding.

The idea of encapsulation in an object-oriented language is: "don't tell me how you do it; just do it."


Access Apex Rest API Salesforce from TalenD


Hello Readers,

This is our follow-up post to Talend Interview Questions. Below are all the steps required to access Salesforce data from your own Talend instance using the Apex REST API.

Step 1

In Salesforce go to Setup > Create > Apps. Scroll to the bottom of the page, where it says Connected Apps, and click New. For background on REST API authentication, see the given URL:

https://www.salesforce.com/us/developer/docs/api_rest/Content/intro_understanding_authentication.htm

[Screenshot]

The Name can be anything as long as you know what it is; the callback URL does not really matter, but use the same one as in the example. The important thing is selecting "Access and manage your data" in the scopes.

Step 2

After you create the app, the Consumer Key and Consumer Secret values are what you use in the call to the OAuth API. Please see the screenshot below.

[Screenshot]

Step 3

After setting up the Connected App in Salesforce, we need to make a call to the OAuth API to get the access token. To make the call we need cURL installed. There may be other options, but I prefer cURL.

Step 4

You can download cURL with SSL for your OS, along with the required certificate, from the link below:
https://support.zendesk.com/hc/en-us/articles/203691436-Installing-and-using-cURL

Step 5

Create a cURL folder on your machine and move cURL.exe and its certificate into that folder. Add that folder to the "Path" environment variable so that cURL can be run from anywhere in the command prompt. Please see the screenshot below.

[Screenshot]

Step 6

Once cURL is set up, run the command below in the command prompt to get the access token mentioned in Step 3.

curl --data "grant_type=password&client_id=<insert consumer key here>&client_secret=<insert consumer secret here>&username=<insert your username here>&password=<insert your password and token here>" -H "X-PrettyPrint:1" https://test.salesforce.com/services/oauth2/token

The response will be something like this:

{
  "id" : "https://test.salesforce.com/id/00Dc0000003txdzEAA/005D0000001wi7EIAQ",
  "issued_at" : "1421777842655",
  "token_type" : "Bearer",
  "instance_url" : "https://<instance>.salesforce.com",
  "signature" : "AJjrVtbIpJkce+T4/1cm/KbUL7d4rqXyjBJBhewq7nI=",
  "access_token" : "00Dc0000003txdz!ARQAQHJEpvN8IcIYcX8.IfjYi0FJ6_JFICLcMk6gnkcHdzMF1DYd2.ZW9_544ro7CnCpO4zzPmkgQ7bE9oFd8yhBALGiIbx7"
}

Step 7

Use the "access_token" value in the tRESTClient component as the "Bearer Token". Please see the screenshot below.

[Screenshot]

Step 8

Use two tLogRow components, one for showing the success result and the other for displaying any error thrown. Please see the screenshot below.

[Screenshot]

Step 9

Execute the job and you will see a result like the one below.

[Screenshot]

Thank you very much for reading the article!

Please feel free to post your comments.

 


Informatica PowerCenter Performance Tuning Tips

Here are a few points to get you started with Informatica PowerCenter performance tuning. Some of these tips are very general in nature; please consult your project members before implementing them in your projects.

1) Optimize the input query if the source is relational (e.g. an Oracle table):

  1. Reduce the number of rows queried by using WHERE conditions instead of Filter/Router transformations later in the mapping. Since you choose fewer rows to start with, your mapping will run faster.
  2. Make sure appropriate indexes are defined on the necessary columns and analyzed. Indexes should especially be defined on the columns in the WHERE clause of your input query.
  3. Eliminate columns that you do not need for your transformations.
  4. Use hints as necessary.
  5. Use a sort order based on need.

Note: For #1, results will vary based on how the table columns are indexed/queried, etc.

 

2) Use the Filter transformation as close to the Source Qualifier transformation as possible.

3) Use sorted input data for Aggregator or Joiner transformations as necessary.

4) Eliminate unused columns and redundant code in all the necessary transformations.

5) Use local variables as necessary to improve performance.

6) Reduce the amount of data cached in the Aggregator.

7) Use a parameterized input query and parameter file for flexibility.

8) Change memory-related settings at the workflow/session level as necessary.

9) When using multiple condition columns in a Joiner/Lookup transformation, make sure to use a numeric data type column as the first condition.

10) Use a persistent cache if possible in Lookup transformations.

11) Go through the session logs CLOSELY to find any issues, and change things accordingly.

12) Use override queries in Lookup transformations to reduce the amount of data cached.

13) Make sure the data types and sizes are consistent throughout the mapping as much as possible.

14) For target loads, use bulk load as and when possible.

15) For target loads, use SQL*Loader with the DIRECT and UNRECOVERABLE options for large volumes of data.

16) Use partitioning options as and when possible. This is true for both Informatica and Oracle. For Oracle, a rule of thumb is to have around 10M rows per partition.

17) Make sure that there are NO indexes in play for any "Pre/Post DELETE SQLs" used in the mappings/workflows.

18) Use datatype conversions wherever possible, e.g.:

  1. Use ZIPCODE as an Integer instead of a Character; this improves the speed of Lookup transformation comparisons.
  2. Use port-to-port datatype conversions to improve session performance.

19) Use operators instead of functions, e.g. for concatenation use "||" instead of CONCAT.

20) Reduce the amount of data written to the logs for each transformation by setting the log setting to Terse, but only as necessary.

21) Use reusable transformations and mapplets wherever possible.

22) In Joiners, use the input with fewer rows as the master.

23) Perform joins in the database rather than using a Joiner transformation wherever possible.

 



Informatica Best Practices for Cleaner Development


Don't you just hate it when you can't find that one mapping out of the thousand-odd mappings present in your repository?

A best practice is a method or technique that has consistently shown results superior to those achieved by other means, and that is used as a benchmark. In addition, a "best" practice can evolve to become better as improvements are discovered. Following these Informatica best practice guidelines allows better repository management, which will make your life easier. Incorporate these practices when you create Informatica objects:

Mapping Designer

  • There should be a placeholder transformation (an Expression) immediately after the source and one before the target.
  • Active transformations that reduce the number of records should be used as early as possible.
  • Connect to subsequent transformations only the ports that are required in the targets.
  • If a join must be used in the mapping, select the driving/master table carefully while using joins.
  • For generic logic to be used across mappings, create a mapplet and reuse it across mappings.

 

Transformation Developer

  • Replace complex filter expressions with (Y/N) flags. The Filter expression will take less time to process the flags than the full logic.
  • Persistent caches should be used in lookups if the lookup data is not expected to change often.

Naming conventions: name Informatica transformations starting with three lower-case letters indicating the transformation type, e.g. lkp_<name of the lookup> for a Lookup, rtr_<name of the router> for a Router transformation, etc.

 

Workflow Manager

  • Naming conventions for session, worklet, and workflow: s_<name of the session>, wlt_<name of the worklet>, wkf_<name of the workflow>.
  • Sessions should be created as reusable so they can be used in multiple workflows.
  • While loading tables for full loads, the truncate target table option should be checked.
  • The workflow property "Commit interval" (default value: 10,000) should be increased for volumes of more than 1 million records.
  • Pre-session command scripts should be used for disabling constraints, building temporary tables, moving files, etc. Post-session scripts should be used for rebuilding indexes and dropping temporary tables.

 

Performance Optimization Best Practices

We often come across situations where the Data Transformation Manager (DTM) takes more time to read from a source or to write into a target. The following standards/guidelines can improve overall performance.

  • Use a single Source Qualifier if the source tables reside in the same schema.
  • Make use of the Source Qualifier "Filter" property if the source type is relational.
  • Use flags as integers, as integer comparison is faster than string comparison.
  • Use the table with the smaller number of records as the master table for joins.
  • While reading from flat files, define the appropriate data type instead of reading everything as String and converting.
  • Have only the ports that are required connected to subsequent transformations; otherwise, check whether the unused ports can be removed.
  • Suppress the ORDER BY using '--' at the end of the query in Lookup transformations (see the sketch after this list).
  • Minimize the number of Update Strategy transformations.
  • Group by simple columns in transformations like Aggregator and Source Qualifier.
  • Use a Router transformation in place of multiple Filter transformations.
  • Turn off verbose logging while moving the mappings to the UAT/production environment.
  • For large volumes of data, drop indexes before loading and recreate them after the load.
  • For large volumes of records, use bulk load and increase the commit interval to a higher value.
  • Set 'Commit on Target' in the sessions.
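
As a sketch of the ORDER BY suppression tip above (the table and column names are hypothetical), a lookup SQL override can end with an explicit ORDER BY followed by '--', which turns the ORDER BY clause that Informatica appends into a comment:

SELECT employee_id, salary
FROM   employees
ORDER BY employee_id --

This way only the explicit ORDER BY on the condition column runs, instead of the default ORDER BY on all lookup ports.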

 

These are a few things a beginner should know when starting to code in Informatica. These Informatica best practice guidelines are a must for efficient repository management and overall project management and tracking.


Top Informatica Questions And Answers


Hey folks! As discussed in our earlier post, this is our follow-up post on Informatica interview questions. Please subscribe to get a free PDF copy with the answers, and leave a comment.

Informatica Questions and Answers:

1)   What is the difference between a reusable transformation and a shortcut?
2)   Which of these is true for mapplets (can you use a Source Qualifier, can you use a Sequence Generator, can you use a target)?
3)   What are the ways to recover rows from a failed session?
4)   Sequence Generator: when you move from development to production, how will you reset it?
5)   What is a global repository?
6)   How do you set a variable in incremental aggregation?
7)   What is the basic functionality of a pre-load stored procedure?
8)   What are the different properties of an Informatica scheduler?
9)   In a concurrent batch, if a session fails, can you start again from that session?
10)  When you move from development to production, how will you retain a variable's value?
11)  Performance tuning (what was your role)?
12)  What are conformed dimensions?
13)  Can you avoid a static cache in the Lookup transformation? That is, can you disable caching in a Lookup transformation?
14)  What is the meaning of a complex transformation?
15)  In any project, how many mappings will they use (at minimum)?
16)  How do you implement an unconnected stored procedure in a mapping?
17)  Can you access a repository created in a previous version of Informatica?
18)  What happens if the Informatica Server doesn't find the session parameter in the parameter file?
19)  How did you handle performance issues? If you have data coming in from multiple sources, walk through the process of loading it into the target.
20)  How will you convert rows into columns or columns into rows?
21)  What are the steps involved in migrating from an older version to a newer version of the Informatica Server?
22)  What are the main features of Oracle 11g with respect to data warehousing?
23)  How do you run a session that contains a mapplet?
24)  Differentiate between the Load Manager and the DTM.
25)  What are session parameters? How do you set them?
26)  What are variable ports? List two situations in which they can be used.
27)  Describe the Informatica architecture in detail.
28)  How does the server recognise the source and target databases?
29)  What is the difference between a sequential batch and a concurrent batch, which is recommended, and why?
30)  A session S_MAP1 is in Repository A. While running the session, the error message 'server hot-ws270 is connect to Repository B' is displayed. What does it mean?
31)  How do you do error handling in Informatica?
32)  How can you run a session without using the Server Manager?
33)  Consider two cases:
1. PowerCenter Server and Client on the same machine
2. PowerCenter Server and Client on different machines
What is the basic difference between these two setups, and which is recommended?
34)  The Informatica Server and Client are on different machines. You run a session from the Server Manager by specifying the source and target databases. It displays an error. You are confident that everything is correct. Then why is it displaying the error?
35)  What is the difference between normal and bulk loading? Which one is recommended?
36)  What is a test load?
37)  How can you use Oracle sequences in Informatica? You also have the Informatica Sequence Generator transformation. Which one is better to use?
38)  What are Business Components in Informatica?
39)  What is the advantage of a persistent cache? When should it be used?
40)  When will you use a SQL override in a Lookup transformation?
