Sharing Knowledge With The World…

Month: March 2013

[PCSF_46008] Cannot Connect to Domain to Lookup Service [Coreservices] Error

[PCSF_46008] Cannot connect to domain to lookup service [Coreservices] is one of the most common errors encountered by an Informatica developer while connecting to a repository.

I will discuss some things to check when resolving this type of error:

1) Check that the Informatica services and the database services hosting the Informatica metadata are up.

2) Check whether there is an entry in the hosts file for the server name provided in the Informatica domain configuration.
      a) Right-click the repository in the Repository Navigator.
      b) Select Configure Domains.
      c) Edit the domain to view the server name.

3) Once you have the server name, add an entry for it to your hosts file.
     a) Open C:\Windows\System32\drivers\etc\hosts in Notepad.
     b) Add the server IP address followed by the server name.
     c) Save the file.
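A hosts-file entry is simply the IP address and the server name separated by whitespace; for example (the address and name below are placeholders):

```
192.168.1.25    infa_server01
```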

4) Type this command: telnet <server_name> <port_number>
   If an error occurs while connecting to the port, contact the network administrator to open the Informatica server port. If the command executes successfully, then the problem most likely lies on the database side.
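If telnet is not available, the same reachability check can be scripted. The sketch below is a rough equivalent using Python's standard library; the server name and port are placeholders, not values from any real domain:

```python
import socket

def port_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical server name and port -- replace with your domain's
# gateway host name and port:
print(port_open("infa_server01", 6005))
```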

5) Once this is done, you need to check the database connections.
     a) Go to C:\Informatica\PowerCenter8.6.1\server\config.
     b) Open nodemeta.xml in a text editor to check the primary database connection string where Informatica stores its metadata.
     c) Verify that you are able to connect to the schema and that it is not locked.

6) After making all these changes, restart the Informatica services; the error should be resolved.


Triggers in Oracle

A trigger is a PL/SQL block or a PL/SQL procedure associated with a table, view, schema, or the database. It executes implicitly whenever a particular event takes place.
Types of Triggers:
1. DML on a table: fired when a DML operation (INSERT, UPDATE, or DELETE) is issued on the table.
2. DML on a view: when a DML operation (INSERT, UPDATE, or DELETE) is issued on the view, an INSTEAD OF trigger fires.
3. DDL operations: fired when a DDL operation (CREATE or ALTER) is issued, no matter which user is connected or which application is used.
4. User/schema-level events: such as LOGON and LOGOFF.
5. Database-level events: such as STARTUP and SHUTDOWN.


GUIDELINES FOR USING TRIGGERS:

1.  Use a trigger to guarantee that when a particular operation is performed, the related action is performed as well.
2.  Excessive use of triggers can result in complex interdependencies that are difficult to maintain in large applications. Use triggers only when necessary, and beware of recursive and cascading effects.
3.  If the trigger logic is lengthy, create stored procedures containing the logic and invoke them from the trigger body.

CREATING DML TRIGGERS

A triggering statement consists of:
a. Trigger timing: when the trigger fires relative to the triggering event (for a table: BEFORE or AFTER; for a view: INSTEAD OF).
b. Triggering event: the INSERT, UPDATE, or DELETE operation on the table or view that makes the trigger fire.
c. Table name: the table or view the trigger is defined on.
d. Trigger type: row or statement (how many times the trigger body executes).
e. WHEN clause: a restricting condition.
f. Trigger body: a PL/SQL block (what action the trigger performs).
If multiple triggers of the same type are defined on a table, be aware that the order in which they fire is arbitrary. To ensure that triggers of the same type fire in a particular order, use the FOLLOWS clause (available from 11g) to specify the execution order.

TRIGGER TIMINGS:

  1. BEFORE trigger: used to determine whether the triggering statement should be allowed to complete. (This lets you eliminate unnecessary processing of the triggering statement, and its eventual rollback, in cases where an exception is raised in the triggering action.) It is also used to derive column values before completing a triggering INSERT or UPDATE statement.
  2. AFTER trigger: the triggering statement completes before the trigger action executes; mainly used for auditing/history purposes.

    INSTEAD OF trigger:

  • It provides a transparent way of modifying views that cannot be modified directly through SQL DML statements because they are not inherently modifiable.
  • You can write INSERT, UPDATE, and DELETE statements against the view; the INSTEAD OF trigger works invisibly in the background, performing the action coded in the trigger body directly on the underlying tables.

TRIGGERING EVENT:
The triggering event can be an INSERT, UPDATE, or DELETE statement on a table.

  • When the triggering event is an UPDATE statement, you can include a column list to identify which columns must change to fire the trigger. You cannot specify a column list for an INSERT or DELETE statement, because they always affect entire rows.
  • The triggering event can contain one, two, or all three DML operations:

              INSERT OR UPDATE OR DELETE

TRIGGER TYPE:

  1. Statement trigger: fires once on behalf of the triggering event, even if no rows are affected at all. For example: a trigger that performs a complex security check on the current user.
  2. Row trigger: fires once for each row affected by the triggering event. If the triggering event affects no rows, a row trigger does not execute. Row triggers are useful when the trigger action depends on data from the affected rows or on data provided by the triggering event itself.

TRIGGER BODY:

The trigger body defines what is done when the triggering event is issued. It consists of a PL/SQL block, or a call to a PL/SQL or Java procedure. The PL/SQL block can contain SQL and PL/SQL statements as well as PL/SQL constructs such as variables, cursors, exceptions, and so on.

Note: The size of a trigger body cannot exceed 32K.

Firing Sequence
    BEFORE statement trigger
    BEFORE row trigger
    Actual DML
    AFTER row trigger
    AFTER statement trigger
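The firing sequence above can be sketched as a small simulation (illustrative only, not Oracle itself): statement-level triggers fire once per statement, row-level triggers once per affected row.

```python
def firing_sequence(affected_rows):
    """Return the order in which triggers fire for one DML statement."""
    events = ["BEFORE statement"]
    for _ in range(affected_rows):       # row triggers fire per affected row
        events.append("BEFORE row")
        events.append("actual DML on row")
        events.append("AFTER row")
    events.append("AFTER statement")
    return events

# An UPDATE touching two rows:
for step in firing_sequence(2):
    print(step)
```

Note that firing_sequence(0) still returns the two statement-level events, matching the earlier point that a statement trigger fires even when no rows are affected.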

Syntax:
CREATE [OR REPLACE] TRIGGER trigger_name
    timing
    event1 [OR event2 OR event3]
    ON table_name
trigger_body

Note: Trigger names must be unique with respect to other triggers within the same schema. Trigger names do not need to be unique with respect to other schema objects such as tables, views, and procedures.
Using a column list with the UPDATE clause in the trigger improves performance, because the trigger then fires only when that particular column is updated, avoiding unintended firing when other columns are updated.

Example 1: (restrict inserts into the EMPLOYEES table to certain business hours, Monday to Friday)
CREATE OR REPLACE TRIGGER secure_emp
    BEFORE INSERT ON employees
BEGIN
    IF (TO_CHAR(SYSDATE, 'DY') IN ('SAT', 'SUN'))
       OR (TO_CHAR(SYSDATE, 'HH24:MI') NOT BETWEEN '08:00' AND '18:00') THEN
        RAISE_APPLICATION_ERROR(-20500, 'You may insert into EMPLOYEES table only during business hours.');
    END IF;
END;

Trigger created.
INSERT INTO employees (employee_id, last_name, first_name, email, hire_date, job_id, salary, department_id)
VALUES (300, 'Smith', 'Rob', 'RSmith', SYSDATE, 'IT_PROG', 4500, 60);

In the above case, if the user tries to insert into the EMPLOYEES table on a Saturday, the user sees the message:
ORA-20500: You may insert into EMPLOYEES table only during business hours.
The trigger fails, and the triggering statement is automatically rolled back by the Oracle server.
RAISE_APPLICATION_ERROR is a server-side built-in procedure that returns an error to the user and causes the PL/SQL block to fail.

Example 2 :

Create one trigger to restrict all data manipulation events on the EMPLOYEES table to certain business hours, Monday to Friday.

(This can be done using conditional predicates like INSERTING, UPDATING, and DELETING within the trigger body.)

CREATE OR REPLACE TRIGGER secure_emp
    BEFORE INSERT OR UPDATE OR DELETE ON employees
BEGIN
    IF (TO_CHAR(SYSDATE, 'DY') IN ('SAT', 'SUN'))
       OR (TO_CHAR(SYSDATE, 'HH24:MI') NOT BETWEEN '08:00' AND '18:00')
    THEN
        IF DELETING THEN
            RAISE_APPLICATION_ERROR(-20502, 'You may delete from EMPLOYEES table only during business hours.');
        ELSIF INSERTING THEN
            RAISE_APPLICATION_ERROR(-20500, 'You may insert into EMPLOYEES table only during business hours.');
        ELSIF UPDATING('SALARY') THEN
            RAISE_APPLICATION_ERROR(-20503, 'You may update SALARY only during business hours.');
        ELSE
            RAISE_APPLICATION_ERROR(-20504, 'You may update EMPLOYEES table only during business hours.');
        END IF;
    END IF;
END;
   

Syntax of DML Row Triggers
CREATE [OR REPLACE] TRIGGER trigger_name
    timing
    event1 [OR event2 OR event3]
    ON table_name
    [REFERENCING OLD AS old | NEW AS new]
    FOR EACH ROW
    [WHEN (condition)]
trigger_body

REFERENCING: specifies the correlation names for the old and new values of the current row (the defaults are OLD and NEW).
FOR EACH ROW: designates the trigger as a row trigger.
WHEN: specifies the trigger restriction (this conditional predicate must be enclosed in parentheses and is evaluated for each row to determine whether the trigger body executes).
Example:
Create a trigger that allows only certain employees (the president and vice presidents) to earn a salary of more than 15,000.
If the user tries to allocate a salary of more than 15,000 to any other employee, the trigger raises an error.

CREATE OR REPLACE TRIGGER restrict_salary
    BEFORE INSERT OR UPDATE OF salary ON employees
    FOR EACH ROW
BEGIN
    IF NOT (:NEW.job_id IN ('AD_PRES', 'AD_VP'))
       AND :NEW.salary > 15000
    THEN
        RAISE_APPLICATION_ERROR(-20202, 'Employee cannot earn this amount');
    END IF;
END;
Trigger created.

If I try to run:

UPDATE employees
SET salary = 15500
WHERE last_name = 'Russell'; -- he has job_id SA_MAN

the statement fails with the error:
ORA-20202: Employee cannot earn this amount

Example 3: (audit trigger) USING THE OLD AND NEW QUALIFIERS

Here, whenever a user inserts into, deletes from, or updates the EMPLOYEES table, the trigger references the column values before and after the data change by prefixing them with the OLD and NEW qualifiers.

CREATE TABLE employees_audit
AS SELECT * FROM employees
WHERE 1 = 0;     -- create an empty copy of the table in which audited data is stored

ALTER TABLE employees_audit
  ADD (
    old_last_name VARCHAR2(30),
    old_job_id    VARCHAR2(30),
    old_salary    NUMBER(8,2)
  );
   
CREATE OR REPLACE TRIGGER audit_emp_values
  AFTER DELETE OR INSERT OR UPDATE ON employees
  REFERENCING OLD AS old NEW AS new
  FOR EACH ROW
BEGIN
  INSERT INTO employees_audit
    (first_name, hire_date, employee_id, old_last_name, last_name, old_job_id, job_id, old_salary, salary)
  VALUES
    (USER, SYSDATE, :OLD.employee_id, :OLD.last_name, :NEW.last_name, :OLD.job_id, :NEW.job_id, :OLD.salary, :NEW.salary);
END;

Trigger created.

The OLD and NEW qualifiers are always available with row triggers. They are default qualifiers, or pseudocolumns, defined by Oracle.
Note: Row triggers can decrease performance if you perform a lot of updates on large tables.

Restriction on Triggers

To restrict the trigger action to those rows that satisfy a certain condition, we provide a WHEN clause.

Example:
Create a trigger on the EMPLOYEES table to calculate an employee's commission when a row is added to the EMPLOYEES table or when the salary is modified.
CREATE OR REPLACE TRIGGER derive_commission_pct
  BEFORE INSERT OR UPDATE OF salary ON employees
  REFERENCING OLD AS OLD NEW AS NEW
  FOR EACH ROW
  WHEN (NEW.job_id = 'SA_REP')
BEGIN
  IF INSERTING THEN
     :NEW.commission_pct := 0;
  ELSIF :OLD.commission_pct IS NULL THEN
     :NEW.commission_pct := 0;
  ELSE
     :NEW.commission_pct := :OLD.commission_pct + 0.05;
  END IF;
END;
Trigger created.

Here, in the above case, the NEW qualifier cannot be prefixed with a colon in the WHEN clause, because the WHEN clause is outside the PL/SQL block.

Autonomous Transactions

Autonomous transactions allow you to leave the context of the calling transaction, perform an independent transaction, and return to the calling transaction without affecting its state. The autonomous transaction has no link to the calling transaction, so only committed data can be shared by both transactions.

The following types of PL/SQL blocks can be defined as autonomous transactions:
Stored procedures and functions.
Local procedures and functions defined in a PL/SQL declaration block.
Packaged procedures and functions.
Database triggers.
The easiest way to understand autonomous transactions is to see them in action.
CREATE TABLE t1 (x int);
CREATE TABLE t2 (x int);

INSERT INTO t1 VALUES (1);

SELECT * FROM t1;
SELECT * FROM t2;

CREATE OR REPLACE TRIGGER t_trigger
AFTER INSERT
ON t1
FOR EACH ROW

DECLARE
 i PLS_INTEGER;
BEGIN
  SELECT COUNT(*)
  INTO i
  FROM t1;

  INSERT INTO t2
  VALUES
  (i);
END;
/

INSERT INTO t1 VALUES (1);

This insert now fails, because the row trigger queries the very table being modified:

SQL Error: ORA-04091: table HR.T1 is mutating, trigger/function may not see it

CREATE OR REPLACE TRIGGER t_trigger
AFTER INSERT
ON t1
FOR EACH ROW
DECLARE
  i PLS_INTEGER;
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  SELECT COUNT(*)
  INTO i
  FROM t1;

  INSERT INTO t2
  VALUES
  (i);

  COMMIT;
END;
/

INSERT INTO t1 VALUES (1);  -- succeeds this time; the trigger's query runs in an independent transaction

SELECT * FROM t1;
SELECT * FROM t2;

Dropping a Trigger

We can drop a particular trigger using the syntax below:

DROP TRIGGER audit_emp_values;

Disable/Enable a Trigger
    The syntax for disabling a trigger:
        ALTER TRIGGER trigger_name DISABLE;
    The syntax for disabling all triggers on a table:
        ALTER TABLE table_name DISABLE ALL TRIGGERS;
    The syntax for enabling a trigger:
        ALTER TRIGGER trigger_name ENABLE;
    The syntax for enabling all triggers on a table:
        ALTER TABLE table_name ENABLE ALL TRIGGERS;


Interpreting output of a TKPROF in Oracle

For a particular query, the TKPROF output looks something like this:

call     count    cpu    elapsed    disk    query    current    rows
-------  -----  -----  ---------  ------  -------  ---------  ------
Parse        1   0.00       0.00       0        0          0       0
Execute      1   0.00       0.00       0        0          0       0
Fetch        2   1.20       1.21       0    86091          4      15
-------  -----  -----  ---------  ------  -------  ---------  ------
total        4   1.20       1.21       0    86091          4      15

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 69

Let's read the TKPROF output and dissect each section in brief:

The main phases of a query are:
1) Parse phase: Oracle checks whether the query already exists in the shared pool (soft parse) or creates a plan for it (hard parse).
2) Execute: when Oracle executes the query. For a SELECT statement this is usually a small value compared to an UPDATE statement.
3) Fetch: where all the work is done for a SELECT statement, but a small value for an UPDATE.

Now coming to the column headings:
1) Call: the query-processing phase; always Parse, Execute, Fetch, and total.
2) Count: the number of times the event occurred.
3) CPU: the CPU seconds spent in the respective phase of the query (if blank, enable timed_statistics at the session level).
4) Elapsed: the wall-clock time for the execution of each query phase.
5) Disk: the number of physical I/Os performed.
6) Query: the number of blocks processed in consistent-read mode.
7) Current: the number of blocks processed in current mode.
(To find out more about consistent-read and current mode, check out this link: http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:878847787577)
8) Rows: the number of rows affected.

Now coming to interpretation and some facts that need to be observed:

1) Count(parse) / Count(execute): if this ratio is 1, the query is parsed and executed every time and needs to be corrected (not using bind variables is usually the culprit in these scenarios).

2) A large difference between CPU time and elapsed time: the CPU took little time to complete, while the elapsed time is high, meaning time was spent waiting. Several factors contribute to this (one of them could be high physical I/O). Determine the cause of the wait.

3) Fetch(count) / rows fetched: if this value approaches one or greater, there is a high number of round trips from client to server and no bulk fetching. This can be resolved using BULK COLLECT in a PL/SQL block (or by setting the array size in a SQL client).

4) A high disk count indicates a high number of physical reads.
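The first and third ratios can be computed directly from the sample TKPROF numbers at the top of this post (a sketch; the counts are taken from that output):

```python
# Counts from the sample TKPROF output: Parse=1, Execute=1, Fetch=2, rows=15
parse_count, execute_count = 1, 1
fetch_count, rows_fetched = 2, 15

parse_to_execute = parse_count / execute_count  # 1.0 -> parsed on every execution
fetch_per_row = fetch_count / rows_fetched      # well below 1 -> rows are bulk-fetched

print(parse_to_execute)
print(round(fetch_per_row, 2))
```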

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 69

"Misses in library cache during parse: 0" indicates a soft parse, since the query was found in the shared pool; if the SQL is executed for the very first time, 1 is returned, indicating a hard parse.

Optimizer goal: shows the optimizer_mode used during execution of the query.

Parsing user id: the user ID used during the parsing of the query.


LAG Analytical Function in Informatica

Overview

Hello readers,

The LAG() analytical function is quite a handy feature in Oracle; recently we had to implement the same behavior in Informatica.

Input:

empno   ename   salary
-----   -----   ------
1       ABC       1000
2       DEF       2000
3       GHI       3000
4       JKL       4000

Expected output:

empno   ename   salary   previous_sal
-----   -----   ------   ------------
1       ABC       1000           NULL
2       DEF       2000           1000
3       GHI       3000           2000
4       JKL       4000           3000

The mapping looks something like this:

Now configure the expression transformation as shown below:

This gives you the functionality that the LAG analytical function provides in Oracle.
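In the expression transformation, the trick is a variable port that still holds the previous row's salary when the current row's output port is evaluated. A Python sketch of that row-by-row logic (the port names are illustrative, not taken from the actual mapping):

```python
rows = [(1, "ABC", 1000), (2, "DEF", 2000), (3, "GHI", 3000), (4, "JKL", 4000)]

v_prev_salary = None                 # variable port, carries state across rows
out = []
for empno, ename, salary in rows:
    previous_sal = v_prev_salary     # output port reads the old value first
    v_prev_salary = salary           # then the variable is updated for the next row
    out.append((empno, ename, salary, previous_sal))

for row in out:
    print(row)
```

The fourth column reproduces the previous_sal values shown in the expected output above.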

Thanks for reading!!


SCD-2 Implementations in Informatica


Hello reader, today we'll try to see how we can implement SCD-2 (Type 2 slowly changing dimensions) in Informatica.

(Assumptions:
1. The source and target definitions are already defined.
2. You have a basic understanding of SCD implementations.
3. You know how to add connections between the components.
4. You have a basic understanding of how all the components used in the mapping work.
)

Please have a look at the overall mapping. We will discuss each part in detail.

Let's explore this in detail:

1) Drag and drop the source onto your mapping.
2) Drag and drop all the transformations required, as shown in the image above.

3) The dynamic lookup is created on the target table on which SCD-2 is implemented (in our case, emp_SCD2). We have applied a filter to fetch only the active records and added a condition matching employee_id.
4) Next we pass the output to the router transformation, in which we have created two groups, namely:
i) Insert_New_Rec: checks whether the record is a new entry.
ii) Update_Insrt_grp: a group for updating the existing record's end date and inserting a new record with an end date of NULL.
We have applied SCD on the employee's deptno; hence, in Update_Insrt_grp we check whether the deptno differs between source and target. Change these conditions depending on the columns on which you want to implement SCD.
In the remaining part of the mapping I have expanded the components so that you can see how the data flows in the subsequent steps.
5) A sequence generator is used to create the surrogate key (SKEY) for new inserts into the target.
6) The output from Insert_New_Rec is passed directly to the target table (emp_scd2), as shown in the image above.
7) Two update strategies are implemented for the Update_Insrt_grp output coming from the router. One update strategy updates the record in the target with the end date we have defined using the dt column, and the second inserts a new record for the changed rows with end_dt as NULL. Connect the targets as shown in the image.
8) Connect to the target.
And there we are, all set for implementing SCD-2. Now create a workflow for the mapping.
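The routing decision described in the steps above can be sketched as follows (a simplified illustration, not Informatica code; the dictionary stands in for the dynamic lookup on the active target rows, and the helper function is hypothetical):

```python
def scd2_route(source_row, target_active_rows):
    """Decide what to do with a source row: insert it as new, close the
    old version and insert a new one, or do nothing."""
    current = target_active_rows.get(source_row["employee_id"])
    if current is None:
        return ("Insert_New_Rec", source_row)      # new employee
    if current["deptno"] != source_row["deptno"]:
        return ("Update_Insrt_grp", source_row)    # tracked column changed
    return None                                    # no change, drop the row

# One active row in the target, then a changed row and a brand-new row:
active = {10: {"employee_id": 10, "deptno": "D1", "end_dt": None}}
print(scd2_route({"employee_id": 10, "deptno": "D2"}, active))
print(scd2_route({"employee_id": 11, "deptno": "D1"}, active))
```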
Thanks for reading !!!


Deployment Strategies in Talend

Talend provides the flexibility of multiple deployment strategies, namely:

1) Deployment using script/batch files.
2) Deployment using a Web Service.

Deployment using script/batch files

This strategy involves the generation of .bat or .sh files that can be executed on their respective OS. When we export a job from Talend, a set of files is generated, including the class and jar files specific to that job; .bat and .sh files are also generated that can be executed to run the job. Jobs executed through these files run in silent mode. One of the main advantages of this strategy is that the client or user is not exposed to the complexity of the job; all they need is Java installed on their system to run the batch file.

Procedure to export a job:
1) Right-click the job that needs to be exported in the Repository panel.

2) Click the Export Job option.

3) A dialog box appears; specify the path to which you want to export the job.

4) Choose the export type Autonomous Job and click OK.

5) The job will be exported and saved as a ZIP file at the specified path.

Deployment using a Web Service (executing jobs REMOTELY!!)

This method involves remote execution of the job via a web service. It is independent of the OS used and can be invoked via a web browser. Since Talend is based on Java, the web deployment feature can be used extensively.
A WAR file is generated for the specified job, which is used to deploy the job on an application server.

Procedure to export a job:
1) Right-click the job that needs to be exported in the Repository panel.

2) Click the Export Job option.

3) A dialog box appears; specify the path to which you want to export the job.

4) Choose the export type WAR file and click OK.

5) The job will be exported and saved as a WAR file at the specified path.

6) Run an application server, such as Apache Tomcat, and add the WAR file to the Tomcat directory as shown in the screenshot below.

OR
you can go directly to the webapps folder of your Tomcat server and simply paste the WAR file there.
7) Once deployed, we can run the job from a web browser using the following link:
http://localhost:8989/File_Input_0.1/services/File_Input?method=runJob
or
localhost:8989/File_Input_0.1/services/File_Input?runJob()
where File_Input refers to the job name, localhost refers to the server, method refers to the execution method, and 8989 is the application server port number.
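Since the run URL follows a fixed pattern, it can be assembled from the job name, version, and port (a sketch; the values are the ones used in this post):

```python
job_name = "File_Input"   # the exported job's name
version = "0.1"           # the job version embedded in the context path
port = 8989               # the application server port

url = "http://localhost:%d/%s_%s/services/%s?method=runJob" % (port, job_name, version, job_name)
print(url)
```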

8) On successful execution, the following screen is seen.


 

Thank you for taking interest…

Talend Interview Questions

Author : Ankit Kansal & Nayan Naik

Version Control in Talend Open Studio

Pic1

                                                                                -by Ankit Kansal & Nayan Naik
TOS always puts the job version number in the package name during Java code generation. While this allows side-by-side execution of different job versions, I find it difficult to embed such a job into a larger system: on every update of a single job, one has to update all references to that job as well. Talend Integration Suite offers a complete version control system integrated with SVN, which manages the changes between versions.
                Talend Open Studio lacks this SVN integration but offers job versioning which, while not as efficient as the capabilities offered by SVN, provides a decent approach to versioning.
As depicted in the screenshots below, we can place individual jobs under version control and read or restore previous versions of the same job. There are two types of version change in TOS, namely:
1) Major
2) Minor
Major changes involve moving from a lower version to a higher one, depicting a major change in the product or job (e.g., from v1.0 to v2.0).
Minor changes involve minimal or small changes that affect a job and may be considered a small enhancement to an existing job, e.g., a version change from v1.0 to v1.1.
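The major/minor scheme can be sketched as a tiny helper (illustrative only; TOS manages these numbers itself):

```python
def bump(version, kind):
    """Bump a 'major.minor' version string: a major bump resets minor to 0."""
    major, minor = map(int, version.split("."))
    if kind == "major":
        return "%d.0" % (major + 1)
    return "%d.%d" % (major, minor + 1)

print(bump("1.0", "major"))
print(bump("1.0", "minor"))
```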
Step 1: Assign a version to your job.
Step 2: Read another version of the job.

Thank you for reading….

Talend Interview Questions

Author : Ankit Kansal & Nayan Naik