Sharing Knowledge With The World…

Data Aggregation Framework



Ankit Kansal & Nayan Naik

Data aggregation is a powerful technique provided by MongoDB for performing aggregation analysis. It is broadly similar to what we have in Oracle as the GROUP BY clause.
There are guidelines provided by the MongoDB developer zone which you must follow to obtain the desired output.
Data flows through an aggregation pipeline from left to right:
               db.collection_name.aggregate(<pipeline of operators>)
Operator list:-
  1. $project
  2. $match 
  3. $skip
  4. $limit
  5. $group
  6. $unwind
  7. $sort
  8. $sum      

Let's understand the working of the aggregation framework using the operators listed above.
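Before going operator by operator, here is a sketch of how several of these operators chain together into one pipeline, assuming a hypothetical employee collection with deptno and sal fields:

```mongodb
// Count employees and total salary per department, highest totals first
db.employee.aggregate([
    { $match: { sal: { $gt: 1000 } } },           // filter rows, like WHERE
    { $group: { _id: "$deptno",                   // group on deptno
                total: { $sum: "$sal" },          // SUM(sal) per group
                count: { $sum: 1 } } },           // COUNT(*) per group
    { $sort: { total: -1 } }                      // ORDER BY total DESC
])
```

Each stage receives only the documents (and fields) emitted by the stage before it.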


$project is a PROJECTION operator used with the aggregation framework; its main job is to project the selected fields from a given document.

In this query $project selects the name field from the things collection by writing name:1. Since the _id field is selected by default, we write _id:0 to avoid displaying it.
NOTE:- The aggregation framework works on the principle of a pipeline: once you have selected certain fields and moved to the next stage, only those selected fields are available for further operations; the other fields from the collection are gone.


In the above figure the query selects/projects only the names from the collection; later, when the $match (WHERE-like) operator is used to put a condition on another field, the output received is empty. This demonstrates the pipeline behaviour of the aggregation framework.
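The behaviour described above can be reproduced with a query of this shape (the field names are illustrative):

```mongodb
// $project keeps only name, so the later $match on price sees no price
// field and therefore matches nothing -- the result is empty
db.things.aggregate([
    { $project: { name: 1, _id: 0 } },
    { $match: { price: { $gt: 10 } } }
])
```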


$match is used to filter the documents based on defined conditions. This operator is broadly similar to the WHERE clause in SQL.
In the previous example I used the $match operator together with $project. As described earlier, $match filters the data on a required condition, and $project then selects only the desired fields.

When you use the _id key, which specifies the column on which the aggregation is to be performed, you must put a "$" symbol before the field name to indicate that the field's value should be read. Also keep in mind that when you group by two columns ("double group by"), the columns must be aliased.

Scenario 1:-

For this scenario I have a collection named employee which roughly resembles the EMP table in Oracle.

$group is similar to the GROUP BY clause in Oracle. As we already know, $project is used for selecting columns from the collection. The _id key specifies the column on which the group-by operation is performed (such as GROUP BY deptno). count is a variable name used to display the values generated by the $sum operator for each group. $sum adds the value specified in the clause for each member within a group. In my example I used 1, so 1 is added for each member within a group; if I had used 2 instead, 2 would be added for each member. Here 1 behaves like the COUNT(empno) function in ORACLE.
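Scenario 1 can be sketched like this (assuming the employee collection has a deptno field):

```mongodb
// Equivalent of: SELECT deptno, COUNT(empno) FROM emp GROUP BY deptno
db.employee.aggregate([
    { $group: { _id: "$deptno", count: { $sum: 1 } } }
])
// one output document per deptno; using 2 instead of 1 would add 2 per member
```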

Scenario 2:-
In this scenario the same employee collection is used and the $sort operator sorts the result.
If you want to perform operations on data generated within the aggregation pipeline, you must do so in a stage after the one in which it is produced. For example, the count value is produced inside the $group stage, so to operate on it you must do so in a subsequent stage. In this scenario I want to sort the result by the generated count, which is why the $sort operator is applied in the next stage.
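The Scenario 2 pipeline might look like this; note that $sort appears in the stage after $group, where count already exists:

```mongodb
db.employee.aggregate([
    { $group: { _id: "$deptno", count: { $sum: 1 } } },
    { $sort: { count: -1 } }   // sort on the count produced one stage earlier
])
```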
Scenario 3:-
The same employee collection is used in this case:
1) First, the $skip operator skips the top rows generated.
2) Second, the $limit operator limits the number of rows passed on for further processing.
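A sketch of Scenario 3 (the numbers 5 and 3 are arbitrary):

```mongodb
db.employee.aggregate([
    { $sort: { empno: 1 } },   // make skip/limit deterministic
    { $skip: 5 },              // drop the first 5 documents
    { $limit: 3 }              // pass at most 3 documents downstream
])
```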

Scenario 4:-

This scenario helps you understand how a double group by works in MongoDB, like GROUP BY deptno, jobid in Oracle. To achieve similar functionality in MongoDB, it should be done as follows:
Note:- In this case I sorted the data by deptno, which is part of the sub-document stored in the _id field, so the field name "_id.deptno" must be used to sort the final data. Nested documents are accessed using the . operator, and the quotes around the dotted name are necessary. The value -1 signifies that the data should be sorted in descending order.
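A sketch of the double group by described above (field names assumed from the EMP-style collection):

```mongodb
// Equivalent of: GROUP BY deptno, job ... ORDER BY deptno DESC
db.employee.aggregate([
    { $group: { _id: { deptno: "$deptno", job: "$job" },
                count: { $sum: 1 } } },
    { $sort: { "_id.deptno": -1 } }   // -1: descending; the dotted path
                                      // reaches deptno inside the _id sub-document
])
```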
Scenario 5:-
This scenario covers the $unwind operator. $unwind is used only when a field holds an array: it separates the values present in the array, creating a separate document for each array element, all with the same _id.
collection formed:-

Output Received After the query:-
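A self-contained sketch of the $unwind behaviour (hypothetical posts collection):

```mongodb
db.posts.insert({ title: "post1", tags: ["mongodb", "nosql", "oracle"] })
db.posts.aggregate([ { $unwind: "$tags" } ])
// produces three documents, one per tag, all carrying the same _id:
//   { _id: ..., title: "post1", tags: "mongodb" }
//   { _id: ..., title: "post1", tags: "nosql" }
//   { _id: ..., title: "post1", tags: "oracle" }
```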
Scenario 6:-
In this scenario I'll show the usage of the $substr operator in MongoDB. As in other technologies, $substr is used here for extracting data from a selected field, but currently it is only supported for string fields.
By writing this type of query you can extract data from the selected field, and a group-by operation can then be performed on it. The first integer in the syntax specifies the starting position within the string, and the next integer specifies how many characters to extract. By applying the $group operator afterwards you can identify and count the number of people who share the same alias.
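The query might look like this (assuming an ename field; 0 is the start position and 3 the number of characters):

```mongodb
// Take the first three characters of ename as an alias, then count
// how many people share each alias
db.employee.aggregate([
    { $project: { alias: { $substr: ["$ename", 0, 3] } } },
    { $group: { _id: "$alias", count: { $sum: 1 } } }
])
```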

Scenario 7:-

In this scenario we will see the usage of the $subtract operator, which subtracts one field from another.

This returns the difference of two numbers; the second number is subtracted from the first.
Similarly, you can use the $add operator for addition.
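A sketch with assumed sal and comm fields:

```mongodb
// diff = sal - comm: the second operand is subtracted from the first
db.employee.aggregate([
    { $project: { ename: 1, diff: { $subtract: ["$sal", "$comm"] } } }
])
```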

$strcasecmp:- This operator compares two given strings, ignoring case. If the first string is greater than the second in the comparison order, the result is positive (1); if the first string is less than the second, the result is negative (-1); if the two strings are equal, the returned output is 0.
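A plain-JavaScript sketch of this comparison rule (an illustration of the semantics, not MongoDB's implementation):

```javascript
// Both strings are compared case-insensitively; result is 1, -1, or 0.
function strcasecmp(first, second) {
  const a = String(first).toUpperCase();
  const b = String(second).toUpperCase();
  if (a > b) return 1;   // first string is greater
  if (a < b) return -1;  // first string is less
  return 0;              // strings are equal ignoring case
}
```

In a pipeline this operator appears inside $project, e.g. { $project: { cmp: { $strcasecmp: ["$a", "$b"] } } }.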

Ankit Kansal


Informatica Tutorial – The Definitive Guide


Informatica is the most important (and popular) tool in the data integration industry. It is actually a collection of several different client tools: you need to master the Mapping Designer, Workflow Manager, and good old Workflow Monitor if you want to master Informatica.




Abstraction in object-oriented programming


Abstraction comes from the Latin abs, meaning 'away', and trahere, meaning 'to draw'. So we can define abstraction in an object-oriented programming language as the process of removing or taking away characteristics from something (an object) in order to reduce it to a set of essential characteristics.
Through abstraction, a programmer shows only the relevant data of an object and omits all unwanted details in order to reduce complexity and increase efficiency.
In the process of abstraction, the programmer tries to ensure that the entity is named in a manner that makes sense and that it has all the relevant aspects included and none of the extraneous ones.
If we describe the process of abstraction in a real-world scenario, it might work like this:

You (the object) are going to receive your father's friend from the railway station. You two have never met. So you take his phone number from your father and call him when the train arrives.
On the phone you tell him, "I am wearing a white T-shirt and blue jeans and standing near the exit gate." That is, you tell him the colour of your clothes and your location so he can identify and locate you. This is all data that helps the procedure (finding you) work smoothly.

You should include all that information. On the other hand, there are many bits of information about you that aren't relevant to this situation, like your age, your PAN card number, or your driving licence number, which might be essential information in some other scenario (like opening a bank account). Since entities may have any number of abstractions, you may get to use them in other procedures in the future.

Lyncean Patel


Encapsulation in object-oriented language


Encapsulation in an object-oriented language such as Java is the packing of data and functions into a single component, protecting variables and functions from access outside the class, in order to better manage that piece of code and to ensure that changes to the protected code have little or no impact on other parts of the program.
Encapsulation can also be described as a protective barrier that prevents the code and data from being randomly accessed by other code defined outside the class. Access to the data and code is tightly controlled through an interface (the functions that are exposed to the outer world).
The main benefit of encapsulation is the ability to modify our implemented code without breaking the code of others who use it. With this feature encapsulation gives maintainability, flexibility and extensibility to our code.
public class UserPin {
    private int pin;
    public void setPin(int pin) {
        // Save the pin to the database
        this.pin = pin;
    }
    public int getPin() {
        // Fetch the pin from the database and return it
        return pin;
    }
}

Encapsulation makes sure that users of the class are unaware of how the class stores its data. It also ensures that users of the class do not need to change any of their code when the class changes internally.
In the code example above we store the user's PIN as an integer. Say that, for security reasons, we have to encrypt the PIN and store the encrypted value, and the algorithm we use for encryption requires the PIN as a String.
public class UserPin {
    private int pin;
    public void setPin(int pin) {
        // Convert the pin from int to String
        // Encrypt the PIN
        // Save the encrypted pin to the database
        this.pin = pin; // placeholder for the steps above
    }
    public int getPin() {
        // Fetch the encrypted pin from the database
        // Decrypt it and convert it back to int
        return pin; // placeholder for the steps above
    }
}

As we saw, there is no change in the signature of the functions, so users of the class do not have to change their code.
We can also implement a security layer, since users access the field through the functions (known as getters and setters).
public class UserPin {
    private int pin;
    public void setPin(int pin) {
        // Validate the value of the PIN
        // Convert the pin from int to String
        // Encrypt the PIN
        // Save the encrypted pin to the database
        this.pin = pin; // placeholder for the steps above
    }
    public int getPin() {
        // Fetch the encrypted pin from the database
        // Decrypt it and convert it back to int
        return pin; // placeholder for the steps above
    }
}

The fields can be made read-only (if we don't define setter methods in the class) or write-only (if we don't define getter methods in the class).

The whole idea behind encapsulation is to hide the implementation details from users. That’s why encapsulation is known as data hiding.

The idea of encapsulation in object-oriented language is “don’t tell me how you do it; just do it.”

Lyncean Patel


Access Apex Rest API Salesforce from TalenD


Hello Readers,

This is a follow-up to our Talend Interview Questions post; below are all the steps required to access Salesforce data from your own Talend instance using the Apex REST API.

Step 1

In Salesforce go to Setup → Create → Apps. Scroll to the bottom of the page to the Connected Apps section and click New.

Access Apex Rest API Salesforce from TalenD



The name can be anything as long as you know what it is; the callback URL does not really matter, but use the same one as in the example. The important thing is selecting "Access and manage your data" in the scopes.

Step  2

After you create it, the Consumer Key and Consumer Secret values are what you use in the call to the OAuth API. Please see the screenshot below.

Access Apex Rest API Salesforce from TalenD



Step 3

After setting up the Connected App in Salesforce, we need to call the OAuth API to get a token, i.e. an access token. To make the call we need cURL installed. There may be other options, but I prefer cURL.

 Step 4

You can download cURL with SSL for your OS, along with its required certificate, from the link below.

Step 5

Create a cURL folder on your machine and move cURL.exe and its certificate into that folder. Add the folder to the "Path" environment variable so that cURL can be run from anywhere in the command prompt. Please see the screenshot below.

Access Apex Rest API Salesforce from TalenD




Step 6

Once cURL is set up, run the command below in the command prompt to get the access token mentioned in Step 3.

curl https://login.salesforce.com/services/oauth2/token --data "grant_type=password&client_id=<insert consumer key here>&client_secret=<insert consumer secret here>&username=<insert your username here>&password=<insert your password and token here>" -H "X-PrettyPrint:1"

The response will look something like this:

{
  "id" : "",
  "issued_at" : "1421777842655",
  "token_type" : "Bearer",
  "instance_url" : "https://<instance>",
  "signature" : "AJjrVtbIpJkce+T4/1cm/KbUL7d4rqXyjBJBhewq7nI=",
  "access_token" : "00Dc0000003txdz!ARQAQHJEpvN8IcIYcX8.IfjYi0FJ6_JFICLcMk6gnkcHdzMF1DYd2.ZW9_544ro7CnCpO4zzPmkgQ7bE9oFd8yhBALGiIbx7"
}
Step 7

Use the "access_token" value in tRESTClient as the "Bearer Token". Please see the screenshot below.

Access Apex Rest API Salesforce from TalenD



 Step 8

Use two tLogRow components, one for showing the success result and the other for displaying any error thrown. Please see the screenshot below.



Step 9

Execute the job and you will see a result like the one below.



Thank you very much for reading the article!

Please feel free to post your comments.


Ankit Kansal


Informatica Powercenter Performance Tuning Tips


Here are a few points to get you started with Informatica PowerCenter performance tuning. Some of these tips are very general in nature; please consult your project members before implementing them in your projects.


1) Optimize the input query if the source is relational (e.g. an Oracle table):

  1. Reduce the number of rows queried by using WHERE conditions instead of Filter/Router transformations later in the mapping. Since you start with fewer rows, your mapping will run faster.
  2. Make sure appropriate indexes are defined on the necessary columns and analyzed. Indexes should especially be defined on the columns in the WHERE clause of your input query.
  3. Eliminate columns that you do not need for your transformations.
  4. Use hints as necessary.
  5. Use a sort order based on need.

Note: For #1, results will vary based on how the table columns are indexed/queried, etc.


2) Use the Filter transformation as close to the SQ Transformation as possible.

3) Use sorted input data for Aggregator or Joiner Transformation as necessary.

4) Eliminate unused columns and redundant code in all the necessary transformations.

5) Use Local variables as necessary to improve the performance.

6) Reduce the amount of data caching in Aggregator.

7) Use parameterized input query and file for flexibility.

8) Change memory-related settings at the workflow/session level as necessary.

9) When using multiple condition columns in a Joiner/Lookup transformation, make sure to use a numeric data type column as the first condition.

10) Use a persistent cache if possible in Lookup transformations.

11) Go through the session logs closely to find any issues and change accordingly.

12) Use override queries in Lookup transformations to reduce the amount of data cached.

13) Make sure data types and sizes are consistent throughout the mapping as much as possible.

14) For target loads, use bulk load as and when possible.

15) For target loads, use SQL*Loader with the DIRECT and UNRECOVERABLE options for large-volume data loads.

16) Use partitioning options as and when possible. This is true for both Informatica and Oracle. For Oracle, a rule of thumb is to have around 10M rows per partition.

17) Make sure that there are NO indexes involved in any "Pre/Post DELETE SQLs" used in the mappings/workflows.

18) Use datatype conversions wherever possible, e.g.:

  1) Use ZIPCODE as an Integer instead of a Character; this improves the speed of Lookup transformation comparisons.
  2) Use port-to-port datatype conversions to improve session performance.

19) Use operators instead of functions, e.g. for concatenation use "||" instead of CONCAT.

20) Reduce the amount of data written to logs by setting the tracing level to Terse only as necessary.

21) Use reusable transformations and Mapplets wherever possible.

22) In Joiners, use the input with fewer rows as the master.

23) Perform joins in the database rather than in a Joiner transformation wherever possible.





Informatica Best Practices for Cleaner Development

Informatica Best Practices


Don't you just hate it when you can't find that one mapping out of the thousand-odd mappings in your repository?

A best practice is a method or technique that has consistently shown results superior to those achieved by other means, and that is used as a benchmark. In addition, a "best" practice can evolve to become better as improvements are discovered. Following these Informatica best-practice guidelines allows better repository management, which makes your life easier. Incorporate these practices when you create Informatica objects and your life will be much easier:

Mapping Designer

  • There should be a place holder transformation (expression) immediately after the source and one before the target.
  • Active transformations that reduce the number of records should be used as early as possible.
  • Connect only the ports that are required in targets to subsequent transformations.
  • If a join must be used in the Mapping, select the driving/master table while using joins.
  • For generic logic to be used across mappings, create a mapplet and reuse across mappings.


Transformation Developer

  • Replace complex filter expressions with (Y/N) flags. The filter expression will take less time to process the flags than the full logic.
  • Persistent caches should be used in lookups if the lookup data is not expected to change often.

Naming conventions: name Informatica transformations starting with three lower-case letters indicating the transformation type, e.g. lkp_<name of the lookup> for a Lookup, rtr_<name of the router> for a Router transformation, etc.


Workflow Manager

  • Naming convention for session, worklet, workflow- s_<name of the session>, wlt_<name of the worklet>, wkf_<name of the workflow>.
  • Sessions should be created as re usable to be used in multiple workflows.
  • While loading tables for full loads, truncate target table option should be checked.
  • The workflow property "Commit Interval" (default value: 10,000) should be increased for volumes of more than 1 million records.
  • Pre-Session command scripts should be used for disabling constraints, building temporary tables, moving files etc. Post-Sessions scripts should be used for rebuilding indexes and dropping temporary tables.


Performance Optimization Best Practices

We often come across situations where Data Transformation Manager(DTM) takes more time to read from Source or when writing in to a Target. Following standards/guidelines can improve the overall performance.

  • Use Source Qualifier if the Source tables reside in the same schema
  • Make use of Source Qualifier “Filter” properties if the Source type is Relational
  • Use flags as integer, as the integer comparison is faster than the string comparison
  • Use the table with fewer records as the master table for joins
  • While reading from Flat files, define the appropriate data type instead of reading as String and converting
  • Have all ports that are required connected to Subsequent transformations else check whether we can remove these ports


  • Suppress the ORDER BY by appending "--" at the end of the query in Lookup transformations
  • Minimize the number of Update strategies
  • Group by simple columns in transformations like Aggregate, Source qualifier
  • Use Router transformation in place of multiple Filter transformations
  • Turn Off the Verbose logging while moving the mappings to UAT/Production environment
  • For large volume of data drop index before loading and recreate indexes after load
  • For large volumes of records, use bulk load and increase the commit interval to a higher value
  • Set ‘Commit on Target’ in the sessions


These are a few things a beginner should know when starting to code in Informatica. These Informatica best-practice guidelines are a must for efficient repository management and overall project management and tracking.



Top Informatica Questions And Answers


Hey folks, as discussed in our earlier post, this is our follow-up post on Informatica interview questions. Please subscribe to get a free PDF copy with answers, and leave a comment.

Informatica Questions And Answers :-

1)   What is the difference between reusable transformation & shortcut created ?
2)   Which of these is true for mapplets (can you use a source qualifier, can you use a sequence generator, can you use a target)?
3)   What are the ways to recover rows from a failed session ?
4)   Sequence generator, when u move from development to production how will you reset ?
5)   What is global repository ?
6)   How do u set a variable in incremental aggregation ?
7)   What is the basic functionality of pre-load stored procedure ?
8)   What are the different properties for an Informatica Scheduler ?
9)   In a concurrent batch if a session fails, can u start again from that session ?
10)  When you move from development to production then how will you retain a variable value ?
11)  Performance tuning( what was your role) ?
12)  what are conformed dimensions?
13)  Can you avoid static cache in the lookup transformation? I mean can you disable caching in a lookup transformation?
14)  What is the meaning of complex transformation?
15)  In any project how many mappings they will use(minimum)?
16)  How do u implement un-connected Stored procedure In a mapping?
17)  Can you access a repository created in previous version of Informatica?
18)  What happens if the Informatica Server doesn't find the session parameter in the parameter file?
19)  How did you handle performance issues If you have data coming in from multiple sources, just walk through the process of loading it into the target
20)  How will u convert rows into columns or columns into rows
21)  What are the steps involved in the migration from older version to newer version of Informatica Server?
22)  What are the main features of Oracle 11g with context to data warehouse?
24)  How to run a session, which contains mapplet?
25)  Differentiate between Load Manager and DTM?
26)  What are session parameters ? How do you set them?
27)  What are variable ports and list two situations when they can be used?
28)  Describe Informatica Architecture in Detail ?
29)  How does the server recognise the source and target databases.
30)  What is the difference between sequential batch and concurrent batch and which is recommended and why?
31)  A session S_MAP1 is in Repository A. While running the session an error message is displayed:
'server hot-ws270 is connected to Repository B'. What does it mean?
32)  How do you do error handling in Informatica?
33)  How can you run a session without using server manager?
34)  Consider two cases:
1. Power Center Server and Client on the same machine
2. Power Center Sever and Client on the different machines
what is the basic difference in these two setups and which is recommended?
35)  Informatica Server and Client are in different machines. You run a session from the server manager by specifying the source and target databases. It displays an error. You are confident that everything is correct. Then why it is displaying the error?
36)  What is the difference between normal and bulk loading? Which one is recommended?
37)  What is a test load?
38)  How can you use an Oracle sequences in Informatica? You have an Informatica sequence generator transformation also. Which one is better to use?
39)  What are Business Components in Informatica?
40)  What is the advantage of persistent cache? When it should be used.
41)  When will you use SQL override in a lookup transformation?


Ankit Kansal

