Sunday, February 2, 2014


How does one eliminate duplicate rows from a table?

 

Choose one of the following queries to identify or remove duplicate rows from a table, leaving only unique records:

Method 1:

SQL> DELETE FROM table_name A WHERE ROWID > (
  2    SELECT min(rowid) FROM table_name B
  3    WHERE A.key_values = B.key_values);
This deletes every row whose ROWID is bigger than the smallest ROWID for the same key, leaving exactly one row per key.
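Method 1 can be tried out in SQLite, whose implicit rowid plays the same role as Oracle's ROWID. A minimal sketch (table and column names are illustrative, not from the original):

```python
import sqlite3

# In-memory database with a table containing duplicate keys.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (key_value TEXT)")
conn.executemany("INSERT INTO emp VALUES (?)",
                 [("a",), ("a",), ("b",), ("b",), ("b",), ("c",)])

# Keep the row with the smallest rowid for each key; delete the rest.
conn.execute("""
    DELETE FROM emp
    WHERE rowid > (SELECT MIN(b.rowid) FROM emp b
                   WHERE b.key_value = emp.key_value)
""")

remaining = [r[0] for r in
             conn.execute("SELECT key_value FROM emp ORDER BY key_value")]
print(remaining)  # ['a', 'b', 'c']
```

The correlated subquery finds the smallest rowid per key, so only that copy of each key survives the DELETE.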

Method 2:

SQL> create table table_name2 as select distinct * from table_name1;
SQL> drop table table_name1;
SQL> rename table_name2 to table_name1;
This method is usually faster. However, remember to recreate all indexes, constraints, triggers, etc. on the table when done.
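The same copy/drop/rename sequence can be sketched in SQLite (table names as in the SQL above; the data is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name1 (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO table_name1 VALUES (?, ?)",
                 [(1, "x"), (1, "x"), (2, "y")])

# Copy distinct rows, drop the original, rename the copy back.
conn.execute("CREATE TABLE table_name2 AS SELECT DISTINCT * FROM table_name1")
conn.execute("DROP TABLE table_name1")
conn.execute("ALTER TABLE table_name2 RENAME TO table_name1")

rows = list(conn.execute("SELECT id, name FROM table_name1 ORDER BY id"))
print(rows)  # [(1, 'x'), (2, 'y')]
```

Note that, exactly as the text warns, indexes, constraints, and triggers on the original table do not carry over to the copy.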

Method 3:


SQL> delete from my_table t1
  2  where  exists (select 'x' from my_table t2
  3                  where t2.key_value1 = t1.key_value1
  4                    and t2.key_value2 = t1.key_value2
  5                    and t2.rowid      > t1.rowid);
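Method 3 also runs in SQLite; here each row is deleted when a duplicate with a larger rowid exists, so the last-inserted copy of each key survives (sample data is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (key_value1 TEXT, key_value2 TEXT)")
conn.executemany("INSERT INTO my_table VALUES (?, ?)",
                 [("a", "1"), ("a", "1"), ("b", "2")])

# Delete any row for which a duplicate with a larger rowid exists.
conn.execute("""
    DELETE FROM my_table
    WHERE EXISTS (SELECT 'x' FROM my_table t2
                  WHERE t2.key_value1 = my_table.key_value1
                    AND t2.key_value2 = my_table.key_value2
                    AND t2.rowid      > my_table.rowid)
""")

dup_free = list(conn.execute(
    "SELECT key_value1, key_value2 FROM my_table ORDER BY key_value1"))
print(dup_free)  # [('a', '1'), ('b', '2')]
```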

1)    Consider a table (t) where each row has two integer values, x and y.

How can you find the largest of x and y for each row?

     
  Id    x    y
  001   1    2
  002   4    3
  003   5    5

       ANSWER: SELECT id, x, y, (x+y+ABS(x-y))/2 FROM t;

      ORACLE ANSWER:  SELECT GREATEST(x,y) FROM t;
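The portable answer rests on the identity max(x, y) = (x + y + |x - y|) / 2: adding |x - y| to the sum restores exactly the amount by which the smaller value falls short of the larger. A quick check against the sample rows:

```python
# Verify the arithmetic identity behind the portable SQL answer.
def max_via_abs(x: int, y: int) -> int:
    return (x + y + abs(x - y)) // 2

rows = [(1, 2), (4, 3), (5, 5)]  # the (x, y) pairs from the table above
result = [max_via_abs(x, y) for x, y in rows]
print(result)  # [2, 4, 5] -- agrees with max() row by row
```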



2)       Consider this table:

  Id    x    y    z
  001   1    2    3
  002   4    3    2
  003   5    5    4

Using standard SQL, without any proprietary SQL code, how would you find the max of three fields?


ANSWER: SELECT p.id, p.x, p.y, p.z, theMax.b
FROM UNIT_TEST.TEST p
INNER JOIN  (SELECT id, MAX(field) AS b FROM (SELECT id, x AS field FROM UNIT_TEST.TEST
                                              UNION
                                              SELECT id, y AS field FROM UNIT_TEST.TEST
                                              UNION
                                              SELECT id, z AS field FROM UNIT_TEST.TEST
                                              ) AS t GROUP BY id) AS theMax ON p.id = theMax.id;
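The UNION/GROUP BY answer can be verified against the sample data in SQLite (a plain table name TEST stands in for UNIT_TEST.TEST here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TEST (id TEXT, x INTEGER, y INTEGER, z INTEGER)")
conn.executemany("INSERT INTO TEST VALUES (?, ?, ?, ?)",
                 [("001", 1, 2, 3), ("002", 4, 3, 2), ("003", 5, 5, 4)])

# Unpivot x, y, z into one column per id, then take the max per id.
query = """
    SELECT p.id, p.x, p.y, p.z, theMax.b
    FROM TEST p
    INNER JOIN (SELECT id, MAX(field) AS b
                FROM (SELECT id, x AS field FROM TEST
                      UNION
                      SELECT id, y AS field FROM TEST
                      UNION
                      SELECT id, z AS field FROM TEST) AS t
                GROUP BY id) AS theMax ON p.id = theMax.id
    ORDER BY p.id
"""
maxes = [row[4] for row in conn.execute(query)]
print(maxes)  # [3, 4, 5]
```

One subtlety: UNION removes duplicate (id, value) pairs, which is harmless here because MAX only needs each distinct value once.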



http://searchsoa.techtarget.com/tip/REST-vs-SOAP-How-to-choose-the-best-Web-service

http://blog.manishchhabra.com/2013/04/rest-and-soap-web-services-analogy/


Why use multiple reducers?



What complex data type formats have you worked with?

When do we use a custom Writable?

If the patient file comes with 13 attributes per patient and you want to perform a small calculation on every attribute, what is your approach?


Which approach is fastest: Java MapReduce, Hive, or Pig?

MapReduce
     Strengths:
          works on both structured and unstructured data.
          good for writing complex business logic.
     Weaknesses:
          long development time.
          hard to achieve join functionality.
Hive
     Strengths:
          less development time.
          suitable for ad-hoc analysis.
          easy for joins.
     Weaknesses:
          not easy for complex business logic.
          handles only structured data.
Pig
     Strengths:
          works on both structured and unstructured data.
          joins are easily written.
     Weaknesses:
          a new language to learn.
          ultimately converted into MapReduce.







Copyright by Alpesh. Powered by Blogger.