Posts tagged ‘ADO’

August 7, 2013

Oracle v 12c … vs. Greenplum MPP — #STEM, #Oracle, #Greenplum, #BigData

by C. Michael Eliasz-Solomon

Studying up on Oracle 12c. As usual, there are many new features that recommend migrating or deploying to the new version of Oracle. In the last blog, I talked about just a few: ILM, ADO, HEAT_MAP, and how these buzz-worthy acronyms relate to compression inside the database. Before I get into today’s topic, I wanted to say a bit more about Automatic Data Optimization (ADO).

I failed to make clear yesterday that ADO automatically relocates or compresses your cold data as it ages through its lifecycle. That is the magic. You define the policies, and the database will relocate or compress segments or rows to save space, or to clear space on more expensive disk by moving data to slower, less frequently accessed storage media. Pretty nifty idea.
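To make that concrete, here is a minimal sketch of what such policies look like (the table name ORDERS and the tablespace CHEAP_SATA_TBS are hypothetical; the statements use the ILM clause of ALTER TABLE as I read it in the 12c documentation):

-- Compress rows that have gone 30 days without modification.
ALTER TABLE orders ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED ROW
  AFTER 30 DAYS OF NO MODIFICATION;

-- Apply heavier, segment-level compression after 6 months of no access.
ALTER TABLE orders ILM ADD POLICY
  COMPRESS FOR ARCHIVE HIGH SEGMENT
  AFTER 6 MONTHS OF NO ACCESS;

-- Relocate the segment to cheaper storage when its current tablespace fills up.
ALTER TABLE orders ILM ADD POLICY
  TIER TO cheap_sata_tbs;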

By the way, you may be wondering about 8i, 9i, 10g, 11g, 12c: what is the pattern or meaning of these major release versions from Oracle? Well, “8i / 9i” were from the era when Oracle was the “Internet” database (you know, like iPhone, or i-<Anything>). Then “10g / 11g” were to be the “Grid”. Grid never really achieved buzz-worthy status. Now we have “12c”. It should not surprise you that we are now in the “Cloud” era. So Oracle’s letters stand for Internet, Grid, and Cloud. Now you know.

That Cloud and yesterday’s ADO will figure in today’s blog too. You see, I was recently asked about Greenplum. Could I use it? As is my wont, I took a step back and studied the question. Here is my answer, feature by feature:

| Greenplum | Oracle |
|---|---|
| MPP platform | MPP – RAC (aka Oracle Parallel Server) |
| Full SQL (Postgres) | Full SQL (Oracle, ANSI) |
| Compression | Compression since 11g; ADO/ILM in 12c |
| B-Tree / Bitmap indexes | B-Tree / Bitmap indexes |
| JDBC/ODBC/OLE | JDBC/ODBC/OLE/Pro*C (etc.) |
| Parallel query optimizer | Parallel query optimizer |
| External tables | External tables |
| Greenplum HD (HDFS) | External tables using an HDFS |

I believe that, as an Oracle expert (28+ years, from v2.0 through 11g inclusive), I could effectively use Greenplum on a project. If you look at the above chart, I think you will see what I am about to explain.

Greenplum is an MPP platform. Very nice architecture. Oracle can sit on top of just about any architecture (MPP, SMP, or any clustered, highly available, or fault-tolerant failover set of servers) you can set up.

Both offer full SQL. That means ANSI compliance plus enhancements (Postgres’s for Greenplum and Oracle’s, uh, for Oracle).

B-tree and bitmap indexes in both: yawn, old hat. Parallel query optimizers: been there, seen that for a while.

Greenplum has JDBC/ODBC/OLE interfaces. Oracle has those too, plus a full complement of embedded, pre-compiled 3GL interfaces such as Pro*C (and many other languages). Oracle is also well supported by scripting languages like PHP and Perl, which have their own interfaces to Oracle. Slight advantage to Oracle. But the point is, Oracle professionals have been doing this for more than a decade.

External tables, too, are a feature in both databases. Greenplum HD uses external tables to provide HDFS access in Greenplum via SQL and other in-database features. Now, I had not previously thought to try to use HDFS with Oracle, but the external table is precisely the feature I would use. Can it be done? A look at Oracle’s documentation answers that:

LINK: http://docs.oracle.com/cd/E27101_01/doc.10/e27365/directhdfs.htm

CREATE TABLE [schema.]table
   ( column datatype, ... )
   ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER
                        DEFAULT DIRECTORY directory
                        ACCESS PARAMETERS
                            ( PREPROCESSOR HDFS_BIN_PATH:hdfs_stream access_parameters
                        ... )
                        LOCATION (file1,file2...)
                      );
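Filled in, a sketch might look like the following. Every name here is hypothetical, and it assumes the hdfs_stream preprocessor shipped with Oracle’s HDFS connector and the location file(s) it generates are already in place:

-- External table whose data lives in HDFS; the location file was produced
-- by the connector and points at the actual HDFS files.
CREATE TABLE sales_hdfs_ext
   ( sale_id    NUMBER,
     product    VARCHAR2(40),
     amount     NUMBER(10,2) )
   ORGANIZATION EXTERNAL
   ( TYPE ORACLE_LOADER
     DEFAULT DIRECTORY sales_ext_dir              -- an existing Oracle DIRECTORY object
     ACCESS PARAMETERS
       ( RECORDS DELIMITED BY NEWLINE
         PREPROCESSOR HDFS_BIN_PATH:hdfs_stream   -- streams HDFS content to the access driver
         FIELDS TERMINATED BY ','
         MISSING FIELD VALUES ARE NULL )
     LOCATION ('sales_hdfs1.loc') )
   REJECT LIMIT UNLIMITED;

-- Once defined, the HDFS-resident data is queryable like any other table:
SELECT COUNT(*), SUM(amount) FROM sales_hdfs_ext;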

CONCLUSION
So I recommend that companies feel free to utilize Oracle consultants on Greenplum databases. There is an awful lot of overlap that the Oracle specialist can leverage from his/her background and transfer to the Greenplum database.

Of course, for companies without Greenplum, it looks like you can use many of the same features already in Oracle, including access to HDFS file systems through external tables.

So get to that Big Data; your friendly Oracle expert can help you.

August 6, 2013

This Jester Has Been Consulting the Oracle — #STEM, #ILM, #ADO, #Oracle

by C. Michael Eliasz-Solomon

Dateline 06 Aug 2013 — 

If you are the same age as Stanczyk, then when you see the acronym ILM, you probably think of George Lucas’ Industrial Light & Magic. But this article is about the Oracle of Larry Ellison. Oracle released the latest version of its database, 12c, on June 25th, 2013.

So the ILM of this blog is about Information Lifecycle Management. I thought you might need a buzz-word upgrade too; hence this blog. In the latest 12c, Oracle is advancing its ILM paradigm to make Automatic Data Optimization (ADO) a differentiator in data and databases. You see, data storage is eating the planet, or at least the IT budgets of many large companies. That Big Data has to live somewhere, and the cost of housing it is very significant. Ergo, Oracle is giving you a way to tier your data storage across media of differing cost (high to low) and to use differing levels of compression, depending on your data’s lifecycle. Hence ILM.

[Figure: Oracle ILM storage tiering diagram. Source: Oracle Documentation]

The idea is that data ages from very active, to less active, to historical, to archival. Ideally, you would want to place the most active data on the fastest, most reliable, and most costly hardware. Likewise, as the data ages, it would be preferable to move it to less costly storage devices or into a more compressed state to save space and money. How can you do that effectively, and without a large staff of IT professionals? This is where ADO comes in.

Using your familiar CREATE TABLE or ALTER TABLE commands, you can add an ILM policy to compress or relocate your data. Oracle provides segment-level or even row-level granularity for these policies. How do you know which data is active and which is inactive? Oracle has implemented a HEAT_MAP facility for detecting data usage. HEAT_MAP is a database parameter. Set it on in your init.ora file, or via an ALTER SESSION command in SQL*Plus (to do it on a session basis instead of database-wide):

 ALTER SESSION SET HEAT_MAP=ON;
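To turn it on database-wide instead, something like the following should work (assuming you are running with a server parameter file, so the setting persists across restarts):

 ALTER SYSTEM SET HEAT_MAP = ON SCOPE = BOTH;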

You can check on things via:

 SELECT * FROM V$HEAT_MAP_SEGMENT;

There is even a PL/SQL stored package: DBMS_HEAT_MAP.
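Once tracking has been on for a while, the persisted statistics can also be pulled from the data dictionary. A sketch, assuming the 12c dictionary view DBA_HEAT_MAP_SEGMENT (the table name SALES is hypothetical):

-- Last write, read, full-scan, and lookup-scan times recorded per segment.
SELECT object_name,
       segment_write_time,
       segment_read_time,
       full_scan,
       lookup_scan
  FROM dba_heat_map_segment
 WHERE object_name = 'SALES';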

So this is a quick update on ILM, ADO, and HEAT_MAP in the Oracle 12c database. Go to the Oracle yourself and see what you can get on this new technology.
