Archive for ‘STEM’

November 12, 2014

Philae Has LANDED On Comet 67P! #STEM #CometLanding

by C. Michael Eliasz-Solomon

Philae_picOf_ROSETTA

The ESA's 10-year Rosetta mission has rendezvoused with Comet 67P and released the Philae lander, which has now landed and will commence a year of full studies of this comet, with pictures from the lander and the satellite and data from a variety of on-board instruments.

Comet 67P/Churyumov–Gerasimenko is a very interesting comet just from its shape alone.

The first image is from Philae as it was launched from Rosetta. The bubbles are sun flare (glare) on the RAW image, with the Sun and its rays at the bottom center of the frame.

Rosetta’s Experiments

RosettasExperiments

These 11 experiments will run for about 64 hours before exhausting the batteries. Afterwards, following each battery recharge, they will run for about 1 hour every other day. [Since the landing bounced, the lander is not in an optimal position to charge, so the original estimates may now differ.]

This is a tremendous engineering and science achievement for the ESA: the first to land upon a comet. The rendezvous began on 6-August-2014, and for the last three months Rosetta mapped the comet's contours and emissions, looking for an interesting and viable landing spot. At 11:06 ET the lander touched down and is communicating; images to come.

DLR – Rosetta website

Poland is also a part of ESA now. On the Philae lander is an instrument that was built in Poland. More details are here.

Quote: More than 70 scientific instruments that were built in Poland, has [sic] already been sent to space. The instruments built at the Space Research Center of the Polish Academy of Sciences have studied Titan’s surface, have been on board ESA’s Venus and Mars Express missions, and are now studying the comet 67P/Churyumov–Gerasimenko.

First Image Of Landing Spot From Philae

1stImageFrom67P

June 21, 2014

Is RootsWeb Dead? — #Genealogy, #Cloud

by C. Michael Eliasz-Solomon

MyCloud

Stanczyk is wondering … “Is RootsWeb dead?” Please give me a date and place of death if that is the case. Earlier in the week Ancestry.com had a multi-day outage due to a DDoS. That is a distributed denial of service, whereby a ‘botnet’ makes an overwhelming number of requests of a website until it crashes or ceases to be able to respond to requests.

Since Ancestry hosts RootsWeb, I was thinking perhaps that DDoS took out RootsWeb too. I tweeted @Ancestry and asked if anyone was working on RootsWeb being down, and did not receive any response, so I am blogging in hopes that Ancestry.com will respond. Now I know parts of Ancestry came back a little at a time: searching, then trees, the blog, finally message boards (connected to RootsWeb, n’est-ce pas?). How many days now has RootsWeb been down, and when will Ancestry get around to fixing the problem? Mundia is also down, and perhaps Ancestry will never bring Mundia back, since they had already announced (June 12th, 2014) that it was going away. Likewise, MyCanvas and Genealogy.com are down too, and they were also scheduled for termination.

This is a Cloud problem. When you live upon someone else’s cloud and it crashes, you are down too, and you do not come back until their cloud is reconstituted. In this case, I guess, maybe longer. Maybe you remember the news in 2012 when Amazon’s cloud crashed and took out Netflix, Instagram, Pinterest, and even GitHub (a nod to my Developers guild), or when Amazon’s cloud crashed in 2011, taking out FourSquare and Reddit.

My advice is from the Rolling Stones in 1965:  http://youtu.be/pq3YdpB6N9M [enjoy]. Mick Jagger was way ahead of his time.

August 18, 2013

Speculative STEM Thoughts — #Musings, #Pangaea

by C. Michael Eliasz-Solomon

pangea_politik

Image Source: Popular Science

Popular Science posted an article on Pangaea a week ago or so (8/8/2013). It had a beautiful graphic that caught my eye on what the supercontinent looked like, if we super-impose today’s geo-political boundaries upon the supercontinent for reference. This is the source of my musing today.

Pangaea was a supercontinent that existed during the late Paleozoic and early Mesozoic eras, forming about 300 million years ago. I looked at this image and it immediately evoked a series of questions in my mind:

  • Did any landmasses disappear, and should the map be fuller?
  • For example, Atlantis, would it have appeared near England/France water areas?
  • How about the four great rivers of Genesis?
  • Did islands like Hawaii or the Azores appear after Pangaea?
  • Were there other islands that existed when Pangaea did, but have disappeared since?
  • Would Canada’s Hudson Bay & Great Lakes areas be land or water?
  • What about Caribbean Islands?
  • Does this SuperContinent shed any light on dinosaur fossil finds, if plotted against this map?

Well, those were some of my musings when I saw that map. How about you? Did it give you pause to wonder? Email me!

August 12, 2013

Oracle 12c – Multi-Tenant Databases — #STEM, #Oracle

by C. Michael Eliasz-Solomon

Oracle12c

Oracle 12c

Oracle’s newest database (version 12c) has many new features, the discussion of which is too big for a single blog article (or even a series of blogs). The substantial high-level bulleted list of new features is in the 12c New Features Manual. But the concepts and low-level SQL language details show a much larger change than you might perceive.

Multitenant Database

The new paradigm shift, Multitenant Databases, will stop DBAs pretty quickly, particularly on Windows, where the installer creates a Container DB. Prior to 12c, all databases were Non-Container DBs. With 12c you can create a Non-Container DB or a Container DB. The Container DB can have zero, one, or more Pluggable DBs within. A Non-Container DB can never have a Pluggable DB. So that becomes an upfront database-creation decision.
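As a minimal sketch of what that decision looks like in SQL (all names here are hypothetical), a pluggable database is cloned from the seed inside a Container DB:

```sql
-- Sketch: clone a new pluggable database from the seed
-- (run as a privileged common user in the root container, CDB$ROOT)
CREATE PLUGGABLE DATABASE pdb_sandbox
  ADMIN USER sandbox_adm IDENTIFIED BY a_password
  FILE_NAME_CONVERT = ('/pdbseed/', '/pdb_sandbox/');

-- A newly created PDB starts MOUNTED and must be opened before use
ALTER PLUGGABLE DATABASE pdb_sandbox OPEN;
```

A Non-Container DB, by contrast, never sees this DDL at all.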

You can and you should read the Oracle Intro to Multitenant Databases.

I first realized the Oracle Installer had created a container database for me when I went through the normal process of creating a database user, using the same old techniques I always did, and received an Oracle error: ORA-65096. WHAM, I slammed right into the new paradigm without even knowing it existed. The error description and the necessary action introduced me to another part of the Multitenant Database paradigm: Common User vs. Local User. That quickly led to Containers. Of course, with any new features comes an array of new data dictionary views, like v$pdbs for example. You will also probably use a new SQL*Plus command a LOT: SHOW CON_NAME, to know what container (root or pluggable database) you are connected to. Some DBA commands must be done in the root container (CDB$ROOT). Your pluggable databases (on Windows) will be, by default, PDB$SEED and PDBORCL. Every container database has precisely one seed pluggable database from which all pluggable databases are created.
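A minimal sketch of the Common vs. Local distinction (user names here are hypothetical; PDBORCL is the default Windows pluggable database mentioned above):

```sql
-- In the root container, a plain CREATE USER raises ORA-65096;
-- a common user (visible in every container) needs the C## prefix
CREATE USER c##dba_mike IDENTIFIED BY a_password CONTAINER = ALL;

-- Switch into a pluggable database to create an ordinary local user
ALTER SESSION SET CONTAINER = PDBORCL;
CREATE USER app_user IDENTIFIED BY a_password;

-- Confirm which container the session is connected to
SHOW CON_NAME
```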

This paradigm shift will be a seriously disorienting feeling for long-time DBAs, especially if they were not aware it was coming. Fortunately, there are many DBA bloggers out there sharing their 12c experiences. They were a help to me in gathering the necessary jargon. But it was not until I discovered that Oracle had created a tutorial on Multitenant Databases, and had spent an hour or two playing with it on my newly created sandbox database (on Windows), which was by default a Container DB, that it all came together. This tutorial is an excellent way to jump-start your understanding of the new paradigm.

By the way, I think either the default should be a NON-CONTAINER DB (so you are backwards compatible), or the Oracle Installer needs to make it clear that a CONTAINER DB will require new DBA processes (i.e. a learning curve) and give you an OVERT option to create a NON-CONTAINER DB for backwards compatibility.

Conclusion

Read the Oracle Introduction to Multitenant Databases to understand the concepts. Then immediately work your way through the tutorial in a test database that is a Container DB. Ultimately, I think Container DBs are the way to go. I think this is what you want to do to implement a CLOUD or in a Virtualized Environment.

August 8, 2013

Wordless Wednesday — #Oracle, #12c, #STEM, #GEEK

by C. Michael Eliasz-Solomon

Oracle 12c installed. Getting my #GEEK on this week.

 

Ora12c_Installed

August 7, 2013

Oracle v 12c … vs. Greenplum MPP — #STEM, #Oracle, #Greenplum, #BigData

by C. Michael Eliasz-Solomon

Studying up on Oracle 12c. As usual, there are many new features to recommend migrating or deploying to the new version of Oracle. Last blog, I talked about just a few: ILM, ADO, HEAT_MAP, and how these buzz-worthy acronyms relate to compression inside the database. Before I get into today’s topic, I wanted to say a bit more about Automatic Data Optimization (ADO).

I failed to make clear yesterday that ADO automatically relocates your cold data, or compresses your data, as it ages through its lifecycle. That is the magic. You define the policies, and the database will relocate or compress segments or rows to save space, or to clear space on more expensive hard disk by relocating data to slower, less accessible storage media. Pretty nifty idea.
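A hedged sketch of such a policy (the table and tablespace names are made up): a tiering policy tells the database to move a segment to cheaper storage on its own.

```sql
-- Sketch: relocate the sales_2012 segment to a low-cost tablespace
-- when its current tablespace comes under space pressure
ALTER TABLE sales_2012 ILM ADD POLICY
  TIER TO low_cost_ts;
```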

By the way, you may be wondering what the pattern or meaning is behind these major release versions from Oracle: 8i, 9i, 10g, 11g, 12c? Well, “8i / 9i” were from the era when Oracle was the “Internet” database (you know, like iPhone, or i-<Anything>). Then “10g / 11g” were to be the “Grid”. Grid never really achieved buzz-worthy status. Now we have “12c”. It should not surprise you that we are now in the “Cloud” era. So Oracle’s letters are for: Internet, Grid, and Cloud. Now you know.

That Cloud and yesterday’s ADO  will figure in today’s blog too. You see, I was recently asked about Greenplum. Could I use it? As is my wont, I took a step back and studied the question. Here is my answer.

GreenPlum                    | Oracle
-----------------------------|------------------------------------------
MPP platform                 | MPP – RAC (aka Oracle Parallel Server)
Full SQL (Postgres)          | Full SQL (Oracle, ANSI)
Compression                  | Compression since 11g; ADO/ILM in 12c
B-Tree / Bitmap Indexes      | B-Tree / Bitmap Indexes
JDBC/ODBC/OLE                | JDBC/ODBC/OLE/Pro*C (etc.)
Parallel Query Optimizer     | Parallel Query Optimizer
External Tables              | External Tables
GreenPlum HD (HDFS)          | External Tables using an HDFS

I believe that, as an Oracle expert (28+ years, from v2.0 through 11g inclusive), I could effectively use Greenplum on a project. If you look at the above chart, I think you will see what I am about to explain.

Greenplum is an MPP platform. Very nice architecture. Oracle can sit on top of any architecture (MPP, SMP, or any cluster, or a highly available or fault-tolerant failover set of servers) you can set up.

Both use FULL SQL. That means ANSI compliance, with enhancements (Postgres for Greenplum and Oracle, uh, for Oracle).

B-Tree and Bitmap indexes for both — yawn, old hat. Parallel Query Optimizer: been there, seen that for a while.

Greenplum has JDBC/ODBC/OLE interfaces. Oracle has those too, plus a full complement of embedded, pre-compiled 3GL interfaces (Pro*C and many other languages). Oracle is also well supported by scripting languages like PHP or Perl, which have their own interfaces to Oracle. Slight advantage to Oracle. But the point is, Oracle professionals have done this for more than a decade.

External Tables, too, are a feature of both databases. GreenPlum HD uses the External Table to provide HDFS access in GreenPlum via SQL or other in-database features. Now, I had not previously thought to try to use HDFS with Oracle. But the External Table is precisely the feature I would use. Can it be done? A look at Oracle’s documentation answers that:

LINK: http://docs.oracle.com/cd/E27101_01/doc.10/e27365/directhdfs.htm

CREATE TABLE [schema.]table
   ( column datatype, ... )
   ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER
                        DEFAULT DIRECTORY directory
                        ACCESS PARAMETERS
                            ( PREPROCESSOR HDFS_BIN_PATH:hdfs_stream access_parameters
                        ... )
                        LOCATION (file1,file2...)
                      );

CONCLUSION
So I recommend that companies feel free to utilize Oracle consultants on Greenplum databases. There is an awful lot of overlap that the Oracle specialist can leverage from his/her background and transfer to the Greenplum database.

Of course, for companies without Greenplum, it looks like you can use many of the same features already in Oracle including using HDFS filesystems with External Tables.

So get to that BigData, your friendly Oracle expert can help you.

August 6, 2013

This Jester Has Been Consulting the Oracle — #STEM, #ILM, #ADO, #Oracle

by C. Michael Eliasz-Solomon

Dateline 06 Aug 2013 — 

OracleLogo
If you are the same age as Stanczyk, then when you see the acronym ILM, you probably think of George Lucas’ Industrial Light & Magic. But this article is about the Oracle of Larry Ellison. Oracle released the latest version of its database, 12c, on June 25th, 2013.

So the ILM of this blog is about Information Lifecycle Management. I thought you might need a buzz-word upgrade too — hence this blog. In the latest 12c, Oracle is advancing its ILM paradigm to make Automatic Data Optimization (ADO) a differentiator in data / databases. You see, data storage is eating the planet, or at least the IT budgets of many large companies. That Big Data has to live somewhere, and the costs to house it are very significant. Ergo, Oracle is giving you a way to tier your data storage amongst media of differing cost (high to low), using differing levels of compression depending on your data’s lifecycle. Hence ILM.

ILM_ora

Source:  Oracle Documentation

The idea is that data ages from very active, to less active, to historical, to archival. You ideally would want to place the most active data on the fastest, most reliable, … most costly hardware. Likewise, as the data ages, it would be preferable to place on less costly storage devices or in a more compressed state to save space and costs. How can you do that effectively and without a large staff of IT professionals?  This is where the ADO comes in.

Using your familiar CREATE TABLE or ALTER TABLE commands, you can add an ILM policy to compress or relocate your data. Oracle provides segment-level or even row-level granularity for these policies. How do you know what data is active vs. inactive? Oracle has implemented a HEAT_MAP facility for detecting data usage. HEAT_MAP is a database parameter. Set it on in your init.ora file, or via an ALTER SESSION command in SQL*Plus (to do it on a session basis instead of database-wide):

 ALTER SESSION SET HEAT_MAP=ON;

You can check on things via:

 SELECT * FROM V$HEAT_MAP_SEGMENT;

There is even a supplied PL/SQL package: DBMS_HEAT_MAP.
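To close the loop on the policies mentioned above, adding a row-level compression policy via ALTER TABLE might look like this (the table name is hypothetical):

```sql
-- Sketch: compress rows that have not been modified for 30 days,
-- using Heat Map statistics to decide when that is true
ALTER TABLE orders ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED
  ROW AFTER 30 DAYS OF NO MODIFICATION;
```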

So this is a quick update on ILM, ADO, and HEAT_MAP in Oracle 12c database. Go to the Oracle yourself and see what you can get on this new technology.
