Oracle Blogs | Oracle Warehouse Builder (OWB) Blog

OWB 11gR2 - Creating Interval Partitions

Designing partitioned tables in OWB is done in the table editor on the Partitions tab, which lets you design and deploy complex partitioning strategies. Here we will see how to define an interval partition (see an example in the Oracle Database VLDB and Partitioning documentation here); we will partition the SALES fact table shown below using a date column (TIMES).

[Image: owb_partition_1]

On the Partitions tab there is a table with a tree control inside. There are essentially four steps for this example: define the partition type, define the key columns, define the interval expression, and define the initial partition details. The Add/Add Subpartition/Add Hash Count/Delete buttons become enabled when you select rows, so you can modify the definition.

[Image: owb_partition_2]

Generating the code, we can see that the DDL for the Oracle partitioning clause has been included.

[Image: owb_partition_3]
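For reference, here is a minimal sketch of the kind of interval-partitioned DDL produced for this design (the AMOUNT column and the abbreviated column list are just for illustration; the partition details match the example above);

CREATE TABLE SALES_TAB (
  TIMES  DATE,
  AMOUNT NUMBER
  -- ...plus the rest of the columns
)
PARTITION BY RANGE (TIMES)
INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
(
  PARTITION PART_01 VALUES LESS THAN (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
);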

To create the table definition in OMB you can do something like the following – note there is some double quoting in the expressions.

OMBCREATE TABLE 'SALES_TAB' ADD COLUMN 'TIMES' SET PROPERTIES (DATATYPE) VALUES ('DATE')
# Plus the rest of your table definition....
OMBALTER TABLE 'SALES_TAB' ADD PARTITION_KEY 'TIMES' SET PROPERTIES (TYPE,INTERVAL) VALUES ('RANGE','NUMTOYMINTERVAL(1,''MONTH'')')
OMBALTER TABLE 'SALES_TAB' ADD PARTITION 'PART_01' SET PROPERTIES (VALUES_LESS_THAN) VALUES ('TO_DATE(''01-NOV-2007'',''DD-MON-YYYY'')')

That’s it!


Custom Java Activity for XML Loading

It is always nice to see consultants post real-world scenarios using Oracle tools and technology. Michael Reitsma has posted an entry here on using custom Java activities in process flows for loading large XML files into a data warehouse. There is also a useful insight into using the runtime public views for monitoring the activities and seeing the low-level errors. I hope there will be more postings from Michael sharing his knowledge and experience; thanks Michael for taking the time!

It's definitely a small world - he used a SAX parser (Saxonica) from Michael Kay, who I worked with way back at ICL, where he was driving a next-generation metadata repository...sounds familiar :-) Back then we were using a cool persistent Prolog (Megalog) language from ECRC.

OWB 11gR2 - MySQL and Data types

Thought I'd write a quick post on data type support for MySQL (or any other system type, for that matter) to show some of the intricacies of why things happen. I was helping someone the other day import some tables from MySQL into OWB and some columns were not imported - OWB mentioned they were skipped. This was because the platform definition at the time did not have the expected types (what types, I hear you cry). The platform did have tinyint defined, strangely, and MySQL did have the column defined as tinyint(1), so what was wrong? The piece in the middle, the JDBC driver, was projecting the column as having the BIT datatype. The BIT datatype wasn't in the MySQL platform definition when I first published it on the blog (it is now updated here).

You can retrieve the types supported by a platform using the OMBRETRIEVE PLATFORM command as follows;

[Image: owb_11gr2_mysql_platform_omb]

On the OTN page for the OWB SDK there is a link to some useful utilities that let you visually inspect the platform definition, including the types and type mappings. Here is an example of the types for MySQL with the useful type properties captured (this expert was built using a simple Java component here).

[Image: owb_11gr2_mysql_platform_types]

Some useful pointers to the experts and how things work.

Sting'ing in the rain?

Hopefully this is a good omen for the weather for the appreciation event tonight in San Francisco! Great seeing all of the ODI and OWB folks at OpenWorld this week!

[Image: hq]

Generating XML with Experts

The leveraging XDB post (here) from a few years ago is one of the most actively read posts; since it was written there have been a few more updates to the expert posted within it. One of the updated areas is the generation of XML using the Oracle Database via the expert. The updates include support for generating a single document versus multiple documents, the ability to include/exclude attributes from the content, and the choice of whether to create the attributes as XML properties or XML elements.

A recent query was about how the 'Create XML from objects' menu option gets created. This is added just by enabling the expert on the 'Tables' node in the tree. Here we see the sequence of actions to do this in OWB 11gR2: you must first import the expert's MDL, then add the expert to the tree as follows.

First right-click on the Tables node and select 'Maintain Creation Experts Here' (you can also add any of your own custom experts to parts of the tree);

[Image: owb_11gr2_xmlgen1]

Then in the XML_ETL folder within public experts, enable the CREATE_XML_FROM_OBJECTS expert;

[Image: owb_11gr2_xmlgen2]

That's it! Now you can run the expert from the tree. For example, now click on the Tables node and you will see the 'Create XML from objects' option.

[Image: owb_11gr2_xmlgen3]

This then runs the expert. The dialog was enhanced to include a 'Generate Root' option - this was added so that all generated XML fragments are wrapped in a single element rather than created as separate XML documents. Using this lets you generate one document like;

<AllDepartments>
<Department name=’ACCOUNTING’/>
<Department name=’RESEARCH’/>
</AllDepartments>

rather than multiple documents like (where Department is the root node);

<Department name=’ACCOUNTING’/>
<Department name=’RESEARCH’/>

So let’s select ‘Generate Root’ and see how it works….

[Image: owb_11gr2_xmlgen4]

As before we get to enter the name of the pluggable map that gets generated.

[Image: owb_11gr2_xmlgen5]

We then choose the tables for our document and order them from master to detail; we will have departments with the employees nested inside each department;

[Image: owb_11gr2_xmlgen6]

We can then define the element name for the root (because we selected Generate Root), and for the DEPT and EMP tables.

[Image: owb_11gr2_xmlgen7]

For each table we can then also define the XML element/attribute names for the columns; we can choose whether to exclude attributes, or define an element name for an attribute rather than a property name.

[Image: owb_11gr2_xmlgen8]

For the EMP XML element details we will exclude the foreign key column DEPTNO, and provide nice business names for the properties.

[Image: owb_11gr2_xmlgen9]

After this, the pluggable mapping is generated. We can use the table function from the earlier post together with the pluggable mapping to write the XML to a file; for example, we generate the following from the SCOTT schema.

[Image: owb_11gr2_xmlgen10]
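To give a feel for what the database is doing under the covers, here is a hedged SQL/XML sketch (not the expert's actual generated code) that builds a similar nested document from the SCOTT DEPT and EMP tables using XMLELEMENT, XMLATTRIBUTES and XMLAGG;

SELECT XMLELEMENT("AllDepartments", XMLAGG(dept_xml)) AS xml_doc
  FROM (SELECT XMLELEMENT("Department",
                 XMLATTRIBUTES(d.dname AS "name"),
                 -- nest the employees of each department as child elements
                 (SELECT XMLAGG(XMLELEMENT("Employee",
                                  XMLATTRIBUTES(e.ename AS "name")))
                    FROM emp e
                   WHERE e.deptno = d.deptno)) AS dept_xml
          FROM dept d);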

A fairly simple example of leveraging the database along with experts to generate XML based on some basic inputs from the guided expert.

Example XDB/XML Mapping

The mapping to construct the XML for the example 3-46 in the XDB documentation can be found in the xdb_example_3_46.mdl MDL file. This MDL is for 11.2.0.2, so you will need at least that version.

The example was described in the earlier post on leveraging XDB; the Oracle doc has changed since that older post and example 3.44 is now 3.46 … so by the time you read this it might be different.

The code can be generated, and you can inspect it alongside the SQL in the XDB documentation to see how the different parts have been composed.

The other technique that can be used, described here, is the inline view; you can effectively bury your own SQL in a view that is not deployed to the database, with the code instead generated inline when the view is used in the mapping.

Both of these illustrations are included in the MDL file mentioned above.

Parallel Processing with DBMS_PARALLEL_EXECUTE

Here is another illustration of some of the powerful capabilities of the DBMS_PARALLEL_EXECUTE package in the Oracle database, carrying on from the earlier post here. One of the comments on that post asked how to insert into a differently named table within each chunk, where that insert can also perform parallel DML. This kind of scenario could be interesting for very high-end processing; the tables could be the end-point targets, or tables that are prepared and then swapped in via partition exchange.

The image below shows a variation on the original post where rather than inserting into a specific partition, you write into a specific table.

Your own chunking criteria can drive the whole process; the question was how to drive it from a table using SQL such as 'select distinct level_key, level_key from chunk_table', where chunk_table holds the level_key and the target table name. For example it could contain the following data;

level_key   table_name
1           sales_level1
2           sales_level2
3           sales_level3
4           sales_level4

So the first chunk with level_key 1 will write the results to table sales_level1 etc.

You can use the DBMS_PARALLEL_EXECUTE package as follows to create this functionality. The start/end values have to be of data type NUMBER, so you will have to look up the (target) table name inside your PL/SQL block within the statement provided in the RUN_TASK call.

This block has the query to determine the chunks .....

begin
   begin
     DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'TASK_NAME');
   exception when others then null;
   end;
   DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_NAME');
   DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'TASK_NAME',
   sql_stmt =>'select distinct level_key, level_key from chunk_table', by_rowid => false);
end;

Then the next block will construct and process the tasks.....

begin
   DBMS_PARALLEL_EXECUTE.RUN_TASK (task_name => 'TASK_NAME',
     sql_stmt =>'declare
       s varchar2(16000); vstart_id number := :start_id; vend_id number:= :end_id;
       v_table_name varchar2(30);
       begin
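         -- look up the target table name for this chunk (driven by level_key)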
         select table_name into v_table_name from chunk_table where level_key=vstart_id;
         s:=''insert into ''||v_table_name||'' select /*+ PARALLEL(STG, 8) */ colx from STAGING_TABLE STG
           where level_key =:vstart_id'';
         execute immediate s using vstart_id;
         commit;
     end;',
     language_flag => DBMS_SQL.NATIVE, parallel_level => 2 );
end;

The anonymous PL/SQL block can be any arbitrary code; you can see that the target table name is retrieved, and the example does a parallel INSERT using the hint PARALLEL(STG, 8). Anyway, good to share.

Debugging OWB generated SAP ABAP code executed through RFC

Within OWB if you need to execute ABAP code using RFC you will have to use the SAP Function Module RFC_ABAP_INSTALL_AND_RUN. This function module is specified during the creation of the SAP source location. Usually in a Production environment a copy of this function module is used due to security restrictions.

When you execute the mapping using this Function Module you cannot see the actual ABAP code that is passed to the SAP system. If you want to take a look at the code that will be executed on the SAP system, you need to use a custom Function Module in SAP. The easiest way to do this is to make a copy of the Function Module RFC_ABAP_INSTALL_AND_RUN and call it, say, Z_TEST_FM. Then edit the code of the Function Module in SAP as below:

FUNCTION Z_TEST_FM .

DATA: BEGIN OF listobj OCCURS 20.
INCLUDE STRUCTURE abaplist.
DATA: END OF listobj.

DATA: begin_of_line(72).
DATA: line_end_char(1).
DATA: line_length type I.
DATA: lin(72).

loop at program.
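* return the generated ABAP source lines in WRITES so the caller can display the code instead of running it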
append program-line to WRITES.
endloop.

ENDFUNCTION.

Within OWB, edit the SAP Location and use Z_TEST_FM as the "Execution Function Module" instead of RFC_ABAP_INSTALL_AND_RUN, then register this location. The mapping you want to debug will have to be deployed. After deployment you can right-click the mapping and click on "Start".

 

After clicking Start, the "Input Parameters" screen will be displayed. You can make changes here if you need to. Check that the parameter BACKGROUND is set to "TRUE".

After clicking "OK" the log for the execution will be displayed. The execution of mappings will always fail when you use the above Function Module. Clicking on the "i" (information) icon will display the ABAP code.

 

The ABAP code displayed is the code that is passed through the Function Module. You can also find the code by going through the log files on the server which hosts the OWB repository. The logs will be located under <OWB_HOME>/owb/log.

Patch #12951045 is recommended while using the SAP Connector with OWB 11.2.0.2. For recommended patches for other releases please check with Oracle Support at http://support.oracle.com

OWB 11gR2 - Windows and Linux 64-bit clients available

In addition to the integrated release of OWB in the 11.2.0.3 Oracle database distribution, the following 64-bit standalone clients are now available for download from Oracle Support.

  • OWB 11.2.0.3 Standalone client for Windows 64-bit - 13365470
  • OWB 11.2.0.3 Standalone client for Linux X86 64-bit - 13366327

This is in addition to the previously released 32-bit client on Windows.

  • OWB 11.2.0.3 Standalone client for Windows 32-bit - 13365457

The support document Major OWB 11.2.0.3 New Features Summary has details for OWB 11.2.0.3 which include the following.

Exadata V2 and Oracle Database 11gR2 support capabilities;
  • Support for Oracle Database 11gR2 and Exadata compression types
  • Even more partitioning: Range-Range, Composite Hash/List, System, Reference
  • Transparent Data Encryption support
  • Data Guard support/certification
  • Compiled PL/SQL code generation
Capabilities to support data warehouse ETL best practices;
  • Read and write Oracle Data Pump files with external tables
  • External table preprocessor
  • Partition specific DML
  • Bulk data movement code templates: Oracle, IBM DB2, Microsoft SQL Server to Oracle

Integration with Fusion Middleware capabilities;

  • Support for OWB's Control Center Agent on WLS

Lots of interesting capabilities in 11.2.0.3 and the availability of the 64-bit client I'm sure is welcome news for many!

OWB – SQLLoader, big data files and logging

OWB's flat file loading support is rich; there are a lot of configuration properties for mappings in general, and the properties exposed for SQLLoader mappings are expansive. One of the things OWB does is load the resultant SQLLoader logs into the runtime audit tables through the runtime service; it also scrapes audit information (number of rows etc.) from the file.

The thing to be wary of is whether verbose output is used for feedback - combined with the rows-per-commit property (default is 200), the SQLLoader mapping will write a feedback message every 200 rows committed. Imagine big data files: a log file filled with this kind of message every n rows can get quite large!

With this setting your log will have a lot of 'Commit point reached' messages.

 

The ‘Supress’ properties (a typo I know) can be used to hide this information and make the logs compact, so switch on the Supress Feedback property as follows and the log shrinks, with no verbose output;

 

 

This equates to the SQLLoader SILENT option; if we look at the control file generated by OWB, we see the SILENT=(FEEDBACK) option has been added;
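For illustration, a minimal sketch of the top of such a control file with the option in place (the data file, table and column names here are made up, not what OWB generates);

OPTIONS (SILENT=(FEEDBACK), ROWS=200)
LOAD DATA
INFILE 'sales_big_file.dat'
APPEND INTO TABLE SALES_STG
FIELDS TERMINATED BY ','
(SALE_ID, SALE_DATE DATE "DD-MON-YYYY", AMOUNT)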

 

The tooltip for the misspelled ‘Supress Feedback’ is ‘Suppresses the “commit point reach” messages that normally appear on the screen’. Others have come across this on the forum also.

OWB 11gR2 – Windows and Linux 64-bit clients on OTN

OWB 11gR2 - Dimensional Modeling Paper

At the recent DOAG conference in November 2011, Maren Eschermann from Trivadis presented a session on Dimensional Modeling with OWB 11gR2. Maren has kindly posted an English translation of the presentation and paper, both well worth reading; she provides some valuable opinions on the features and how to get the most from them.

There’s a lot of useful stuff in there, thank you for sharing Maren!

There were a lot of OWB sessions at DOAG. Another interesting geographic statistic comes from the 'owbland' SourceForge project: the download statistics show 2300 downloads of files from that project in the past 14 months, not bad…good job Oleg. The statistics on the top downloading countries are also interesting - no surprises…Germany and the Netherlands!

So quite some activity, if you get time have a read through the paper and presentation on Dimensional Modeling with OWB 11gR2, and thanks again Maren.

ODI Time Generation – SQL as a Source

Came across a nice use of the earlier SQL as a Source KM posting where the source was a time dimension generator. The forum entry is here; the temporary interface is a data generator which, when nested, can be used in an interface to merge or load into a target.

Click on the image to see more …

 

A nice way to capture this within the tool and leverage different integration KMs on the target.
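For reference, a minimal sketch of the kind of time-generator SQL that could be used as the source in such a temporary interface (the column names and the ten-year range are assumptions for illustration, not the poster's exact query);

SELECT TRUNC(SYSDATE) + LEVEL - 1                      AS day_date,
       TO_CHAR(TRUNC(SYSDATE) + LEVEL - 1, 'YYYYMMDD') AS day_key,
       TO_CHAR(TRUNC(SYSDATE) + LEVEL - 1, 'YYYY')     AS year_num,
       TO_CHAR(TRUNC(SYSDATE) + LEVEL - 1, 'MM')       AS month_num
  FROM dual
CONNECT BY LEVEL <= 365 * 10;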

ODI 11g – More accelerator options

A few more options have been added to the interface accelerator that I blogged about earlier, in the initial post here and a later one here: options for position-based matching and for case sensitive/insensitive matching. These were simple changes added to the auto-map class. You can find the latest updates below;

So just like the initial post you will compile and execute the code, but use the different classname OdiInterfaceAccelerator;

java -classpath <cp> OdiInterfaceAccelerator jdbc:oracle:thin:@localhost:1521:ora112 oracle.jdbc.OracleDriver ODI_MASTER mypwd WORKREP1 SUPERVISOR myodipwd STARTERS SDK <icontrol.csv

In the auto-mapper I created some options that can drive the accelerator; it supports the following;

  • positional based match (match columns by position from source to target)
  • exact match case sensitive  (match EMPNO with EMPNO, but not empno with EMPNO)
  • exact match case insensitive (match EMPNO with empno)
  • src/target ends with sensitive/insensitive (match PFX_empno with empno/EMPNO)
  • src/target starts with sensitive/insensitive (match empno_col with empno/EMPNO)

Note, you can also use the "diagrams" in the models to greatly accelerate development if source and target have the same structure - if not, you have to go through the SDK route above if you want to accelerate.

ODI 11g – Interface Builder

In previous blogs such as the one here I illustrated how to use the SDK to perform interface creation using various auto-mapping options for generating 1:1 interfaces - positional matching, like names ignoring case, and so on. Here we will see another example (download OdiInterfaceBuilder.java) showing a different aspect: a control file which describes the interface in simple primitives and drives the creation. The example uses a tab-delimited text file to control the interface creation, but it could easily be changed to drive the creation from Excel, XML or whatever you want to use to capture the design of the interface.

The interface can be as complete or incomplete as you'd like; it could just contain the objects, or it could be concise and semantically complete.

The control file is VERY simple and just like ODI requests the minimal amount of information required. The basic format is as follows;

Directive   Column 2       Column 3       Column 4    Column 5
source      <model>        <datastore>
            (can add many)
target      <model>        <datastore>
mapping     <column>       <expression>
            (can add many)
join        <expression>
            (can add many)
filter      <expression>
            (can repeat many)
lookup      <model>        <datastore>    <alias>     <expression>
            (can add many)

So for example the control file below can define the sources, target, joins, mapping expressions etc;

source    SCOTT    EMP
source    SCOTT    DEPT
target    STG_MODEL_CASE    TGTEMP
mapping    ENAME    UPPER(EMP.ENAME)
mapping    DNAME    UPPER(DEPT.DNAME)
mapping    DEPTNO    ABS(EMP.EMPNO)
join    EMP.DEPTNO = DEPT.DEPTNO
lookup    SCOTT    BONUS    BONUS    BONUS.ENAME = EMP.ENAME
filter    EMP.SAL > 1
mapping    COMM    ABS(BONUS.COMM)

When executed, this generates the interface below with the join, filter, lookup and target expressions from the file.

You should be able to join the dots between the control file sample and the interface design above.

So just like the initial post you will compile and execute the code, but use the different classname OdiInterfaceBuilder;

java -classpath <cp> OdiInterfaceBuilder jdbc:oracle:thin:@localhost:1521:ora112 oracle.jdbc.OracleDriver ODI_MASTER mypwd WORKREP1 SUPERVISOR myodipwd STARTERS SDK DEMO1 <myinterfacecontrolfile.tab

The name of the interface to be created is passed on the command line. You can intersperse other documentation lines between the control lines, so long as they don't clash with the control keywords in the first column.

Anyway some useful snippets of code for those learning the SDK, or for those wanting to capture the design outside and generate ODI Interfaces. Have fun!


OWB Forum Reaches 50k

ODI 11g - Getting Scripting with Groovy

The addition of the Groovy interpreter to ODI Designer now lets you easily script any tasks that you perform repeatedly. The documentation has illustrations here; using the ODI 11g SDK you can encapsulate common tasks in simple Groovy functions.

Groovy is executed by running a script; you can create a new script or open an existing Groovy script.

You will then see a new Groovy window appear in the IDE, and the green execute button is enabled on the toolbar.

I have taken the script defined here, shown below in its more minimal Groovy form, and parameterized it in a Groovy function 'createProject'. I can then call createProject with whatever values I wish for the project and folder to be created.

import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;
import oracle.odi.domain.project.OdiProject;
import oracle.odi.domain.project.OdiFolder;

def createProject(projectName, projectCode, folderName) {
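  // create the project and a folder inside it within a single transaction, then persist and commit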
  txnDef = new DefaultTransactionDefinition();
  tm = odiInstance.getTransactionManager()
  txnStatus = tm.getTransaction(txnDef)
  project = new OdiProject(projectName, projectCode)
  folder = new OdiFolder(project, folderName)
  odiInstance.getTransactionalEntityManager().persist(project)
  tm.commit(txnStatus)
}

createProject("EDW Staging", "EDW", "Initialization")

So in the UI if I execute as follows;

After executing the script I refresh the Designer tree and see my new project.

ODI 11g - Scripting the Model and Topology

Scripting is the ideal mechanism to automate setup and teardown for repeated tasks and for anything else you just want to automate. Here are a couple more illustrations of how to easily construct a model in ODI; the script will also create all of the topology objects. The script uses two methods: createLogicalSchema and createModel. createLogicalSchema creates the logical schema, data server, physical schema and the logical-to-physical schema mapping via a context, all from one function call.

The signatures of these methods look like this;

createLogicalSchema
  • contextCode – the ODI code for the context used to map the logical schema to the physical schema
  • technologyCode – the ODI code for the technology
  • nameForLogicalSchema – the name for the logical schema to create
  • nameForDataserver – the name for the data server to create
  • userNameForAuthentication – the username for the connection to the data server
  • passwordForAuthentication – the password for the connection to the data server
  • urlForAuthentication – the URL for the connection to the data server
  • driverForAuthentication – the JDBC driver for the connection to the data server
  • schemaForAuthentication – the schema to use for the ODI physical schema

createModel
  • logicalSchemaObject – the ODI logical schema object (an instance of OdiLogicalSchema)
  • contextCode – the ODI context code for reverse engineering
  • nameForModel – the name for the model to create
  • codeForModel – the code for the model to create

So with these two methods or variations of them you can easily construct your topology objects and models. For example the call below creates a new model named ORACLE_MODEL and all of the topology objects that will allow me to go straight to reverse engineering when the script has been run.

lschema = createLogicalSchema("GLOBAL", "ORACLE", "ORACLE_EBS", "ORACLE_HQLINUX_DEV", "SCOTT",

    ObfuscatedString.obfuscate("<password>"), "jdbc:oracle:thin:@localhost:1521:orcl", "oracle.jdbc.OracleDriver", "SCOTT")

createModel(lschema, "GLOBAL", "ORACLE_MODEL", "ORACLE_MODEL")

Here is the source code for the script

import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;
import oracle.odi.domain.util.ObfuscatedString;
import oracle.odi.domain.model.OdiModel;
import oracle.odi.domain.topology.OdiLogicalSchema;
import oracle.odi.domain.topology.OdiPhysicalSchema;
import oracle.odi.domain.topology.OdiDataServer;
import oracle.odi.domain.topology.OdiContext;
import oracle.odi.domain.topology.OdiTechnology;
import oracle.odi.domain.topology.OdiContextualSchemaMapping;
import oracle.odi.domain.topology.AbstractOdiDataServer;
import oracle.odi.domain.topology.finder.IOdiContextFinder;
import oracle.odi.domain.topology.finder.IOdiTechnologyFinder;

def createLogicalSchema(contextCode, techCode, schName, dataserverName, userName, password, url, driver, schema) {
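  // create the data server, physical schema, logical schema and context mapping in a single transaction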
  txnDef = new DefaultTransactionDefinition();
  tm = odiInstance.getTransactionManager()
  txnStatus = tm.getTransaction(txnDef)

  contextFinder = (IOdiContextFinder) odiInstance.getTransactionalEntityManager().getFinder(OdiContext.class);
  context = contextFinder.findByCode(contextCode);

  techFinder = (IOdiTechnologyFinder) odiInstance.getTransactionalEntityManager().getFinder(OdiTechnology.class);
  tech = techFinder.findByCode(techCode);

  lschema = new OdiLogicalSchema(tech, schName)
  dserver = new OdiDataServer(tech, dataserverName)
  con = new AbstractOdiDataServer.JdbcSettings(url, driver)
  dserver.setConnectionSettings(con)
  dserver.setUsername(userName)
  dserver.setPassword(password)
  pschema = new OdiPhysicalSchema(dserver)
  pschema.setSchemaName(schema)
  pschema.setWorkSchemaName(schema)
  cschema = new OdiContextualSchemaMapping(context, lschema, pschema)

  odiInstance.getTransactionalEntityManager().persist(lschema)
  odiInstance.getTransactionalEntityManager().persist(dserver)
  tm.commit(txnStatus)
  return lschema
}

def createModel(lschema, contextCode, modName, modCode) {
  txnDef = new DefaultTransactionDefinition();
  tm = odiInstance.getTransactionManager()
  txnStatus = tm.getTransaction(txnDef)

  contextFinder = (IOdiContextFinder) odiInstance.getTransactionalEntityManager().getFinder(OdiContext.class);
  context = contextFinder.findByCode(contextCode);

  mod = new OdiModel(lschema, modName, modCode)
  mod.setReverseContext(context)
  odiInstance.getTransactionalEntityManager().persist(mod)
  tm.commit(txnStatus)
  return mod
}

lschema = createLogicalSchema("GLOBAL", "ORACLE", "ORACLE_EBS", "ORACLE_HQLINUX_DEV", "SCOTT", ObfuscatedString.obfuscate("<password>"),
"jdbc:oracle:thin:@localhost:1521:orcl", "oracle.jdbc.OracleDriver", "SCOTT")

createModel(lschema, "GLOBAL", "ORACLE_MODEL", "ORACLE_MODEL")

Have fun scripting!

ODI 11g – Expert Accelerator for Model Creation

Following on from my post earlier this morning on scripting model and topology creation, tonight I thought I'd add a little UI to make those Groovy functions a little more palatable. In OWB we have experts for capturing user input; with the Groovy console we open up opportunities to build a UI around the scripts in a very easy way - even I can do it ;-)

After a little googling around I found some useful posts on SwingBuilder; the most useful one, which I used for the dialog below, is this one here. The dialog captures user input for the technology and context for the model and logical schema etc. to be created. You can see there are a variety of interesting controls, and it's really easy to do.

The dialog captures the user's input; when OK is pressed I call the functions from the earlier post to create the logical schema (plus all the other objects) and the model. The image below shows what was created: you can see the model (with a typo in the name); the model is Oracle technology and references the logical schema ORACLE_SCOTT (which I named in the dialog above); the logical schema is mapped via the GLOBAL context to the data server ORACLE_SCOTT_DEV (which I also named in the dialog above); and the physical schema used was just the user name I connected with - so if you wanted a different user, the schema name could be added to the dialog.

In a nutshell, one dialog that encapsulates a simpler mechanism for creating a model. You can create your own scripts that use dialogs like this, capture input and process.

You can find the Groovy script for this here: odi_create_model.groovy. Again, I wrapped the user-capture code in a Groovy function that returns the result in a variable, and then simply call the createLogicalSchema and createModel functions from the previous posting. The script supplied above has everything you will need. To execute, use Tools->Groovy->Open Script and then press the green play button on the toolbar.

Have fun.

ODI 11g – Insight to the SDK

This post is a useful index into the ODI SDK that cross-references the type names from the user interface with the SDK class, and also the finder used to get a handle on the object or objects. The volume of content in the SDK might seem a little daunting - there is a lot there - but there is a general pattern to the SDK that I will describe here.

I will also illustrate some basic CRUD operations so you can see how the SDK usage pattern works. The examples are written in Groovy; you can simply run them from the Groovy console in ODI 11.1.1.6.

Entry to the Platform

Object        Finder                                       SDK
odiInstance   odiInstance (groovy variable for console)    OdiInstance

Topology Objects

Object                               Finder                               SDK
Technology                           IOdiTechnologyFinder                 OdiTechnology
Context                              IOdiContextFinder                    OdiContext
Logical Schema                       IOdiLogicalSchemaFinder              OdiLogicalSchema
Data Server                          IOdiDataServerFinder                 OdiDataServer
Physical Schema                      IOdiPhysicalSchemaFinder             OdiPhysicalSchema
Logical Schema to Physical Mapping   IOdiContextualSchemaMappingFinder    OdiContextualSchemaMapping
Logical Agent                        IOdiLogicalAgentFinder               OdiLogicalAgent
Physical Agent                       IOdiPhysicalAgentFinder              OdiPhysicalAgent
Logical Agent to Physical Mapping    IOdiContextualAgentMappingFinder     OdiContextualAgentMapping
Master Repository                    IOdiMasterRepositoryInfoFinder       OdiMasterRepositoryInfo
Work Repository                      IOdiWorkRepositoryInfoFinder         OdiWorkRepositoryInfo

Project Objects

Object          Finder                    SDK
Project         IOdiProjectFinder         OdiProject
Folder          IOdiFolderFinder          OdiFolder
Interface       IOdiInterfaceFinder       OdiInterface
Package         IOdiPackageFinder         OdiPackage
Procedure       IOdiUserProcedureFinder   OdiUserProcedure
User Function   IOdiUserFunctionFinder    OdiUserFunction
Variable        IOdiVariableFinder        OdiVariable
Sequence        IOdiSequenceFinder        OdiSequence
KM              IOdiKMFinder              OdiKM

Load Plans and Scenarios

Object                          Finder                     SDK
Load Plan                       IOdiLoadPlanFinder         OdiLoadPlan
Load Plan and Scenario Folder   IOdiScenarioFolderFinder   OdiScenarioFolder

Model Objects 

Object      Finder                SDK
Model       IOdiModelFinder       OdiModel
Sub Model   IOdiSubModel          OdiSubModel
DataStore   IOdiDataStoreFinder   OdiDataStore
Column      IOdiColumnFinder      OdiColumn
Key         IOdiKeyFinder         OdiKey
Condition   IOdiConditionFinder   OdiCondition

Operator Objects

Object           Finder                    SDK
Session Folder   IOdiSessionFolderFinder   OdiSessionFolder
Session          IOdiSessionFinder         OdiSession
Schedule         –                         OdiSchedule

 

How to Create an Object?

Here is a simple example to create a project; it uses IOdiEntityManager.persist to persist the object.

import oracle.odi.domain.project.OdiProject;
import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

txnDef = new DefaultTransactionDefinition();
tm = odiInstance.getTransactionManager()
txnStatus = tm.getTransaction(txnDef)

project = new OdiProject("Project For Demo", "PROJECT_DEMO")
odiInstance.getTransactionalEntityManager().persist(project)
tm.commit(txnStatus)

How to Update an Object?

This update example uses the methods on the OdiProject object created above to change the project's name; the project is then persisted.

import oracle.odi.domain.project.OdiProject;
import oracle.odi.domain.project.finder.IOdiProjectFinder;
import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

txnDef = new DefaultTransactionDefinition();
tm = odiInstance.getTransactionManager()
txnStatus = tm.getTransaction(txnDef)

prjFinder = (IOdiProjectFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiProject.class);
project = prjFinder.findByCode("PROJECT_DEMO");

project.setName("A Demo Project");

odiInstance.getTransactionalEntityManager().persist(project)
tm.commit(txnStatus)

How to Delete an Object?

Here is a simple example to delete all of the sessions; it uses IOdiEntityManager.remove to delete each object.

import oracle.odi.domain.runtime.session.finder.IOdiSessionFinder;
import oracle.odi.domain.runtime.session.OdiSession;
import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

txnDef = new DefaultTransactionDefinition();
tm = odiInstance.getTransactionManager()
txnStatus = tm.getTransaction(txnDef)

sessFinder = (IOdiSessionFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiSession.class);
sessc = sessFinder.findAll();
sessItr = sessc.iterator()
while (sessItr.hasNext()) {
  sess = (OdiSession) sessItr.next()
  odiInstance.getTransactionalEntityManager().remove(sess)
}
tm.commit(txnStatus)

This isn't an all-encompassing summary of the SDK, but it covers a lot of the content to give you a good handle on the objects and how they work. For details of how specific complex objects are created via the SDK, it's best to look at postings such as the interface builder posting here. Have fun, happy coding!
