Using the debugger in Oracle 12c SQL Developer

One of the significant features of Oracle’s SQL Developer client is the built-in debugger for PL/SQL stored objects: PL/SQL stored procedures, stored functions, and triggers. In this tutorial we describe how to use the SQL Developer 4.1.0 debugger with an Oracle 12.1.0.1 server. Setting up the correct permissions and authorizations so that development staff can use the tool is more complex with Oracle 12c than with Oracle 11g, and I will go into some of the gory detail near the end of this article. Nonetheless – here we go.

Step 1: Create a PL/SQL function or procedure. Consider this simple example, “SQUARE”, that returns the square of the given INTEGER value:

CREATE OR REPLACE FUNCTION SQUARE( X_IN IN INTEGER ) RETURN INTEGER
  IS
    result INTEGER := 0;
  BEGIN
    SELECT POWER( X_IN, 2.0 ) INTO result FROM dual;
    RETURN result;
  EXCEPTION
    WHEN VALUE_ERROR THEN
      DBMS_OUTPUT.PUT_LINE('Value: ' || TO_CHAR(X_IN) || ' caused an overflow error');
      RETURN result;
    WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE('An unknown exception occurred in function SQUARE');
      RETURN result;
  END SQUARE;

[Screenshot: the SQUARE function in the SQL Developer editor]
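Before moving on to the debugger, it’s worth a quick sanity check that the function compiles and runs; a minimal test call (any integer argument will do) is:

SELECT SQUARE( 4 ) AS squared FROM dual;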

Step 2: Note the existence of SQUARE in the left sidebar of the SQL Developer client:

[Screenshot: SQUARE in the left sidebar]

Step 3: Compile the SQUARE function in debug mode, by right-clicking the SQUARE function and selecting “Compile for Debug” from the pop-up menu.

[Screenshot: compiling SQUARE in debug mode]

Step 4: Right-click again on the SQUARE function in the left sidebar and choose the “Edit” menu item. This creates a new edit window within the SQL Developer client containing just the code of this procedure.

Step 5: Within the editor window for the SQUARE function, right-click in the line number area to toggle a suitable breakpoint – for example, one could set a breakpoint on the SELECT statement that computes the square of the result:

[Screenshot: toggling a breakpoint]

[Screenshot: a breakpoint set on the SELECT statement]

Step 6: Within SQL Developer, open another SQL window, and execute the following procedure:

EXECUTE ADMIN.ENABLE_DEBUGGING();

Note that the parentheses () here are mandatory. If you have enabled the DBMS_OUTPUT pane, you’ll see the result of the procedure in that window, which verifies that Java access control for debugging has been enabled for your userid and current IP address:

[Screenshot: calling ENABLE_DEBUGGING]

You can execute the ADMIN.ENABLE_DEBUGGING() procedure at any time prior to starting your use of the SQL Developer debugging tools. See the “Oracle 12c Debugging Setup” section at the end of this post for details of the ENABLE_DEBUGGING() procedure.

Step 7: Return to the SQL window for the SQUARE() function. Click on the “ladybug” icon in the menu bar above the source code, which displays this dialog:

[Screenshot: the debugging wizard dialog]

The dialog is a debugging wizard that presents you with a PL/SQL program to invoke the SQUARE function with the value “1”. If you click “OK” you will execute the program, which will invoke the SQUARE function and halt execution at the breakpoint located within the function at line 5 (the SELECT statement). The SQL Developer client will look like this:

[Screenshot: execution halted at the breakpoint]

Step 8: To control execution, you can use the command icons located just above the debugging log at the bottom of the SQL Developer window, or you can use the following function keys:

  • Resume: F9
  • Step out: SHIFT-F7
  • Step into: F7
  • Step over: F8

As with other debuggers, you can use the Smart Data and Data panes to view the values of variables at each step as the function executes:

[Screenshot: viewing data values in the Data pane]

Step 9: If you press F8 several times in succession, you’ll exit the SQUARE function and return to the anonymous PL/SQL block that was generated for you by the DEBUG wizard – the PL/SQL block is now in a separate pane in the SQL Developer editor:

[Screenshot: debugging the anonymous block]

Now you can step through the anonymous block and verify that SQUARE actually returned the correct value (1).

Step 10: To debug your own anonymous PL/SQL block, create a PL/SQL function or procedure that the block calls; once the debugger returns from that function or procedure, you can step through the anonymous block itself to verify its execution semantics. Note: there can be an issue when replacing the debugger wizard’s anonymous block code; see the last section of this article.

The simplest, most straightforward PL/SQL stored procedure is the example below, which I’ve called DO_NOTHING():

CREATE OR REPLACE PROCEDURE DO_NOTHING
IS
BEGIN
  NULL;
END;

Compile DO_NOTHING in debug mode within SQL Developer by following these steps:

  1. display DO_NOTHING in your list of procedures;
  2. right-click; and
  3. select “Compile for Debug”.

You can then open the DO_NOTHING procedure in a separate pane in SQL Developer, and set a breakpoint on the NULL statement within the DO_NOTHING procedure.

To debug an anonymous block, insert a statement within the anonymous block’s BEGIN section that calls DO_NOTHING(), after ensuring that DO_NOTHING is first compiled in debug mode. You can, for example, cut-and-paste your anonymous block into the debugging wizard once you click on the “ladybug” icon within SQL Developer. Stepping out of the DO_NOTHING procedure when you are within the debugger will permit you to debug your anonymous block.
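As a concrete illustration, here is a minimal sketch of such a block; the block and its variable names are mine, not generated by SQL Developer (see the last section for a caveat about the :v_Return bind variable when pasting your own block into the wizard):

DECLARE
  total INTEGER := 0;
BEGIN
  DO_NOTHING();   -- breakpoint inside DO_NOTHING halts the debugger here
  FOR i IN 1 .. 10 LOOP
    total := total + i;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE( 'Total: ' || TO_CHAR(total) );
END;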

Debugger Setup with Oracle 12c

To permit a user to use the debugger from the SQL Developer client, the user must have the following privileges:

GRANT DEBUG CONNECT SESSION TO <username-or-role>;
GRANT CREATE PROCEDURE TO <username-or-role>;
GRANT EXECUTE ANY PROCEDURE TO <username-or-role>;

The latter privilege isn’t strictly necessary – it can be replaced by specific GRANT statements on the required procedures – but such fine-grained access restrictions make little sense for developers, particularly in a test environment. In addition, you must enable debugging via Java for your specific userid and IP address, a new restriction imposed by Oracle 12c that was not required with Oracle 11g. This is the reason for Step 6 above, the call to the ADMIN.ENABLE_DEBUGGING() procedure. The procedure’s contents are as follows:

CREATE OR REPLACE PROCEDURE ENABLE_DEBUGGING
IS
   ip_address VARCHAR2(40);
BEGIN
   ip_address := SYS_CONTEXT( 'userenv', 'ip_address' );
   DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
      host => ip_address,
      ace  => SYS.XS$ACE_TYPE( privilege_list => SYS.XS$NAME_LIST('JDWP'),
                               principal_name => USER,
                               principal_type => SYS.XS_ACL.PTYPE_DB ) );
   DBMS_OUTPUT.PUT_LINE( 'Added Java debugging privilege for user: ' || USER );
   DBMS_OUTPUT.PUT_LINE( 'At IP address: ' || ip_address );
EXCEPTION
   WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE( 'ENABLE_DEBUGGING: An unknown exception has occurred.' );
      RAISE;
END;

The “ADMIN” account referred to above, which creates the ENABLE_DEBUGGING() procedure, has SYSDBA privileges. Note that authorization to the package DBMS_NETWORK_ACL_ADMIN is granted to the DBA role; to embed the call to DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE() within a stored procedure, the SYS account for the database must explicitly GRANT EXECUTE ON DBMS_NETWORK_ACL_ADMIN TO ADMIN. The procedure, then, permits developers (in my case, students) to add themselves to the server’s Access Control List on demand, and it works even if the student-developers have signed in from remote locations over VPN.
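For reference, the grants involved look like the following sketch, with “ADMIN” standing in for the procedure-owning account and <username-or-role> for the developer accounts, as in the earlier GRANT statements:

-- As SYS: allow ADMIN to call the package from within a stored procedure
GRANT EXECUTE ON DBMS_NETWORK_ACL_ADMIN TO ADMIN;

-- As ADMIN: let developers invoke the procedure on demand
GRANT EXECUTE ON ADMIN.ENABLE_DEBUGGING TO <username-or-role>;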

If the ENABLE_DEBUGGING() procedure has been called and is working correctly, when you invoke the debugger by clicking OK on the debugging wizard dialog, you should see the following lines in the debugging log that will appear on your screen (the IP address and port number will differ):

Executing PL/SQL: ALTER SESSION SET PLSQL_DEBUG=TRUE
Executing PL/SQL: CALL DBMS_DEBUG_JDWP.CONNECT_TCP('10.191.59.133', '50504')
Debugger accepted connection from database on port 3480.

If the ENABLE_DEBUGGING() procedure has not worked correctly, you will see an error similar to the following:

Connecting to the database Oracle IT Server - gpaulley.
Executing PL/SQL: ALTER SESSION SET PLSQL_DEBUG=TRUE
Executing PL/SQL: CALL DBMS_DEBUG_JDWP.CONNECT_TCP('10.191.59.133', '50504')
 
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at "SYS.DBMS_DEBUG_JDWP", line 68
ORA-06512: at line 1
Process exited.
Disconnecting from the database Oracle IT Server - gpaulley.

If you encounter error ORA-24247 after executing the ENABLE_DEBUGGING() procedure, please contact your instructor.
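If the error persists, it can also be useful to inspect the server’s host access control entries directly. A diagnostic query along these lines – a sketch, assuming a DBA connection; DBA_HOST_ACES is the 12c view populated by APPEND_HOST_ACE – shows whether the JDWP privilege was recorded for your user and IP address:

SELECT host, principal, privilege
  FROM DBA_HOST_ACES
 WHERE privilege = 'JDWP';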

Debugging your own anonymous block

If you wish, you can replace the anonymous block generated by the SQL Developer debugger completely with your own anonymous block code – as long as it calls a compiled-for-debug PL/SQL procedure or function, such as DO_NOTHING, that has a breakpoint set, you will be able to debug your own anonymous block code as if it were a PL/SQL stored function or procedure.

There is a glitch, however, in the current implementation of the debugging wizard in the SQL Developer 4.1.0 and 4.1.1 clients. Currently, the SQL Developer client requires the anonymous block to assign a value to a specific “bind variable”, v_Return, within the block. For example, the default code generated to debug the SQUARE stored function looks like this:

DECLARE
  X_IN NUMBER;
  v_Return NUMBER;
BEGIN
  X_IN := 1;
 
  v_Return := SQUARE(
    X_IN => X_IN
  );
  /* Legacy output: 
DBMS_OUTPUT.PUT_LINE('v_Return = ' || v_Return);
*/ 
  :v_Return := v_Return;
--rollback; 
END;

Note the assignment to the special bind variable :v_Return near the end of the block. This variable must be given a value in your anonymous block to prevent the debugging session from terminating with an obscure error, “Invalid Column Index”, a known problem documented here. From that bug report comes the following description:

This error seems to happen if you are debugging a function (or any subprogram with OUT parameters) and the debug call block does not assign the function’s return value (and/or OUT parameters) to a bind variable.

The default call block generated by SQL Developer for debugging a function ends with a statement like:

:v_Return := v_Return;

assigning the return value from the function call to a bind variable.

If you remove this bind variable entirely (say, because you used the “From File” option to load some existing script for calling the function in question), then you get the “Invalid Column Index” error. Presumably SQL Developer expects to be able to get the return value of the function out of a bind variable, so if there is nothing to bind to it gets an error.

Now, there doesn’t seem to be any very good reason for this, since as far as I can tell you can’t view the return value in any way after completing the debug run (and in fact, in many debug runs you may not even run the function to completion).

If you explicitly assign a value – any value – to the bound variable :v_Return then your anonymous block debugging session should work without error.
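For example, a hand-written block pasted into the wizard might look like this sketch; the computation is arbitrary, and the important line is the final assignment to :v_Return:

DECLARE
  answer NUMBER;
BEGIN
  DO_NOTHING();          -- compiled-for-debug procedure with a breakpoint set
  answer := SQUARE( 7 );
  :v_Return := answer;   -- any value satisfies the wizard's bind variable
END;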

 

Java EE development: the PermGen error

When developing a tiered Java EE database application there are a tremendous number of program errors that will only be discovered at runtime. If one is using Hibernate as the object-relational mapping tool, even JPQL queries embedded within your Java application will (likely) pass Java compilation, because the syntax and semantics of the JPQL query aren’t checked at compile time but only at runtime, when the Hibernate JPA library attempts to convert the JPQL query into an SQL query for the specific DBMS used by the application. In my case that database has recently been Oracle 12c, but the same principle applies to any DBMS.

Two things in particular exacerbate the runtime error issue when working with Java EE. First is the heavy use of Java annotations (or XML files, if you prefer to avoid annotations) for configuring the interactions between the Java application, the ORM mapper (in my case, Hibernate), and the underlying DBMS. Second is the (certain) requirement to use native SQL queries within your application, because it will be the rare application that doesn’t require either (a) an SQL language feature that Hibernate does not support, such as outer joins over derived tables, or (b) a more streamlined, set-oriented result set, such as one that uses SQL analytic functions, that returns a concise result from the server in a single SQL request.

I’ve written about using native SQL queries with JPA and Java annotations previously; a recent blog post by Lukas Eder nicely outlines the issues with native queries and JPA. However, these problems exist even for JDBC applications since (again) there is no compile-time checking of SQL queries.

Consequently, one usually goes through many compile/run/debug cycles in developing a Java EE database application. For this reason it is essential to ensure that the majority of your program code executes within Java try/catch blocks, and that when exceptions are thrown an indication of the error is output. When using GlassFish as the application server, the server’s log files (by default in the C:\glassfish4\glassfish\domains\domain1\logs directory) will contain any error stack traces generated by printStackTrace() methods from anywhere within your application.

Another error that can come up is a known out-of-memory condition with the Java VM, caused by a memory leak in the class loader when deploying Java EE applications to GlassFish repeatedly. When using Eclipse to deploy the application, the error can manifest itself in a couple of different ways, but the end result is that the deployment will fail and it may not be obvious within Eclipse’s IDE that the PermGen error has occurred. However, a look at the GlassFish server’s current log file will normally confirm the problem:

[2015-08-25T13:05:27.893-0400] [glassfish 4.1] [WARNING] [] [javax.enterprise.ejb.container] [tid: _ThreadID=27 _ThreadName=http-listener-1(1)] [timeMillis: 1440522327893] [levelValue: 900]
 
[[javax.ejb.TransactionRolledbackLocalException: Exception thrown from bean: java.lang.OutOfMemoryError: PermGen space
	at com.sun.ejb.containers.EJBContainerTransactionManager.checkExceptionClientTx(EJBContainerTransactionManager.java:665)
	at com.sun.ejb.containers.EJBContainerTransactionManager.postInvokeTx(EJBContainerTransactionManager.java:507)
	at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:4566)
	at com.sun.ejb.containers.StatefulSessionContainer.postInvokeTx(StatefulSessionContainer.java:1853)
	at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:2074)
	at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:2044)
	at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:220)
	at com.sun.ejb.containers.EJBLocalObjectInvocationHandlerDelegate.invoke(EJBLocalObjectInvocationHandlerDelegate.java:88)
	at com.sun.proxy.$Proxy398.open(Unknown Source)

The workaround is simply to bounce the GlassFish server, which is easy to do from within Eclipse using the “Servers” view.

Alas, inconsistencies with SQL queries from within a Java EE application aren’t the only source of runtime errors for developers; there are others. More on another class of such errors in my next post.

 

Teaching critical thinking skills through visual analytics

One way or another – as a university student, an industry practitioner, an industrial researcher, and now as an instructor – I’ve been doing “Computer Science” since 1978. Over the past 37 years I have worked with a variety of different hardware and software technologies, and at least a score of different programming languages. But the one overriding constant in all this work has been the mathematics. Not (merely) calculus, linear algebra and trigonometry – which remain essential for computer graphics – but other branches of mathematics as well, such as discrete mathematics (for understanding recurrence relations and series), numerical analysis (for solving modeling problems with real numbers), numerical linear algebra (for solving problems modeled using vectors or matrices), differential equations (for modeling feedback control), dependency theory (for database normalization), relational algebra and calculus (for database query languages), queuing theory (for understanding the performance of systems, including database systems), logic (for understanding program semantics) and, last but not least, statistics (for understanding trends in a variety of real-world problems, from software defect rates to developer productivity).

All of these areas are important but, except for relational database theory, statistics is the branch of mathematics I have relied upon most heavily during my career. Identifying trends, and being able to offer solid evidence of correlation, is a useful skill that has application in many areas of software development and management.

[Photo: August Showcase]

As an example, I was pleased that Conestoga students Mark Parsons, George Michael, and Alex Farquahar developed a sophisticated set of data visualizations for production line management at a local technical firm in Kitchener and displayed their work at the recent August Showcase, held at Conestoga’s Cambridge campus (see left). The Showcase permits the students judged to have the best Capstone project in their program to compete against projects from other programs within the College to win the College’s Mastercraft award at their convocation.

The term data science has been coined to describe the multi-faceted and multi-disciplinary combination of statistics, algorithms, data mining, analytics, data visualization, and related fields. At the heart of data science, however, are a researcher’s critical thinking skills, necessary to be able to reason about data and distinguish between correlation and causation.

A recent article [1] by Ronald A. Rensink, a faculty member in the departments of Psychology and Computer Science at the University of British Columbia, published in the IEEE magazine Computer Graphics and Applications, presents a very interesting approach to teaching data visualization and analysis to undergraduate students at UBC, one that may well be worth a detailed look by other diploma and undergraduate degree programs across the country. In the article, Rensink describes COGS 303, an upper-level course in his department’s interdisciplinary Cognitive Systems Program. What I find really interesting about COGS 303 is that the course is a precursor to a much more technical course that involves the application of statistical analysis to real data sets. COGS 303, rather, is a course to develop a student’s analytical, research, and critical thinking skills. The topics include analyzing the structure of an argument, understanding assumptions, the role of hypotheses, experiment design, effective communication, and so on. The main text for the course is the volume Asking the Right Questions [2] by M. Neil Browne and Stuart Keeley, which is supplemented by a variety of articles from other authors. Here is the complete syllabus of COGS 303 as presented in the paper:

[Figure: COGS 303 syllabus]

With this background, students then go on to apply these ideas to real-world analytics, using appropriate data mining and visualization tools such as SPSS, the R language, or Tableau. The idea is for COGS 303 to provide the critical thinking skills necessary for students to reason about the data under analysis with much greater understanding, so that the inferences drawn are important and “real”, rather than simply coincidental, and are conveyed in the best possible way.

[1] Ronald A. Rensink (March-April, 2015). Preparing Undergraduates for Visual Analytics. IEEE Computer Graphics and Applications 35(2). Reprinted in IEEE Computing Edge (July 2015), pp. 30-34.

[2] M. N. Browne and S. Keeley (2013). Asking the Right Questions: A Guide to Critical Thinking, 11th Edition. Pearson Education.

 

Performance overhead of SQL user-defined functions

Some time ago I wrote an article that described the performance tradeoffs one encounters when using SQL user-defined functions – sometimes called UDFs or PSM (Persistent Stored Module) functions – from within an SQL query. In that article, I wrote:

A recent email exchange with software consultant Breck Carter regarding the performance of user-defined functions in SQL has prompted me to re-post this article that I originally wrote in 2008. User-defined functions in SQL have their uses; unfortunately their performance characteristics are highly variable, because it is very difficult, if not impossible, for a query optimizer to take their execution characteristics into account when deciding on the best access plan for a given query. For example, a change to the join order lower down in an execution plan may completely alter the ordering of rows in an intermediate result. If a user-defined function sits above the join, then the execution characteristics of the UDF may be negatively impacted, particularly if the ordering of the rows negates any benefit of memoization – the practice of caching prior execution results of the UDF for subsequent re-use, rather than naively re-executing the function each time. The re-execution of a user-defined function in SQL is prohibitively expensive due to the creation and tear-down of the UDF’s execution context (essentially the same as when executing a stored procedure), which destroys memory and CPU cache locality when executing the query plan.

Anticipating the performance of such complex queries in a production environment can be exceedingly difficult, because the performance of the UDF may be affected not only by the size of the underlying tables but also by the amount of correlation between the values referenced within the function, which impacts the benefit of memoization. Testing such queries over small test databases will typically not alert the developer to the possibility of performance issues in a production setting.

With SQL Anywhere, the performance of user-defined SQL functions is also impacted because stored procedures and functions in SQL Anywhere are interpreted, rather than compiled, though this is not really a black-and-white distinction; there are optimizations in SQL Anywhere’s implementation that avoid the complete re-parsing of stored procedures. The tradeoff with this approach is ease-of-use; with SQL Anywhere procedures can be altered at (almost) any point, and can reference objects, such as tables or temporary tables, that do not exist when the procedure or function is created.

In query processing, switching within a plan operator to the execution of a stored module, even if the module is compiled, as with Oracle, is relatively expensive. Precisely how expensive can be difficult to estimate, because of factors like memoization which will have differing properties with different access plans.

An estimate of how expensive UDF stored functions are in relation to “ordinary” processing of a SELECT statement is provided in this Oracle 12c new-feature document, written by Bryn Llewellyn, a Distinguished Product Manager with Oracle’s Database Server Technologies Division. Bryn’s example compares the execution of a SELECT statement projecting four attributes, three of whose values are computed by a CASE expression returning a string:

SELECT
   PK,
   CASE
      WHEN n1 <= 1023 THEN TO_CHAR(n1, '999999')||' byte'
      WHEN n1/1024 <= 1023 THEN TO_CHAR(n1/1024, '999999')||' K'
      WHEN n1/1048576 <= 1023 THEN TO_CHAR(n1/1048576, '999999')||' M'
      WHEN n1/1073741824 <= 1023 THEN TO_CHAR(n1/1073741824, '999999')||' G'
      ELSE TO_CHAR(n1/1099511627776, '999999')||' T'
   END
   "n1",
   CASE
      WHEN n2 <= 1023 THEN TO_CHAR(n2, '999999')||' byte'
      WHEN n2/1024 <= 1023 THEN TO_CHAR(n2/1024, '999999')||' K'
      WHEN n2/1048576 <= 1023 THEN TO_CHAR(n2/1048576, '999999')||' M'
      WHEN n2/1073741824 <= 1023 THEN TO_CHAR(n2/1073741824, '999999')||' G'
      ELSE TO_CHAR(n2/1099511627776, '999999')||' T'
   END
   "n2",
   CASE
      WHEN n3 <= 1023 THEN TO_CHAR(n3, '999999')||' byte'
      WHEN n3/1024 <= 1023 THEN TO_CHAR(n3/1024, '999999')||' K'
      WHEN n3/1048576 <= 1023 THEN TO_CHAR(n3/1048576, '999999')||' M'
      WHEN n3/1073741824 <= 1023 THEN TO_CHAR(n3/1073741824, '999999')||' G'
      ELSE TO_CHAR(n3/1099511627776, '999999')||' T'
   END
   "n3"
FROM t

The contrasting example is semantically equivalent to the query above, but this version uses a SELECT statement that specifies an in-lined user-defined PL/SQL function, declared in a WITH clause, to convert the values:

WITH
   FUNCTION Print(n IN INTEGER) RETURN VARCHAR2 IS
   K CONSTANT NUMBER NOT NULL := 1024;
   M CONSTANT NUMBER NOT NULL := K*K;
   G CONSTANT NUMBER NOT NULL := M*K;
   T CONSTANT NUMBER NOT NULL := G*K;
   BEGIN
     RETURN
        CASE
           WHEN n <= K-1 THEN TO_CHAR(n, '999999')||' byte'
           WHEN n/K <= K-1 THEN TO_CHAR(n/K, '999999')||' K'
           WHEN n/M <= K-1 THEN TO_CHAR(n/M, '999999')||' M'
           WHEN n/G <= K-1 THEN TO_CHAR(n/G, '999999')||' G'
           ELSE TO_CHAR(n/T, '999999')||' T'
        END;
   END Print;
-- Use the user-defined PL/SQL function within the query
SELECT PK,
    Print(n1) "n1",
    Print(n2) "n2",
    Print(n3) "n3"
FROM t

With Oracle 12c, version 12.1, here is the performance comparison:



Execution method                          Speedup
SELECT statement without UDF              5x
Schema-level function using PRAGMA UDF    3.9x
In-lined UDF using the WITH clause        3.8x
Pre-12.1 ordinary schema-level function   1.0 (base case)
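The second row of the table refers to Oracle 12.1’s new PRAGMA UDF, which compiles a schema-level function with the same calling convention used for functions in-lined in a WITH clause. A sketch of the Print function declared this way (my transcription, not code from Bryn’s paper):

CREATE OR REPLACE FUNCTION Print(n IN INTEGER) RETURN VARCHAR2 IS
   PRAGMA UDF;   -- compile for efficient invocation from SQL
   K CONSTANT NUMBER NOT NULL := 1024;
   M CONSTANT NUMBER NOT NULL := K*K;
   G CONSTANT NUMBER NOT NULL := M*K;
   T CONSTANT NUMBER NOT NULL := G*K;
BEGIN
   RETURN
      CASE
         WHEN n <= K-1 THEN TO_CHAR(n, '999999')||' byte'
         WHEN n/K <= K-1 THEN TO_CHAR(n/K, '999999')||' K'
         WHEN n/M <= K-1 THEN TO_CHAR(n/M, '999999')||' M'
         WHEN n/G <= K-1 THEN TO_CHAR(n/G, '999999')||' G'
         ELSE TO_CHAR(n/T, '999999')||' T'
      END;
END Print;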

The take-away from these results is that the in-lined UDF achieves a 3.8x speedup over an ordinary PL/SQL stored function even though its cost includes the compilation of the PL/SQL function. That is a considerable achievement, though it is impossible to know to what extent the compilation cost is amortized per row of the query, since the size of table “t” is unknown. Nonetheless, all database application developers should be aware that using UDFs in SQL statements results in additional query evaluation overhead. There is always going to be an increase in query execution cost, and that cost can vary due to a variety of factors at execution time, including the possible benefits of memoization, the placement of the function within the query’s execution plan, and the query’s degree of parallelism.

 

Book review: Software Architecture in Practice, 3rd edition

[Image: book cover of Software Architecture in Practice]

If you are looking for a single volume to explain software architecture, then I would recommend Software Architecture in Practice [1] by Bass, Clements, and Kazman. Software Architecture in Practice provides a comprehensive, superb, and highly readable overview of the software engineering practices termed “software architecture”, which include the design, analysis, documentation, and implementation of architectures, along with related material on the business goals and quality attributes that influence architecture designs. What makes the work so informative, and so readable, is that the authors do an excellent job in weaving together descriptions and analysis of the many facets of software architecture with their own consulting experience. The result is a text that not only explains software architecture practice in depth, but describes the application of the information from a variety of perspectives, with examples of particular situations to make the descriptions of these software abstractions more concrete to the reader. For instructors, a complete set of PowerPoint slides that cover each chapter is available from Pearson Education.

Bass, Clements, and Kazman define software architecture as follows:

The software architecture of a system is the set of structures needed to reason about the system, which comprise software elements, relations among them, and properties of both.

The set of structures that make up an architecture can be classified into three categories: static structures, such as modules and their decomposition; dynamic structures, such as how components of the system interact at run time; and finally mapping structures that relate software structures to organizational elements, development teams, or their execution environments.

What I particularly enjoyed about the book is its inclusion of several over-arching themes that are woven into the detailed descriptions within each chapter:

  • The authors try very hard to be precise, defining their terminology clearly to avoid confusion, particularly if the reader is already somewhat familiar with the topic from the available literature.
  • The authors make the point that software architecture is fundamentally about software quality: quality in terms of the software system being able to deliver support for the system’s security requirements, correctness requirements, performance requirements, availability requirements, extensibility requirements, and so on. Readers may be familiar with these issues as non-functional requirements; they are front-and-center in this volume, where the authors term them Architecturally Significant Requirements (ASRs).
  • The authors provide numerous examples of placing software architecture in a system development context. This context not only includes the determination of functional requirements from business analysts, but also includes additional constraints such as the development paradigm (e.g. waterfall or agile) being used.
  • The authors go to great lengths to show that a software architecture is much, much more than a single diagram. An architecture will have multiple views of its various elements, some of which are static and some dynamic. These must be documented so that development teams understand where their specific development tasks fit within the system and so that the software architect can continuously monitor the development process to ensure that the project is continuing on the correct course, following the architecture as intended.

As an example, Chapter 15 describes the purpose of software architecture in an agile development context. One might mistakenly believe that agile development should eschew significant planning tasks such as designing a software architecture, but the authors – correctly – point out that a software architecture provides a unifying force – perhaps the only unifying force – with which to combine separate development efforts into a cohesive whole. However, there are, as always, tradeoffs; the more complex the system, the greater the architectural effort required:

[Figure: architectural effort as a function of project complexity]

In summary, with Software Architecture in Practice the authors comprehensively document the design, analysis, and implementation of software architectures that are intended to deliver the following benefits (taken from Chapter 2):

  1. An architecture will inhibit or enable a system’s driving quality attributes.
  2. The decisions made in an architecture allow you to reason about and manage change as the system evolves.
  3. The analysis of an architecture enables early prediction of a system’s qualities.
  4. A documented architecture enhances communication among stakeholders.
  5. The architecture is a carrier of the earliest and hence most fundamental, hardest-to-change design decisions.
  6. An architecture defines a set of constraints on subsequent implementation.
  7. The architecture dictates the structure of an organization, or vice versa.
  8. An architecture can provide the basis for evolutionary prototyping.
  9. An architecture is the key artifact that allows the architect and project manager to reason about cost and schedule.
  10. An architecture can be created as a transferable, reusable model that forms the heart of a product line.
  11. Architecture-based development focuses attention on the assembly of components, rather than simply on their creation.
  12. By restricting design alternatives, architecture channels the creativity of developers, reducing design and system complexity.
  13. An architecture can be the foundation for training a new team member.

I read this book cover-to-cover and I highly recommend it for any software developer. I am now reading a closely-related volume, Documenting Software Architectures: Views and Beyond, 2nd Edition, to better understand additional documentation techniques for describing software architectures to various sets of stakeholders.

[1] Len Bass, Paul Clements, and Rick Kazman (2013). Software Architecture in Practice, 3rd Edition. Addison-Wesley, Upper Saddle River, New Jersey. ISBN 978-0-321-81573-6.

 

Configuring Transparent Data Encryption (TDE) with Oracle 12c

[Image: Oracle 12c security]

I am currently building a Java EE 7 application that will utilize an Oracle 12c database for persistent storage. Because some of the data is sensitive, I decided to enable Oracle’s Transparent Data Encryption (TDE) feature, so that I could encrypt particular columns to guard against disclosure of this sensitive data either through the theft of my computer, or the theft of my backup system. Enabling TDE also permits me to research various facets of Oracle security that I can use in instructing my students in my forthcoming advanced database application programming class.

TDE offers a no-code solution to the “clear text on disk” problem, where relational database data can be rendered vulnerable simply through access to the physical data file storing the data, rather than through the RDBMS in the usual manner, which would require authentication. Encrypting a column is done simply by adding the ENCRYPT clause to a column definition, along with the precise specification of which encryption algorithm should be used (e.g. AES256). Here’s an example:

CREATE TABLE ATHLETE(
	athleteID INTEGER NOT NULL PRIMARY KEY, /* Uses sequence */
	lastName VARCHAR2(40) ENCRYPT USING 'AES256' NOT NULL,
	firstName VARCHAR2(40) ENCRYPT USING 'AES256' NOT NULL,
	gender CHAR(1) ENCRYPT USING 'AES256' NOT NULL,
	handedness CHAR(1),
	DOB DATE ENCRYPT USING 'AES256',
	streetAddress VARCHAR2(50) ENCRYPT USING 'AES256',
	city VARCHAR2(40) ENCRYPT USING 'AES256',
	state_province VARCHAR2(30) ENCRYPT USING 'AES256',
	postalCode VARCHAR2(7) ENCRYPT USING 'AES256',
	country VARCHAR2(30) ENCRYPT USING 'AES256',
	phone VARCHAR2(15) ENCRYPT USING 'AES256',
	mobile VARCHAR2(15) ENCRYPT USING 'AES256',
	email VARCHAR2(100) ENCRYPT USING 'AES256',
	notes VARCHAR2(400) ENCRYPT USING 'AES256'
);

Before one can specify a CREATE TABLE statement with an encrypted column, a user with the SYSKM management role must create a keystore to store the cryptographic keys used in encryption. The keystore is stored in a specific directory, typically a subdirectory under the Oracle network administration directory. As my Oracle 12c installation is on Windows 7 SP 1, and my system ID is ORA12C, my location for the keystore directory, which I called “encryption_keystore”, was:

c:\oracle12c\product\12.1.0\dbhome_1\NETWORK\ADMIN\ora12c\encryption_keystore

For help in configuring TDE with my 12c installation, I referred to:

  • Chapter 11, “Oracle Transparent Data Encryption”, in Oracle Database 12c Security;
  • Configuring Transparent Data Encryption in the online Oracle 12c Database Advanced Security Guide; and
  • a very helpful page from ORACLE-BASE that describes configuring TDE for Oracle 12c installations with pluggable databases.

The three references above are excellent, and explain in detail what has to be done to create a keystore and configure server encryption using the ADMINISTER KEY MANAGEMENT statement. For my purposes, particularly because I’m running Oracle 12c on a laptop, I decided to implement auto-login keystore access so that I could avoid being prompted for the master encryption key each time the server started. As well, to simplify things I decided to implement only a single keystore for my entire Oracle container, rather than separate keystores for every pluggable database.
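For the record, the statement sequence has the following general shape. This is a sketch only – the password is a placeholder, the exact options should be taken from the references above, and the keystore path is the directory given earlier:

-- As a user with SYSKM: create and open the keystore
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE
  'c:\oracle12c\product\12.1.0\dbhome_1\NETWORK\ADMIN\ora12c\encryption_keystore'
  IDENTIFIED BY "keystore-password";
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "keystore-password";

-- Create and activate a master encryption key, keeping a backup
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "keystore-password" WITH BACKUP;

-- Enable auto-login so the server opens the keystore without prompting
ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE
  'c:\oracle12c\product\12.1.0\dbhome_1\NETWORK\ADMIN\ora12c\encryption_keystore'
  IDENTIFIED BY "keystore-password";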

The purpose of this post is to document a TDE configuration issue that sidetracked me for some time; hopefully by documenting this issue here I can save other developers, students, and Oracle database administrators some time when setting up their own installations.

The directory location of the TDE keystore, or “wallet”, has to be placed into the sqlnet.ora file, which is by default located in the \NETWORK\ADMIN directory. After configuring the keystore using the checklists from the references above, my sqlnet.ora file looked as follows:

# sqlnet.ora Network Configuration File: C:\oracle12c\product\12.1.0\dbhome_1\network\admin\sqlnet.ora
# Generated by Oracle configuration tools.
 
# This file is actually generated by netca. But if customers choose to 
# install "Software Only", this file wont exist and without the native 
# authentication, they will not be able to connect to the database on NT.
 
SQLNET.AUTHENTICATION_SERVICES= (NTS)
 
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
 
ENCRYPTION_WALLET_LOCATION=
  (SOURCE=
    (METHOD=FILE)
    (METHOD_DATA=
      (DIRECTORY=c:\oracle12c\product\12.1.0\dbhome_1\NETWORK\ADMIN\ora12c\encryption_keystore\)
    )
  )

The ENCRYPTION_WALLET_LOCATION settings were copied from the Oracle reference pages, substituting my Oracle installation directory and wallet directory for the example given in the documentation. Upon restarting Oracle, however, I could not connect to the server from the SQL Developer client; instead I got an error message that a server connection could not be established. I could properly connect to the database server using the command-line SQLPLUS client, which uses a shared-memory OCI connection to the server. I could therefore verify that my pluggable databases had been started, and I could create tables that included encrypted columns, so my encryption configuration had been successful.

Checking my list of services in Windows’ Control Panel, I found that, sure enough, the Oracle listener had failed to start. Restarting the listener service manually resulted in an immediate failure, but because it runs as a service the error message does not contain a cause. I then attempted to start the Oracle listener manually using the “lsnrctl” utility (ensure the command shell is started with “Run as Administrator”):

[Screenshot: lsnrctl output]

That confirmed the listener startup errors (TNS-12560 and TNS-00530); in digging through the listener logs, the error contained a bit more detail: “could not parse configuration parameters”. Clearly it looked as though my edits to sqlnet.ora were incorrect. Some Google searches led me to this very helpful issue on the Oracle Community Forums that documented the problem: the trailing backslash on the wallet directory was an issue for Windows installations. Additionally, it seemed that the case of the directory name was equally important, including using “C:” rather than “c:”. While a lowercase drive letter appeared to fail, and an uppercase “C” worked, I could not reproduce this behaviour a second time. Finally, my sqlnet.ora file looks like this:

# sqlnet.ora Network Configuration File: C:\oracle12c\product\12.1.0\dbhome_1\network\admin\sqlnet.ora
# Generated by Oracle configuration tools.
 
# This file is actually generated by netca. But if customers choose to 
# install "Software Only", this file wont exist and without the native 
# authentication, they will not be able to connect to the database on NT.
 
SQLNET.AUTHENTICATION_SERVICES= (NTS)
 
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
 
ENCRYPTION_WALLET_LOCATION=
  (SOURCE=
    (METHOD=FILE)
    (METHOD_DATA=
      (DIRECTORY=C:\oracle12c\product\12.1.0\dbhome_1\NETWORK\ADMIN\ora12c\encryption_keystore)
    )
  )

and, with the listener now started, I could continue developing my application.

 

What exactly is thread deadlock?

Performance testing is a key component of system testing, since a non-performant application is often viewed as being just as poor as an application that contains other varieties of defects. Care and attention are required to ensure that even simple processes will not cause contention in a production environment – and sometimes these issues are difficult to discover during testing. Serialization of any resource in a database application may turn into a critical outage, one that may only occur under peak workloads.

This blog post, from 2009, nicely illustrates this point. Note that a significant difference in SQL Anywhere since the article was written is that SQL Anywhere now – by default – supports automatic, dynamic adjustments of the server’s multiprogramming level in an attempt to maximize server throughput, based in part on research undertaken by SAP engineer Mohammed Abouzour and his supervisor, Ken Salem, at the University of Waterloo.

This blog entry is copyright by Sybase, Inc., an SAP Company, and first appeared on Glenn Paulley’s Sybase blog (http://iablog.sybase.com/paulley/) on 2 September 2009. It is reprinted here with permission.

What exactly is thread deadlock?

Thread deadlock is a specific error (SQLCODE -307, SQLSTATE '40W06') that a SQL Anywhere server will return for a specific request. In this post I want to document why and how thread deadlock can occur, and the mechanisms that can be used to diagnose the problem.

SQL Anywhere’s threading architecture

Like other database management systems, SQL Anywhere implements its own threading architecture rather than rely solely on the threading model of the underlying operating system. Because SQL Anywhere supports a wide variety of OS and hardware platforms – Windows, Linux, Windows CE, Sun OS, AIX, HP-UX, and Mac OS/X to name a few – SQL Anywhere will utilize “lightweight” threads (often called fibers) on the operating systems (i.e. Windows, Linux) that support them, and regular OS threads on those OS platforms that do not.

Moreover, in SQL Anywhere the server does not dedicate a thread (fiber) to a specific connection. Rather, a fixed-size pool of server threads is assigned dynamically to tasks as they enter the server for execution. Often a task is an SQL statement from an application or stored procedure, but there are many different types of tasks that a thread can service. Once a task is scheduled on a thread (fiber), that thread is assigned to process that task until the task completes or is cancelled.

Read more…

 

Configuring SonarQube with Oracle 12c

For a while now I’ve wanted to install SonarQube for doing static analysis of my Java EE 7 applications. The current downloadable version of SonarQube is 5.1.1. The SonarQube requirements documentation indicates that SonarQube supports the following RDBMSs for its repository:

  • Microsoft SQL Server 2008 and 2012;
  • Oracle 10g and 11g;
  • MySQL 5.1, 5.5, 5.6, and 5.7;
  • PostgreSQL 8.x and 9.x.

Since I’ve now migrated to Oracle 12c, I thought I would attempt installing SonarQube with Oracle 12c Enterprise Edition 12.1.0.1.0 64-bit on Windows 7.

Setting up the database

SonarQube uses a database system to store metadata about the software projects that it analyzes. Running SonarQube for the first time will create and populate all of the necessary tables; the userid used by the SonarQube process must be able to create various types of objects in the database, including indexes. With Oracle 12c I created a separate pluggable database for Sonarqube. I created a separate user with the DBA role for myself, and a separate user, “sonarqube”, for use by the SonarQube process. To permit the sonarqube account to create all of its necessary objects without issues – there is no documentation outlining precisely what is required – I granted the sonarqube Oracle account a variety of privileges:

GRANT CREATE TABLE TO sonarqube;
GRANT CREATE VIEW, CREATE SYNONYM, CREATE SEQUENCE TO sonarqube;
GRANT CREATE TABLESPACE, ALTER TABLESPACE, DROP TABLESPACE, MANAGE TABLESPACE TO sonarqube;
GRANT CREATE TRIGGER TO sonarqube;
GRANT CREATE PROCEDURE, EXECUTE ANY PROCEDURE TO sonarqube;
GRANT CREATE SESSION, ALTER SESSION TO sonarqube;
GRANT CREATE MATERIALIZED VIEW TO sonarqube;
GRANT CREATE ANY INDEX TO sonarqube;
GRANT CREATE TYPE, EXECUTE ANY TYPE TO sonarqube;
GRANT DEBUG CONNECT SESSION TO sonarqube;
GRANT DEBUG ANY PROCEDURE TO sonarqube;
ALTER USER sonarqube QUOTA UNLIMITED ON SYSTEM;

What also isn’t clear is whether SonarQube uses custom DDL statements for each underlying DBMS, or whether it relies on Hibernate to issue the DDL. SonarQube 5.1.1 itself contains three object-relational mapping toolkits within its software suite: Hibernate 3.3.2, iBatis 3.2.7, and JRuby 1.7.9.

Configuring SonarQube

The SonarQube 5.1.1 install couldn’t be easier – one simply unzips the .zip archive. However, before one can move forward with Oracle 12c there are a number of things to take care of. Most of these involve editing either the sonar.properties or the wrapper.conf file in the /conf directory.

I edited wrapper.conf to explicitly specify the location of my 64-bit Java 7 SDK. Note that one cannot (merely) specify the directory of the JDK; what SonarQube requires is the complete path to java.exe. My edit looks like:

wrapper.java.command=/Java/JDK7_71_SE64/bin/java

The sonar.properties file requires a number of edits. For Oracle 12c, the required changes are the userid/password combination for the sonarqube account, along with the JDBC URL. My installation uses the following:

sonar.jdbc.username=sonarqube
sonar.jdbc.password=XXXXXXX
sonar.jdbc.url=jdbc:oracle:thin:@//localhost:1521/sonarqube

The sonar.jdbc.url line requires some explanation. Oracle 12c supports “servicename” as a variant for a JDBC URL. As I’ve set up a pluggable database for SonarQube, the PDB name “sonarqube” becomes the servicename in the URL. Note that the leading “//” in the URL is not optional.
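If you’re unsure which servicename to use, one way to check – a sketch, assuming a DBA connection to the CDB – is to list the services that each PDB registers with the listener:

SELECT name, pdb FROM v$services;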

I also uncommented a number of settings related to JDBC connection pooling:

#----- Connection pool settings
# The maximum number of active connections that can be allocated
# at the same time, or negative for no limit.
sonar.jdbc.maxActive=50
 
# The maximum number of connections that can remain idle in the
# pool, without extra ones being released, or negative for no limit.
sonar.jdbc.maxIdle=5
 
# The minimum number of connections that can remain idle in the pool,
# without extra ones being created, or zero to create none.
sonar.jdbc.minIdle=2
 
# The maximum number of milliseconds that the pool will wait (when there
# are no available connections) for a connection to be returned before
# throwing an exception, or <= 0 to wait indefinitely.
sonar.jdbc.maxWait=5000

In addition to specifying the Oracle connection parameters, I modified the sonar.properties file in a number of other ways:

  1. I explicitly set the port numbers for SonarQube’s embedded Tomcat server, which with the 5.1.1 SonarQube release is Tomcat 8.0.18:
    sonar.web.port=9000
    sonar.search.port=9005

    The default SonarQube port is 9000, and is properly reserved. However, the default search port with SonarQube is 9001, which conflicts with Microsoft SharePoint; so I set it to 9005 instead.

  2. Logging is turned off by default. Logging should be turned on: otherwise, if you have issues with the installation or setup, you will have no idea what has occurred, because the SonarQube process will (likely) simply fail to start and you won’t know why.
    sonar.path.logs=logs
    sonar.log.rollingPolicy=size:10MB
    sonar.log.maxFiles=7

Finally, I copied my Oracle 12.1.0.1 JDBC driver, ojdbc7.jar, from the Oracle server’s product/12.1.0/dbhome_1/jdbc/lib directory and placed it in the SonarQube directory /extensions/jdbc-driver/oracle as per the installation guide.

With these settings, starting the SonarQube 5.1.1 64-bit executable using the included StartSonar.bat file worked well. The SonarQube process started a JVM instance, connected to the Oracle 12c database, and created all of its tables for project metadata, which I confirmed by connecting to the PDB via SQL Developer. With the message that the SonarQube Tomcat server was up:

[Screenshot: SonarQube server startup messages]
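One quick way to confirm the schema creation from SQL Developer – a sketch, assuming a DBA connection to the sonarqube PDB – is to count the tables owned by the sonarqube account:

SELECT COUNT(*) AS sonar_tables
  FROM all_tables
 WHERE owner = 'SONARQUBE';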

I opened a Chrome browser and tried to get to the SonarQube server’s home page. Alas, what I got was this:

[Screenshot: NullPointerException from the Oracle 12c JDBC driver]

Fortunately, this null pointer exception problem with Oracle’s JDBC driver in method oracle.jdbc.driver.AutoKeyInfo.initMetaDataColumnIndexes is a known issue and is fixed in version 12.1.0.2 of the Oracle JDBC drivers, which has just been released and can be found here. With the patched JDBC driver copied to the SonarQube /extensions directory, I now have a properly-working SonarQube server.

Takeaways:

  • Get the latest patches for Oracle’s JDBC drivers.
  • Turn on logging with your SonarQube installation.
  • There don’t seem to be any issues with using SonarQube 5.1.1 with Oracle 12c.
 

Configuration of Oracle 12c – part deux

In this second article regarding the installation and configuration of Oracle 12c, I’ll discuss the server’s memory configuration and the settings chosen for the default installation. Again, I’m working with Oracle 12c Enterprise Edition Release 12.1.0.1.0 – 64-bit on Windows 7.

Like many, I accepted all of the default configuration options when installing Oracle 12c Enterprise Edition, but I was subsequently surprised to see how much memory the server process was consuming. A quick check confirmed what was going on:

[Screenshot: default Oracle 12c memory usage]

[Screenshot: default SGA sizes]

Right – the server’s SGA (System Global Area) size was set by the installation program to 4832MB, and combined with other (lesser) server memory requirements, the server’s memory footprint was over 5GB, a bit excessive for my Thinkpad. The default configuration options aren’t specified in the Oracle 12c Complete Reference, but a bit of sleuthing found the “managing memory” portion of the online Oracle 12c Database Administrator’s Guide, which reads as follows:

If you create your database with Database Configuration Assistant (DBCA) and choose the basic installation option, automatic memory management is enabled when system memory is less than or equal to 4 gigabytes. When system memory is greater than 4 gigabytes, automatic memory management is disabled, and automatic shared memory management is enabled. If you choose advanced installation, then DBCA enables you to select automatic memory management or automatic shared memory management.

So with Automatic Memory Management (AMM) turned off, Oracle 12c is using the (simpler) Automatic Shared Memory Management (ASMM) to control memory allocation on the server – but the crux of the issue is the default setting for the server’s cache size of 3.8GB, far too large for my needs. The server’s SGA encompasses the bulk of the memory used by the Oracle server, including the buffer pool. In prior releases (8i) of Oracle, the SGA could not change its size dynamically; in 12c that is no longer true, but with AMM turned off the effect was the same. The diagram below is from the online Oracle 12c Database Administrator’s Guide:

[Figure: Oracle 12c memory hierarchy]

Oracle has two different configuration file options, termed PFILE and SPFILE. A PFILE – Parameter File – is simply a text-mode configuration file. In prior releases this text-mode configuration file was universally named init.ora; in my installation it is called init.ora.4272015182352 in directory “c:\oracle12c\admin\ora12c\pfile”, as in my installation I named the Oracle service “ora12c”. Changes to a PFILE are made manually with a text editor and only take effect on server restart. It is important to note that a text-mode PFILE is only used if an encoded SPFILE cannot be found. An SPFILE – Server Parameter File – is an encoded parameter file that can be updated directly by the server when it executes ALTER SYSTEM statements. Oracle also supports CREATE PFILE and CREATE SPFILE statements to create either version from an existing file. Ordinarily an ALTER SYSTEM statement to change the memory configuration affects only the running server; if one specifies the optional SCOPE clause, the SPFILE configuration file can also be modified so that the change persists across server restarts.
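A few illustrative statements – a sketch, assuming an SPFILE is in use; the parameter and value here are examples only:

-- Create a readable text-mode PFILE from the server's current SPFILE
CREATE PFILE FROM SPFILE;

-- Change a parameter in the running instance only, in the SPFILE only, or in both
ALTER SYSTEM SET OPEN_CURSORS = 400 SCOPE=MEMORY;
ALTER SYSTEM SET OPEN_CURSORS = 400 SCOPE=SPFILE;
ALTER SYSTEM SET OPEN_CURSORS = 400 SCOPE=BOTH;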

My first change was to reduce the amount of memory required for buffer pools and for aggregate processing:

ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 100m SCOPE=BOTH;
ALTER SYSTEM SET DB_CACHE_SIZE = 450M SCOPE=BOTH;

Once these were set, I was able to set a much lower MEMORY_TARGET and MEMORY_MAX_TARGET for the server as a whole. After a server restart, my memory consumption for the Oracle 12c server is now considerably reduced:

[Screenshot: Oracle 12c memory usage after tuning]
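The MEMORY_TARGET statements themselves aren’t shown above; they follow the same pattern as the earlier ALTER SYSTEM statements. The sizes below are illustrative only – note that MEMORY_MAX_TARGET is a static parameter, so it can only be changed in the SPFILE, followed by a server restart:

ALTER SYSTEM SET MEMORY_MAX_TARGET = 1G SCOPE=SPFILE;
ALTER SYSTEM SET MEMORY_TARGET = 800M SCOPE=SPFILE;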

I am going to leave these settings for the moment since I plan on installing the SonarQube service on my machine so I can investigate that tool’s utility in doing static analysis of Java projects; it may well be that I can reduce Oracle’s memory footprint even more. Certainly one can also re-configure the Oracle server service so that the database server is started manually, rather than when Windows 7 boots, though it can be inconvenient.

 

Configuration of Oracle 12c – part un

Oracle 12c represents a fairly dramatic departure from Oracle 11g in terms of setup and configuration of the server on personal workstations. The 12c release shipped in 2013, but I had delayed its adoption primarily due to a lack of available reference texts. With that issue now rectified, it’s time to “take the plunge” with Oracle 12c. There is considerable material now available online to aid in the installation and configuration of an Oracle 12c instance; my goal in writing this article is to discuss Oracle 12c from the perspective of an undergraduate or graduate student who is installing their own copy of Oracle 12c for project work. I am working with Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64-bit on Windows 7.

In this article I’ll focus on Oracle’s new pluggable database architecture. In a subsequent article, I’ll discuss Oracle 12c’s server memory usage.

Server installation

The Enterprise Edition of 12c is large – the download for Windows 7 64-bit is two .zip files totalling 2.645GB in size. Once unzipped to an installation directory, one merely executes the “setup.exe” Windows installation executable and the server’s installation image is created in the directory of your choice, along with a number of Windows services: the server itself, TNS listener, batch mode listener, and so on. Aside: While the installation is relatively painless, Oracle’s uninstall of 12c is horrid. The “Oracle Universal Installer” uninstall is a mix of Windows programs firing off scripts executing in command shells, which in my case (just to try it) gave me an unintelligible error message and ended up not doing anything of substance – so that if I really did have to uninstall, I’d have to do it all manually, including updating the Windows registry.

Once the server is installed you can download and install two of the Oracle Developer clients, SQL Developer and SQL Modeler. I had been using SQL Developer 4.0.2 with Oracle 11g, but with 12c you will want to upgrade to the latest SQL Developer client – I am now using version 4.1.0. This newer release uses the Java 8 SDK and supports the new commands and security model implemented in the Oracle 12c server.

Container and pluggable databases

A very significant change from Oracle 11g is that Oracle 12c now implements a “pluggable” database architecture, also called a “multi-tenant architecture”. With Oracle 12c, a server can support multiple “container” databases, created with the CREATE DATABASE statement, and these in turn can support multiple “pluggable” databases with their own schemas and user accounts and authentication, which are created with the separate CREATE PLUGGABLE DATABASE statement. The idea is similar to the “master” database concept supported in Sybase ASE and Microsoft SQL Server, but differs in several respects, including security and authentication. The schematic below is taken from the Oracle 12c Complete Reference manual, p. 891:

[Figure: pluggable database architecture]

With container database support, the security model has changed significantly from Oracle 11g. As part of the installation of an Oracle 12c Enterprise Edition server, you will have to create an administrative account for the default container database, also called a “CDB”. This default account can create additional accounts for the container database, but these CDB account names must begin with the prefix “C##”. One cannot create an “ordinary” table in a container database; rather, the DBA must first create one (or more) “pluggable” databases (PDBs) that are “plugged” into the CDB. Once a pluggable database is created, additional accounts can be created so that users can log in to the pluggable database as would be expected with Oracle 11g. An advantage of multiple pluggable databases is that one can completely isolate the schemas and accounts for different, unrelated applications. A sketch of the corresponding statements appears below.
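This is a sketch only – the database name, user names, passwords, and FILE_NAME_CONVERT paths are illustrative and depend on your installation:

-- From the CDB root: create a common user, which must carry the C## prefix
CREATE USER C##INSTRUCTOR IDENTIFIED BY password CONTAINER = ALL;

-- Create a pluggable database with its own local administrator
CREATE PLUGGABLE DATABASE acadpdb ADMIN USER acad_admin IDENTIFIED BY password
  FILE_NAME_CONVERT = ('\pdbseed\', '\acadpdb\');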

A CDB has two kinds of user accounts: common users and local users. A common user has the same identity in the root and all PDBs – both current and those created in the future. Common users may or may not have the same privileges in each container, however. Common users with administrative privileges can plug in, unplug, or drop PDBs.

Local users, in contrast, exist only in each PDB or in the CDB. They can have the same username but will have different credentials in each container. Having local users facilitates separation of duties across all containers within a CDB.

Oracle 12c Complete Reference, p. 891.

One gotcha with Oracle 12c version 12.1 is that the server will not automatically start pluggable databases when the server is started; only the container database. If one tries to connect to a PDB in this case, one receives the slightly misleading message “ORA-01033 Oracle initialization or shutdown in progress”. The message is misleading because the Oracle server *has* started, and will accept connections to the container database. The issue is that the PDB the application is attempting to connect to has not been started. Starting PDBs can be done by the administrator account when connected to the CDB using the statement

ALTER PLUGGABLE DATABASE ALL OPEN;

To have this occur automatically, one can write a database startup trigger within the container database to explicitly start the required PDB, or all of them:

CREATE OR REPLACE TRIGGER open_pdbs 
  AFTER STARTUP ON DATABASE 
BEGIN 
   EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN'; 
END open_pdbs;

Beginning with version 12.2 this database startup trigger will no longer be required, as 12.2 will support syntax to permit the automatic starting of PDBs when they are created.

In doing the above it was advantageous to update my installed version of SQL Developer from 4.0.2 to 4.1.0. While SQL Developer 4.0.2 worked well with 11g, it lacked support for some of the new commands supported by the 12c server, such as “SHOW PDBS”. Fortunately, the installation of SQL Developer is straightforward, and after being prompted for the location of the Java 8 JDK I’ve had no issues with the new client. Thus far I’ve created two separate pluggable databases, one for academic examples and the other for SonarQube, to ensure that there is no contamination of the SonarQube schema after it has been installed.

 