Friday, August 31, 2007

Pointers

Pointers - Definition

  • Pointers provide an indirect method of accessing variables. Pointers have a reputation for being hard to understand, mostly because of how they are usually taught. Think for a minute about a typical textbook. It will usually have a table of contents, some chapters, and an index. What if you were looking in the 'C' book for information on printf? You'd look up printf in the index, and the index would tell you where the information on printf is located within the book. Conceptually, this is how pointers work! Re-read this analogy if you didn't get it the first time; it is important. You must understand that pointers are indirect references.
  • You may be wondering: what is the point of this? Why don't I just create all variables without the use of pointers? Because sometimes you can't. What if you needed an array of ints but didn't know the size of the array beforehand? What if you needed a string that grew dynamically as the program ran? What if you need variables that persist across function calls without declaring them global? All of these problems are solved through the use of pointers. Pointers are also essential in building larger custom data structures, such as linked lists.
  • So now that you understand how pointers work, let's define them a little better.
    • A pointer when declared is just a reference. DECLARING A POINTER DOES NOT CREATE ANY SPACE FOR THE POINTER TO POINT TO. One of the main uses of pointers is to allocate memory dynamically (dynamic memory allocation).
    • A pointer holds the address of an area of memory. That area may be an ordinary variable, or it may be space allocated at run time from the heap, the region of memory set aside for dynamic allocation while the program runs.

void Pointers

  • A pointer of any type can be assigned to a void pointer. It is sometimes necessary to store, copy, or move pointers without regard to the type they reference.
  • You cannot dereference a void pointer; it must first be converted back to a pointer of a concrete type.
  • Functions such as malloc, free, and memcpy use void pointers.

Wednesday, August 29, 2007

Arrays



In this section, we will create a small C program that generates 10 random numbers and sorts them. To do that, we will use a new variable arrangement called an array.


An array lets you declare and work with a collection of values of the same type. For example, you might want to create a collection of five integers. One way to do it would be to declare five integers directly:
int a, b, c, d, e;
This is okay, but what if you needed a thousand integers? An easier way is to declare an array of five integers:
int a[5];
The five separate integers inside this array are accessed by an index. All arrays start at index zero and go to n-1 in C. Thus, int a[5]; contains five elements. For example:
int a[5];

a[0] = 12;
a[1] = 9;
a[2] = 14;
a[3] = 5;
a[4] = 1;
One of the nice things about array indexing is that you can use a loop to manipulate the index. For example, the following code initializes all of the values in the array to 0:
int a[5];
int i;

for (i=0; i<5; i++)
a[i] = 0;
The following code initializes the values in the array sequentially and then prints them out:
#include <stdio.h>

int main()
{
int a[5];
int i;

for (i=0; i<5; i++)
a[i] = i;
for (i=0; i<5; i++)
printf("a[%d] = %d\n", i, a[i]);
return 0;
}

Arrays are used all the time in C. To understand a common usage, start an editor and enter the following code:

#include <stdio.h>
#define MAX 10
int a[MAX];
int rand_seed=10;
/* from K&R
- returns random number between 0 and 32767.*/
int rand()
{
rand_seed = rand_seed * 1103515245 + 12345;
return (unsigned int)(rand_seed / 65536) % 32768;
}

int main()
{
int i,t,x,y;
/* fill array */

for (i=0; i < MAX; i++)
{
a[i]=rand();
printf("%d\n",a[i]);
} /* more stuff will go here in a minute */
return 0;
}

This code contains several new concepts. The #define line declares a constant named MAX and sets it to 10. Constant names are traditionally written in all caps to make them obvious in the code. The line int a[MAX]; shows you how to declare an array of integers in C. Note that because of the position of the array's declaration, it is global to the entire program. The line int rand_seed=10 also declares a global variable, this time named rand_seed, that is initialized to 10 each time the program begins. This value is the starting seed for the random number code that follows.

In a real random number generator, the seed should be initialized with a random value, such as the system time. Here, the rand function will produce the same values each time you run the program. The line int rand() is a function declaration. The rand function accepts no parameters and returns an integer value. We will learn more about functions later. The four lines that follow implement the rand function. We will ignore them for now. The main function is normal. Four local integers are declared, and the array is filled with 10 random values using a for loop. Note that the array a contains 10 individual integers. You point to a specific integer in the array using square brackets.

So a[0] refers to the first integer in the array, a[1] refers to the second, and so on. The line starting with /* and ending with */ is called a comment. The compiler completely ignores the line. You can place notes to yourself or other programmers in comments. Now add the following code in place of the more stuff ... comment:

/* bubble sort the array */
for (x=0; x < MAX-1; x++)
for (y=0; y < MAX-x-1; y++)
if (a[y] > a[y+1])
{
t=a[y];
a[y]=a[y+1];
a[y+1]=t;
}
/* print sorted array */
printf("--------------------\n");
for (i=0; i < MAX; i++)
printf("%d\n",a[i]);
This code sorts the random values and prints them in sorted order. Each time you run it, you will get the same values. If you would like to change the values that are sorted, change the value of rand_seed each time you run the program.

The only easy way to truly understand what this code is doing is to execute it "by hand." That is, assume MAX is 4 to make it a little more manageable, take out a sheet of paper and pretend you are the computer. Draw the array on your paper and put four random, unsorted values into the array. Execute each line of the sorting section of the code and draw out exactly what happens. You will find that, each time through the inner loop, the larger values in the array are pushed toward the bottom of the array and the smaller values bubble up toward the top.

Quiz: The basics of C++

1. What is the correct value to return to the operating system upon the successful completion of a program?
A. -1
B. 1
C. 0
D. Programs do not return a value.

2. What is the only function all C++ programs must contain?
A. start()
B. system()
C. main()
D. program()

3. What punctuation is used to signal the beginning and end of code blocks?
A. { }
B. -> and <-
C. BEGIN and END
D. ( and )

4. What punctuation ends most lines of C++ code?
A. .
B. ;
C. :
D. '

5. Which of the following is a correct comment?
A. */ Comments */
B. ** Comment **
C. /* Comment */
D. { Comment }

6. Which of the following is not a correct variable type?
A. float
B. real
C. int
D. double

7. Which of the following is the correct operator to compare two variables?
A. :=
B. =
C. equal
D. ==

Tuesday, August 28, 2007

Memory Leak in C Program

Why do we need to test whether a program has a memory leak? Because the size of the available heap is reduced, and the leaked memory cannot be assigned to other processes running simultaneously in a multiprogramming environment, so throughput goes down.

How are we going to know that a leak has occurred?
Possibilities are:

1) The program starts printing garbage values, or further allocations begin to fail.

2) A runtime tool such as Valgrind reports the leak. (The compiler itself generally cannot detect a leak, because the leak happens at run time.)

Introduction to EJB 3.0

Introduction to Enterprise JavaBeans 3.0
by Lynn Munsinger

==============================================================

The EJB 3.0 specification makes it easier than ever to develop Enterprise JavaBeans, perhaps encouraging you to consider developing EJBs for the first time. If that is the case, congratulations, you have successfully avoided the many pitfalls of EJB developers before you, and can enjoy the ease of EJB 3.0 development. But before you start development, you might want to know what Enterprise JavaBeans are and what purpose they serve. This article explains the basics of EJBs and how you can utilize them in a J2EE application.
What is an EJB?

An Enterprise JavaBean (EJB) is a reusable, portable J2EE component. EJBs consist of methods that encapsulate business logic. For example, an EJB may have business logic that contains a method to update customer data in a database. A variety of remote and local clients can invoke this method. Additionally, EJBs run inside a container, allowing the developer to focus on the business logic contained in the bean without having to worry about complicated and error-prone issues such as transaction support, security, and remote object access. EJBs are developed as POJOs, or Plain Old Java Objects, and developers can use metadata annotations to specify to the container how these beans are to be managed.
Types of EJBs

EJBs consist of three main types: Session, Entity, and Message-Driven. A Session bean performs a distinct, de-coupled task, such as checking credit history for a customer. An Entity bean is a complex business entity which represents a business object that exists in the database. A Message-Driven bean is used to receive asynchronous JMS messages. Let's examine these types further:
Session Beans

Session beans generally represent actions in the business process such as "process order." Session beans are classified based on the maintenance of the conversation state, either stateful or stateless.

Stateless session beans do not have an internal state. They do not keep track of the information that is passed from one method call to another. Thus, each invocation of a stateless business method is independent of its previous invocation; for example, calculating taxes or shipping charges. When a method to calculate tax charges is invoked with a certain taxable value, the tax value is calculated and returned to the calling method, without the necessity to store the caller's internal state for future invocation. Because they do not maintain state, these beans are simple to manage for the container. When the client requests a stateless bean instance, it may receive an instance from the pool of stateless session bean instances that are maintained by the container. Also, because stateless session beans can be shared, the container can maintain a lesser number of instances to serve a large number of clients. To specify that a Java Bean is to be deployed and managed as a stateless session bean, simply add the annotation @Stateless to the bean.

A stateful session bean maintains a conversational state across method invocations; for example, an online shopping cart of a customer. When the customer starts online shopping, the customer's details are retrieved from the database. The same details are available for the other methods that are invoked when the customer adds or removes items from the cart, places the order, and so on. Yet, stateful session beans are transient because the state does not survive session termination, system crash, or network failure. When a client requests a stateful session bean instance, that client is assigned one stateful instance, and the state of the bean is maintained for that client. To specify to the container that a stateful session bean instance should be removed upon the completion of a certain method, add the annotation @Remove to the method.
Session Bean Example

import javax.ejb.Stateless;

/**
 * A simple stateless session bean implementing the incrementValue() method of the
 * CalculateEJB interface.
 */

@Stateless(name="CalculateEJB")
public class CalculateEJBBean
implements CalculateEJB
{
int value = 0;
public String incrementValue()
{
value++;
return "value incremented by 1";
}
}


Entity Beans

An entity bean is an object that manages persistent data, potentially uses several dependent Java objects, and can be uniquely identified by a primary key. Specify that a class is an entity bean by including the @Entity annotation. Entity beans represent persistent data from the database, such as a row in a customer table, or an employee record in an employee table. Entity beans are also sharable across multiple clients. For example, an employee entity bean can be used by various clients to calculate the annual salary of an employee or to update the employee address. Specific fields of the entity bean object can be made persistent. All fields in the entity bean not marked with the @Transient annotation are considered persistent. A key feature of EJB 3.0 is the ability to create Entity beans which contain object/relational mappings using metadata annotations. For example, to specify that an Entity bean's empId field is mapped to the EMPNO attribute of the Employees table, annotate the table name using @Table(name="Employees") and the field using @Column(name="EMPNO"), as shown in the example below. Additionally, a special feature of EJB 3.0 is that you can easily test Entity beans during development, as running Entity beans outside a container is now possible using the Oracle Application Server Entity Test Harness.
Entity Bean Example

import javax.persistence.*;
import java.util.ArrayList;
import java.util.Collection;

@Entity
@Table(name = "EMPLOYEES")
public class Employee implements java.io.Serializable
{
private int empId;
private String eName;
private double sal;

@Id
@Column(name="EMPNO", primaryKey=true)
public int getEmpId()
{
return empId;
}

public void setEmpId(int empId)
{
this.empId = empId;
}

public String getEname()
{
return eName;
}

public void setEname(String eName)
{
this.eName = eName;
}

public double getSal()
{
return sal;
}

public void setSal(double sal)
{
this.sal = sal;
}

public String toString()
{
StringBuffer buf = new StringBuffer();
buf.append("Class:")
.append(this.getClass().getName()).append(" :: ")
.append(" empId:").append(getEmpId())
.append(" ename:").append(getEname())
.append(" sal:").append(getSal());
return buf.toString();
}
}


Message-Driven Beans

Message-driven beans (MDBs) provide an easier method to implement asynchronous communication than by using straight Java Message Services (JMS). MDBs were created to receive asynchronous JMS messages. The container handles most of the setup processes that are required for JMS queues and topics. It sends all the messages to the interested MDB. An MDB allows J2EE applications to send asynchronous messages that can then be processed by the application. To specify that a bean is an MDB, implement the javax.jms.MessageListener interface and annotate the bean with @MessageDriven.
Message Driven Bean Example

import javax.ejb.MessageDriven;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.Inject;
import javax.jms.*;
import java.util.*;
import javax.ejb.TimedObject;
import javax.ejb.Timer;
import javax.ejb.TimerService;
@MessageDriven(
activationConfig = {
@ActivationConfigProperty(propertyName="connectionFactoryJndiName", propertyValue="jms/TopicConnectionFactory"),
@ActivationConfigProperty(propertyName="destinationName", propertyValue="jms/myTopic"),
@ActivationConfigProperty(propertyName="destinationType", propertyValue="javax.jms.Topic"),
@ActivationConfigProperty(propertyName="messageSelector", propertyValue="RECIPIENT = 'MDB'")
}
)

/**
 * A simple Message-Driven Bean that listens to the configured JMS Queue or Topic and gets notified,
 * via an invocation of its onMessage() method, when a message has been posted to the Queue or Topic.
 * The bean prints the contents of the message.
 */

public class MessageLogger implements MessageListener, TimedObject
{

@Inject javax.ejb.MessageDrivenContext mc;

public void onMessage(Message message)
{
System.out.println("onMessage() - " + message);
try
{
String subject = message.getStringProperty("subject");
String inmessage = message.getStringProperty("message");
System.out.println("Message received\n\tDate: " + new java.util.Date() + "\n\tSubject: " + subject + "\n\tMessage: " + inmessage + "\n");
System.out.println("Creating Timer a single event timer");
TimerService ts = mc.getTimerService();
Timer timer = ts.createTimer(30000, subject);
System.out.println("Timer created by MDB at: " + new Date(System.currentTimeMillis()) +" with info: "+subject);
}
catch (Throwable ex)
{
ex.printStackTrace();
}
}

public void ejbTimeout(Timer timer)
{
System.out.println("EJB 3.0: Timer with MDB");
System.out.println("ejbTimeout() called at: " + new Date(System.currentTimeMillis()));
return;
}
}



Utilizing EJBs

An EJB client is the application that accesses the bean. It does not necessarily reside on the client tier, but can be a stand-alone application, JSP, servlet, or another EJB. The client accesses the methods of an EJB through the remote or local interfaces of the bean, depending on whether the client resides in the same or a different JVM than the bean. These interfaces define the methods of the bean, while the bean class actually implements the methods. When a client accesses a method of the bean class, the container generates a proxy for the bean, called the remote or local object. The remote or local object receives the request, delegates it to the corresponding bean instance, and returns the results to the client. To invoke a bean's methods, a client finds the bean by using its name defined in the EJB deployment descriptor. In the following example, the client finds the bean named "Statelessejb" using the Context object.
EJB Client Example

import javax.naming.Context;
import javax.naming.InitialContext;

/**
* A simple bean client which calls methods on a stateless session bean.
*/

public class CalculateejbClient
{
public static void main(String [] args) throws Exception
{
Context context = new InitialContext();
CalculateEJB myejb =
(CalculateEJB)context.lookup("java:comp/env/ejb/CalculateEJB");
myejb.incrementValue();
}
}



Summary

Developing Enterprise JavaBeans is significantly easier with the EJB 3.0 specification. The specification uses metadata annotations to define the type of bean and the methods exposed to the client. Thus, whether you are creating a session bean for executing a specific task or mapping a table to an entity bean for updating data, you can do so using plain Java objects and interfaces, and expose methods for clients by using annotations within the business methods. Now that you have an understanding of the basics of EJBs, refer to OTN's EJB 3.0 Resources Page for more information.

Oracle 9i JDBC Guide

Oracle9i JDBC Developer's Guide and Reference
Release 2 (9.2) - to get this, click on the link below or paste it into the browser address bar.

http://download-west.oracle.com/docs/cd/B10501_01/java.920/a96654/toc.htm

Connecting the Java World to Grid-Enabled Databases

Grid computing is not necessarily a new concept; however, its adoption within the enterprise has given birth to a new concept called enterprise grid computing, which is being embraced by the entire IT industry. Enterprise grid computing aims to consolidate IT resources - including both infrastructure software and applications - and optimize their usage, cutting costs substantially along the way. Since Java and J2EE are widely used as enterprise software platforms, how do they align with this vision?

This article outlines a set of challenges that JDBC faces as the database connectivity layer within enterprise grid environments and illustrates how the Oracle Database 10g JDBC driver addresses these challenges. First, I'll introduce the concept of enterprise grid computing; then I'll examine how Java and J2EE operate in grid environments and identify their database connectivity requirements; and finally, I'll discuss the features of the Oracle Database 10g JDBC driver that address those requirements.

Enterprise Grid Computing
Commercial software vendors use an assortment of terms, such as utility computing, on-demand computing, and autonomic computing, to describe enterprise grid computing. Regardless of which term is used, products that support enterprise grid computing all have the same set of common functional denominators:

* Resource consolidation
* Resource virtualization
* Policy-based resource management
* Provisioning or allocation

The chief aim of enterprise grid computing, as noted above, is to help enterprises consolidate IT resources and optimize usage, reducing infrastructure and labor costs. This is a slight but important divergence from the original concept of grid computing, now classified as academic grid computing, in which massive arrays of computational power are constructed from a network of many small and widespread computers and used to perform large calculations and operations, broken into autonomous chunks, that couldn't be achieved even on today's largest supercomputers. A good example of this is the SETI@home project.

The convergence of recent hardware and software advances has made resource virtualization possible and allowed the enterprise grid to be constructed. On the hardware side, these advances include networked storage devices (like storage grids) and low-cost, modular hardware components (such as blades); on the software side, they include improvements in networking, Web services, databases, application servers, and management frameworks.

Although no specific enterprise grid computing standards have been established, there is a general move toward the concept of service-oriented architectures (SOA), which are based on and make extensive use of existing Web services standards and specifications. SOA makes it possible to construct architectures where client applications can simply register, discover, and use the services deployed over the grid. This move is spearheaded by academic and research proposals such as the Open Grid Service Architecture (Global Grid Forum) and middleware vendors through participation in Web services standards bodies. A new consortium, Enterprise Grid Alliance, has been formed with the goals of developing enterprise grid solutions and accelerating the deployment of grid computing in enterprises.

Figure 1 illustrates a typical enterprise grid platform. While a complete enterprise grid computing discussion is beyond the scope of this article, I hope I've given enough information to encourage you to learn more about it.

Java and J2EE Containers in the Enterprise Grid
Very few enterprises run their business with just stand-alone Java or J2EE servers. Typical business applications employ more complex architectures consisting of load balancers, HTTP listeners, Web caching servers, J2EE containers, directory servers, resource managers and monitors, and management services.

In recognition of this, vendors, such as Oracle with its Oracle Application Server 10g, are actively supplementing their infrastructure with provisioning, monitoring, registering, discovering, and manageability mechanisms. This allows them to be used in the more complex system architectures required by modern SOA-based business applications.

Similarly, the J2EE world, realizing that it needs to broaden its support to interoperate with other applications and environments in a loosely coupled manner, is aligning with enterprise grid computing and SOA concepts. It's doing this by including core Web services standards (such as SOAP, WSDL, UDDI, and JAX-RPC) and numerous other Web services specifications including WS-I, WSIF, security, transaction, reliable messaging, events, peer-discovery, policy, orchestration, choreography, provisioning, and service-level agreement.

In the scientific and research worlds, the Globus Toolkit, from the Globus Alliance, is a reference implementation of the Open Grid Services Infrastructure (OGSI). OGSI lets Java components be exposed as OGSI services. OGSA-DAIS (Data Access and Integration) is another scientific and research proposal aimed at addressing data services requirements in grid environments; leading database vendors are participating in this initiative.

Enterprise grid computing creates an extremely dynamic environment where resources are automatically augmented or diminished based on business priorities captured in allocation policies. The current version of the JDBC specification (and implementations based on it) makes working in such environments impractical and reduces the ability of Java and J2EE applications to achieve all the benefits of enterprise grid computing. To be more explicit, let's look at some of the issues faced when working in an enterprise grid computing environment:

* Most JDBC applications today use connect strings that explicitly specify the hostname or IP address of the target database. This won't work when processing nodes are dynamically added to or retrenched from the cluster of database servers.
* Enterprise grid environments must scale well and serve thousands of concurrent clients. This makes efficient load-balancing and database connection pooling mandatory; traditional connection pooling mechanisms that cache only connections with the same identity are useless.
* With databases and other resources being dynamically managed and allocated for service, connection retention and management is critical. Mechanisms to resize or refresh stale connections or to search for and reclaim abandoned ones are required.
* Database clients, whether they're Java programs or J2EE containers, must be notified about changes in the database server configuration as quickly as possible. Application servers (and J2EE containers) that rely on unpredictable timeout mechanisms will not be able to provide a reliable, guaranteed service.

Java Database Connectivity Within the Enterprise Grid
Virtualizing the Database as a Service
As shown in Figure 1, clusters are pervasive at every level within the enterprise grid: disk and network storage servers, database servers, and application servers. Most database vendors now allow a single database to be concurrently managed by multiple servers or instances.

Although database clusters are easily understood, there are fundamental differences between the architectures supported by the various current implementations; for example, a shared nothing approach and a shared disk approach provide different levels of capabilities. This article focuses on the Oracle Database Real Application Cluster (RAC) model in which any database server instance can address the entire database space, hosted on shared networked storage.

The typical JDBC approach to specifying how to connect to a database is to provide the details for the target database in the form of a tuple containing <hostname>:<port>:<SID>. This is set as the URL attribute of a data source definition or directly in the connect string when establishing the JDBC connection.

String connString="jdbc:oracle:thin:@prodHost:1521:ORCL";

In the Oracle Database RAC model, the physical server hosts and database instances that make up the cluster are represented as a single logical entity known as a service. A client application connects to the logical database service without any real knowledge of the actual host or database server instance that's used in the cluster. By connecting to the service, as illustrated by the Enterprise Grid Model in Figure 2, and not to an actual physical server, clients can be insulated from changes made to the physical elements of the cluster as well as from node failures.

To use a logical database entity in Java applications, JDBC connection strings must be able to accept service-based connect strings instead of physical host details. In the Oracle Database RAC model, services are represented as "<hostname>/<service_name>".

String connString="jdbc:oracle:thin:@<hostname>/<service_name>";

The Connection Cache Manager
In the enterprise grid world, where applications are dynamically provisioned to run on different servers and even on instances of Java Virtual Machines (JVMs), it's imperative that the application's connection caches can be managed to permit the most efficient use of available resources. One approach to this is a new component called the Connection Cache Manager, which offers a centralized way to manage one or more connection caches. A single instance of Connection Cache Manager per JVM manages all of the connection caches used by applications running on that JVM.

The Connection Cache Manager plays two major roles: it manages and maintains the cache, and it binds a connection cache to the data source.

Managing and Maintaining Cache
The Connection Cache Manager is responsible for creating the cache, maintaining its state, and terminating it. It is aware of each connection cache, and each cache is managed independently. A rich set of APIs is provided to perform the Connection Cache Manager tasks.

The Connection Cache Manager supports the coexistence of more than one cache. Each cache is identified by a unique name and is tightly bound to a data source. A cache is either created transparently when getConnection() requests are made on a cache-enabled data source, or created explicitly in the middle tier via the Connection Cache Manager API. Once created, a cache may be removed either explicitly via the Connection Cache Manager or implicitly when the data source is closed.

Caches can be monitored using the Cache Manager APIs, enabling access to information such as the number of connections checked out and the number of connections available in the cache.

Binding a Connection Cache to the Data Source
As illustrated in Figure 3, the Connection Cache Manager makes sure a connection cache is associated with its data source object every time a cache is created, removed, or reinitialized. This ensures an efficient way to access the connection cache and retrieve connections from it every time the getConnection() method is invoked on a data source.

Fast Application Notification (FaN)
As I alluded to earlier, the dynamic nature of an enterprise grid environment results in nondeterministic system changes that may have flow-on effects on applications that are executing.

To allow application clients in an enterprise grid environment to function in the most efficient manner, a mechanism must be provided that performs the following tasks:

* Monitor changes in database configuration and notify clients as fast as possible.
* Balance connection requests across all active instances.
* Fail over established connections to surviving instances.

Oracle Database 10g JDBC offers these capabilities by combining the Connection Cache Manager, Oracle Database RAC, and notification events through a notification service to quickly inform affected components of changes occurring in the grid.

Let's look at how these pieces work together. When a new instance is added to a database cluster, or when an instance is retrenched from a cluster (instance stopped or node dies), Oracle Database RAC generates an event that indicates what happened. The Oracle Notification Service (ONS) detects and distributes this event to components that have registered as interested in such events.

In Oracle Database 10g JDBC, Fast Application Notification is important: the connection caches are alerted when database instances are affected and can take preventive courses of action.

The mechanism is enabled on a cache-enabled data source by setting the data source property FastConnectionFailoverEnabled to true (see Listing 1).

With Fast Application Notification, event detection and notification is nearly instantaneous, occurring in seconds rather than minutes with traditional timeout mechanisms.

Adding Database Instances: Load Balancing
Adding new database instances to an Oracle Database RAC cluster generates UP events. These automatically trigger the balancing of all allocated connections over all active RAC instances without waiting for application connection retries/requests.

Let's consider a basic example in which we have:

* A cluster with two nodes in a database service, one instance per node
* A cache size of 150 total connections, resulting in 75 connections per instance

Adding a new node to the service will trigger an UP event, which is detected and propagated via the FaN service. This will cause the connection cache to automatically rebalance the existing 150 connections in use in the cache over the three instances, resulting in 50 connections being established per database instance. This process involves removing some connections (to existing instances) and creating new ones (to the new instance).

Retrenching Database Instances (or Node Failure): High-Availability
Retrenching database instances from an Oracle Database RAC cluster generates DOWN events, which automatically trigger a detect-and-fix mechanism in the connection caches. This mechanism quickly removes connections belonging to the failed instance, preventing invalid connections from being handed out on connection request.

Continuing with the previous example:

* We now have a three-node database service with 50 connections per instance in the caches, for a total of 150 connections.

If a node in the database service fails or is removed by the resource provisioning system, then a DOWN event is created and propagated via the FaN service. The 50 connections belonging to the failed instance/node will be quickly removed from the cache, ensuring that the cache is clean and consistent. A clean cache guarantees that connection requests will be routed only to surviving instances. Clients with a stale connection must retry connection requests, unless a container intercepts such calls and is able to take corrective action.

The Fast Connection Fail-over mechanism transparently ensures reliable, highly available Java database connections in RAC and grid environments.

In-Flight Transactions
If an application is midtransaction when an instance fails, an appropriate SQLException is thrown to the application and the transaction is rolled back. It is then the responsibility of the application or the container to retry the connection request and reestablish session state.
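The application-side retry described above can be sketched generically; the database layer is simulated here with a Callable, since a real reconnection would go through the cache-enabled data source:

```java
import java.util.concurrent.Callable;

public class RetrySketch {

    // Retry an action (e.g., a connection request against a cache-enabled
    // data source) up to maxAttempts times, rethrowing the last failure.
    static <T> T retry(Callable<T> attempt, int maxAttempts) {
        Exception last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return attempt.call();
            } catch (Exception e) {
                last = e; // the instance may have failed; a retried request
                          // is routed by the cache to a surviving instance
            }
        }
        throw new RuntimeException("all attempts failed", last);
    }

    public static void main(String[] args) {
        // Simulate a failure on the first attempt and success on the second.
        final int[] calls = {0};
        String conn = retry(() -> {
            if (calls[0]++ == 0) {
                throw new Exception("simulated instance failure");
            }
            return "connection-to-surviving-instance";
        }, 3);
        System.out.println(conn);
    }
}
```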

Simplifying JDBC Connection Caching
As summarized in Table 1, the new Oracle Database 10g JDBC Implicit Connection Cache has been designed to overcome the limitations of existing JDBC connection caching by providing:

* Transparent access to the cache
* Support for multiple identities
* The ability to retrieve connections based on user-defined attributes and weights
* The ability to refresh or recycle stale connections from the cache

To take advantage of these capabilities, simply customize your environment by explicitly setting properties on the connection cache or on the connection itself.

Transparent Access to the Cache
By default, the getConnection() method in the standard OracleData-Source API creates a new database session and a physical database connection, thus incurring performance and scalability penalties. With Implicit Connection Caching, once the DataSource property ConnectionCachingEnabled has been set to true, the getConnection() method will service all connection requests from the connection cache.

ods.setConnectionCachingEnabled(true);
ods.setConnectionCacheName("MyCache"); // optional
ods.setConnectionCacheProperties(cp); // optional
ctx.bind("MyDS", ods);

// lookup DataSource
ods = (OracleDataSource) ctx.lookup("MyDS");

A JDBC 10g Connection cache can be created either implicitly by the first invocation of the getConnection() method or explicitly by using the Connection Cache Manager API. The Connection Cache Manager is especially useful for developers working with J2EE containers and ERP frameworks, since it shields the infrastructure from the complexity of managing connection caches.
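As a hedged sketch of the explicit route (names taken from the 10g oracle.jdbc.pool package; the InitialLimit value is illustrative):

```java
// Sketch only; assumes oracle.jdbc.pool.OracleConnectionCacheManager and a
// cache-enabled OracleDataSource ods using the "MyCache" cache name.
OracleConnectionCacheManager mgr =
    OracleConnectionCacheManager.getConnectionCacheManagerInstance();

java.util.Properties cacheProps = new java.util.Properties();
cacheProps.setProperty("InitialLimit", "25"); // pre-create 25 connections
mgr.createCache("MyCache", ods, cacheProps);
```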

// This call creates the "MyCache" cache (if it doesn't exist yet);
// a connection from the "MyDS" data source is created and returned
conn = ods.getConnection();

// This call draws on the same "MyCache" cache; a connection
// to "MyDS", authenticated as "SCOTT", is created and returned
conn = ods.getConnection("SCOTT","TIGER");

Subsequent getConnection() invocations will either create a new connection (if the cache was not previously initialized) or retrieve an existing connection from the cache. Once the connection is retrieved, you can proceed with statement creation.


// Create a Statement
Statement stmt = conn.createStatement();
...

// Close the Statement
stmt.close();
stmt = null;

The cache can be populated in one of two ways: by preinitializing it using the Cache Manager APIs or incrementally, as connections are released back to the cache. When returning a connection to the cache, applications can save the current connection attribute settings for future use (see attribute-based retrieval, below).

// return this connection to the cache
conn.close();
conn = null;

Caching Multiple Identities
While a database does not impose any specific restrictions on connection authentication, a traditional cache might restrict the connections it can manage, for example by requiring that they all use the same username/password combination.

The Implicit Connection Cache can handle any combination of user-authenticated connections. For example, a joe.johnson connection can coexist very well with a sue.miller connection in the same connection cache.

Connection Retrieval Based on User-Defined Attributes
One of the valuable new features in the Implicit Connection Cache is connection striping.

Connection striping, or labeling, consists of applying user-defined attributes to a connection and making their state persist when the connection is returned to the cache. This speeds up future connection retrieval since cached connections don't have to reinitialize a block of state every time. These attributes can later be used to retrieve the same connection from the cache, as follows.

Example 1: Retrieving a connection based on the NLS_LANG attribute:

// get a connection from the cache with NLS_LANG attribute
java.util.Properties connAttr = new java.util.Properties();
connAttr.setProperty("NLS_LANG", "ISO-LATIN-1");
conn = ds.getConnection(connAttr);

Example 2: Retrieving a connection based on the isolation-level attribute:

java.util.Properties connAttr = new java.util.Properties();
connAttr.setProperty("TRANSACTION_ISOLATION", "SERIALIZABLE");

// retrieve a connection that matches Transaction Isolation
conn = ds.getConnection(connAttr);
...
// this call will preserve attr settings for this connection
conn.close(connAttr);

Example 3: Retrieving a connection based on the connection tag and connection attribute:

java.util.Properties connAttr = new java.util.Properties();
connAttr.setProperty("CONNECTION_TAG", "JOE'S_CONNECTION");
// retrieve connection that matches Joe's connection
conn = ds.getConnection(connAttr);

// apply attributes to the connection
conn.close(connAttr);

// This will retrieve Joe's connection
conn = ds.getConnection(connAttr);

Applying Connection Attributes to a Cached Connection
A connection attribute can be applied to a connection in the cache in two ways.

One approach is to call the applyConnectionAttributes(java.util.Properties connAttr) API on the connection object. This simply sets the supplied attributes on the connection object. Attributes can be applied incrementally with this API, over multiple calls: for example, module A might apply NLS_LANG, and a subsequent call from module B might then apply the TXN_ISOLATION attribute, and so on.

A second approach is to call the close(java.util.Properties connAttr) API on the connection object. This API closes the logical connection and then applies the supplied connection attributes to the underlying PooledConnection (physical connection). Attributes set via this close() API override any attributes set using the applyConnectionAttributes() API.

The following example shows a call to the close(connectionAttributes) API on the connection object that lets the cache apply the matched connectionAttributes back on the pooled connection before returning it to the cache. This ensures that when a subsequent connection request with the same connection attributes is made, the cache will find a match.

// Sample connection request and close
java.util.Properties connAttr = new java.util.Properties();
connAttr.setProperty("NLS_LANG", "ISO-LATIN-1");
// request connection based on attributes
conn = ds.getConnection(connAttr);

// apply attributes to connection
conn.close(connAttr);

Connection Retrieval Based on Attributes and Weights
Connections may be selectively retrieved from the connection cache based on a combination of ConnectionAttributes and attribute weights.

Weights are assigned to each key of a ConnectionAttribute in a one-time operation that also sets cache properties. The CacheAttributeWeights cache property is a java.util.Properties object that holds the attribute weights; each weight is an integer value defining how expensive the key is in terms of resources.

Once the weights are specified on the cache, connection requests are made on the data source by calling getConnection(connectionAttributes). The connectionAttributes argument refers to keys and their associated values.

The connection retrieval from the cache involves searching for a connection that satisfies a combination of the following:

* A key/value match on a connection from the cache
* The maximum total weight of all the keys of the connectionAttributes that were matched on the connection

Consider the following example in which a cache is configured with CacheAttributeWeights (see Listing 2). (Listings 2-3 can be downloaded from www.sys-con.com/java/sourcec.cfm.)

Once the weights are set, a connection request could be made as in Listing 3.

The getConnection() request tries to retrieve a connection from the MyCache cache. In connection matching and cache retrieval, one of two things can happen:

* An exact match is found. As in the example above, an exact match is a connection whose attribute values match for all the defined keys (NLS_LANG, SecurityGroup, and Application).
* An exact match is not found. In this case, the closest match based on the attribute key/value and its associated weights is used (but only if the ClosestConnectionMatch property is set).

Once the connection is returned, the user can invoke the getUnMatchedConnectionAttributes() API on the connection object to return a set of attributes (java.util.Properties) that did not match the criteria. The unmatched attribute list can then be used by the caller (or application) to reinitialize these values before using the connection.
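The matching and scoring described above can be illustrated with a small standalone scorer; the attribute names echo the keys mentioned earlier, the weight values are hypothetical, and the real search is internal to the cache:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ClosestMatchSketch {

    // Score a cached connection's attributes against the requested attributes:
    // each key whose value matches contributes its configured weight. The cache
    // would pick the candidate with the highest total weight.
    static int score(Map<String, String> cached, Map<String, String> requested,
                     Map<String, Integer> weights) {
        int total = 0;
        for (Map.Entry<String, String> want : requested.entrySet()) {
            if (want.getValue().equals(cached.get(want.getKey()))) {
                total += weights.getOrDefault(want.getKey(), 0);
            }
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put("NLS_LANG", 10);      // hypothetical weights
        weights.put("SecurityGroup", 8);
        weights.put("Application", 4);

        Map<String, String> requested = new LinkedHashMap<>();
        requested.put("NLS_LANG", "ISO-LATIN-1");
        requested.put("SecurityGroup", "HR");
        requested.put("Application", "Payroll");

        Map<String, String> candidate = new LinkedHashMap<>();
        candidate.put("NLS_LANG", "ISO-LATIN-1");
        candidate.put("SecurityGroup", "HR"); // Application differs: partial match

        System.out.println(score(candidate, requested, weights)); // 18
    }
}
```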

Conclusion
This article highlighted a new set of Java data access requirements that emerge when executing in an enterprise grid environment and outlined how Oracle Database 10g JDBC tackles the challenges they present.

Whether you're using JDBC directly in applications via a run-time container such as an EJB container or through an O/R mapping framework such as Toplink, the latest Oracle JDBC drivers offer reliable and highly available data sources in RAC and grid environments and significantly simplify a range of data source connectivity issues for Java developers.

Oracle has proposed these new features to the JSR 221 (JDBC 4.0 specification) expert group. It hopes to see JDBC Connection Pool management and high-availability support for data sources within enterprise grid environments become part of the JDBC 4.0 standard.

Connecting to a Database with JDBC

// JDBCTest – complete code
public class JDBCTest {

public static void main(String args[]){

RunDB runDB = new RunDB();

try{
runDB.loadDriver();
runDB.makeConnection();
runDB.buildStatement();
runDB.executeQuery();
}catch(Exception e){
e.printStackTrace();
}

}
}

//RunDB
import java.sql.*;

public class RunDB {

Connection connection;
Statement statement;

public void loadDriver() throws ClassNotFoundException{
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
}

public void makeConnection() throws SQLException {
connection =
DriverManager.getConnection("jdbc:odbc:purchaseOrder");
}

public void buildStatement() throws SQLException {
statement = connection.createStatement();
}

public void executeQuery() throws SQLException {

boolean foundResults =
statement.execute("SELECT * FROM Transaction");
if(foundResults){
ResultSet set = statement.getResultSet();
if(set!=null) displayResults(set);
}else {
connection.close();
}
}

void displayResults(ResultSet rs) throws SQLException {
ResultSetMetaData metaData = rs.getMetaData();
int columns = metaData.getColumnCount();
String text = "";

while(rs.next()){
for(int i=1;i<=columns;++i){
text += "<"+metaData.getColumnName(i)+">";
text += rs.getString(i);
text += "</"+metaData.getColumnName(i)+">";
text += "\n";
}
text += "\n";
}

System.out.println(text);

}

}