Single Source.1


Description

This document is mainly useful for Java interviews; it summarizes all core Java concepts.


CORE JAVA
COLLECTIONS
JDBC
    JDBC INTRODUCTION
    JDBC PRODUCT COMPONENTS
    JDBC ARCHITECTURE
    A RELATIONAL DATABASE OVERVIEW
DEPLOYMENT
THREADS
RMI
SWINGS
JAAS
INTERNATIONALIZATION
    1. CREATE THE PROPERTIES FILES
    2. DEFINE THE LOCALE
    3. CREATE A RESOURCEBUNDLE
    4. FETCH THE TEXT FROM THE RESOURCEBUNDLE
    CONCLUSION
JUNIT
JNDI
    NAMING CONCEPTS
    DIRECTORY PACKAGE
    LDAP PACKAGE
    EVENT PACKAGE
    SERVICE PROVIDER PACKAGE
J2EE


SERVLETS
JSP
EJB
STRUTS
JMS
HIBERNATE
SAX
DOM
DESIGN PATTERNS
SESSION FAÇADE
FRONT CONTROLLER
DAO
CHAIN OF RESPONSIBILITIES
COMPOSITION
AGGREGATION
ABSTRACT FACTORY
FACTORY METHOD
BRIDGE
SINGLETON
BUILDER


ITERATOR
OBSERVER
STATE
STRATEGY
VISITOR
FLYWEIGHT
PROXY
ROUTER
TRANSLATION
WEB SERVICES
SOAP
UDDI
WSDL
APACHE AXIS
XML TECHNOLOGIES
XML
DTD
XSL
XLINK
XPATH


XQUERY
DATABASE
ORACLE 9i (SQL, PL/SQL)
DB2
APPLICATION SERVERS
WEBSPHERE APPLICATION SERVER 6.1
WEBLOGIC 9.1
JBOSS 4.1.2
APACHE TOMCAT 5.5
UML TOOLS
RATIONAL UML MODELING TOOL
WEB DESIGN
HTML
JAVASCRIPT
CSS
AJAX
METHODOLOGIES
OOAD
OODB
SAD


TOOLS
ECLIPSE 3.2
ANT
MAVEN
BATCH SCRIPT
SHELL SCRIPT
STRATEGIES
REQUIREMENT/REQUEST ANALYSIS
DEPLOYMENT AND CONFIGURATION
PERFORMANCE TUNING AND REVIEW
CONFIGURATION TOOLS
RATIONAL CLEAR CASE
OPERATING SYSTEMS
WINDOWS
UNIX
SUB TOPICS
EXTENSION MECHANISM
    CONTENTS
    INTRODUCTION
    THE EXTENSION MECHANISM
    ARCHITECTURE
    OPTIONAL PACKAGE DEPLOYMENT
    BUNDLED OPTIONAL PACKAGES
    INSTALLED OPTIONAL PACKAGES


    OPTIONAL PACKAGE SEALING
    OPTIONAL PACKAGE SECURITY
    RELATED APIS
GENERICS
JMX
    INSTRUMENTATION
    JMX AGENT
        Remote Management

Core Java

Collections

Source: http://java.sun.com/docs/books/tutorial/collections/interfaces/collection.html

Lesson: Introduction to Collections

A collection — sometimes called a container — is simply an object that groups multiple elements into a single unit. Collections are used to store, retrieve, manipulate, and communicate aggregate data. Typically, they represent data items that form a natural group, such as a poker hand (a collection of cards), a mail folder (a collection of letters), or a telephone directory (a mapping of names to phone numbers).

If you've used the Java programming language — or just about any other programming language — you're already familiar with collections. Collection implementations in earlier (pre-1.2) versions of the Java platform included Vector, Hashtable, and array. However, those earlier versions did not contain a collections framework.

What Is a Collections Framework?

A collections framework is a unified architecture for representing and manipulating collections. All collections frameworks contain the following:

* Interfaces: These are abstract data types that represent collections. Interfaces allow collections to be manipulated independently of the details of their representation. In object-oriented languages, interfaces generally form a hierarchy.


* Implementations: These are the concrete implementations of the collection interfaces. In essence, they are reusable data structures.

* Algorithms: These are the methods that perform useful computations, such as searching and sorting, on objects that implement collection interfaces. The algorithms are said to be polymorphic: that is, the same method can be used on many different implementations of the appropriate collection interface. In essence, algorithms are reusable functionality.

Apart from the Java Collections Framework, the best-known examples of collections frameworks are the C++ Standard Template Library (STL) and Smalltalk's collection hierarchy. Historically, collections frameworks have been quite complex, which gave them a reputation for having a steep learning curve. We believe that the Java Collections Framework breaks with this tradition, as you will learn for yourself in this chapter.

Benefits of the Java Collections Framework

The Java Collections Framework provides the following benefits:

* Reduces programming effort: By providing useful data structures and algorithms, the Collections Framework frees you to concentrate on the important parts of your program rather than on the low-level "plumbing" required to make it work. By facilitating interoperability among unrelated APIs, the Java Collections Framework frees you from writing adapter objects or conversion code to connect APIs.

* Increases program speed and quality: This Collections Framework provides high-performance, high-quality implementations of useful data structures and algorithms. The various implementations of each interface are interchangeable, so programs can be easily tuned by switching collection implementations. Because you're freed from the drudgery of writing your own data structures, you'll have more time to devote to improving programs' quality and performance.

* Allows interoperability among unrelated APIs: The collection interfaces are the vernacular by which APIs pass collections back and forth. If my network administration API furnishes a collection of node names and if your GUI toolkit expects a collection of column headings, our APIs will interoperate seamlessly, even though they were written independently.

* Reduces effort to learn and to use new APIs: Many APIs naturally take collections on input and furnish them as output. In the past, each such API had a small sub-API devoted to manipulating its collections. There was little consistency among these ad hoc collections sub-APIs, so you had to learn each one from scratch, and it was easy to make mistakes when using them. With the advent of standard collection interfaces, the problem went away.

* Reduces effort to design new APIs: This is the flip side of the previous advantage. Designers and implementers don't have to reinvent the wheel each time they create an API that relies on collections; instead, they can use standard collection interfaces.


* Fosters software reuse: New data structures that conform to the standard collection interfaces are by nature reusable. The same goes for new algorithms that operate on objects that implement these interfaces.

Lesson: Interfaces

The core collection interfaces encapsulate different types of collections, which are shown in the figure below. These interfaces allow collections to be manipulated independently of the details of their representation. Core collection interfaces are the foundation of the Java Collections Framework. As you can see in the following figure, the core collection interfaces form a hierarchy.

Two interface trees, one starting with Collection and including Set, SortedSet, List, and Queue, and the other starting with Map and including SortedMap.

The core collection interfaces. A Set is a special kind of Collection, a SortedSet is a special kind of Set, and so forth. Note also that the hierarchy consists of two distinct trees — a Map is not a true Collection.

Note that all the core collection interfaces are generic. For example, this is the declaration of the Collection interface.

public interface Collection<E>...

The <E> syntax tells you that the interface is generic. When you declare a Collection instance you can and should specify the type of object contained in the collection. Specifying the type allows the compiler to verify (at compile-time) that the type of object you put into the collection is correct, thus reducing errors at runtime. For information on generic types, see the Generics lesson.
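For example, the following small snippet (illustrative, not from the original text) declares a typed collection; the commented-out line would be rejected at compile time:

import java.util.*;

public class TypedCollectionExample {
    public static void main(String[] args) {
        Collection<String> names = new ArrayList<String>();
        names.add("Duke");            // OK: a String
        // names.add(42);             // would not compile: 42 is not a String
        for (String name : names)     // no cast needed when reading elements
            System.out.println(name);
    }
}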

When you understand how to use these interfaces, you will know most of what there is to know about the Java Collections Framework. This chapter discusses general guidelines for effective use of the interfaces, including when to use which interface. You'll also learn programming idioms for each interface to help you get the most out of it.

To keep the number of core collection interfaces manageable, the Java platform doesn't provide separate interfaces for each variant of each collection type. (Such variants might include immutable, fixed-size, and append-only.) Instead, the modification operations in each interface are designated optional — a given implementation may elect not to support all operations. If an unsupported operation is invoked, a collection throws an UnsupportedOperationException. Implementations are responsible for documenting which of the optional operations they support. All of the Java platform's general-purpose implementations support all of the optional operations.
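As an illustration (not from the original text), the unmodifiable wrappers in java.util.Collections return collections whose modification operations are among those unsupported optional operations:

import java.util.*;

public class OptionalOperationsExample {
    public static void main(String[] args) {
        List<String> fixed = Collections.unmodifiableList(
                new ArrayList<String>(Arrays.asList("a", "b", "c")));
        try {
            fixed.add("d");   // an optional operation this implementation does not support
        } catch (UnsupportedOperationException e) {
            System.out.println("add is not supported by this implementation");
        }
    }
}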


The following list describes the core collection interfaces:

* Collection — the root of the collection hierarchy. A collection represents a group of objects known as its elements. The Collection interface is the least common denominator that all collections implement and is used to pass collections around and to manipulate them when maximum generality is desired. Some types of collections allow duplicate elements, and others do not. Some are ordered and others are unordered. The Java platform doesn't provide any direct implementations of this interface but provides implementations of more specific subinterfaces, such as Set and List. Also see The Collection Interface section.

* Set — a collection that cannot contain duplicate elements. This interface models the mathematical set abstraction and is used to represent sets, such as the cards comprising a poker hand, the courses making up a student's schedule, or the processes running on a machine. See also The Set Interface section.

* List — an ordered collection (sometimes called a sequence). Lists can contain duplicate elements. The user of a List generally has precise control over where in the list each element is inserted and can access elements by their integer index (position). If you've used Vector, you're familiar with the general flavor of List. Also see The List Interface section.

* Queue — a collection used to hold multiple elements prior to processing. Besides basic Collection operations, a Queue provides additional insertion, extraction, and inspection operations.

Queues typically, but do not necessarily, order elements in a FIFO (first-in, first-out) manner. Among the exceptions are priority queues, which order elements according to a supplied comparator or the elements' natural ordering. Whatever the ordering used, the head of the queue is the element that would be removed by a call to remove or poll. In a FIFO queue, all new elements are inserted at the tail of the queue. Other kinds of queues may use different placement rules. Every Queue implementation must specify its ordering properties. Also see The Queue Interface section.

* Map — an object that maps keys to values. A Map cannot contain duplicate keys; each key can map to at most one value. If you've used Hashtable, you're already familiar with the basics of Map. Also see The Map Interface section.

The last two core collection interfaces are merely sorted versions of Set and Map:

* SortedSet — a Set that maintains its elements in ascending order. Several additional operations are provided to take advantage of the ordering. Sorted sets are used for naturally ordered sets, such as word lists and membership rolls. Also see The SortedSet Interface section.

* SortedMap — a Map that maintains its mappings in ascending key order. This is the Map analog of SortedSet. Sorted maps are used for naturally ordered collections of key/value pairs, such as dictionaries and telephone directories. Also see The SortedMap Interface section.

To understand how the sorted interfaces maintain the order of their elements, see the Object Ordering section.
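To make these ordering guarantees concrete, the following illustrative snippet (the element values are arbitrary) contrasts a FIFO Queue, a priority queue, and the two sorted interfaces:

import java.util.*;

public class OrderingExamples {
    public static void main(String[] args) {
        // FIFO queue: elements come out in insertion order
        Queue<String> fifo = new LinkedList<String>();
        fifo.add("first"); fifo.add("second"); fifo.add("third");
        System.out.println(fifo.remove());          // first

        // Priority queue: elements come out in natural (here alphabetical) order
        Queue<String> priority = new PriorityQueue<String>(Arrays.asList("pear", "apple", "mango"));
        System.out.println(priority.remove());      // apple

        // SortedSet and SortedMap implementations keep their contents in ascending order
        SortedSet<String> words = new TreeSet<String>(Arrays.asList("zebra", "ant", "moth"));
        System.out.println(words);                  // [ant, moth, zebra]

        SortedMap<String, String> phoneBook = new TreeMap<String, String>();
        phoneBook.put("Washington", "555-0100");
        phoneBook.put("Sharma", "555-0101");
        System.out.println(phoneBook.firstKey());   // Sharma
    }
}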

The Collection Interface

A Collection represents a group of objects known as its elements. The Collection interface is used to pass around collections of objects where maximum generality is desired. For example, by convention all general-purpose collection implementations have a constructor that takes a Collection argument. This constructor, known as a conversion constructor, initializes the new collection to contain all of the elements in the specified collection, whatever the given collection's subinterface or implementation type. In other words, it allows you to convert the collection's type.

Suppose, for example, that you have a Collection<String> c, which may be a List, a Set, or another kind of Collection. This idiom creates a new ArrayList (an implementation of the List interface), initially containing all the elements in c.

List<String> list = new ArrayList<String>(c);

The following shows the Collection interface.

public interface Collection<E> extends Iterable<E> {
    // Basic operations
    int size();
    boolean isEmpty();
    boolean contains(Object element);
    boolean add(E element);                      // optional
    boolean remove(Object element);              // optional
    Iterator<E> iterator();

    // Bulk operations
    boolean containsAll(Collection<?> c);
    boolean addAll(Collection<? extends E> c);   // optional
    boolean removeAll(Collection<?> c);          // optional
    boolean retainAll(Collection<?> c);          // optional
    void clear();                                // optional

    // Array operations
    Object[] toArray();
    <T> T[] toArray(T[] a);
}

The interface does about what you'd expect given that a Collection represents a group of objects. The interface has methods to tell you how many elements are in the collection (size, isEmpty), to check whether a given object is in the collection (contains), to add and remove an element from the collection (add, remove), and to provide an iterator over the collection (iterator).

The add method is defined generally enough so that it makes sense for collections that allow duplicates as well as those that don't. It guarantees that the Collection will contain the specified element after the call completes, and returns true if the Collection changes as a result of the call. Similarly, the remove method is designed to remove a single instance of the specified element from the Collection, assuming that it contains the element to start with, and to return true if the Collection was modified as a result.
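For example (illustrative values), a Set reports through add's return value whether it actually changed, while a List, which permits duplicates, always grows; remove likewise removes a single instance:

import java.util.*;

public class AddRemoveExample {
    public static void main(String[] args) {
        Collection<String> set = new HashSet<String>();
        System.out.println(set.add("spade"));     // true: the set changed
        System.out.println(set.add("spade"));     // false: duplicates not allowed, set unchanged

        Collection<String> list = new ArrayList<String>();
        list.add("spade");
        list.add("spade");                        // a List accepts the duplicate
        System.out.println(list.remove("spade")); // true: removes a single instance
        System.out.println(list.size());          // 1: one copy remains
    }
}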

Traversing Collections

There are two ways to traverse collections: (1) with the for-each construct and (2) by using Iterators.

for-each Construct

The for-each construct allows you to concisely traverse a collection or array using a for loop — see The for Statement. The following code uses the for-each construct to print out each element of a collection on a separate line.

for (Object o : collection) System.out.println(o);

Iterators

An Iterator is an object that enables you to traverse through a collection and to remove elements from the collection selectively, if desired. You get an Iterator for a collection by calling its iterator method. The following is the Iterator interface.

public interface Iterator<E> {
    boolean hasNext();
    E next();
    void remove(); // optional
}

The hasNext method returns true if the iteration has more elements, and the next method returns the next element in the iteration. The remove method removes the last element that was returned by next from the underlying Collection. The remove method may be called only once per call to next and throws an exception if this rule is violated.

Note that Iterator.remove is the only safe way to modify a collection during iteration; the behavior is unspecified if the underlying collection is modified in any other way while the iteration is in progress.

Use Iterator instead of the for-each construct when you need to:

* Remove the current element. The for-each construct hides the iterator, so you cannot call remove. Therefore, the for-each construct is not usable for filtering.

* Iterate over multiple collections in parallel.

The following method shows you how to use an Iterator to filter an arbitrary Collection — that is, traverse the collection removing specific elements.

static void filter(Collection<?> c) {
    for (Iterator<?> it = c.iterator(); it.hasNext(); )
        if (!cond(it.next()))   // cond stands for whatever condition you are filtering on
            it.remove();
}

This simple piece of code is polymorphic, which means that it works for any Collection regardless of implementation. This example demonstrates how easy it is to write a polymorphic algorithm using the Java Collections Framework.
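As a concrete variant of the filter idiom (illustrative values; the empty-string test plays the role of the cond placeholder above), the following removes empty strings from a list:

import java.util.*;

public class FilterExample {
    public static void main(String[] args) {
        List<String> words = new ArrayList<String>(
                Arrays.asList("alpha", "", "beta", "", "gamma"));
        // Iterator.remove is the only safe way to drop elements during iteration
        for (Iterator<String> it = words.iterator(); it.hasNext(); ) {
            if (it.next().isEmpty())
                it.remove();
        }
        System.out.println(words); // [alpha, beta, gamma]
    }
}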

Collection Interface Bulk Operations

Bulk operations perform an operation on an entire Collection. You could implement these shorthand operations using the basic operations, though in most cases such implementations would be less efficient. The following are the bulk operations:

* containsAll — returns true if the target Collection contains all of the elements in the specified Collection.

* addAll — adds all of the elements in the specified Collection to the target Collection.

* removeAll — removes from the target Collection all of its elements that are also contained in the specified Collection.

* retainAll — removes from the target Collection all its elements that are not also contained in the specified Collection. That is, it retains only those elements in the target Collection that are also contained in the specified Collection.

* clear — removes all elements from the Collection.


The addAll, removeAll, and retainAll methods all return true if the target Collection was modified in the process of executing the operation.

As a simple example of the power of bulk operations, consider the following idiom to remove all instances of a specified element, e, from a Collection, c.

c.removeAll(Collections.singleton(e));

More specifically, suppose you want to remove all of the null elements from a Collection.

c.removeAll(Collections.singleton(null));

This idiom uses Collections.singleton, which is a static factory method that returns an immutable Set containing only the specified element.
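As another illustration of the bulk operations (the element values below are made up), retainAll can compute an intersection when applied to a copy, so the original set is preserved:

import java.util.*;

public class BulkOpsExample {
    public static void main(String[] args) {
        Set<String> managers = new HashSet<String>(Arrays.asList("Axel", "Florence", "Sean"));
        Set<String> carHolders = new HashSet<String>(Arrays.asList("Axel", "Florence"));

        // Copy first so the original set is not modified, then intersect
        Set<String> managersWithCars = new HashSet<String>(managers);
        managersWithCars.retainAll(carHolders);

        System.out.println(managersWithCars); // [Axel, Florence] (iteration order not guaranteed)
    }
}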

Collection Interface Array Operations

The toArray methods are provided as a bridge between collections and older APIs that expect arrays on input. The array operations allow the contents of a Collection to be translated into an array. The simple form with no arguments creates a new array of Object. The more complex form allows the caller to provide an array or to choose the runtime type of the output array.

For example, suppose that c is a Collection. The following snippet dumps the contents of c into a newly allocated array of Object whose length is identical to the number of elements in c.

Object[] a = c.toArray();

Suppose that c is known to contain only strings (perhaps because c is of type Collection<String>). The following snippet dumps the contents of c into a newly allocated array of String whose length is identical to the number of elements in c.

String[] a = c.toArray(new String[0]);

JDBC

Introduces an API for connectivity between Java applications and a wide range of databases and data sources.

JDBC™ Database Access

The JDBC™ API was designed to keep simple things simple. This means that the JDBC API makes everyday database tasks easy. This trail walks you through examples of using JDBC to execute common SQL statements and perform other tasks common to database applications.


This trail is divided into these lessons:

JDBC Introduction lists JDBC features, describes the JDBC architecture, and reviews SQL commands and relational database concepts.

JDBC Basics covers the JDBC API, which is included in the Java™ SE 6 release.

By the end of the first lesson, you will know how to use the basic JDBC API to create tables, insert values into them, query the tables, retrieve the results of the queries, and update the tables. In this process, you will learn how to use simple statements and prepared statements, and you will see an example of a stored procedure. You will also learn how to perform transactions and how to catch exceptions and warnings.

JDBC Introduction

The JDBC API is a Java API that can access any kind of tabular data, especially data stored in a Relational Database.

JDBC helps you to write Java applications that manage these three programming activities:

1. Connect to a data source, like a database
2. Send queries and update statements to the database
3. Retrieve and process the results received from the database in answer to your query

The following simple code fragment gives a simple example of these three steps:

Connection con = DriverManager.getConnection(
    "jdbc:myDriver:wombat", "myLogin", "myPassword");

Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM Table1");
while (rs.next()) {
    int x = rs.getInt("a");
    String s = rs.getString("b");
    float f = rs.getFloat("c");
}

This short code fragment calls the DriverManager class to connect to a database driver and log into the database, instantiates a Statement object that carries your SQL language query to the database, obtains a ResultSet object that holds the results of your query, and executes a simple while loop, which retrieves and displays those results. It's that simple.
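The following sketch repeats the same three steps with the same placeholder URL and credentials, but closes the statement and connection in a finally block, as real applications normally do:

import java.sql.*;

public class SimpleQuery {
    public static void main(String[] args) throws SQLException {
        // The JDBC URL, login, and password are the placeholder values from the
        // fragment above; substitute those of your own driver and database.
        Connection con = DriverManager.getConnection(
                "jdbc:myDriver:wombat", "myLogin", "myPassword");
        Statement stmt = null;
        try {
            stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM Table1");
            while (rs.next()) {
                System.out.println(rs.getInt("a") + " " + rs.getString("b") + " " + rs.getFloat("c"));
            }
        } finally {
            // Closing the Statement also closes its ResultSet
            if (stmt != null) stmt.close();
            con.close();
        }
    }
}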


JDBC Product Components

JDBC includes four components:

1. The JDBC API — 

The JDBC™ API provides programmatic access to relational data from the Java™ programming language. Using the JDBC API, applications can execute SQL statements, retrieve results, and propagate changes back to an underlying data source. The JDBC API can also interact with multiple data sources in a distributed, heterogeneous environment.

The JDBC API is part of the Java platform, which includes the Java™ Standard Edition (Java™ SE ) and the Java™ Enterprise Edition (Java™ EE). The JDBC 4.0 API is divided into two packages: java.sql and javax.sql. Both packages are included in the Java SE and Java EE platforms.

2. JDBC Driver Manager — 

The JDBC DriverManager class defines objects which can connect Java applications to a JDBC driver. DriverManager has traditionally been the backbone of the JDBC architecture. It is quite small and simple.

The Standard Extension packages javax.naming and javax.sql let you use a DataSource object registered with a Java Naming and Directory Interface™ (JNDI) naming service to establish a connection with a data source. You can use either connecting mechanism, but using a DataSource object is recommended whenever possible; a brief DataSource sketch follows this list of components.

3. JDBC Test Suite  — 

The JDBC driver test suite helps you to determine that JDBC drivers will run your program. These tests are not comprehensive or exhaustive, but they do exercise many of the important features in the JDBC API.

4. JDBC-ODBC Bridge — 

The Java Software bridge provides JDBC access via ODBC drivers. Note that you need to load ODBC binary code onto each client machine that uses this driver. As a result, the ODBC driver is most appropriate on a corporate network where client installations are not a major problem, or for application server code written in Java in a three-tier architecture.
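As noted under item 2, obtaining connections from a DataSource registered with JNDI is the recommended approach. The following is a minimal sketch; the JNDI name "jdbc/EmployeeDB" is hypothetical and stands for whatever name your application server or setup code registered the data source under:

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

public class DataSourceLookup {
    public static Connection getEmployeeConnection() throws NamingException, SQLException {
        Context ctx = new InitialContext();
        // "jdbc/EmployeeDB" is an assumed name; use whatever name the DataSource
        // was registered under in your JNDI naming service.
        DataSource ds = (DataSource) ctx.lookup("jdbc/EmployeeDB");
        return ds.getConnection(); // credentials may also be passed: ds.getConnection(user, password)
    }
}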


This trail uses the first two of these four JDBC components to connect to a database and then build a Java program that uses SQL commands to communicate with a test relational database. The last two components are used in specialized environments to test web applications, or to communicate with ODBC-aware DBMSs.

JDBC Architecture

Two-tier and Three-tier Processing Models

The JDBC API supports both two-tier and three-tier processing models for database access.

Figure 1: Two-tier Architecture for Data Access.

In the two-tier model, a Java applet or application talks directly to the data source. This requires a JDBC driver that can communicate with the particular data source being accessed. A user's commands are delivered to the database or other data source, and the results of those statements are sent back to the user. The data source may be located on another machine to which the user is connected via a network. This is referred to as a client/server configuration, with the user's machine as the client, and the machine housing the data source as the server. The network can be an intranet, which, for example, connects employees within a corporation, or it can be the Internet.

In the three-tier model, commands are sent to a "middle tier" of services, which then sends the commands to the data source. The data source processes the commands and sends the results back to the middle tier, which then sends them to the user. MIS directors find the three-tier model very attractive because the middle tier makes it possible to maintain control over access and the kinds of updates that can be made to corporate data. Another advantage is that it simplifies the deployment of applications. Finally, in many cases, the three-tier architecture can provide performance advantages.

Figure 2: Three-tier Architecture for Data Access.


Until recently, the middle tier has often been written in languages such as C or C++, which offer fast performance. However, with the introduction of optimizing compilers that translate Java bytecode into efficient machine-specific code and technologies such as Enterprise JavaBeans™, the Java platform is fast becoming the standard platform for middle-tier development. This is a big plus, making it possible to take advantage of Java's robustness, multithreading, and security features.

With enterprises increasingly using the Java programming language for writing server code, the JDBC API is being used more and more in the middle tier of a three-tier architecture. Some of the features that make JDBC a server technology are its support for connection pooling, distributed transactions, and disconnected rowsets. The JDBC API is also what allows access to a data source from a Java middle tier.

A Relational Database Overview

A database is a means of storing information in such a way that information can be retrieved from it. In simplest terms, a relational database is one that presents information in tables with rows and columns. A table is referred to as a relation in the sense that it is a collection of objects of the same type (rows). Data in a table can be related according to common keys or concepts, and the ability to retrieve related data from a table is the basis for the term relational database. A Database Management System (DBMS) handles the way data is stored, maintained, and retrieved. In the case of a relational database, a Relational Database Management System (RDBMS) performs these tasks. DBMS as used in this book is a general term that includes RDBMS.

Integrity Rules

Relational tables follow certain integrity rules to ensure that the data they contain stay accurate and are always accessible. First, the rows in a relational table should all be distinct. If there are duplicate rows, there can be problems resolving which of two possible selections is the correct one. For most DBMSs, the user can specify that duplicate rows are not allowed, and if that is done, the DBMS will prevent the addition of any rows that duplicate an existing row.

A second integrity rule of the traditional relational model is that column values must not be repeating groups or arrays. A third aspect of data integrity involves the concept of a null value. A database takes care of situations where data may not be available by using a null value to indicate that a value is missing. It does not equate to a blank or zero. A blank is considered equal to another blank, a zero is equal to another zero, but two null values are not considered equal.

When each row in a table is different, it is possible to use one or more columns to identify a particular row. This unique column or group of columns is called a primary key. Any column that is part of a primary key cannot be null; if it were, the primary key containing it would no longer be a complete identifier. This rule is referred to as entity integrity.

Table 1.2 illustrates some of these relational database concepts. It has five columns and six rows, with each row representing a different employee.

Table 1.2: Employees

Employee_Number  First_Name  Last_Name   Date_of_Birth  Car_Number
10001            Axel        Washington  28-Aug-43      5
10083            Arvid       Sharma      24-Nov-54      null
10120            Jonas       Ginsberg    01-Jan-69      null
10005            Florence    Wojokowski  04-Jul-71      12
10099            Sean        Washington  21-Sep-66      null
10035            Elizabeth   Yamaguchi   24-Dec-59      null

The primary key for this table would generally be the employee number because each one is guaranteed to be different. (A number is also more efficient than a string for making comparisons.) It would also be possible to use First_Name and Last_Name because the combination of the two also identifies just one row in our sample database. Using the last name alone would not work because there are two employees with the last name of "Washington." In this particular case the first names are all different, so one could conceivably use that column as a primary key, but it is best to avoid using a column where duplicates could occur. If Elizabeth Taylor gets a job at this company and the primary key is First_Name, the RDBMS will not allow her name to be added (if it has been specified that no duplicates are permitted). Because there is already an Elizabeth in the table, adding a second one would make the primary key useless as a way of identifying just one row. Note that although using First_Name and Last_Name is a unique composite key for this example, it might not be unique in a larger database. Note also that Table 1.2 assumes that there can be only one car per employee.

SELECT Statements

SQL is a language designed to be used with relational databases. There is a set of basic SQL commands that is considered standard and is used by all RDBMSs. For example, all RDBMSs use the SELECT statement.

A SELECT statement, also called a query, is used to get information from a table. It specifies one or more column headings, one or more tables from which to select, and some criteria for selection. The RDBMS returns rows of the column entries that satisfy the stated requirements. A SELECT statement such as the following will fetch the first and last names of employees who have company cars:

SELECT First_Name, Last_Name
FROM Employees
WHERE Car_Number IS NOT NULL

The result set (the set of rows that satisfy the requirement of not having null in the Car_Number column) follows. The first name and last name are printed for each row that satisfies the requirement because the SELECT statement (the first line) specifies the columns First_Name and Last_Name. The FROM clause (the second line) gives the table from which the columns will be selected.

FIRST_NAME  LAST_NAME
----------  ----------
Axel        Washington
Florence    Wojokowski

The following code produces a result set that includes the whole table because it asks for all of the columns in the table Employees with no restrictions (no WHERE clause). Note that SELECT * means "SELECT all columns."

SELECT *
FROM Employees

WHERE Clauses

The WHERE clause in a SELECT statement provides the criteria for selecting values. For example, in the following code fragment, values will be selected only if they occur in a row in which the column Last_Name begins with the string 'Washington'.

SELECT First_Name, Last_Name
FROM Employees
WHERE Last_Name LIKE 'Washington%'


The keyword LIKE is used to compare strings, and it offers the feature that patterns containing wildcards can be used. For example, in the code fragment above, there is a percent sign (%) at the end of 'Washington', which signifies that any value containing the string 'Washington' plus zero or more additional characters will satisfy this selection criterion. So 'Washington' or 'Washingtonian' would be matches, but 'Washing' would not be. The other wildcard used in LIKE clauses is an underbar (_), which stands for any one character. For example,

WHERE Last_Name LIKE 'Ba_man'

would match 'Batman', 'Barman', 'Badman', 'Balman', 'Bagman', 'Bamman', and so on.

The code fragment below has a WHERE clause that uses the equal sign (=) to compare numbers. It selects the first and last name of the employee who is assigned car 12.

SELECT First_Name, Last_Name
FROM Employees
WHERE Car_Number = 12

The next code fragment selects the first and last names of employees whose employee number is greater than 10005:

SELECT First_Name, Last_Name
FROM Employees
WHERE Employee_Number > 10005

WHERE clauses can get rather elaborate, with multiple conditions and, in some DBMSs, nested conditions. This overview will not cover complicated WHERE clauses, but the following code fragment has a WHERE clause with two conditions; this query selects the first and last names of employees whose employee number is less than 10100 and who do not have a company car.

SELECT First_Name, Last_Name
FROM Employees
WHERE Employee_Number < 10100 AND Car_Number IS NULL
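In a Java program, the values used in a WHERE clause are usually supplied as parameters of a PreparedStatement rather than concatenated into the SQL string. A minimal sketch, assuming an already-open Connection to the database that holds the Employees table:

import java.sql.*;

public class EmployeesWithoutCars {
    // con is an already-open connection to the database that holds the Employees table
    public static void print(Connection con) throws SQLException {
        PreparedStatement pstmt = con.prepareStatement(
                "SELECT First_Name, Last_Name FROM Employees " +
                "WHERE Employee_Number < ? AND Car_Number IS NULL");
        try {
            pstmt.setInt(1, 10100);   // the cutoff used in the example above
            ResultSet rs = pstmt.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString("First_Name") + " " + rs.getString("Last_Name"));
            }
        } finally {
            pstmt.close();
        }
    }
}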

A special type of WHERE clause involves a join, which is explained in the next section.

Joins

A distinguishing feature of relational databases is that it is possible to get data from more than one table in what is called a join. Suppose that after retrieving the names of employees who have company cars, one wanted to find out who has which car, including the make, model, and year of car. This information is stored in another table, Cars, shown in Table 1.3.

Table 1.3. Cars

Car_Number  Make    Model     Year
5           Honda   Civic DX  1996
12          Toyota  Corolla   1999

There must be one column that appears in both tables in order to relate them to each other. This column, which must be the primary key in one table, is called the foreign key in the other table. In this case, the column that appears in two tables is Car_Number, which is the primary key for the table Cars and the foreign key in the table Employees. If the 1996 Honda Civic were wrecked and deleted from the Cars table, then Car_Number 5 would also have to be removed from the Employees table in order to maintain what is called referential integrity. Otherwise, the foreign key column (Car_Number) in Employees would contain an entry that did not refer to anything in Cars. A foreign key must either be null or equal to an existing primary key value of the table to which it refers. This is different from a primary key, which may not be null. There are several null values in the Car_Number column in the table Employees because it is possible for an employee not to have a company car.

The following code asks for the first and last names of employees who have company cars and for the make, model, and year of those cars. Note that the FROM clause lists both Employees and Cars because the requested data is contained in both tables. Using the table name and a dot (.) before the column name indicates which table contains the column.

SELECT Employees.First_Name, Employees.Last_Name, Cars.Make, Cars.Model, Cars.Year
FROM Employees, Cars
WHERE Employees.Car_Number = Cars.Car_Number

This returns a result set that will look similar to the following:

FIRST_NAME  LAST_NAME   MAKE    MODEL     YEAR
----------  ----------  ------  --------  ----
Axel        Washington  Honda   Civic DX  1996
Florence    Wojokowski  Toyota  Corolla   1999

Common SQL Commands

SQL commands are divided into categories, the two main ones being Data Manipulation Language (DML) commands and Data Definition Language (DDL) commands. DML commands deal with data, either retrieving it or modifying it to keep it up-to-date. DDL commands create or change tables and other database objects such as views and indexes.

A list of the more common DML commands follows:

SELECT —  used to query and display data from a database. The SELECT statement specifies which columns to include in the result set. The vast majority of the SQL commands used in applications are SELECT statements.

INSERT —  adds new rows to a table. INSERT is used to populate a newly created table or to add a new row (or rows) to an already-existing table.

DELETE —  removes a specified row or set of rows from a table

UPDATE —  changes an existing value in a column or group of columns in a table

The more common DDL commands follow:

CREATE TABLE —  creates a table with the column names the user provides. The user also needs to specify a type for the data in each column. Data types vary from one RDBMS to another, so a user might need to use metadata to establish the data types used by a particular database. CREATE TABLE is normally used less often than the data manipulation commands because a table is created only once, whereas adding or deleting rows or changing individual values generally occurs more frequently.

DROP TABLE —  deletes all rows and removes the table definition from the database. A JDBC API implementation is required to support the DROP TABLE command as specified by SQL92, Transitional Level. However, support for the CASCADE and RESTRICT options of DROP TABLE is optional. In addition, the behavior of DROP TABLE is implementation-defined when there are views or integrity constraints defined that reference the table being dropped.

ALTER TABLE —  adds or removes a column from a table. It also adds or drops table constraints and alters column attributes
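From Java, both DML and DDL commands are issued through a Statement. The following minimal sketch, assuming an open Connection, creates a table like the Cars table shown earlier in this overview; the exact column types (and whether YEAR is accepted as a column name) vary by DBMS, so adjust the statement to your database:

import java.sql.*;

public class CreateCarsTable {
    public static void create(Connection con) throws SQLException {
        Statement stmt = con.createStatement();
        try {
            // Illustrative DDL; column types and reserved words differ between RDBMSs
            stmt.executeUpdate(
                "CREATE TABLE Cars (Car_Number INTEGER PRIMARY KEY, " +
                "Make VARCHAR(32), Model VARCHAR(32), Year INTEGER)");
        } finally {
            stmt.close();
        }
    }
}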


Result Sets and Cursors

The rows that satisfy the conditions of a query are called the result set. The number of rows returned in a result set can be zero, one, or many. A user can access the data in a result set one row at a time, and a cursor provides the means to do that. A cursor can be thought of as a pointer into a file that contains the rows of the result set, and that pointer has the ability to keep track of which row is currently being accessed. A cursor allows a user to process each row of a result set from top to bottom and consequently may be used for iterative processing. Most DBMSs create a cursor automatically when a result set is generated.

Earlier JDBC API versions added new capabilities for a result set's cursor, allowing it to move both forward and backward and also allowing it to move to a specified row or to a row whose position is relative to another row.

Transactions

When one user is accessing data in a database, another user may be accessing the same data at the same time. If, for instance, the first user is updating some columns in a table at the same time the second user is selecting columns from that same table, it is possible for the second user to get partly old data and partly updated data. For this reason, DBMSs use transactions to maintain data in a consistent state (data consistency) while allowing more than one user to access a database at the same time (data concurrency).

A transaction is a set of one or more SQL statements that make up a logical unit of work. A transaction ends with either a commit or a rollback, depending on whether there are any problems with data consistency or data concurrency. The commit statement makes permanent the changes resulting from the SQL statements in the transaction, and the rollback statement undoes all changes resulting from the SQL statements in the transaction.
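In JDBC, a transaction is controlled through the Connection: auto-commit is turned off, the statements are executed, and the work is then committed or rolled back as a unit. A minimal sketch, assuming an open Connection and the Employees table used in this overview:

import java.sql.*;

public class ReassignCar {
    public static void reassign(Connection con, int carNo, int fromEmp, int toEmp) throws SQLException {
        boolean oldAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false);   // start a transaction: changes are not committed automatically
        try {
            PreparedStatement clear = con.prepareStatement(
                "UPDATE Employees SET Car_Number = NULL WHERE Employee_Number = ?");
            clear.setInt(1, fromEmp);
            clear.executeUpdate();
            clear.close();

            PreparedStatement assign = con.prepareStatement(
                "UPDATE Employees SET Car_Number = ? WHERE Employee_Number = ?");
            assign.setInt(1, carNo);
            assign.setInt(2, toEmp);
            assign.executeUpdate();
            assign.close();

            con.commit();           // both updates become permanent together
        } catch (SQLException e) {
            con.rollback();         // undo both updates if either one fails
            throw e;
        } finally {
            con.setAutoCommit(oldAutoCommit);
        }
    }
}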

A lock is a mechanism that prohibits two transactions from manipulating the same data at the same time. For example, a table lock prevents a table from being dropped if there is an uncommitted transaction on that table. In some DBMSs, a table lock also locks all of the rows in a table. A row lock prevents two transactions from modifying the same row, or it prevents one transaction from selecting a row while another transaction is still modifying it.


Stored Procedures

A stored procedure is a group of SQL statements that can be called by name. In other words, it is executable code, a mini-program, that performs a particular task that can be invoked the same way one can call a function or method. Traditionally, stored procedures have been written in a DBMS-specific programming language. The latest generation of database products allows stored procedures to be written using the Java programming language and the JDBC API. Stored procedures written in the Java programming language are bytecode portable between DBMSs. Once a stored procedure is written, it can be used and reused because a DBMS that supports stored procedures will, as its name implies, store it in the database.

The following code is an example of how to create a very simple stored procedure using the Java programming language. Note that the stored procedure is just a static Java method that contains normal JDBC code. It accepts two input parameters and uses them to change an employee's car number.

Do not worry if you do not understand the example at this point. The code example below is presented only to illustrate what a stored procedure looks like. You will learn how to write the code in this example in the tutorials that follow.

import java.sql.*;

public class UpdateCar {

    public static void UpdateCarNum(int carNo, int empNo) throws SQLException {
        Connection con = null;
        PreparedStatement pstmt = null;

        try {
            con = DriverManager.getConnection("jdbc:default:connection");

            pstmt = con.prepareStatement(
                "UPDATE EMPLOYEES SET CAR_NUMBER = ? " +
                "WHERE EMPLOYEE_NUMBER = ?");
            pstmt.setInt(1, carNo);
            pstmt.setInt(2, empNo);
            pstmt.executeUpdate();
        } finally {
            if (pstmt != null)
                pstmt.close();
        }
    }
}
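Once the procedure has been installed in the database, a client invokes it through a CallableStatement. In the sketch below the procedure name UPDATE_CAR_NUM is hypothetical; the name under which the Java method is registered is DBMS-specific:

import java.sql.*;

public class CallUpdateCar {
    public static void updateCar(Connection con, int carNo, int empNo) throws SQLException {
        // {call ...} is the standard JDBC escape syntax for invoking a stored procedure
        CallableStatement cs = con.prepareCall("{call UPDATE_CAR_NUM(?, ?)}");
        try {
            cs.setInt(1, carNo);
            cs.setInt(2, empNo);
            cs.executeUpdate();
        } finally {
            cs.close();
        }
    }
}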


Metadata

Databases store user data, and they also store information about the database itself. Most DBMSs have a set of system tables, which list tables in the database, column names in each table, primary keys, foreign keys, stored procedures, and so forth. Each DBMS has its own functions for getting information about table layouts and database features. JDBC provides the interface DatabaseMetaData, which a driver writer must implement so that its methods return information about the driver and/or DBMS for which the driver is written. For example, a large number of methods return whether or not the driver supports a particular functionality. This interface gives users and tools a standardized way to get metadata. In general, developers writing tools and drivers are the ones most likely to be concerned with metadata.
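As a brief illustration (not from the original text), the following sketch obtains the DatabaseMetaData from an open connection and queries a few of its capabilities:

import java.sql.*;

public class ShowMetadata {
    public static void print(Connection con) throws SQLException {
        DatabaseMetaData md = con.getMetaData();
        System.out.println("Driver: " + md.getDriverName() + " " + md.getDriverVersion());
        System.out.println("Supports transactions: " + md.supportsTransactions());
        System.out.println("Supports batch updates: " + md.supportsBatchUpdates());

        // List the tables visible to this connection
        // (arguments: catalog, schema pattern, table name pattern, types)
        ResultSet tables = md.getTables(null, null, "%", new String[] {"TABLE"});
        while (tables.next()) {
            System.out.println("Table: " + tables.getString("TABLE_NAME"));
        }
        tables.close();
    }
}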

JDBC Architecture

The JDBC API supports both two-tier and three-tier processing models for database access.

Figure 1: Two-tier Architecture for Data Access.

In the two-tier model, a Java application talks directly to the data source. This requires a JDBC driver that can communicate with the particular data source being accessed. A user's commands are delivered to the database or other data source, and the results of those statements are sent back to the user. The data source may be located on another machine to which the user is connected via a network. This is referred to as a client/server configuration, with the user's machine as the client, and the machine housing the data source as the server. The network can be an intranet, which, for example, connects employees within a corporation, or it can be the Internet.

In the three-tier model, commands are sent to a "middle tier" of services, which then sends the commands to the data source. The data source processes the commands and sends the results back to the middle tier, which then sends them to the user. MIS directors find the three-tier model very attractive because the middle tier makes it possible to maintain control over access and the kinds of updates that can be made to corporate data. Another advantage is that it

25

Page 26: Single Source.1

simplifies the deployment of applications. Finally, in many cases, the three-tier architecture can provide performance advantages.

Figure 2: Three-tier Architecture for Data Access.

Until recently, the middle tier has often been written in languages such as C or C++, which offer fast performance. However, with the introduction of optimizing compilers that translate Java bytecode into efficient machine-specific code and technologies such as Enterprise JavaBeans™, the Java platform is fast becoming the standard platform for middle-tier development. This is a big plus, making it possible to take advantage of Java's robustness, multithreading, and security features.

With enterprises increasingly using the Java programming language for writing server code, the JDBC API is being used more and more in the middle tier of a three-tier architecture. Some of the features that make JDBC a server technology are its support for connection pooling, distributed transactions, and disconnected rowsets. The JDBC API is also what allows access to a data source from a Java middle tier.

A Relational Database Overview

A database is a means of storing information in such a way that information can be retrieved from it. In simplest terms, a relational database is one that presents information in tables with rows and columns. A table is referred to as a relation in the sense that it is a collection of objects of the same type (rows). Data in a table can be related according to common keys or concepts, and the ability to retrieve related data from a table is the basis for the term relational database. A Database Management System (DBMS) handles the way data is stored, maintained, and retrieved. In the case of a relational database, a Relational Database Management System (RDBMS) performs these tasks. DBMS as used in this book is a general term that includes RDBMS.

26

Page 27: Single Source.1

Integrity Rules

Relational tables follow certain integrity rules to ensure that the data they contain stay accurate and are always accessible. First, the rows in a relational table should all be distinct. If there are duplicate rows, there can be problems resolving which of two possible selections is the correct one. For most DBMSs, the user can specify that duplicate rows are not allowed, and if that is done, the DBMS will prevent the addition of any rows that duplicate an existing row.

A second integrity rule of the traditional relational model is that column values must not be repeating groups or arrays. A third aspect of data integrity involves the concept of a null value. A database takes care of situations where data may not be available by using a null value to indicate that a value is missing. It does not equate to a blank or zero. A blank is considered equal to another blank, a zero is equal to another zero, but two null values are not considered equal.

When each row in a table is different, it is possible to use one or more columns to identify a particular row. This unique column or group of columns is called a primary key. Any column that is part of a primary key cannot be null; if it were, the primary key containing it would no longer be a complete identifier. This rule is referred to as entity integrity.

Table 1.2 illustrates some of these relational database concepts. It has five columns and six rows, with each row representing a different employee.

Table 1.2: Employees

Employee_Number First_name Last_Name Date_of_Birth Car_Number

10001 Axel Washington 28-Aug-43 5

10083 Arvid Sharma 24-Nov-54 null

10120 Jonas Ginsberg 01-Jan-69 null

10005 Florence Wojokowski 04-Jul-71 12

10099 Sean Washington 21-Sep-66 null

10035 Elizabeth Yamaguchi 24-Dec-59 null

The primary key for this table would generally be the employee number because each one is guaranteed to be different. (A number is also more efficient than a string for making comparisons.) It would also be possible to use First_Name and Last_Name because the combination of the two also identifies just one row in our sample database. Using the last name alone would not work because there are two employees with the last name of "Washington." In this particular case the first names are all different, so one could conceivably use that column as a primary key, but it is best to avoid using a column where duplicates could occur.

27

Page 28: Single Source.1

If Elizabeth Taylor gets a job at this company and the primary key is First_Name, the RDBMS will not allow her name to be added (if it has been specified that no duplicates are permitted). Because there is already an Elizabeth in the table, adding a second one would make the primary key useless as a way of identifying just one row. Note that although using First_Name and Last_Name is a unique composite key for this example, it might not be unique in a larger database. Note also that Table 1.2 assumes that there can be only one car per employee.

SELECT Statements

SQL is a language designed to be used with relational databases. There is a set of basic SQL commands that is considered standard and is used by all RDBMSs. For example, all RDBMSs use the SELECT statement.

A SELECT statement, also called a query, is used to get information from a table. It specifies one or more column headings, one or more tables from which to select, and some criteria for selection. The RDBMS returns rows of the column entries that satisfy the stated requirements. A SELECT statement such as the following will fetch the first and last names of employees who have company cars:

SELECT First_Name, Last_NameFROM EmployeesWHERE Car_Number IS NOT NULL

The result set (the set of rows that satisfy the requirement of not having null in the Car_Number column) follows. The first name and last name are printed for each row that satisfies the requirement because the SELECT statement (the first line) specifies the columns First_Name and Last_Name. The FROM clause (the second line) gives the table from which the columns will be selected.

FIRST_NAME LAST_NAME---------- -----------Axel WashingtonFlorence Wojokowski

The following code produces a result set that includes the whole table because it asks for all of the columns in the table Employees with no restrictions (no WHERE clause). Note that SELECT * means "SELECT all columns."

SELECT *FROM Employees

WHERE Clauses

The WHERE clause in a SELECT statement provides the criteria for selecting values. For example, in the following code fragment, values will be selected only if they

28

Page 29: Single Source.1

occur in a row in which the column Last_Name begins with the string 'Washington'.

SELECT First_Name, Last_NameFROM EmployeesWHERE Last_Name LIKE 'Washington%'

The keyword LIKE is used to compare strings, and it offers the feature that patterns containing wildcards can be used. For example, in the code fragment above, there is a percent sign (%) at the end of 'Washington', which signifies that any value containing the string 'Washington' plus zero or more additional characters will satisfy this selection criterion. So 'Washington' or 'Washingtonian' would be matches, but 'Washing' would not be. The other wildcard used in LIKE clauses is an underbar (_), which stands for any one character. For example,

WHERE Last_Name LIKE 'Ba_man'

would match 'Batman', 'Barman', 'Badman', 'Balman', 'Bagman', 'Bamman', and so on.

The code fragment below has a WHERE clause that uses the equal sign (=) to compare numbers. It selects the first and last name of the employee who is assigned car 12.

SELECT First_Name, Last_NameFROM EmployeesWHERE Car_Number = 12

The next code fragment selects the first and last names of employees whose employee number is greater than 10005:

SELECT First_Name, Last_NameFROM EmployeesWHERE Employee_Number > 10005

WHERE clauses can get rather elaborate, with multiple conditions and, in some DBMSs, nested conditions. This overview will not cover complicated WHERE clauses, but the following code fragment has a WHERE clause with two conditions; this query selects the first and last names of employees whose employee number is less than 10100 and who do not have a company car.

SELECT First_Name, Last_NameFROM EmployeesWHERE Employee_Number < 10100 and Car_Number IS NULL

A special type of WHERE clause involves a join, which is explained in the next section.

29

Page 30: Single Source.1

Joins

A distinguishing feature of relational databases is that it is possible to get data from more than one table in what is called a join. Suppose that after retrieving the names of employees who have company cars, one wanted to find out who has which car, including the make, model, and year of car. This information is stored in another table, Cars, shown in Table 1.3.

Table 1.3. Cars

Car Number Make Model Year

5 Honda Civic DX 1996

12 Toyota Corolla 1999

There must be one column that appears in both tables in order to relate them to each other. This column, which must be the primary key in one table, is called the foreign key in the other table. In this case, the column that appears in two tables is Car_Number, which is the primary key for the table Cars and the foreign key in the table Employees. If the 1996 Honda Civic were wrecked and deleted from the Cars table, then Car_Number 5 would also have to be removed from the Employees table in order to maintain what is called referential integrity. Otherwise, the foreign key column (Car_Number) in Employees would contain an entry that did not refer to anything in Cars. A foreign key must either be null or equal to an existing primary key value of the table to which it refers. This is different from a primary key, which may not be null. There are several null values in the Car_Number column in the table Employees because it is possible for an employee not to have a company car.

The following code asks for the first and last names of employees who have company cars and for the make, model, and year of those cars. Note that the FROM clause lists both Employees and Cars because the requested data is contained in both tables. Using the table name and a dot (.) before the column name indicates which table contains the column.

SELECT Employees.First_Name, Employees.Last_Name, Cars.Make, Cars.Model, Cars.YearFROM Employees, CarsWHERE Employees.Car_Number = Cars.Car_Number

This returns a result set that will look similar to the following:

FIRST_NAME   LAST_NAME    MAKE     MODEL      YEAR
----------   ----------   ------   --------   ----
Axel         Washington   Honda    Civic DX   1996
Florence     Wojokowski   Toyota   Corolla    1999


Common SQL Commands

SQL commands are divided into categories, the two main ones being Data Manipulation Language (DML) commands and Data Definition Language (DDL) commands. DML commands deal with data, either retrieving it or modifying it to keep it up-to-date. DDL commands create or change tables and other database objects such as views and indexes.

A list of the more common DML commands follows:

SELECT —  used to query and display data from a database. The SELECT statement specifies which columns to include in the result set. The vast majority of the SQL commands used in applications are SELECT statements.

INSERT —  adds new rows to a table. INSERT is used to populate a newly created table or to add a new row (or rows) to an already-existing table.

DELETE —  removes a specified row or set of rows from a table.

UPDATE —  changes an existing value in a column or group of columns in a table.

The more common DDL commands follow:

CREATE TABLE —  creates a table with the column names the user provides. The user also needs to specify a type for the data in each column. Data types vary from one RDBMS to another, so a user might need to use metadata to establish the data types used by a particular database. CREATE TABLE is normally used less often than the data manipulation commands because a table is created only once, whereas adding or deleting rows or changing individual values generally occurs more frequently.

DROP TABLE —  deletes all rows and removes the table definition from the database. A JDBC API implementation is required to support the DROP TABLE command as specified by SQL92, Transitional Level. However, support for the CASCADE and RESTRICT options of DROP TABLE is optional. In addition, the behavior of DROP TABLE is implementation-defined when there are views or integrity constraints defined that reference the table being dropped.


ALTER TABLE —  adds or removes a column from a table. It also adds or drops table constraints and alters column attributes.
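The following is a minimal sketch (an addition, not part of the original text) of issuing the common DML and DDL commands above through JDBC. The table and column names echo the Cars example, but the data types and the Cars_Backup table itself are assumptions and may need adjusting for a particular DBMS.

import java.sql.*;

public class SqlCommandExamples {
    public static void run(Connection con) throws SQLException {
        Statement stmt = con.createStatement();
        try {
            // DDL: create a table
            stmt.executeUpdate("CREATE TABLE Cars_Backup " +
                               "(Car_Number INTEGER, Make VARCHAR(32), " +
                               " Model VARCHAR(32), Model_Year INTEGER)");

            // DML: insert, update, and delete rows
            stmt.executeUpdate("INSERT INTO Cars_Backup " +
                               "VALUES (5, 'Honda', 'Civic DX', 1996)");
            stmt.executeUpdate("UPDATE Cars_Backup SET Model_Year = 1997 " +
                               "WHERE Car_Number = 5");
            stmt.executeUpdate("DELETE FROM Cars_Backup WHERE Car_Number = 5");

            // DDL: remove the table again
            stmt.executeUpdate("DROP TABLE Cars_Backup");
        } finally {
            stmt.close();
        }
    }
}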

Result Sets and Cursors

The rows that satisfy the conditions of a query are called the result set. The number of rows returned in a result set can be zero, one, or many. A user can access the data in a result set one row at a time, and a cursor provides the means to do that. A cursor can be thought of as a pointer into a file that contains the rows of the result set, and that pointer has the ability to keep track of which row is currently being accessed. A cursor allows a user to process each row of a result set from top to bottom and consequently may be used for iterative processing. Most DBMSs create a cursor automatically when a result set is generated.

Earlier JDBC API versions added new capabilities for a result set's cursor, allowing it to move both forward and backward and also allowing it to move to a specified row or to a row whose position is relative to another row.
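A minimal sketch (added here, not from the original text) of walking a result set with its cursor follows. The scrollable result set type requested in createStatement is optional and depends on driver support.

import java.sql.*;

public class CursorExample {
    public static void printNames(Connection con) throws SQLException {
        Statement stmt = con.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE,
                ResultSet.CONCUR_READ_ONLY);
        ResultSet rs = stmt.executeQuery(
                "SELECT First_Name, Last_Name FROM Employees");
        while (rs.next()) {          // the cursor moves forward one row at a time
            System.out.println(rs.getString(1) + " " + rs.getString(2));
        }
        rs.first();                  // a scrollable cursor can also move back
        rs.close();
        stmt.close();
    }
}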

Transactions

When one user is accessing data in a database, another user may be accessing the same data at the same time. If, for instance, the first user is updating some columns in a table at the same time the second user is selecting columns from that same table, it is possible for the second user to get partly old data and partly updated data. For this reason, DBMSs use transactions to maintain data in a consistent state (data consistency) while allowing more than one user to access a database at the same time (data concurrency).

A transaction is a set of one or more SQL statements that make up a logical unit of work. A transaction ends with either a commit or a rollback, depending on whether there are any problems with data consistency or data concurrency. The commit statement makes permanent the changes resulting from the SQL statements in the transaction, and the rollback statement undoes all changes resulting from the SQL statements in the transaction.
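The sketch below (an illustration added here; the employee and car numbers are only examples) groups two UPDATE statements into one transaction, so that either both changes are committed or both are rolled back.

import java.sql.*;

public class TransactionExample {
    public static void transferCar(Connection con) throws SQLException {
        con.setAutoCommit(false);           // start an explicit transaction
        try {
            Statement stmt = con.createStatement();
            stmt.executeUpdate("UPDATE Employees SET Car_Number = NULL " +
                               "WHERE Employee_Number = 10005");
            stmt.executeUpdate("UPDATE Employees SET Car_Number = 12 " +
                               "WHERE Employee_Number = 10006");
            stmt.close();
            con.commit();                   // make both changes permanent together
        } catch (SQLException e) {
            con.rollback();                 // undo both changes on any failure
            throw e;
        } finally {
            con.setAutoCommit(true);
        }
    }
}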

A lock is a mechanism that prohibits two transactions from manipulating the same data at the same time. For example, a table lock prevents a table from being dropped if there is an uncommitted transaction on that table. In some DBMSs, a table lock also locks all of the rows in a table. A row lock prevents two transactions from modifying the same row, or it prevents one transaction from selecting a row while another transaction is still modifying it.


Stored Procedures

A stored procedure is a group of SQL statements that can be called by name. In other words, it is executable code, a mini-program, that performs a particular task that can be invoked the same way one can call a function or method. Traditionally, stored procedures have been written in a DBMS-specific programming language. The latest generation of database products allows stored procedures to be written using the Java programming language and the JDBC API. Stored procedures written in the Java programming language are bytecode portable between DBMSs. Once a stored procedure is written, it can be used and reused because a DBMS that supports stored procedures will, as its name implies, store it in the database.

The following code is an example of how to create a very simple stored procedure using the Java programming language. Note that the stored procedure is just a static Java method that contains normal JDBC code. It accepts two input parameters and uses them to change an employee's car number.

Do not worry if you do not understand the example at this point. The code example below is presented only to illustrate what a stored procedure looks like. You will learn how to write the code in this example in the tutorials that follow.

import java.sql.*;

public class UpdateCar {

    public static void UpdateCarNum(int carNo, int empNo)
            throws SQLException {

        Connection con = null;
        PreparedStatement pstmt = null;

        try {
            con = DriverManager.getConnection(
                      "jdbc:default:connection");

            pstmt = con.prepareStatement(
                        "UPDATE EMPLOYEES SET CAR_NUMBER = ? " +
                        "WHERE EMPLOYEE_NUMBER = ?");
            pstmt.setInt(1, carNo);
            pstmt.setInt(2, empNo);
            pstmt.executeUpdate();
        } finally {
            if (pstmt != null) pstmt.close();
        }
    }
}
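Once such a procedure has been installed in the database, it can be invoked through a CallableStatement. The sketch below is an assumption for illustration: it presumes the procedure has been registered under the name UPDATE_CAR_NUM, and the installation syntax itself is DBMS-specific.

import java.sql.*;

public class CallUpdateCar {
    public static void call(Connection con, int carNo, int empNo)
            throws SQLException {
        // The JDBC escape syntax {call ...} works across DBMSs that
        // support stored procedures.
        CallableStatement cs = con.prepareCall("{call UPDATE_CAR_NUM(?, ?)}");
        try {
            cs.setInt(1, carNo);
            cs.setInt(2, empNo);
            cs.execute();
        } finally {
            cs.close();
        }
    }
}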


Metadata

Databases store user data, and they also store information about the database itself. Most DBMSs have a set of system tables, which list tables in the database, column names in each table, primary keys, foreign keys, stored procedures, and so forth. Each DBMS has its own functions for getting information about table layouts and database features. JDBC provides the interface DatabaseMetaData, which a driver writer must implement so that its methods return information about the driver and/or DBMS for which the driver is written. For example, a large number of methods return whether or not the driver supports a particular functionality. This interface gives users and tools a standardized way to get metadata. In general, developers writing tools and drivers are the ones most likely to be concerned with metadata.
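A minimal sketch (added here) of the kind of questions an application or tool might ask through DatabaseMetaData; the exact values returned depend on the driver and DBMS.

import java.sql.*;

public class MetadataExample {
    public static void describe(Connection con) throws SQLException {
        DatabaseMetaData dbmd = con.getMetaData();
        System.out.println("Driver: " + dbmd.getDriverName());
        System.out.println("Supports transactions: " + dbmd.supportsTransactions());

        // List the tables visible in the current catalog/schema
        ResultSet tables = dbmd.getTables(null, null, "%", new String[] {"TABLE"});
        while (tables.next()) {
            System.out.println("Table: " + tables.getString("TABLE_NAME"));
        }
        tables.close();
    }
}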

Deployment

Packaging Programs in JAR Files

The Java™ Archive (JAR) file format enables you to bundle multiple files into a single archive file. Typically a JAR file contains the class files and auxiliary resources associated with applets and applications.

The JAR file format provides many benefits:

Security: You can digitally sign the contents of a JAR file. Users who recognize your signature can then optionally grant your software security privileges it wouldn't otherwise have.

Decreased download time: If your applet is bundled in a JAR file, the applet's class files and associated resources can be downloaded to a browser in a single HTTP transaction without the need for opening a new connection for each file.

Compression: The JAR format allows you to compress your files for efficient storage.

Packaging for extensions: The extensions framework provides a means by which you can add functionality to the Java core platform, and the JAR file format defines the packaging for extensions. Java 3D™ and JavaMail™ are examples of extensions developed by Sun™. By using the JAR file format, you can turn your software into extensions as well.

Package Sealing: Packages stored in JAR files can be optionally sealed so that the package can enforce version consistency. Sealing a package within a JAR file means that all classes defined in that package must be found in the same JAR file.

Package Versioning: A JAR file can hold data about the files it contains, such as vendor and version information.

Portability: The mechanism for handling JAR files is a standard part of the Java platform's core API.
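As a small illustration of that core-API support (an addition, not part of the original text), the sketch below lists the entries of a JAR file and reads version information from its manifest; the file name app.jar is only a placeholder.

import java.io.IOException;
import java.util.Enumeration;
import java.util.jar.Attributes;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

public class JarInspector {
    public static void main(String[] args) throws IOException {
        JarFile jar = new JarFile("app.jar");

        // Read version information from the manifest, if present
        Manifest mf = jar.getManifest();
        if (mf != null) {
            Attributes attrs = mf.getMainAttributes();
            System.out.println("Implementation-Version: "
                    + attrs.getValue(Attributes.Name.IMPLEMENTATION_VERSION));
        }

        // List every entry bundled in the archive
        for (Enumeration<JarEntry> e = jar.entries(); e.hasMoreElements();) {
            System.out.println(e.nextElement().getName());
        }
        jar.close();
    }
}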


Threads

Reflection

The Reflection API

Uses of Reflection

Reflection is commonly used by programs which require the ability to examine or modify the runtime behavior of applications running in the Java virtual machine. This is a relatively advanced feature and should be used only by developers who have a strong grasp of the fundamentals of the language. With that caveat in mind, reflection is a powerful technique and can enable applications to perform operations which would otherwise be impossible.

Extensibility Features An application may make use of external, user-defined classes by creating instances of extensibility objects using their fully-qualified names.

Class Browsers and Visual Development Environments A class browser needs to be able to enumerate the members of classes. Visual development environments can benefit from making use of type information available in reflection to aid the developer in writing correct code.

Debuggers and Test Tools Debuggers need to be able to examine private members on classes. Test harnesses can make use of reflection to systematically call a discoverable set of APIs defined on a class, to ensure a high level of code coverage in a test suite.
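A minimal sketch (added for illustration) of the kind of member discovery a class browser or test harness might perform: it enumerates the declared methods of a class and reflectively invokes one of them.

import java.lang.reflect.Method;

public class MemberDiscovery {
    public static void main(String[] args) throws Exception {
        Class<?> c = Class.forName("java.lang.String");

        // Enumerate the members of the class
        for (Method m : c.getDeclaredMethods()) {
            System.out.println(m);
        }

        // Invoke a discovered no-argument method on an instance
        Method lengthMethod = c.getMethod("length");
        Object result = lengthMethod.invoke("reflection");
        System.out.println("length() returned " + result);
    }
}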

Drawbacks of Reflection

Reflection is powerful, but should not be used indiscriminately. If it is possible to perform an operation without using reflection, then it is preferable to avoid using it. The following concerns should be kept in mind when accessing code via reflection.

Performance Overhead

Because reflection involves types that are dynamically resolved, certain Java virtual machine optimizations cannot be performed. Consequently, reflective operations have slower performance than their non-reflective counterparts, and should be avoided in sections of code which are called frequently in performance-sensitive applications.

Security Restrictions Reflection requires a runtime permission which may not be present when running under a security manager. This is an important consideration for code which has to run in a restricted security context, such as in an Applet.

Exposure of Internals


Since reflection allows code to perform operations that would be illegal in non-reflective code, such as accessing private fields and methods, the use of reflection can result in unexpected side-effects, which may render code dysfunctional and may destroy portability. Reflective code breaks abstractions and therefore may change behavior with upgrades of the platform.

Trail Lessons

This trail covers common uses of reflection for accessing and manipulating classes, fields, methods, and constructors. Each lesson contains code examples, tips, and troubleshooting information.

Classes This lesson shows the various ways to obtain a Class object and use it to examine properties of a class, including its declaration and contents.

Members This lesson describes how to use the Reflection APIs to find the fields, methods, and constructors of a class. Examples are provided for setting and getting field values, invoking methods, and creating new instances of objects using specific constructors.

Arrays and Enumerated Types This lesson introduces two special types of classes: arrays, which are generated at runtime, and enum types, which define unique named object instances. Sample code shows how to retrieve the component type for an array and how to set and get fields with array or enum types.

Note: The examples in this trail are designed for experimenting with the Reflection APIs. The handling of exceptions therefore is not the same as would be used in production code. In particular, in production code it is not recommended to dump stack traces that are visible to the user.

Classes

Every object is either a reference or primitive type. Reference types all inherit from java.lang.Object. Classes, enums, arrays, and interfaces are all reference types. There is a fixed set of primitive types: boolean, byte, short, int, long, char, float, and double. Examples of reference types include java.lang.String, all of the wrapper classes for primitive types such as java.lang.Double, the interface java.io.Serializable, and the enum javax.swing.SortOrder.

For every type of object, the Java virtual machine instantiates an immutable instance of java.lang.Class which provides methods to examine the runtime properties of the object including its members and type information. Class also provides the ability to create new classes and objects. Most importantly, it is the entry point for all of the Reflection APIs. This lesson covers the most commonly used reflection operations involving classes:

Retrieving Class Objects describes the ways to get a Class


Examining Class Modifiers and Types shows how to access the class declaration information

Discovering Class Members illustrates how to list the constructors, fields, methods, and nested classes in a class

Troubleshooting describes common errors encountered when using Class
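As a brief illustration of the first lesson listed above (an addition, not from the original text), the sketch below shows the three common ways to retrieve a Class object; all three yield the same Class instance.

public class RetrieveClassObjects {
    public static void main(String[] args) throws ClassNotFoundException {
        // 1. From an instance, with Object.getClass()
        Class<?> c1 = "some string".getClass();

        // 2. From a type name at compile time, with the .class syntax
        Class<?> c2 = String.class;

        // 3. From a fully qualified name at runtime, with Class.forName()
        Class<?> c3 = Class.forName("java.lang.String");

        System.out.println(c1 == c2 && c2 == c3);   // prints true
    }
}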

Examining Class Modifiers and Types

A class may be declared with one or more modifiers which affect its runtime behavior:

Access modifiers: public, protected, and private
Modifier requiring override: abstract
Modifier restricting to one instance: static
Modifier prohibiting value modification: final
Modifier forcing strict floating point behavior: strictfp
Annotations

Not all modifiers are allowed on all classes, for example an interface cannot be final and an enum cannot be abstract. java.lang.reflect.Modifier contains declarations for all possible modifiers. It also contains methods which may be used to decode the set of modifiers returned by Class.getModifiers().

The ClassDeclarationSpy example shows how to obtain the declaration components of a class including the modifiers, generic type parameters, implemented interfaces, and the inheritance path. Since Class implements the java.lang.reflect.AnnotatedElement interface it is also possible to query the runtime annotations.

import java.lang.annotation.Annotation;
import java.lang.reflect.Modifier;
import java.lang.reflect.Type;
import java.lang.reflect.TypeVariable;
import java.util.Arrays;
import java.util.ArrayList;
import java.util.List;
import static java.lang.System.out;

public class ClassDeclarationSpy {
    public static void main(String... args) {
        try {
            Class<?> c = Class.forName(args[0]);
            out.format("Class:%n  %s%n%n", c.getCanonicalName());
            out.format("Modifiers:%n  %s%n%n",
                       Modifier.toString(c.getModifiers()));

            out.format("Type Parameters:%n");
            TypeVariable[] tv = c.getTypeParameters();
            if (tv.length != 0) {
                out.format("  ");
                for (TypeVariable t : tv)
                    out.format("%s ", t.getName());
                out.format("%n%n");
            } else {
                out.format("  -- No Type Parameters --%n%n");
            }

            out.format("Implemented Interfaces:%n");
            Type[] intfs = c.getGenericInterfaces();
            if (intfs.length != 0) {
                for (Type intf : intfs)
                    out.format("  %s%n", intf.toString());
                out.format("%n");
            } else {
                out.format("  -- No Implemented Interfaces --%n%n");
            }

            out.format("Inheritance Path:%n");
            List<Class> l = new ArrayList<Class>();
            printAncestor(c, l);
            if (l.size() != 0) {
                for (Class<?> cl : l)
                    out.format("  %s%n", cl.getCanonicalName());
                out.format("%n");
            } else {
                out.format("  -- No Super Classes --%n%n");
            }

            out.format("Annotations:%n");
            Annotation[] ann = c.getAnnotations();
            if (ann.length != 0) {
                for (Annotation a : ann)
                    out.format("  %s%n", a.toString());
                out.format("%n");
            } else {
                out.format("  -- No Annotations --%n%n");
            }

        // production code should handle this exception more gracefully
        } catch (ClassNotFoundException x) {
            x.printStackTrace();
        }
    }

    private static void printAncestor(Class<?> c, List<Class> l) {
        Class<?> ancestor = c.getSuperclass();
        if (ancestor != null) {
            l.add(ancestor);
            printAncestor(ancestor, l);
        }
    }
}

A few samples of the output follow; user input is shown on the lines beginning with $.

$ java ClassDeclarationSpy java.util.concurrent.ConcurrentNavigableMap
Class:
  java.util.concurrent.ConcurrentNavigableMap

Modifiers:
  public abstract interface

Type Parameters:
  K V

Implemented Interfaces:
  java.util.concurrent.ConcurrentMap<K, V>
  java.util.NavigableMap<K, V>

Inheritance Path:
  -- No Super Classes --

Annotations:
  -- No Annotations --

This is the actual declaration for java.util.concurrent.ConcurrentNavigableMap in the source code:

public interface ConcurrentNavigableMap<K,V>
    extends ConcurrentMap<K,V>, NavigableMap<K,V>

Note that since this is an interface, it is implicitly abstract. The compiler adds this modifier for every interface. Also, this declaration contains two generic type parameters, K and V. The example code simply prints the names of these parameters, but it is possible to retrieve additional information about them using methods in java.lang.reflect.TypeVariable. Interfaces may also implement other interfaces as shown above.

$ java ClassDeclarationSpy "[Ljava.lang.String;"
Class:
  java.lang.String[]

Modifiers:
  public abstract final

Type Parameters:
  -- No Type Parameters --

Implemented Interfaces:
  interface java.lang.Cloneable
  interface java.io.Serializable

Inheritance Path:
  java.lang.Object

Annotations:
  -- No Annotations --

Since arrays are runtime objects, all of the type information is defined by the Java virtual machine. In particular, arrays implement Cloneable and java.io.Serializable and their direct superclass is always Object.

$ java ClassDeclarationSpy java.io.InterruptedIOException
Class:
  java.io.InterruptedIOException

Modifiers:
  public

Type Parameters:
  -- No Type Parameters --

Implemented Interfaces:
  -- No Implemented Interfaces --

Inheritance Path:
  java.io.IOException
  java.lang.Exception
  java.lang.Throwable
  java.lang.Object

Annotations:
  -- No Annotations --

From the inheritance path, it may be deduced that java.io.InterruptedIOException is a checked exception because RuntimeException is not present.

$ java ClassDeclarationSpy java.security.Identity
Class:
  java.security.Identity

Modifiers:
  public abstract

Type Parameters:
  -- No Type Parameters --

Implemented Interfaces:
  interface java.security.Principal
  interface java.io.Serializable

Inheritance Path:
  java.lang.Object

Annotations:
  @java.lang.Deprecated()

This output shows that java.security.Identity, a deprecated API, possesses the annotation java.lang.Deprecated. This may be used by reflective code to detect deprecated APIs.

Note: Not all annotations are available via reflection. Only those which have a java.lang.annotation.RetentionPolicy of RUNTIME are accessible. Of the three annotations pre-defined in the language, @Deprecated, @Override, and @SuppressWarnings, only @Deprecated is available at runtime.
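A minimal sketch (an illustration added here; both annotation types are hypothetical) showing why retention matters: only the RUNTIME-retained annotation is visible through reflection.

import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class RetentionDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface RuntimeVisible { }

    @Retention(RetentionPolicy.SOURCE)
    @interface SourceOnly { }

    @RuntimeVisible
    @SourceOnly
    static class Annotated { }

    public static void main(String[] args) {
        // Only RuntimeVisible is printed; SourceOnly is discarded by the compiler
        for (Annotation a : Annotated.class.getAnnotations()) {
            System.out.println(a);
        }
    }
}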

Security

Security Features in Java SE

In this trail you'll learn how the built-in Java™ security features protect you from malevolent programs. You'll see how to use tools to control access to resources, to generate and to check digital signatures, and to create and to manage keys needed for signature generation and checking. You'll also see how to incorporate cryptography services, such as digital signature generation and checking, into your programs.

The security features provided by the Java Development Kit (JDK™) are intended for a variety of audiences:


Users running programs:

Built-in security functionality protects you from malevolent programs (including viruses), maintains the privacy of your files and information about you, and authenticates the identity of each code provider. You can subject applications and applets to security controls when you need to.

Developers:

You can use API methods to incorporate security functionality into your programs, including cryptography services and security checks. The API framework enables you to define and integrate your own permissions (controlling access to specific resources), cryptography service implementations, security manager implementations, and policy implementations. In addition, classes are provided for management of your public/private key pairs and public key certificates from people you trust.

Systems administrators, developers, and users:

JDK tools manage your keystore (database of keys and certificates); generate digital signatures for JAR files and verify the authenticity of such signatures and the integrity of the signed contents; and create and modify the policy files that define your installation's security policy.

RMI

The Java Remote Method Invocation (RMI) system allows an object running in one Java virtual machine to invoke methods on an object running in another Java virtual machine. RMI provides for remote communication between programs written in the Java programming language.

Note: If you are connecting to an existing IDL program, you should use Java IDL rather than RMI.

This trail provides a brief overview of the RMI system and then walks through a complete client/server example that uses RMI's unique capabilities to load and to execute user-defined tasks at runtime. The server in the example implements a generic compute engine, which the client uses to compute the value of π.

An Overview of RMI Applications

RMI applications often comprise two separate programs, a server and a client. A typical server program creates some remote objects, makes references to these objects accessible, and waits for clients to invoke methods on these objects. A typical client program obtains a remote reference to one or more remote objects on a server and then invokes methods on them. RMI provides the mechanism by which the server and the client communicate and pass information back and forth. Such an application is sometimes referred to as a distributed object application.


Distributed object applications need to do the following:

Locate remote objects. Applications can use various mechanisms to obtain references to remote objects. For example, an application can register its remote objects with RMI's simple naming facility, the RMI registry. Alternatively, an application can pass and return remote object references as part of other remote invocations.

Communicate with remote objects. Details of communication between remote objects are handled by RMI. To the programmer, remote communication looks similar to regular Java method invocations.

Load class definitions for objects that are passed around. Because RMI enables objects to be passed back and forth, it provides mechanisms for loading an object's class definitions as well as for transmitting an object's data.

The following illustration depicts an RMI distributed application that uses the RMI registry to obtain a reference to a remote object. The server calls the registry to associate (or bind) a name with a remote object. The client looks up the remote object by its name in the server's registry and then invokes a method on it. The illustration also shows that the RMI system uses an existing web server to load class definitions, from server to client and from client to server, for objects when needed.

Advantages of Dynamic Code Loading

One of the central and unique features of RMI is its ability to download the definition of an object's class if the class is not defined in the receiver's Java virtual machine. All of the types and behavior of an object, previously available only in a single Java virtual machine, can be transmitted to another, possibly remote, Java virtual machine. RMI passes objects by their actual classes, so the behavior of the objects is not changed when they are sent to another Java virtual machine. This capability enables new types and behaviors to be introduced into a remote Java virtual machine, thus dynamically extending the behavior of an application. The compute engine example in this trail uses this capability to introduce new behavior to a distributed program.


Remote Interfaces, Objects, and Methods

Like any other Java application, a distributed application built by using Java RMI is made up of interfaces and classes. The interfaces declare methods. The classes implement the methods declared in the interfaces and, perhaps, declare additional methods as well. In a distributed application, some implementations might reside in some Java virtual machines but not others. Objects with methods that can be invoked across Java virtual machines are called remote objects.

An object becomes remote by implementing a remote interface, which has the following characteristics:

A remote interface extends the interface java.rmi.Remote.
Each method of the interface declares java.rmi.RemoteException in its throws clause, in addition to any application-specific exceptions.

RMI treats a remote object differently from a non-remote object when the object is passed from one Java virtual machine to another Java virtual machine. Rather than making a copy of the implementation object in the receiving Java virtual machine, RMI passes a remote stub for a remote object. The stub acts as the local representative, or proxy, for the remote object and basically is, to the client, the remote reference. The client invokes a method on the local stub, which is responsible for carrying out the method invocation on the remote object.

A stub for a remote object implements the same set of remote interfaces that the remote object implements. This property enables a stub to be cast to any of the interfaces that the remote object implements. However, only those methods defined in a remote interface are available to be called from the receiving Java virtual machine.

Creating Distributed Applications by Using RMI

Using RMI to develop a distributed application involves these general steps:

1. Designing and implementing the components of your distributed application.
2. Compiling sources.
3. Making classes network accessible.
4. Starting the application.

Designing and Implementing the Application Components

First, determine your application architecture, including which components are local objects and which components are remotely accessible. This step includes:

Defining the remote interfaces. A remote interface specifies the methods that can be invoked remotely by a client. Clients program to remote interfaces, not to the implementation classes of those interfaces. The design of such interfaces includes the determination of the types of objects that will be used as the parameters and return values for these methods. If any of these interfaces or classes do not yet exist, you need to define them as well.

Implementing the remote objects. Remote objects must implement one or more remote interfaces. The remote object class may include implementations of other interfaces and methods that are available only locally. If any local classes are to be used for parameters or return values of any of these methods, they must be implemented as well.

Implementing the clients. Clients that use remote objects can be implemented at any time after the remote interfaces are defined, including after the remote objects have been deployed.

Compiling Sources

As with any Java program, you use the javac compiler to compile the source files. The source files contain the declarations of the remote interfaces, their implementations, any other server classes, and the client classes.

Note: With versions prior to Java Platform, Standard Edition 5.0, an additional step was required to build stub classes, by using the rmic compiler. However, this step is no longer necessary.

Making Classes Network Accessible

In this step, you make certain class definitions network accessible, such as the definitions for the remote interfaces and their associated types, and the definitions for classes that need to be downloaded to the clients or servers. Class definitions are typically made network accessible through a web server.

Starting the Application

Starting the application includes running the RMI remote object registry, the server, and the client.

The rest of this section walks through the steps used to create a compute engine.

Building a Generic Compute Engine

This trail focuses on a simple, yet powerful, distributed application called a compute engine. The compute engine is a remote object on the server that takes tasks from clients, runs the tasks, and returns any results. The tasks are run on the machine where the server is running. This type of distributed application can enable a number of client machines to make use of a particularly powerful machine or a machine that has specialized hardware.

The novel aspect of the compute engine is that the tasks it runs do not need to be defined when the compute engine is written or started. New kinds of tasks can be created at any time and then given to the compute engine to be run. The only requirement of a task is that its class implement a particular interface. The code needed to accomplish the task can be downloaded by the RMI system to the compute engine. Then, the compute engine runs the task, using the resources on the machine on which the compute engine is running.

The ability to perform arbitrary tasks is enabled by the dynamic nature of the Java platform, which is extended to the network by RMI. RMI dynamically loads the task code into the compute engine's Java virtual machine and runs the task without prior knowledge of the class that implements the task. Such an application, which has the ability to download code dynamically, is often called a behavior-based application. Such applications usually require full agent-enabled infrastructures. With RMI, such applications are part of the basic mechanisms for distributed computing on the Java platform.

Writing an RMI Server

The compute engine server accepts tasks from clients, runs the tasks, and returns any results. The server code consists of an interface and a class. The interface defines the methods that can be invoked from the client. Essentially, the interface defines the client's view of the remote object. The class provides the implementation.

Designing a Remote Interface

This section explains the Compute interface, which provides the connection between the client and the server. You will also learn about the RMI API, which supports this communication.

Implementing a Remote Interface

This section explores the class that implements the Compute interface, thereby implementing a remote object. This class also provides the rest of the code that makes up the server program, including a main method that creates an instance of the remote object, registers it with the RMI registry, and sets up a security manager.

Designing a Remote Interface

At the core of the compute engine is a protocol that enables tasks to be submitted to the compute engine, the compute engine to run those tasks, and the results of those tasks to be returned to the client. This protocol is expressed in the interfaces that are supported by the compute engine. The remote communication for this protocol is illustrated in the following figure.


Each interface contains a single method. The compute engine's remote interface, Compute, enables tasks to be submitted to the engine. The client interface, Task, defines how the compute engine executes a submitted task.

The compute.Compute interface defines the remotely accessible part, the compute engine itself. Here is the source code for the Compute interface:

package compute;

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface Compute extends Remote {
    <T> T executeTask(Task<T> t) throws RemoteException;
}

By extending the interface java.rmi.Remote, the Compute interface identifies itself as an interface whose methods can be invoked from another Java virtual machine. Any object that implements this interface can be a remote object.

As a member of a remote interface, the executeTask method is a remote method. Therefore, this method must be defined as being capable of throwing a java.rmi.RemoteException. This exception is thrown by the RMI system from a remote method invocation to indicate that either a communication failure or a protocol error has occurred. A RemoteException is a checked exception, so any code invoking a remote method needs to handle this exception by either catching it or declaring it in its throws clause.

The second interface needed for the compute engine is the Task interface, which is the type of the parameter to the executeTask method in the Compute interface. The compute.Task interface defines the interface between the compute engine and the work that it needs to do, providing the way to start the work. Here is the source code for the Task interface:

package compute;

public interface Task<T> {
    T execute();
}

The Task interface defines a single method, execute, which has no parameters and throws no exceptions. Because the interface does not extend Remote, the method in this interface doesn't need to list java.rmi.RemoteException in its throws clause.

The Task interface has a type parameter, T, which represents the result type of the task's computation. This interface's execute method returns the result of the computation and thus its return type is T.

The Compute interface's executeTask method, in turn, returns the result of the execution of the Task instance passed to it. Thus, the executeTask method has its own type parameter, T, that associates its own return type with the result type of the passed Task instance.


RMI uses the Java object serialization mechanism to transport objects by value between Java virtual machines. For an object to be considered serializable, its class must implement the java.io.Serializable marker interface. Therefore, classes that implement the Task interface must also implement Serializable, as must the classes of objects used for task results.

Different kinds of tasks can be run by a Compute object as long as they are implementations of the Task type. The classes that implement this interface can contain any data needed for the computation of the task and any other methods needed for the computation.

Here is how RMI makes this simple compute engine possible. Because RMI can assume that the Task objects are written in the Java programming language, implementations of the Task object that were previously unknown to the compute engine are downloaded by RMI into the compute engine's Java virtual machine as needed. This capability enables clients of the compute engine to define new kinds of tasks to be run on the server machine without needing the code to be explicitly installed on that machine.

The compute engine, implemented by the ComputeEngine class, implements the Compute interface, enabling different tasks to be submitted to it by calls to its executeTask method. These tasks are run using the task's implementation of the execute method, and the results are returned to the remote client.

Implementing a Remote Interface

This section discusses the task of implementing a class for the compute engine. In general, a class that implements a remote interface should at least do the following:

Declare the remote interfaces being implemented
Define the constructor for each remote object
Provide an implementation for each remote method in the remote interfaces

An RMI server program needs to create the initial remote objects and export them to the RMI runtime, which makes them available to receive incoming remote invocations. This setup procedure can be either encapsulated in a method of the remote object implementation class itself or included in another class entirely. The setup procedure should do the following:

Create and install a security manager
Create and export one or more remote objects
Register at least one remote object with the RMI registry (or with another naming service, such as a service accessible through the Java Naming and Directory Interface) for bootstrapping purposes

The complete implementation of the compute engine follows. The engine.ComputeEngine class implements the remote interface Compute and also includes the main method for setting up the compute engine. Here is the source code for the ComputeEngine class:


package engine;

import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;
import compute.Compute;
import compute.Task;

public class ComputeEngine implements Compute {

    public ComputeEngine() {
        super();
    }

    public <T> T executeTask(Task<T> t) {
        return t.execute();
    }

    public static void main(String[] args) {
        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new SecurityManager());
        }
        try {
            String name = "Compute";
            Compute engine = new ComputeEngine();
            Compute stub =
                (Compute) UnicastRemoteObject.exportObject(engine, 0);
            Registry registry = LocateRegistry.getRegistry();
            registry.rebind(name, stub);
            System.out.println("ComputeEngine bound");
        } catch (Exception e) {
            System.err.println("ComputeEngine exception:");
            e.printStackTrace();
        }
    }
}

The following sections discuss each component of the compute engine implementation.

Declaring the Remote Interfaces Being Implemented

The implementation class for the compute engine is declared as follows:

public class ComputeEngine implements Compute

This declaration states that the class implements the Compute remote interface and therefore can be used for a remote object.

The ComputeEngine class defines a remote object implementation class that implements a single remote interface and no other interfaces. The ComputeEngine class also contains two executable program elements that can only be invoked locally. The first of these elements is a constructor for ComputeEngine instances.


The second of these elements is a main method that is used to create a ComputeEngine instance and make it available to clients.

Defining the Constructor for the Remote Object

The ComputeEngine class has a single constructor that takes no arguments. The code for the constructor is as follows:

public ComputeEngine() {
    super();
}

This constructor just invokes the superclass constructor, which is the no-argument constructor of the Object class. Although the superclass constructor gets invoked even if omitted from the ComputeEngine constructor, it is included for clarity.

Providing Implementations for Each Remote Method

The class for a remote object provides implementations for each remote method specified in the remote interfaces. The Compute interface contains a single remote method, executeTask, which is implemented as follows:

public <T> T executeTask(Task<T> t) {
    return t.execute();
}

This method implements the protocol between the ComputeEngine remote object and its clients. Each client provides the ComputeEngine with a Task object that has a particular implementation of the Task interface's execute method. The ComputeEngine executes each client's task and returns the result of the task's execute method directly to the client.

Passing Objects in RMI

Arguments to or return values from remote methods can be of almost any type, including local objects, remote objects, and primitive data types. More precisely, any entity of any type can be passed to or from a remote method as long as the entity is an instance of a type that is a primitive data type, a remote object, or a serializable object, which means that it implements the interface java.io.Serializable.

Some object types do not meet any of these criteria and thus cannot be passed to or returned from a remote method. Most of these objects, such as threads or file descriptors, encapsulate information that makes sense only within a single address space. Many of the core classes, including the classes in the packages java.lang and java.util, implement the Serializable interface.

The rules governing how arguments and return values are passed are as follows:


Remote objects are essentially passed by reference. A remote object reference is a stub, which is a client-side proxy that implements the complete set of remote interfaces that the remote object implements.

Local objects are passed by copy, using object serialization. By default, all fields are copied except fields that are marked static or transient. Default serialization behavior can be overridden on a class-by-class basis.

Passing a remote object by reference means that any changes made to the state of the object by remote method invocations are reflected in the original remote object. When a remote object is passed, only those interfaces that are remote interfaces are available to the receiver. Any methods defined in the implementation class or defined in non-remote interfaces implemented by the class are not available to that receiver.

For example, if you were to pass a reference to an instance of the ComputeEngine class, the receiver would have access only to the compute engine's executeTask method. That receiver would not see the ComputeEngine constructor, its main method, or its implementation of any methods of java.lang.Object.

In the parameters and return values of remote method invocations, objects that are not remote objects are passed by value. Thus, a copy of the object is created in the receiving Java virtual machine. Any changes to the object's state by the receiver are reflected only in the receiver's copy, not in the sender's original instance. Any changes to the object's state by the sender are reflected only in the sender's original instance, not in the receiver's copy.
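A minimal sketch (an assumption for illustration; this class is not part of the example application) of a local object that could be passed by copy to a remote method. Its static and transient fields would not travel with the copy; the remaining fields would.

import java.io.Serializable;

public class WorkRequest implements Serializable {

    private static final long serialVersionUID = 1L;

    private static int instancesCreated;     // static: not serialized
    private transient Object cachedResult;   // transient: not serialized

    private final String description;        // copied to the receiver
    private final int priority;              // copied to the receiver

    public WorkRequest(String description, int priority) {
        this.description = description;
        this.priority = priority;
        instancesCreated++;
    }

    public String getDescription() { return description; }
    public int getPriority() { return priority; }
}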

Implementing the Server's main Method

The most complex method of the ComputeEngine implementation is the main method. The main method is used to start the ComputeEngine and therefore needs to do the necessary initialization and housekeeping to prepare the server to accept calls from clients. This method is not a remote method, which means that it cannot be invoked from a different Java virtual machine. Because the main method is declared static, the method is not associated with an object at all but rather with the class ComputeEngine.

Creating and Installing a Security Manager

The main method's first task is to create and install a security manager, which protects access to system resources from untrusted downloaded code running within the Java virtual machine. A security manager determines whether downloaded code has access to the local file system or can perform any other privileged operations.

If an RMI program does not install a security manager, RMI will not download classes (other than from the local class path) for objects received as arguments or return values of remote method invocations. This restriction ensures that the operations performed by downloaded code are subject to a security policy.

Here's the code that creates and installs a security manager:

if (System.getSecurityManager() == null) {
    System.setSecurityManager(new SecurityManager());
}

Making the Remote Object Available to Clients

Next, the main method creates an instance of ComputeEngine and exports it to the RMI runtime with the following statements:

Compute engine = new ComputeEngine();
Compute stub =
    (Compute) UnicastRemoteObject.exportObject(engine, 0);

The static UnicastRemoteObject.exportObject method exports the supplied remote object so that it can receive invocations of its remote methods from remote clients. The second argument, an int, specifies which TCP port to use to listen for incoming remote invocation requests for the object. It is common to use the value zero, which specifies the use of an anonymous port. The actual port will then be chosen at runtime by RMI or the underlying operating system. However, a non-zero value can also be used to specify a specific port to use for listening. Once the exportObject invocation has returned successfully, the ComputeEngine remote object is ready to process incoming remote invocations.

The exportObject method returns a stub for the exported remote object. Note that the type of the variable stub must be Compute, not ComputeEngine, because the stub for a remote object only implements the remote interfaces that the exported remote object implements.

The exportObject method declares that it can throw a RemoteException, which is a checked exception type. The main method handles this exception with its try/catch block. If the exception were not handled in this way, RemoteException would have to be declared in the throws clause of the main method. An attempt to export a remote object can throw a RemoteException if the necessary communication resources are not available, such as if the requested port is bound for some other purpose.

Before a client can invoke a method on a remote object, it must first obtain a reference to the remote object. Obtaining a reference can be done in the same way that any other object reference is obtained in a program, such as by getting the reference as part of the return value of a method or as part of a data structure that contains such a reference.

The system provides a particular type of remote object, the RMI registry, for finding references to other remote objects. The RMI registry is a simple remote object naming service that enables clients to obtain a reference to a remote object by name. The registry is typically only used to locate the first remote object that an RMI client needs to use. That first remote object might then provide support for finding other objects.

The java.rmi.registry.Registry remote interface is the API for binding (or registering) and looking up remote objects in the registry. The java.rmi.registry.LocateRegistry class provides static methods for synthesizing a remote reference to a registry at a particular network address (host and port). These methods create the remote reference object containing the specified network address without performing any remote communication. LocateRegistry also provides static methods for creating a new registry in the current Java virtual machine, although this example does not use those methods. Once a remote object is registered with an RMI registry on the local host, clients on any host can look up the remote object by name, obtain its reference, and then invoke remote methods on the object. The registry can be shared by all servers running on a host, or an individual server process can create and use its own registry.

The java.rmi.registry.Registry remote interface is the API for binding (or registering) and looking up remote objects in the registry. The java.rmi.registry.LocateRegistry class provides static methods for synthesizing a remote reference to a registry at a particular network address (host and port). These methods create the remote reference object containing the specified network address without performing any remote communication. LocateRegistry also provides static methods for creating a new registry in the current Java virtual machine, although this example does not use those methods. Once a remote object is registered with an RMI registry on the local host, clients on any host can look up the remote object by name, obtain its reference, and then invoke remote methods on the object. The registry can be shared by all servers running on a host, or an individual server process can create and use its own registry.

The ComputeEngine class creates a name for the object with the following statement:

String name = "Compute";

The code then adds the name to the RMI registry running on the server. This step is done later with the following statements:

Registry registry = LocateRegistry.getRegistry();
registry.rebind(name, stub);

This rebind invocation makes a remote call to the RMI registry on the local host. Like any remote call, this call can result in a RemoteException being thrown, which is handled by the catch block at the end of the main method.

Note the following about the Registry.rebind invocation:

The no-argument overload of LocateRegistry.getRegistry synthesizes a reference to a registry on the local host and on the default registry port, 1099. You must use an overload that has an int parameter if the registry is created on a port other than 1099.

When a remote invocation on the registry is made, a stub for the remote object is passed instead of a copy of the remote object itself. Remote implementation objects, such as instances of ComputeEngine, never leave the Java virtual machine in which they were created. Thus, when a client performs a lookup in a server's remote object registry, a copy of the stub is returned. Remote objects in such cases are thus effectively passed by (remote) reference rather than by value.


For security reasons, an application can only bind, unbind, or rebind remote object references with a registry running on the same host. This restriction prevents a remote client from removing or overwriting any of the entries in a server's registry. A lookup, however, can be requested from any host, local or remote.
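The first note above mentions the getRegistry overload that takes a port. The sketch below (added for illustration; the port number and host name are placeholders) shows the non-default-port variants on both the server and client sides.

import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

public class RegistryOnCustomPort {
    public static void main(String[] args) throws Exception {
        // Server side: create a registry in this JVM on port 2001
        Registry serverRegistry = LocateRegistry.createRegistry(2001);

        // Client side: synthesize a reference to a registry on a given
        // host and port; no remote communication happens yet
        Registry clientRegistry =
            LocateRegistry.getRegistry("server.example.com", 2001);
        System.out.println(clientRegistry);
    }
}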

Once the server has registered with the local RMI registry, it prints a message indicating that it is ready to start handling calls. Then, the main method completes. It is not necessary to have a thread wait to keep the server alive. As long as there is a reference to the ComputeEngine object in another Java virtual machine, local or remote, the ComputeEngine object will not be shut down or garbage collected. Because the program binds a reference to the ComputeEngine in the registry, it is reachable from a remote client (the registry itself). The RMI system keeps the ComputeEngine's process running. The ComputeEngine is available to accept calls and won't be reclaimed until its binding is removed from the registry and no remote clients hold a remote reference to the ComputeEngine object.

The final piece of code in the ComputeEngine.main method handles any exception that might arise. The only checked exception type that could be thrown in the code is RemoteException, either by the UnicastRemoteObject.exportObject invocation or by the registry rebind invocation. In either case, the program cannot do much more than exit after printing an error message. In some distributed applications, recovering from the failure to make a remote invocation is possible. For example, the application could attempt to retry the operation or choose another server to continue the operation.

Creating a Client Program

The compute engine is a relatively simple program: it runs tasks that are handed to it. The clients for the compute engine are more complex. A client needs to call the compute engine, but it also has to define the task to be performed by the compute engine.

Two separate classes make up the client in our example. The first class, ComputePi, looks up and invokes a Compute object. The second class, Pi, implements the Task interface and defines the work to be done by the compute engine. The job of the Pi class is to compute the value of π to some number of decimal places.

The non-remote Task interface is defined as follows:

package compute;

public interface Task<T> {
    T execute();
}

The code that invokes a Compute object's methods must obtain a reference to that object, create a Task object, and then request that the task be executed. The definition of the task class Pi is shown later. A Pi object is constructed with a single argument, the desired precision of the result. The result of the task execution is a java.math.BigDecimal representing π calculated to the specified precision.

Here is the source code for client.ComputePi, the main client class:

package client;

import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.math.BigDecimal;
import compute.Compute;

public class ComputePi {
    public static void main(String args[]) {
        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new SecurityManager());
        }
        try {
            String name = "Compute";
            Registry registry = LocateRegistry.getRegistry(args[0]);
            Compute comp = (Compute) registry.lookup(name);
            Pi task = new Pi(Integer.parseInt(args[1]));
            BigDecimal pi = comp.executeTask(task);
            System.out.println(pi);
        } catch (Exception e) {
            System.err.println("ComputePi exception:");
            e.printStackTrace();
        }
    }
}

Like the ComputeEngine server, the client begins by installing a security manager. This step is necessary because the process of receiving the server remote object's stub could require downloading class definitions from the server. For RMI to download classes, a security manager must be in force.

After installing a security manager, the client constructs a name to use to look up a Compute remote object, using the same name used by ComputeEngine to bind its remote object. Also, the client uses the LocateRegistry.getRegistry API to synthesize a remote reference to the registry on the server's host. The value of the first command-line argument, args[0], is the name of the remote host on which the Compute object runs. The client then invokes the lookup method on the registry to look up the remote object by name in the server host's registry. The particular overload of LocateRegistry.getRegistry used, which has a single String parameter, returns a reference to a registry at the named host and the default registry port, 1099. You must use an overload that has an int parameter if the registry is created on a port other than 1099.

Next, the client creates a new Pi object, passing to the Pi constructor the value of the second command-line argument, args[1], parsed as an integer. This argument indicates the number of decimal places to use in the calculation. Finally, the client invokes the executeTask method of the Compute remote object. This invocation returns an object of type BigDecimal, which the program stores in the variable pi and then prints.


The following figure depicts the flow of messages among the ComputePi client, the rmiregistry, and the ComputeEngine.

The Pi class implements the Task interface and computes the value of π to a specified number of decimal places. For this example, the actual algorithm is unimportant. What is important is that the algorithm is computationally expensive, meaning that you would want to have it executed on a capable server.

Here is the source code for client.Pi, the class that implements the Task interface:

package client;

import compute.Task;
import java.io.Serializable;
import java.math.BigDecimal;

public class Pi implements Task<BigDecimal>, Serializable {

    private static final long serialVersionUID = 227L;

    /** constants used in pi computation */
    private static final BigDecimal FOUR = BigDecimal.valueOf(4);

    /** rounding mode to use during pi computation */
    private static final int roundingMode = BigDecimal.ROUND_HALF_EVEN;

    /** digits of precision after the decimal point */
    private final int digits;

    /**
     * Construct a task to calculate pi to the specified
     * precision.
     */
    public Pi(int digits) {
        this.digits = digits;
    }

    /**
     * Calculate pi.
     */
    public BigDecimal execute() {
        return computePi(digits);
    }

    /**
     * Compute the value of pi to the specified number of
     * digits after the decimal point. The value is
     * computed using Machin's formula:
     *
     *     pi/4 = 4*arctan(1/5) - arctan(1/239)
     *
     * and a power series expansion of arctan(x) to
     * sufficient precision.
     */
    public static BigDecimal computePi(int digits) {
        int scale = digits + 5;
        BigDecimal arctan1_5 = arctan(5, scale);
        BigDecimal arctan1_239 = arctan(239, scale);
        BigDecimal pi = arctan1_5.multiply(FOUR).subtract(
                            arctan1_239).multiply(FOUR);
        return pi.setScale(digits, BigDecimal.ROUND_HALF_UP);
    }

    /**
     * Compute the value, in radians, of the arctangent of
     * the inverse of the supplied integer to the specified
     * number of digits after the decimal point. The value
     * is computed using the power series expansion for the
     * arc tangent:
     *
     *     arctan(x) = x - (x^3)/3 + (x^5)/5 - (x^7)/7 + (x^9)/9 ...
     */
    public static BigDecimal arctan(int inverseX, int scale) {
        BigDecimal result, numer, term;
        BigDecimal invX = BigDecimal.valueOf(inverseX);
        BigDecimal invX2 = BigDecimal.valueOf(inverseX * inverseX);

        numer = BigDecimal.ONE.divide(invX, scale, roundingMode);

        result = numer;
        int i = 1;
        do {
            numer = numer.divide(invX2, scale, roundingMode);
            int denom = 2 * i + 1;
            term = numer.divide(BigDecimal.valueOf(denom),
                                scale, roundingMode);
            if ((i % 2) != 0) {
                result = result.subtract(term);
            } else {
                result = result.add(term);
            }
            i++;
        } while (term.compareTo(BigDecimal.ZERO) != 0);

        return result;
    }
}

Note that all serializable classes, whether they implement the Serializable interface directly or indirectly, must declare a private static final field named serialVersionUID to guarantee serialization compatibility between versions. If no previous version of the class has been released, then the value of this field can be any long value, similar to the 227L used by Pi, as long as the value is used consistently in future versions. If a previous version of the class has been released without an explicit serialVersionUID declaration, but serialization compatibility with that version is important, then the default implicitly computed value for the previous version must be used for the value of the new version's explicit declaration. The serialver tool can be run against the previous version to determine the default computed value for it.
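For instance, a hypothetical invocation of the serialver tool against the compiled class might look like the following (run from the directory containing the client package); it prints a serialVersionUID declaration that you can paste into the class:

serialver -classpath . client.Pi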

The most interesting feature of this example is that the Compute implementation object never needs the Pi class's definition until a Pi object is passed in as an argument to the executeTask method. At that point, the code for the class is loaded by RMI into the Compute object's Java virtual machine, the execute method is invoked, and the task's code is executed. The result, which in the case of the Pi task is a BigDecimal object, is handed back to the calling client, where it is used to print the result of the computation.

The fact that the supplied Task object computes the value of pi is irrelevant to the ComputeEngine object. You could also implement a task that, for example, generates a random prime number by using a probabilistic algorithm. That task would also be computationally intensive and therefore a good candidate for passing to the ComputeEngine, but it would require very different code. This code could also be downloaded when the Task object is passed to a Compute object. In just the way that the algorithm for computing pi is brought in when needed, the code that generates the random prime number would be brought in when needed. The Compute object knows only that each object it receives implements the execute method. The Compute object does not know, and does not need to know, what the implementation does.

Compiling and Running the Example
Now that the code for the compute engine example has been written, it needs to be compiled and run.

Compiling the Example Programs

In this section, you learn how to compile the server and the client programs that make up the compute engine example.

Running the Example Programs

Finally, you run the server and client programs and consequently compute the value of pi.

Compiling the Example Programs


In a real-world scenario in which a service such as the compute engine is deployed, a developer would likely create a Java Archive (JAR) file that contains the Compute and Task interfaces for server classes to implement and client programs to use. Next, a developer, perhaps the same developer of the interface JAR file, would write an implementation of the Compute interface and deploy that service on a machine available to clients. Developers of client programs can use the Compute and the Task interfaces, contained in the JAR file, and independently develop a task and client program that uses a Compute service.

In this section, you learn how to set up the JAR file, server classes, and client classes. You will see that the client's Pi class will be downloaded to the server at runtime. Also, the Compute and Task interfaces will be downloaded from the server to the registry at runtime.

This example separates the interfaces, remote object implementation, and client code into three packages:

compute – Compute and Task interfaces
engine – ComputeEngine implementation class
client – ComputePi client code and Pi task implementation

First, you need to build the interface JAR file to provide to server and client developers.

Building a JAR File of Interface Classes
First, you need to compile the interface source files in the compute package and then build a JAR file that contains their class files. Assume that user waldo has written these interfaces and placed the source files in the directory c:\home\waldo\src\compute on Windows or the directory /home/waldo/src/compute on Solaris OS or Linux. Given these paths, you can use the following commands to compile the interfaces and create the JAR file:

Microsoft Windows:
cd c:\home\waldo\src
javac compute\Compute.java compute\Task.java
jar cvf compute.jar compute\*.class

Solaris OS or Linux:
cd /home/waldo/src
javac compute/Compute.java compute/Task.java
jar cvf compute.jar compute/*.class

The jar command displays the following output due to the -v option:

added manifest
adding: compute/Compute.class(in = 307) (out= 201)(deflated 34%)
adding: compute/Task.class(in = 217) (out= 149)(deflated 31%)


Now, you can distribute the compute.jar file to developers of server and client applications so that they can make use of the interfaces.

After you build either server-side or client-side classes with the javac compiler, if any of those classes will need to be dynamically downloaded by other Java virtual machines, you must ensure that their class files are placed in a network-accessible location. In this example, for Solaris OS or Linux this location is /home/user/public_html/classes because many web servers allow the accessing of a user's public_html directory through an HTTP URL constructed as http://host/~user/. If your web server does not support this convention, you could use a different location in the web server's hierarchy, or you could use a file URL instead. The file URLs take the form file:/home/user/public_html/classes/ on Solaris OS or Linux and the form file:/c:/home/user/public_html/classes/ on Windows. You may also select another type of URL, as appropriate.

The network accessibility of the class files enables the RMI runtime to download code when needed. Rather than defining its own protocol for code downloading, RMI uses URL protocols supported by the Java platform (for example, HTTP) to download code. Note that using a full, heavyweight web server to serve these class files is unnecessary. For example, a simple HTTP server that provides the functionality needed to make classes available for downloading in RMI through HTTP can be found at http://java.sun.com/javase/technologies/core/basic/rmi/class-server.zip.

Building the Server Classes
The engine package contains only one server-side implementation class, ComputeEngine, the implementation of the remote interface Compute.

Assume that user ann, the developer of the ComputeEngine class, has placed ComputeEngine.java in the directory c:\home\ann\src\engine on Windows or the directory /home/ann/src/engine on Solaris OS or Linux. She is deploying the class files for clients to download in a subdirectory of her public_html directory, c:\home\ann\public_html\classes on Windows or /home/ann/public_html/classes on Solaris OS or Linux. This location is accessible through some web servers as http://host:port/~ann/classes/.

The ComputeEngine class depends on the Compute and Task interfaces, which are contained in the compute.jar JAR file. Therefore, you need the compute.jar file in your class path when you build the server classes. Assume that the compute.jar file is located in the directory c:\home\ann\public_html\classes on Windows or the directory /home/ann/public_html/classes on Solaris OS or Linux. Given these paths, you can use the following commands to build the server classes:

Microsoft Windows:
cd c:\home\ann\src
javac -cp c:\home\ann\public_html\classes\compute.jar engine\ComputeEngine.java

Solaris OS or Linux:
cd /home/ann/src
javac -cp /home/ann/public_html/classes/compute.jar engine/ComputeEngine.java

The stub class for ComputeEngine implements the Compute interface, which refers to the Task interface. So, the class definitions for those two interfaces need to be network-accessible for the stub to be received by other Java virtual machines such as the registry's Java virtual machine. The client Java virtual machine will already have these interfaces in its class path, so it does not actually need to download their definitions. The compute.jar file under the public_html directory can serve this purpose.

Now, the compute engine is ready to deploy. You could do that now, or you could wait until after you have built the client.

Building the Client Classes
The client package contains two classes: ComputePi, the main client program, and Pi, the client's implementation of the Task interface.

Assume that user jones, the developer of the client classes, has placed ComputePi.java and Pi.java in the directory c:\home\jones\src\client on Windows or the directory /home/jones/src/client on Solaris OS or Linux. He is deploying the class files for the compute engine to download in a subdirectory of his public_html directory, c:\home\jones\public_html\classes on Windows or /home/jones/public_html/classes on Solaris OS or Linux. This location is accessible through some web servers as http://host:port/~jones/classes/.

The client classes depend on the Compute and Task interfaces, which are contained in the compute.jar JAR file. Therefore, you need the compute.jar file in your class path when you build the client classes. Assume that the compute.jar file is located in the directory c:\home\jones\public_html\classes on Windows or the directory /home/jones/public_html/classes on Solaris OS or Linux. Given these paths, you can use the following commands to build the client classes:

Microsoft Windows:
cd c:\home\jones\src
javac -cp c:\home\jones\public_html\classes\compute.jar client\ComputePi.java client\Pi.java
mkdir c:\home\jones\public_html\classes\client
copy client\Pi.class c:\home\jones\public_html\classes\client

Solaris OS or Linux:
cd /home/jones/src
javac -cp /home/jones/public_html/classes/compute.jar client/ComputePi.java client/Pi.java
mkdir /home/jones/public_html/classes/client
cp client/Pi.class /home/jones/public_html/classes/client

Only the Pi class needs to be placed in the directory public_html\classes\client because only the Pi class needs to be available for downloading to the compute engine's Java virtual machine. Now, you can run the server and then the client.

Running the Example Programs

A Note About Security
The server and client programs run with a security manager installed. When you run either program, you need to specify a security policy file so that the code is granted the security permissions it needs to run. Here is an example policy file to use with the server program:

grant codeBase "file:/home/ann/src/" {
    permission java.security.AllPermission;
};

Here is an example policy file to use with the client program:

grant codeBase "file:/home/jones/src/" { permission java.security.AllPermission;};

For both example policy files, all permissions are granted to the classes in the program's local class path, because the local application code is trusted, but no permissions are granted to code downloaded from other locations. Therefore, the compute engine server restricts the tasks that it executes (whose code is not known to be trusted and might be hostile) from performing any operations that require security permissions. The example client's Pi task does not require any permissions to execute.

In this example, the policy file for the server program is named server.policy, and the policy file for the client program is named client.policy.

Starting the Server
Before starting the compute engine, you need to start the RMI registry. The RMI registry is a simple server-side bootstrap naming facility that enables remote clients to obtain a reference to an initial remote object. It can be started with the rmiregistry command. Before you execute rmiregistry, you must make sure that the shell or window in which you will run rmiregistry either has no CLASSPATH environment variable set or has a CLASSPATH environment variable that does not include the path to any classes that you want downloaded to clients of your remote objects.

To start the registry on the server, execute the rmiregistry command. This command produces no output and is typically run in the background. For this example, the registry is started on the host zaphod.

Microsoft Windows (use javaw if start is not available):
start rmiregistry

Solaris OS or Linux:
rmiregistry &

By default, the registry runs on port 1099. To start the registry on a different port, specify the port number on the command line. Do not forget to unset your CLASSPATH environment variable.

Microsoft Windows:
start rmiregistry 2001

Solaris OS or Linux:
rmiregistry 2001 &

Once the registry is started, you can start the server. You need to make sure that both the compute.jar file and the remote object implementation class are in your class path. When you start the compute engine, you need to specify, using the java.rmi.server.codebase property, where the server's classes are network accessible. In this example, the server-side classes to be made available for downloading are the Compute and Task interfaces, which are available in the compute.jar file in the public_html\classes directory of user ann. The compute engine server is started on the host zaphod, the same host on which the registry was started.

Microsoft Windows:
java -cp c:\home\ann\src;c:\home\ann\public_html\classes\compute.jar
     -Djava.rmi.server.codebase=file:/c:/home/ann/public_html/classes/compute.jar
     -Djava.rmi.server.hostname=zaphod.east.sun.com
     -Djava.security.policy=server.policy
     engine.ComputeEngine

Solaris OS or Linux:
java -cp /home/ann/src:/home/ann/public_html/classes/compute.jar
     -Djava.rmi.server.codebase=http://zaphod/~ann/classes/compute.jar
     -Djava.rmi.server.hostname=zaphod.east.sun.com
     -Djava.security.policy=server.policy
     engine.ComputeEngine


The above java command defines the following system properties:

The java.rmi.server.codebase property specifies the location, a codebase URL, from which the definitions for classes originating from this server can be downloaded. If the codebase specifies a directory hierarchy (as opposed to a JAR file), you must include a trailing slash at the end of the codebase URL.

The java.rmi.server.hostname property specifies the host name or address to put in the stubs for remote objects exported in this Java virtual machine. This value is the host name or address used by clients when they attempt to communicate remote method invocations. By default, the RMI implementation uses the server's IP address as indicated by the java.net.InetAddress.getLocalHost API. However, sometimes, this address is not appropriate for all clients and a fully qualified host name would be more effective. To ensure that RMI uses a host name (or IP address) for the server that is routable from all potential clients, set the java.rmi.server.hostname property.

The java.security.policy property is used to specify the policy file that contains the permissions you intend to grant.

Starting the Client
Once the registry and the compute engine are running, you can start the client, specifying the following:

The location where the client serves its classes (the Pi class) by using the java.rmi.server.codebase property

The java.security.policy property, which is used to specify the security policy file that contains the permissions you intend to grant to various pieces of code

As command-line arguments, the host name of the server (so that the client knows where to locate the Compute remote object) and the number of decimal places to use in the calculation

Start the client on another host (a host named ford, for example) as follows:

Microsoft Windows:
java -cp c:\home\jones\src;c:\home\jones\public_html\classes\compute.jar
     -Djava.rmi.server.codebase=file:/c:/home/jones/public_html/classes/
     -Djava.security.policy=client.policy
     client.ComputePi zaphod.east.sun.com 45

Solaris OS or Linux:
java -cp /home/jones/src:/home/jones/public_html/classes/compute.jar
     -Djava.rmi.server.codebase=http://ford/~jones/classes/
     -Djava.security.policy=client.policy
     client.ComputePi zaphod.east.sun.com 45


Note that the class path is set on the command line so that the interpreter can find the client classes and the JAR file containing the interfaces. Also note that the value of the java.rmi.server.codebase property, which specifies a directory hierarchy, ends with a trailing slash.

After you start the client, the following output is displayed:

3.141592653589793238462643383279502884197169399

The following figure illustrates where the rmiregistry, the ComputeEngine server, and the ComputePi client obtain classes during program execution.

When the ComputeEngine server binds its remote object reference in the registry, the registry downloads the Compute and Task interfaces on which the stub class depends. These classes are downloaded from either the ComputeEngine server's web server or file system, depending on the type of codebase URL used when starting the server.

Because the ComputePi client has both the Compute and the Task interfaces available in its class path, it loads their definitions from its class path, not from the server's codebase.

Finally, the Pi class is loaded into the ComputeEngine server's Java virtual machine when the Pi object is passed in the executeTask remote call to the ComputeEngine object. The Pi class is loaded by the server from either the client's web server or file system, depending on the type of codebase URL used when starting the client.

Swings

What is Swing?


To create a Java program with a graphical user interface (GUI), you'll want to learn about Swing.

The Swing toolkit includes a rich set of components for building GUIs and adding interactivity to Java applications. Swing includes all the components you would expect from a modern toolkit: table controls, list controls, tree controls, buttons, and labels.

Swing is far from a simple component toolkit, however. It includes rich undo support, a highly customizable text package, integrated internationalization and accessibility support. To truly leverage the cross-platform capabilities of the Java platform, Swing supports numerous look and feels, including the ability to create your own look and feel. The ability to create a custom look and feel is made easier with Synth, a look and feel specifically designed to be customized. Swing wouldn't be a component toolkit without the basic user interface primitives such as drag and drop, event handling, customizable painting, and window management.

Swing is part of the Java Foundation Classes (JFC). The JFC also includes other features important to a GUI program, such as the ability to add rich graphics functionality and the ability to create a program that can work in different languages and with users with different input devices.

The following list shows some of the features that Swing and the Java Foundation Classes provide.

Swing GUI Components
The Swing toolkit includes a rich array of components: from basic components, such as buttons and check boxes, to rich and complex components, such as tables and text. Even deceptively simple components, such as text fields, offer sophisticated functionality, such as formatted text input or password field behavior. There are file browsers and dialogs to suit most needs, and if not, customization is possible. If none of Swing's provided components are exactly what you need, you can leverage the basic Swing component functionality to create your own.

Java 2D API
To make your application stand out, convey information visually, or add figures, images, or animation to your GUI, you'll want to use the Java 2D API. Because Swing is built on the 2D package, it's trivial to make use of 2D within Swing components. Adding images, drop shadows, compositing — it's easy with Java 2D.

Pluggable Look-and-Feel Support
Any program that uses Swing components has a choice of look and feel. The JFC classes shipped by Sun and Apple provide a look and feel that matches that of the platform. The Synth package allows you to create your own look and feel. The GTK+ look and feel makes hundreds of existing look and feels available to Swing programs.


A program can specify the look and feel of the platform it is running on, or it can specify to always use the Java look and feel, and without recompiling, it will just work. Or, you can ignore the issue and let the UI manager sort it out.

Data Transfer
Data transfer, via cut, copy, paste, and drag and drop, is essential to almost any application. Support for data transfer is built into Swing and works between Swing components within an application, between Java applications, and between Java and native applications.

Internationalization
This feature allows developers to build applications that can interact with users worldwide in their own languages and cultural conventions. Applications can be created that accept input in languages that use thousands of different characters, such as Japanese, Chinese, or Korean.

Swing's layout managers make it easy to honor a particular orientation required by the UI. For example, the UI will appear right to left in a locale where the text flows right to left. This support is automatic: You need only code the UI once and then it will work for left to right and right to left, as well as honor the appropriate size of components that change as you localize the text.

Accessibility API
People with disabilities use special software — assistive technologies — that mediates the user experience for them. Such software needs to obtain a wealth of information about the running application in order to represent it in alternate media: for a screen reader to read the screen with synthetic speech or render it via a Braille display, for a screen magnifier to track the caret and keyboard focus, for on-screen keyboards to present dynamic keyboards of the menu choices and toolbar items and dialog controls, and for voice control systems to know what the user can control with his or her voice. The accessibility API enables these assistive technologies to get the information they need, and to programmatically manipulate the elements that make up the graphical user interface.

Undo Framework API
Swing's undo framework allows developers to provide support for undo and redo. Undo support is built in to Swing's text component. For other components, Swing supports an unlimited number of actions to undo and redo, and is easily adapted to an application. For example, you could easily enable undo to add and remove elements from a table.

Flexible Deployment Support
If you want your program to run within a browser window, you can create it as an applet and run it using Java Plug-in, which supports a variety of browsers, such as Internet Explorer, Firefox, and Safari. If you want to create a program that can be launched from a browser, you can do this with Java Web Start. Of course, your application can also run outside of a browser as a standard desktop application.

For more information on deploying an application, see the Deployment trail in this tutorial.

This trail provides an overview of Swing capabilities, beginning with a demo that showcases many of these features. When you are ready to begin coding, the Creating a GUI with JFC/Swing trail provides the programming techniques to take advantage of these features.


Using Top-Level Containers
As we mentioned before, Swing provides three generally useful top-level container classes: JFrame, JDialog, and JApplet. When using these classes, you should keep these facts in mind:

To appear onscreen, every GUI component must be part of a containment hierarchy. A containment hierarchy is a tree of components that has a top-level container as its root. We'll show you one in a bit.

Each GUI component can be contained only once. If a component is already in a container and you try to add it to another container, the component will be removed from the first container and then added to the second.

Each top-level container has a content pane that, generally speaking, contains (directly or indirectly) the visible components in that top-level container's GUI.

You can optionally add a menu bar to a top-level container. The menu bar is by convention positioned within the top-level container, but outside the content pane. Some look and feels, such as the Mac OS look and feel, give you the option of placing the menu bar in another place more appropriate for the look and feel, such as at the top of the screen.

Note:  Although JInternalFrame mimics JFrame, internal frames aren't actually top-level containers.

Here's a picture of a frame created by an application. The frame contains a green menu bar (with no menus) and, in the frame's content pane, a large blank, yellow label.


You can find the entire source for this example in TopLevelDemo.java. Although the example uses a JFrame in a standalone application, the same concepts apply to JApplets and JDialogs.
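If TopLevelDemo.java is not at hand, the following is a minimal sketch along the same lines; the class name TopLevelSketch, the colors, and the sizes are illustrative rather than the tutorial's exact source:

import java.awt.BorderLayout;
import java.awt.Color;
import java.awt.Dimension;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JMenuBar;
import javax.swing.SwingUtilities;

public class TopLevelSketch {
    private static void createAndShowGUI() {
        JFrame frame = new JFrame("TopLevelDemo");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

        // An empty, green menu bar, just to show where a menu bar goes.
        JMenuBar greenMenuBar = new JMenuBar();
        greenMenuBar.setOpaque(true);
        greenMenuBar.setBackground(Color.GREEN);
        greenMenuBar.setPreferredSize(new Dimension(200, 20));

        // A blank, yellow label that fills the content pane.
        JLabel yellowLabel = new JLabel();
        yellowLabel.setOpaque(true);
        yellowLabel.setBackground(Color.YELLOW);
        yellowLabel.setPreferredSize(new Dimension(200, 180));

        frame.setJMenuBar(greenMenuBar);
        frame.getContentPane().add(yellowLabel, BorderLayout.CENTER);

        frame.pack();
        frame.setVisible(true);
    }

    public static void main(String[] args) {
        // Swing components should be created on the event dispatch thread.
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                createAndShowGUI();
            }
        });
    }
}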

Here's the containment hierarchy for this example's GUI:

As the ellipses imply, we left some details out of this diagram. We reveal the missing details a bit later. Here are the topics this section discusses:

Top-Level Containers and Containment Hierarchies
Adding Components to the Content Pane
Adding a Menu Bar
The Root Pane (a.k.a. The Missing Details)

Top-Level Containers and Containment Hierarchies
Each program that uses Swing components has at least one top-level container. This top-level container is the root of a containment hierarchy — the hierarchy that contains all of the Swing components that appear inside the top-level container.

As a rule, a standalone application with a Swing-based GUI has at least one containment hierarchy with a JFrame as its root. For example, if an application has one main window and two dialogs, then the application has three containment hierarchies, and thus three top-level containers. One containment hierarchy has a JFrame as its root, and each of the other two has a JDialog object as its root.

A Swing-based applet has at least one containment hierarchy, exactly one of which is rooted by a JApplet object. For example, an applet that brings up a dialog has two containment hierarchies. The components in the browser window are in a containment hierarchy rooted by a JApplet object. The dialog has a containment hierarchy rooted by a JDialog object.

Adding Components to the Content Pane
Here's the code that the preceding example uses to get a frame's content pane and add the yellow label to it:

frame.getContentPane().add(yellowLabel, BorderLayout.CENTER);

As the code shows, you find the content pane of a top-level container by calling the getContentPane method. The default content pane is a simple intermediate container that inherits from JComponent, and that uses a BorderLayout as its layout manager.

It's easy to customize the content pane — setting the layout manager or adding a border, for example. However, there is one tiny gotcha. The getContentPane method returns a Container object, not a JComponent object. This means that if you want to take advantage of the content pane's JComponent features, you need to either typecast the return value or create your own component to be the content pane. Our examples generally take the second approach, since it's a little cleaner. Another approach we sometimes take is to simply add a customized component to the content pane, covering the content pane completely.

Note that the default layout manager for JPanel is FlowLayout; you'll probably want to change it.

To make a component the content pane, use the top-level container's setContentPane method. For example:

//Create a panel and add components to it.
JPanel contentPane = new JPanel(new BorderLayout());
contentPane.setBorder(someBorder);
contentPane.add(someComponent, BorderLayout.CENTER);
contentPane.add(anotherComponent, BorderLayout.PAGE_END);

topLevelContainer.setContentPane(contentPane);

Note: As a convenience, the add method and its variants, remove and setLayout, have been overridden to forward to the contentPane as necessary. This means you can write

frame.add(child);

and the child will be added to the contentPane.

Note that only these three methods do this. This means that getLayout() will not return the layout set with setLayout().

Adding a Menu Bar
In theory, all top-level containers can hold a menu bar. In practice, however, menu bars usually appear only in frames and applets. To add a menu bar to a top-level container, create a JMenuBar object, populate it with menus, and then call setJMenuBar. The TopLevelDemo adds a menu bar to its frame with this code:

frame.setJMenuBar(greenMenuBar);

For more information about implementing menus and menu bars, see How to Use Menus.

The Root Pane
Each top-level container relies on a reclusive intermediate container called the root pane. The root pane manages the content pane and the menu bar, along with a couple of other containers. You generally don't need to know about root panes to use Swing components. However, if you ever need to intercept mouse clicks or paint over multiple components, you should get acquainted with root panes.

Here's a list of the components that a root pane provides to a frame (and to every other top-level container):

We've already told you about the content pane and the optional menu bar. The two other components that a root pane adds are a layered pane and a glass pane. The layered pane contains the menu bar and content pane, and enables Z-ordering of other components. The glass pane is often used to intercept input events occurring over the top-level container, and can also be used to paint over multiple components.
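A small sketch of reaching these panes through the root pane (the class name is illustrative; run it in a graphical environment):

import javax.swing.JFrame;
import javax.swing.JRootPane;

public class RootPaneSketch {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Root pane demo");
        JRootPane root = frame.getRootPane();

        // The panes described above, reached through the root pane:
        System.out.println(root.getContentPane());
        System.out.println(root.getLayeredPane());
        System.out.println(root.getGlassPane());
        System.out.println(root.getJMenuBar()); // null until a menu bar is set
    }
}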

For more details, see How to Use Root Panes.


JAAS

Internationalization

Introduction
Internationalization is the process of designing an application so that it can be adapted to various languages and regions without engineering changes. Sometimes the term internationalization is abbreviated as i18n, because there are 18 letters between the first "i" and the last "n."

An internationalized program has the following characteristics:

With the addition of localized data, the same executable can run worldwide.
Textual elements, such as status messages and GUI component labels, are not hardcoded in the program. Instead they are stored outside the source code and retrieved dynamically.
Support for new languages does not require recompilation.
Culturally-dependent data, such as dates and currencies, appear in formats that conform to the end user's region and language.
It can be localized quickly.

Localization is the process of adapting software for a specific region or language by adding locale-specific components and translating text. The term localization is often abbreviated as l10n, because there are 10 letters between the "l" and the "n."

The primary task of localization is translating the user interface elements and documentation. Localization involves not only changing the language interaction, but also other relevant changes such as display of numbers, dates, currency, and so on. Other types of data, such as sounds and images, may require localization if they are culturally sensitive. The better internationalized an application is, the easier it is to localize it for a particular language and character encoding scheme.

Internationalization may seem a bit daunting at first. Reading the following sections will help ease you into the subject.

Before Internationalization
Suppose that you've written a program that displays three messages, as follows:

public class NotI18N {

    static public void main(String[] args) {
        System.out.println("Hello.");
        System.out.println("How are you?");
        System.out.println("Goodbye.");
    }
}

You've decided that this program needs to display these same messages for people living in France and Germany. Unfortunately your programming staff is not multilingual, so you'll need help translating the messages into French and German. Since the translators aren't programmers, you'll have to move the messages out of the source code and into text files that the translators can edit. Also, the program must be flexible enough so that it can display the messages in other languages, but right now no one knows what those languages will be.

It looks like the program needs to be internationalized.

After Internationalization
The source code for the internationalized program follows. Notice that the text of the messages is not hardcoded.

import java.util.*;

public class I18NSample {

    static public void main(String[] args) {

        String language;
        String country;

        if (args.length != 2) {
            language = new String("en");
            country = new String("US");
        } else {
            language = new String(args[0]);
            country = new String(args[1]);
        }

        Locale currentLocale;
        ResourceBundle messages;

        currentLocale = new Locale(language, country);

        messages = ResourceBundle.getBundle("MessagesBundle", currentLocale);

        System.out.println(messages.getString("greetings"));
        System.out.println(messages.getString("inquiry"));
        System.out.println(messages.getString("farewell"));
    }
}

To compile and run this program, you need these source files:

I18NSample.java
MessagesBundle.properties
MessagesBundle_de_DE.properties
MessagesBundle_en_US.properties
MessagesBundle_fr_FR.properties

Running the Sample Program
The internationalized program is flexible; it allows the end user to specify a language and a country on the command line. In the following example the language code is fr (French) and the country code is FR (France), so the program displays the messages in French:


% java I18NSample fr FR
Bonjour.
Comment allez-vous?
Au revoir.

In the next example the language code is en (English) and the country code is US (United States), so the program displays the messages in English:

% java I18NSample en US
Hello.
How are you?
Goodbye.

Internationalizing the Sample Program
If you look at the internationalized source code, you'll notice that the hardcoded English messages have been removed. Because the messages are no longer hardcoded and because the language code is specified at run time, the same executable can be distributed worldwide. No recompilation is required for localization. The program has been internationalized.

You may be wondering what happened to the text of the messages or what the language and country codes mean. Don't worry. You'll learn about these concepts as you step through the process of internationalizing the sample program.

1. Create the Properties Files
A properties file stores information about the characteristics of a program or environment. A properties file is in plain-text format. You can create the file with just about any text editor.

In the example the properties files store the translatable text of the messages to be displayed. Before the program was internationalized, the English version of this text was hardcoded in the System.out.println statements. The default properties file, which is called MessagesBundle.properties, contains the following lines:

greetings = Hello
farewell = Goodbye
inquiry = How are you?

Now that the messages are in a properties file, they can be translated into various languages. No changes to the source code are required. The French translator has created a properties file called MessagesBundle_fr_FR.properties, which contains these lines:

greetings = Bonjour.
farewell = Au revoir.
inquiry = Comment allez-vous?

Notice that the values to the right side of the equal sign have been translated but that the keys on the left side have not been changed. These keys must not change, because they will be referenced when your program fetches the translated text.


The name of the properties file is important. For example, the name of the MessagesBundle_fr_FR.properties file contains the fr language code and the FR country code. These codes are also used when creating a Locale object.

2. Define the Locale
The Locale object identifies a particular language and country. The following statement defines a Locale for which the language is English and the country is the United States:

aLocale = new Locale("en","US");

The next example creates Locale objects for the French language in Canada and in France:

caLocale = new Locale("fr","CA");frLocale = new Locale("fr","FR");

The program is flexible. Instead of using hardcoded language and country codes, the program gets them from the command line at run time:

String language = new String(args[0]);
String country = new String(args[1]);
currentLocale = new Locale(language, country);

Locale objects are only identifiers. After defining a Locale, you pass it to other objects that perform useful tasks, such as formatting dates and numbers. These objects are locale-sensitive because their behavior varies according to Locale. A ResourceBundle is an example of a locale-sensitive object.

3. Create a ResourceBundle
ResourceBundle objects contain locale-specific objects. You use ResourceBundle objects to isolate locale-sensitive data, such as translatable text. In the sample program the ResourceBundle is backed by the properties files that contain the message text we want to display.

The ResourceBundle is created as follows:

messages = ResourceBundle.getBundle("MessagesBundle", currentLocale);

The arguments passed to the getBundle method identify which properties file will be accessed. The first argument, MessagesBundle, refers to this family of properties files:

MessagesBundle_en_US.properties
MessagesBundle_fr_FR.properties
MessagesBundle_de_DE.properties


The Locale, which is the second argument of getBundle, specifies which of the MessagesBundle files is chosen. When the Locale was created, the language code and the country code were passed to its constructor. Note that the language and country codes follow MessagesBundle in the names of the properties files.
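As a side note, if no bundle matches the requested Locale exactly, getBundle falls back rather than failing: first to a bundle for the JVM's default locale, and then to the base MessagesBundle.properties. A short sketch, assuming the MessagesBundle*.properties files above are on the class path:

import java.util.Locale;
import java.util.ResourceBundle;

public class BundleFallbackSketch {
    public static void main(String[] args) {
        // This lesson has no MessagesBundle_it_IT.properties, so the lookup
        // falls back (default-locale bundle first, then the base bundle).
        ResourceBundle messages =
            ResourceBundle.getBundle("MessagesBundle", new Locale("it", "IT"));
        System.out.println(messages.getString("greetings"));
    }
}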

Now all you have to do is get the translated messages from the ResourceBundle.

4. Fetch the Text from the ResourceBundle
The properties files contain key-value pairs. The values consist of the translated text that the program will display. You specify the keys when fetching the translated messages from the ResourceBundle with the getString method. For example, to retrieve the message identified by the greetings key, you invoke getString as follows:

String msg1 = messages.getString("greetings");

The sample program uses the key greetings because it reflects the content of the message, but it could have used another String, such as s1 or msg1. Just remember that the key is hardcoded in the program and it must be present in the properties files. If your translators accidentally modify the keys in the properties files, getString won't be able to find the messages.

Conclusion
That's it. As you can see, internationalizing a program isn't too difficult. It requires some planning and a little extra coding, but the benefits are enormous. To provide you with an overview of the internationalization process, the sample program in this lesson was intentionally kept simple. As you read the lessons that follow, you'll learn about the more advanced internationalization features of the Java programming language.

JUnit

JNDI

Java Naming and Directory Interface (JNDI)
This trail describes JNDI (Java Naming and Directory Interface), an API for accessing naming and directory services. Here you learn about basic naming and directory services and how to use JNDI to write simple applications that use these services. The most popular directory service protocol, LDAP, is used to demonstrate the use of JNDI to access directory services.

Naming and Directory Concepts

Naming Concepts
A fundamental facility in any computing system is the naming service--the means by which names are associated with objects and objects are found based on their names. When using almost any computer program or system, you are always naming one object or another. For example, when you use an electronic mail system, you must provide the name of the recipient. To access a file in the computer, you must supply its name. A naming service allows you to look up an object given its name.

A naming service's primary function is to map people-friendly names to objects, such as addresses, identifiers, or objects typically used by computer programs.

For example, the Internet Domain Name System (DNS) maps machine names to IP Addresses:

www.sun.com ==> 192.9.48.5.

A file system maps a filename to a file reference that a program can use to access the contents of the file.

c:\bin\autoexec.bat ==> File Reference

These two examples also illustrate the wide range of scale at which naming services exist--from naming an object on the Internet to naming a file on the local file system.

Names

To look up an object in a naming system, you supply it the name of the object. The naming system determines the syntax that the name must follow. This syntax is sometimes called the naming system's naming convention. A name is made up of components. A name's representation consists of a component separator marking the components of the name.

Naming System       Component Separator    Names
UNIX file system    "/"                    /usr/hello
DNS                 "."                    sales.Wiz.COM
LDAP                "," and "="            cn=Rosanna Lee, o=Sun, c=US

The UNIX file system's naming convention is that a file is named from its path relative to the root of the file system, with each component in the path separated from left to right using the forward slash character ("/"). The UNIX pathname, /usr/hello, for example, names a file hello in the file directory usr, which is located in the root of the file system.

DNS naming convention calls for components in the DNS name to be ordered from right to left and delimited by the dot character ("."). Thus the DNS name sales.Wiz.COM names a DNS entry with the name sales, relative to the DNS entry Wiz.COM. The DNS entry Wiz.COM, in turn, names an entry with the name Wiz in the COM entry.

The Lightweight Directory Access Protocol (LDAP) naming convention orders components from right to left, delimited by the comma character (","). Thus the LDAP name cn=Rosanna Lee, o=Sun, c=US names an LDAP entry cn=Rosanna Lee, relative to the entry o=Sun, which in turn, is relative to c=us. LDAP has the further rule that each component of the name must be a name/value pair with the name and value separated by an equals character ("=").
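For illustration, a sketch using the standard javax.naming.ldap.LdapName class (available since Java 5) shows how an LDAP name like the one above breaks into its name/value components:

import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class LdapNameSketch {
    public static void main(String[] args) throws Exception {
        LdapName name = new LdapName("cn=Rosanna Lee,o=Sun,c=US");
        // Components are stored right to left: c=US is at index 0.
        for (Rdn rdn : name.getRdns()) {
            System.out.println(rdn.getType() + " = " + rdn.getValue());
        }
    }
}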

Bindings
The association of a name with an object is called a binding. A file name is bound to a file.

The DNS contains bindings that map machine names to IP addresses. An LDAP name is bound to an LDAP entry.

References and Addresses

Depending on the naming service, some objects cannot be stored directly by the naming service; that is, a copy of the object cannot be placed inside the naming service. Instead, they must be stored by reference; that is, a pointer or reference to the object is placed inside the naming service. A reference represents information about how to access an object. Typically, it is a compact representation that can be used to communicate with the object, while the object itself might contain more state information. Using the reference, you can contact the object and obtain more information about the object.

For example, an airplane object might contain a list of the airplane's passengers and crew, its flight plan, and fuel and instrument status, and its flight number and departure time. By contrast, an airplane object reference might contain only its flight number and departure time. The reference is a much more compact representation of information about the airplane object and can be used to obtain additional information. A file object, for example, is accessed using a file reference. A printer object, for example, might contain the state of the printer, such as its current queue and the amount of paper in the paper tray. A printer object reference, on the other hand, might contain only information on how to reach the printer, such as its print server name and printing protocol.

Although in general a reference can contain any arbitrary information, it is useful to refer to its contents as addresses (or communication end points): specific information about how to access the object.

For simplicity, this tutorial uses "object" to refer to both objects and object references when a distinction between the two is not required.


Context

A context is a set of name-to-object bindings. Every context has an associated naming convention. A context always provides a lookup (resolution) operation that returns the object; it typically also provides operations such as those for binding names, unbinding names, and listing bound names. A name in one context object can be bound to another context object (called a subcontext) that has the same naming convention.

A file directory, such as /usr, in the UNIX file system represents a context. A file directory named relative to another file directory represents a subcontext (UNIX users refer to this as a subdirectory). That is, in a file directory /usr/bin, the directory bin is a subcontext of usr. A DNS domain, such as COM, represents a context. A DNS domain named relative to another DNS domain represents a subcontext. For the DNS domain Sun.COM, the DNS domain Sun is a subcontext of COM.

Finally, an LDAP entry, such as c=us, represents a context. An LDAP entry named relative to another LDAP entry represents a subcontext. For the LDAP entry o=sun,c=us, the entry o=sun is a subcontext of c=us.

Naming Systems and Namespaces

A naming system is a connected set of contexts of the same type (they have the same naming convention) and provides a common set of operations.

A system that implements the DNS is a naming system. A system that communicates using the LDAP is a naming system.

A naming system provides a naming service to its customers for performing naming-related operations. A naming service is accessed through its own interface. The DNS offers a naming service that maps machine names to IP addresses. LDAP offers a naming service that maps LDAP names to LDAP entries. A file system offers a naming service that maps filenames to files and directories.

A namespace is the set of all possible names in a naming system. The UNIX file system has a namespace consisting of all of the names of files and directories in that file system. The DNS namespace contains names of DNS domains and entries. The LDAP namespace contains names of LDAP entries.

Directory Concepts
Many naming services are extended with a directory service. A directory service associates names with objects and also associates such objects with attributes.


directory service = naming service + objects containing attributes

You not only can look up an object by its name but also get the object's attributes or search for the object based on its attributes.

An example is the telephone company's directory service. It maps a subscriber's name to his address and phone number. A computer's directory service is very much like a telephone company's directory service in that both can be used to store information such as telephone numbers and addresses. The computer's directory service is much more powerful, however, because it is available online and can be used to store a variety of information that can be utilized by users, programs, and even the computer itself and other computers.

A directory object represents an object in a computing environment. A directory object can be used, for example, to represent a printer, a person, a computer, or a network. A directory object contains attributes that describe the object that it represents.

Attributes

A directory object can have attributes. For example, a printer might be represented by a directory object that has as attributes its speed, resolution, and color. A user might be represented by a directory object that has as attributes the user's e-mail address, various telephone numbers, postal mail address, and computer account information.

An attribute has an attribute identifier and a set of attribute values. An attribute identifier is a token that identifies an attribute independent of its values. For example, two different computer accounts might have a "mail" attribute; "mail" is the attribute identifier. An attribute value is the contents of the attribute. The email address, for example, might have:

Attribute Identifier    Attribute Value
mail                    [email protected]
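As a small sketch of how attributes are modeled in the JNDI directory package (the identifiers and values here are made up for illustration):

import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;

public class AttributeSketch {
    public static void main(String[] args) throws Exception {
        // An attribute: an identifier ("mail") plus one or more values.
        Attribute mail = new BasicAttribute("mail", "jsmith@example.com");
        mail.add("john.smith@example.com");   // a second value for the same identifier

        // A collection of attributes describing one directory object.
        Attributes person = new BasicAttributes(true); // true = ignore identifier case
        person.put(mail);
        person.put("cn", "John Smith");

        System.out.println(person.get("mail")); // prints the identifier and its values
    }
}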

Directories and Directory Services
A directory is a connected set of directory objects. A directory service is a service that provides operations for creating, adding, removing, and modifying the attributes associated with objects in a directory. The service is accessed through its own interface.

Many examples of directory services are possible.


Network Information Service (NIS)
NIS is a directory service available on the Unix operating system for storing system-related information, such as that relating to machines, networks, printers, and users.

Sun Java Directory Server
The Sun Java Directory Server is a general-purpose directory service based on the Internet standard LDAP.

Novell Directory Service (NDS)
NDS is a directory service from Novell that provides information about many networking services, such as the file and print services.

Search Service
You can look up a directory object by supplying its name to the directory service. Alternatively, many directories, such as those based on the LDAP, support the notion of searches. When you search, you can supply not a name but a query consisting of a logical expression in which you specify the attributes that the object or objects must have. The query is called a search filter. This style of searching is sometimes called reverse lookup or content-based searching. The directory service searches for and returns the objects that satisfy the search filter.

For example, you can query the directory service to find:

all users that have the attribute "age" greater than 40 years.
all machines whose IP address starts with "192.113.50".

Combining Naming and Directory Services
Directories often arrange their objects in a hierarchy. For example, the LDAP arranges all directory objects in a tree, called a directory information tree (DIT). Within the DIT, an organization object, for example, might contain group objects that might in turn contain person objects. When directory objects are arranged in this way, they play the role of naming contexts in addition to that of containers of attributes.

Overview of JNDI
The Java Naming and Directory Interface (JNDI) is an application programming interface (API) that provides naming and directory functionality to applications written using the Java programming language. It is defined to be independent of any specific directory service implementation. Thus a variety of directories (new, emerging, and already deployed) can be accessed in a common way.

Architecture
The JNDI architecture consists of an API and a service provider interface (SPI). Java applications use the JNDI API to access a variety of naming and directory services. The SPI enables a variety of naming and directory services to be plugged in transparently, thereby allowing the Java application using the JNDI API to access their services. See the following figure.

Packaging
JNDI is included in the Java SE Platform. To use the JNDI, you must have the JNDI classes and one or more service providers. The JDK includes service providers for the following naming/directory services:

Lightweight Directory Access Protocol (LDAP)
Common Object Request Broker Architecture (CORBA) Common Object Services (COS) name service
Java Remote Method Invocation (RMI) Registry
Domain Name Service (DNS)

Other service providers can be downloaded from the JNDI Web site or obtained from other vendors.

The JNDI is divided into five packages:

javax.naming
javax.naming.directory
javax.naming.ldap
javax.naming.event
javax.naming.spi

The next part of the lesson has a brief description of the JNDI packages.

Naming Package
The javax.naming package contains classes and interfaces for accessing naming services.

Context

The javax.naming package defines a Context interface, which is the core interface for looking up, binding/unbinding, renaming objects and creating and destroying subcontexts.

Lookup
The most commonly used operation is lookup(). You supply lookup() the name of the object you want to look up, and it returns the object bound to that name.

Bindings
listBindings() returns an enumeration of name-to-object bindings. A binding is a tuple containing the name of the bound object, the name of the object's class, and the object itself.

List


list() is similar to listBindings(), except that it returns an enumeration of names containing an object's name and the name of the object's class. list() is useful for applications such as browsers that want to discover information about the objects bound within a context but that don't need all of the actual objects. Although listBindings() provides all of the same information, it is potentially a much more expensive operation.
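A minimal sketch contrasting the two operations; it assumes an initial context has already been configured, for example through a jndi.properties file, and the empty string names the context itself:

import javax.naming.Binding;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NameClassPair;
import javax.naming.NamingEnumeration;

public class ListSketch {
    public static void main(String[] args) throws Exception {
        Context ctx = new InitialContext();

        // Names and class names only -- cheap.
        NamingEnumeration<NameClassPair> names = ctx.list("");
        while (names.hasMore()) {
            NameClassPair pair = names.next();
            System.out.println(pair.getName() + " : " + pair.getClassName());
        }

        // Names plus the bound objects themselves -- potentially expensive.
        NamingEnumeration<Binding> bindings = ctx.listBindings("");
        while (bindings.hasMore()) {
            Binding b = bindings.next();
            System.out.println(b.getName() + " = " + b.getObject());
        }
        ctx.close();
    }
}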

Name
Name is an interface that represents a generic name--an ordered sequence of zero or more components. Naming systems use this interface to define names that follow their conventions, as described in the Naming and Directory Concepts lesson.

References
Objects are stored in naming and directory services in different ways. A reference might be a very compact representation of an object.

The JNDI defines the Reference class to represent a reference. A reference contains information on how to construct a copy of the object. The JNDI will attempt to turn references looked up from the directory into the Java objects that they represent, so that JNDI clients have the illusion that what is stored in the directory are Java objects.

The Initial Context
In the JNDI, all naming and directory operations are performed relative to a context. There are no absolute roots. Therefore the JNDI defines an InitialContext, which provides a starting point for naming and directory operations. Once you have an initial context, you can use it to look up other contexts and objects.
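For example, here is a sketch of creating an initial context over LDAP and looking up an entry; the provider URL, port, and names are purely illustrative and assume a directory set up along the lines of the JNDI Tutorial:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class InitialContextSketch {
    public static void main(String[] args) {
        Hashtable<String, String> env = new Hashtable<String, String>();
        // The LDAP service provider shipped with the JDK.
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // Illustrative server and starting point in the directory tree.
        env.put(Context.PROVIDER_URL, "ldap://localhost:389/o=JNDITutorial");
        try {
            Context ctx = new InitialContext(env);
            // Look up an object bound under a name relative to the initial context.
            Object obj = ctx.lookup("ou=People");
            System.out.println(obj);
            ctx.close();
        } catch (NamingException e) {
            e.printStackTrace();
        }
    }
}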

Exceptions
The JNDI defines a class hierarchy for exceptions that can be thrown in the course of performing naming and directory operations. The root of this class hierarchy is NamingException. Programs interested in dealing with a particular exception can catch the corresponding subclass of the exception. Otherwise, they should catch NamingException.

Directory and LDAP Packages

Directory Package
The javax.naming.directory package extends the javax.naming package to provide functionality for accessing directory services in addition to naming services. This package allows applications to retrieve the attributes associated with objects stored in the directory and to search for objects using specified attributes.

The Directory Context
The DirContext interface represents a directory context. DirContext also behaves as a naming context by extending the Context interface. You use the getAttributes() method to retrieve the attributes associated with a directory entry (for which you supply the name). Attributes are modified using the modifyAttributes() method. You can search the directory with the search() methods; other overloaded forms of search() support more sophisticated search filters.
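A sketch of these operations against an LDAP directory; again, the URL and entry names are illustrative and assume a directory populated as in the JNDI Tutorial:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class DirSketch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:389/o=JNDITutorial"); // illustrative

        DirContext ctx = new InitialDirContext(env);

        // Read the attributes of one entry (the name is illustrative).
        Attributes attrs = ctx.getAttributes("cn=Ted Geisel, ou=People");
        System.out.println("mail: " + attrs.get("mail")); // the attribute, or null if absent

        // Search with a filter instead of a name.
        SearchControls sc = new SearchControls();
        sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
        NamingEnumeration<SearchResult> results =
            ctx.search("ou=People", "(sn=Geisel)", sc);
        while (results.hasMore()) {
            System.out.println(results.next().getName());
        }
        ctx.close();
    }
}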

LDAP Package
The javax.naming.ldap package contains classes and interfaces for using features that are specific to the LDAP v3 that are not already covered by the more generic javax.naming.directory package. In fact, most JNDI applications that use the LDAP will find the javax.naming.directory package sufficient and will not need to use the javax.naming.ldap package at all. This package is primarily for those applications that need to use "extended" operations, controls, or unsolicited notifications.

"Extended" Operation

In addition to specifying well defined operations such as search and modify, the LDAP v3 (RFC 2251) specifies a way to transmit yet-to-be defined operations between the LDAP client and the server. These operations are called "extended" operations. An "extended" operation may be defined by a standards organization such as the Internet Engineering Task Force (IETF) or by a vendor.

Controls
The LDAP v3 allows any request or response to be augmented by yet-to-be-defined modifiers, called controls. A control sent with a request is a request control, and a control sent with a response is a response control. A control may be defined by a standards organization such as the IETF or by a vendor. Request controls and response controls are not necessarily paired; that is, there need not be a response control for each request control sent, and vice versa.

Unsolicited Notifications
In addition to the normal request/response style of interaction between the client and server, the LDAP v3 also specifies unsolicited notifications--messages that are sent from the server to the client asynchronously and not in response to any client request.

The LDAP Context
The LdapContext interface represents a context for performing "extended" operations, sending request controls, and receiving response controls. Examples of how to use these features are described in the JNDI Tutorial's Controls and Extensions lesson.

Event and Service Provider Packages

Event Package
The javax.naming.event package contains classes and interfaces for supporting event notification in naming and directory services. Event notification is described in detail in the JNDI Tutorial.

Events

A NamingEvent represents an event that is generated by a naming/directory service. The event contains a type that identifies the type of event. For example, event types are categorized into those that affect the namespace, such as "object added," and those that do not, such as "object changed."

Listeners
A NamingListener is an object that listens for NamingEvents. Each category of event type has a corresponding type of NamingListener. For example, a NamespaceChangeListener represents a listener interested in namespace change events, and an ObjectChangeListener represents a listener interested in object change events.

To receive event notifications, a listener must be registered with either an EventContext or an EventDirContext. Once registered, the listener will receive event notifications when the corresponding changes occur in the naming/directory service. The details about event notification can be found in the JNDI Tutorial.
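A brief sketch of registering such a listener follows; the target name, scope, and environment settings are assumptions for illustration only.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.event.*;

public class WatchExample {
    public static void main(String[] args) throws Exception {
        Hashtable<String, Object> env = new Hashtable<String, Object>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:389/o=JNDITutorial");

        // The context being watched must support EventContext.
        EventContext ctx = (EventContext) new InitialContext(env).lookup("ou=People");

        NamingListener listener = new NamespaceChangeListener() {
            public void objectAdded(NamingEvent evt)   { System.out.println("added: " + evt.getNewBinding()); }
            public void objectRemoved(NamingEvent evt) { System.out.println("removed: " + evt.getOldBinding()); }
            public void objectRenamed(NamingEvent evt) { System.out.println("renamed: " + evt.getNewBinding()); }
            public void namingExceptionThrown(NamingExceptionEvent evt) { evt.getException().printStackTrace(); }
        };

        // Listen for changes to the object named "cn=SomeObject" itself.
        ctx.addNamingListener("cn=SomeObject", EventContext.OBJECT_SCOPE, listener);
        Thread.sleep(60000); // keep the program alive long enough to receive notifications
    }
}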

Service Provider Package
The javax.naming.spi package provides the means by which developers of different naming/directory service providers can develop and hook up their implementations so that the corresponding services are accessible from applications that use the JNDI.

Plug-In Architecture

The javax.naming.spi package allows different implementations to be plugged in dynamically. These implementations include those for the initial context and for contexts that can be reached from the initial context.

Java Object Support
The javax.naming.spi package allows implementors of lookup() and related methods to return Java objects that are natural and intuitive for the Java programmer. For example, if you look up a printer name from the directory, then you likely would expect to get back a printer object on which to operate. This support is provided in the form of object factories.

This package also provides support for doing the reverse. That is, implementors of Context.bind() and related methods can accept Java objects and store the objects in a format acceptable to the underlying naming/directory service. This support is provided in the form of state factories.

Multiple Naming Systems (Federation)
JNDI operations allow applications to supply names that span multiple naming systems. In the process of completing an operation, one service provider might need to interact with another service provider, for example, to pass on the operation to be continued in the next naming system. This package provides support for different providers to cooperate to complete JNDI operations.

The details about the service provider mechanism can be found in the JNDI Tutorial.


J2EE

Servlets

JSP

EJB

Struts

JMS

Hibernate

SAX

DOM

Design Patterns

Session façade

Front Controller

DAO

Chain of Responsibility

Composition

Aggregation

Abstract Factory

Factory method

Bridge

Singleton

Builder

Iterator

Observer

State

Strategy

Visitor

Flyweight


Proxy

Router

Translation

Web Services

SOAP

UDDI

WSDL

Apache Axis

XML Technologies

XML

DDL

XSL

Link

Path

XQuery

Database

Oracle 9i (SQL, PL/SQL)

DB2

Application Servers

WebSphere Application Server 6.1

WebLogic 9.1

JBoss 4.1.2

Apache Tomcat 5.5

UML tools

Rational UML Modeling tool

Web Design

HTML


JavaScript

CSS

AJAX

Methodologies

OOAD

OODB

SAD

Tools

Eclipse 3.2

Ant

Maven

Batch Script

Shell script

Strategies

Requirement/Request Analysis

Deployment and Configuration

Performance Tuning and Review.

Configuration Tools

Rational Clear Case

Operating Systems

Windows

Unix

Sub Topics

Extension Mechanism for Support of Optional Packages

Optional packages are packages of classes (and any associated native code) that application developers can use to extend the functionality of the core platform. The extension mechanism allows the Java virtual machine (VM) to use the classes of the optional extension in much the same way as the VM uses classes in the Java platform. The extension mechanism also provides a way for needed optional packages to be retrieved from specified URLs when they are not already installed in the JDK or JRE.

Overview

* Overview - What optional packages are and how to use them.

API Specification

The following classes play a role in the extension mechanism:

* java.lang.ClassLoader
* java.lang.Package
* java.lang.Thread
* java.net.JarURLConnection
* java.net.URLClassLoader
* java.security.SecureClassLoader

See also the API Details section of the mechanism specification for notes on these APIs.

Tutorials and Programmer's Guides

Located on the Java Software website:

* The Extension Mechanism trail of the Java Tutorial.

API Enhancements

* Enhancements to the extension mechanism in version 1.3 included an extended set of manifest attributes that can be used for checking vendor and versioning information of installed optional packages. If an applet requires an optional package that isn't installed, or that is installed but has the wrong version number or is not from the appropriate vendor, the Java Plug-in can download the needed extension from a specified URL. For more information, see Optional-Package Versioning.

More Information

Extension Mechanism Architecture - Notes on the extension mechanism API and how optional packages use the Jar file format.

Contents


* Introduction
* The Extension Mechanism
  o Architecture
  o Optional Package Deployment
  o Bundled Optional Packages
  o Installed Optional Packages
* Optional Package Sealing
* Optional Package Security
* Related APIs

Introduction

Note: Optional packages are the new name for what used to be known as standard extensions. The "extension mechanism" is that functionality of the JDK™ and JRE™ that supports the use of optional packages.

This document describes the mechanism provided by the Java™ platform for handling optional packages. An optional package is a group of packages housed in one or more JAR files that implement an API that extends the Java platform. Optional package classes extend the platform in the sense that the virtual machine can find and load them without their being on the class path, much as if they were classes in the platform's core API.

Since optional packages extend the platform's core API, they should be used judiciously. Most commonly they are used for well-standardized interfaces such as those defined by the Java Community Process, although they may also be appropriate for site-wide interfaces. Optional packages are rarely appropriate for interfaces used by a single application or a small set of applications.

Furthermore, since the symbols defined by installed optional packages will be visible in all Java processes, care should be taken to ensure that all visible symbols follow the appropriate "reverse domain name" and "class hierarchy" conventions. For example, com.mycompany.MyClass.

An implementation of an optional package may consist of code written in the Java programming language and, less commonly, platform-specific native code. In addition, it may include properties, localization catalogs, images, serialized data, and other resources specific to the optional package.

Support for optional packages in browsers such as Internet Explorer and Netscape Navigator is available through the Java Plug-in.

An optional package is an implementation of an open, standard API (examples of optional packages from Sun are Java Servlet, Java 3D, and Java Management). Most optional packages are rooted in the javax.* namespace, although there may be exceptions.


The Extension Mechanism

Architecture
The extension mechanism comprises the following elements:

* An optional package or application packaged as a JAR file can declare dependencies on other JAR files, thus allowing an application to consist of multiple modules.

* The class loading mechanism is augmented to search installed optional packages (and other libraries) for classes and, if that fails, to search along an application-specified path for classes.

Applications must therefore, in general, be prepared to specify and supply the optional packages (and, more generally, libraries) that they need. The system will prefer installed copies of optional packages (and libraries) if they exist; otherwise, it will delegate to the class loader of the application to find and load the referenced optional package (and library) classes.

This architecture, since it allows applications, applets and servlets to extend their own class path, also permits packaging and deploying these as multiple JAR files.

Each optional package or application consists of at least one JAR file containing an optional manifest, code and assorted resources. As described below, this primary JAR file can also include additional information in its manifest to describe dependencies on other JAR files. The jar command line tool included with the JDK provides a convenient means of packaging optional packages. (See the reference pages for the jar tool: [Microsoft Windows] [Solaris™ Operating System (Solaris OS), Linux])

An optional package or application may refer to additional JAR files which will be referenced from the primary JAR, and these can optionally contain their own dependency information as well.

Packages comprising optional packages should follow the standard package naming conventions. These conventions are outlined in The Java Language Specification, but the requirement that the domain prefix be specified in all upper-case letters has been removed. For example, the package name com.sun.server is an accepted alternative to COM.sun.server. Unique package naming is recommended in order to avoid conflicts, because applications and optional packages may share the same class loader.

Optional Package Deployment
An optional package may either be bundled with an application or installed in the JRE for use by all applications. Bundled optional packages are provided at the same code base as the application and will automatically be downloaded in the case of network applications (applets). For this reason, bundled optional packages are often called download optional packages. When the manifest of a bundled optional package's JAR file contains version information and the JAR is signed, it can be installed into the extensions directory of the JRE which downloads it (see Deploying Java Extensions). Installed optional packages are loaded when first used and will be shared by all applications using the same JRE.

When packaging optional packages, the JAR file manifest can be used to identify vendor and version information (see Package Version Identification).

Classes for installed optional packages are shared by all code in the same virtual machine. Thus, installed optional packages are similar to the platform's core classes (in rt.jar), but with an associated class loader and a pre-configured security policy as described below.

Classes for bundled optional packages are private to the class loader of the application, applet or servlet. In the case of network applications such as applets, these optional packages will be automatically downloaded as needed. Since class loaders are currently associated with a codebase, this permits multiple applets originating from the same codebase to share implementations (JARs). However, signed bundled optional packages with version information as described above are installed in the JRE, and their contents are available to all applications running on that JRE and are therefore not private.

Bundled Optional Packages
The manifest for an application or optional package can specify one or more relative URLs referring to the JAR files and directories for the optional packages (and other libraries) that it needs. These relative URLs will be treated as relative to the code base that the containing application or optional package JAR file was loaded from.

An application (or, more generally, JAR file) specifies the relative URLs of the optional packages (and libraries) that it needs via the manifest attribute Class-Path. This attribute lists the URLs to search for implementations of optional packages (or other libraries) if they cannot be found as optional packages installed on the host Java virtual machine*. These relative URLs may include JAR files and directories for any libraries or resources needed by the application or optional package. Relative URLs not ending with '/' are assumed to refer to JAR files. For example,

Class-Path: servlet.jar infobus.jar acme/beans.jar images/

At most one Class-Path header may be specified in a JAR file's manifest.

Currently, the URLs must be relative to the code base of the JAR file for security reasons. Thus, remote optional packages will originate from the same code base as the application.


Each relative URL is resolved against the code base that the containing application or optional package was loaded from. If the resulting URL is invalid or refers to a resource that cannot be found then it is ignored.

The resulting URLs are used to extend the class path for the application, applet, or servlet by inserting the URLs in the class path immediately following the URL of the containing JAR file. Any duplicate URLs are omitted. For example, given the following class path:

        a.jar b.jar
If optional package b.jar contained the following Class-Path manifest attribute:
        Class-Path: x.jar a.jar
then the resulting application class path would be the following:
        a.jar b.jar x.jar
Of course, if x.jar had dependencies of its own, then these would be added according to the same rules, and so on for each subsequent URL. In the actual implementation, JAR file dependencies are processed lazily so that the JAR files are not actually opened until needed.

Installed Optional Packages
Beginning with Sun's implementation of the Java 2 Platform, the JAR files of an installed optional package are placed in a standard local code source:

<java-home>\lib\ext                     [Microsoft Windows]
<java-home>/lib/ext                     [Solaris OS, Linux]

Here <java-home> refers to the directory where the runtime software is installed (which is the top-level directory of the JRE or the jre directory in the JDK).

The locations for installed optional packages can be specified through the system property java.ext.dirs. This property specifies one or more directories to search for installed optional packages, each separated by File.pathSeparatorChar. The default setting for java.ext.dirs is the standard directory for installed optional packages, as indicated above. For Java 6 and later, the default is enhanced: it is suffixed with the path to a platform-specific directory that is shared by all JREs (Java 6 or later) installed on a system:

%SystemRoot%\Sun\Java\lib\ext           [Microsoft Windows]
/usr/java/packages/lib/ext              [Linux]
/usr/jdk/packages/lib/ext               [Solaris OS]
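As a quick check of where a given runtime actually looks for installed optional packages, the sketch below simply prints the relevant system properties; it assumes nothing beyond a standard JRE.

public class ExtensionDirs {
    public static void main(String[] args) {
        // Directories searched for installed optional packages, separated by the platform path separator.
        System.out.println("java.ext.dirs = " + System.getProperty("java.ext.dirs"));
        System.out.println("java.home     = " + System.getProperty("java.home"));
    }
}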

An installed optional package may also contain one or more shared libraries (such as .dll files) and executables. In what follows, <arch> will be shown but in practice should be the name of an instruction set architecture, for example sparc, sparcv9, i386, and amd64. These can be installed in one of two places. The first to be searched is:

<java-home>\bin                         [Microsoft Windows]


<java-home>/lib/<arch>                  [Solaris OS, Linux]

The second extension directory to be searched applies only to Java 6 and later. As with Java packages, native libraries can be installed in directories that will be shared by all Java 6 and later JREs:

%SystemRoot%\Sun\Java\bin               [Microsoft Windows]
/usr/java/packages/lib/<arch>           [Linux]
/usr/jdk/packages/lib/<arch>            [Solaris OS]

An optional package that contains native code cannot be downloaded by network code into the virtual machine at execution time, whether such code is trusted or not. An optional package that contains native code and is bundled with a network application must be installed in the JDK or JRE.

By default, installed optional packages in this standard directory are trusted. That is, they are granted the same privileges as if they were core platform classes (those in rt.jar). This default privilege is specified in the system policy file (in <java-home>/jre/lib/security/java.policy), but can be overridden for a particular optional package by adding the appropriate policy file entry (see Permissions in the JDK).

Note also that if an installed optional package JAR is signed by a trusted entity, then it will be granted the privileges associated with the trusted signer.

Optional Package Sealing

JAR files and packages can optionally be sealed, so that an optional package or an individual package can enforce consistency within a version.

A package sealed within a JAR specifies that all classes defined in that package must originate from the same JAR. Otherwise, a SecurityException is thrown.

A sealed JAR specifies that all packages defined by that JAR are sealed unless overridden specifically for a package.

A sealed package is specified via the manifest attribute Sealed, whose value is true or false (case-insensitive). For example,

Name: javax/servlet/internal/
Sealed: true

specifies that the javax.servlet.internal package is sealed, and that all classes in that package must be loaded from the same JAR file.

If this attribute is missing, the package sealing attribute is that of the containing JAR file.


A sealed JAR is specified via the same manifest header, Sealed, with the value again of either true or false. For example,

Sealed: true

specifies that all packages in this archive are sealed unless explicitly overridden for a particular package with the Sealed attribute in a manifest entry.

If this attribute is missing, the JAR file is assumed to not be sealed, for backwards compatibility. The system then defaults to examining package headers for sealing information.

Package sealing is also important for security, because it restricts access to package-protected members to only those classes defined in the package that originated from the same JAR file.

Package sealing is checked for installed as well as downloaded optional packages, and will result in a SecurityException if violated. Also, the null package is not sealable, so classes that are to be sealed must be placed in their own packages. 
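For illustration, the sealing status of a loaded package can be inspected at run time through java.lang.Package; the class chosen below is arbitrary.

public class SealingCheck {
    public static void main(String[] args) {
        Package p = javax.swing.JButton.class.getPackage(); // any class that has already been loaded
        System.out.println(p.getName() + " sealed? " + p.isSealed());
    }
}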

Optional Package Security

The code source for an installed optional package (namely <java-home>/lib/ext) has a pre-configured security policy associated with it. In Sun's implementation, the exact level of trust granted to JARs in this directory is specified by the standard security policy configuration file <java-home>/lib/security/java.policy. The default policy is for an installed optional package to behave the same way it would if it were part of the core platform. This follows from the common need for an installed optional package to load native code.

The Java Security Model provides some safety when installed optional package code is called from untrusted code. However optional package code must be carefully reviewed for potential security breaches wherever it uses privileged blocks.

A remotely loaded optional package that needs to use access-checked system services (such as file I/O) to function correctly must either be signed by a trusted entity or loaded from a trusted source.

Consult the Java security documentation for further details regarding how to write optional package and application code to use the security features of the Java Platform.

Related APIs

Several classes in the Java platform support the extension mechanism, including:


public class java.lang.ClassLoader
public class java.lang.Package
public class java.net.URLClassLoader

*As used on this web site, the terms "Java Virtual Machine" or "JVM" mean a virtual machine for the Java platform.

Generics

Introduction
JDK 5.0 introduces several new extensions to the Java programming language. One of these is the introduction of generics.

This trail is an introduction to generics. You may be familiar with similar constructs from other languages, most notably C++ templates. If so, you'll see that there are both similarities and important differences. If you are unfamiliar with look-alike constructs from elsewhere, all the better; you can start fresh, without having to unlearn any misconceptions.

Generics allow you to abstract over types. The most common examples are container types, such as those in the Collections hierarchy.

Here is a typical usage of that sort:

List myIntList = new LinkedList(); // 1
myIntList.add(new Integer(0)); // 2
Integer x = (Integer) myIntList.iterator().next(); // 3

The cast on line 3 is slightly annoying. Typically, the programmer knows what kind of data has been placed into a particular list. However, the cast is essential. The compiler can only guarantee that an Object will be returned by the iterator. To ensure the assignment to a variable of type Integer is type safe, the cast is required.

Of course, the cast not only introduces clutter. It also introduces the possibility of a run time error, since the programmer may be mistaken.

What if programmers could actually express their intent, and mark a list as being restricted to contain a particular data type? This is the core idea behind generics. Here is a version of the program fragment given above using generics:

List<Integer> myIntList = new LinkedList<Integer>(); // 1'
myIntList.add(new Integer(0)); // 2'
Integer x = myIntList.iterator().next(); // 3'

Notice the type declaration for the variable myIntList. It specifies that this is not just an arbitrary List, but a List of Integer, written List<Integer>. We say that List is a generic interface that takes a type parameter--in this case, Integer. We also specify a type parameter when creating the list object.


Note, too, that the cast on line 3' is gone.

Now, you might think that all we've accomplished is to move the clutter around. Instead of a cast to Integer on line 3, we have Integer as a type parameter on line 1'. However, there is a very big difference here. The compiler can now check the type correctness of the program at compile-time. When we say that myIntList is declared with type List<Integer>, this tells us something about the variable myIntList, which holds true wherever and whenever it is used, and the compiler will guarantee it. In contrast, the cast tells us something the programmer thinks is true at a single point in the code.

The net effect, especially in large programs, is improved readability and robustness.

Generics and Subtyping
Let's test your understanding of generics. Is the following code snippet legal?

List<String> ls = new ArrayList<String>(); // 1
List<Object> lo = ls; // 2

Line 1 is certainly legal. The trickier part of the question is line 2. This boils down to the question: is a List of String a List of Object? Most people instinctively answer, "Sure!"

Well, take a look at the next few lines:

lo.add(new Object()); // 3
String s = ls.get(0); // 4: Attempts to assign an Object to a String!

Here we've aliased ls and lo. Accessing ls, a list of String, through the alias lo, we can insert arbitrary objects into it. As a result ls does not hold just Strings anymore, and when we try and get something out of it, we get a rude surprise.

The Java compiler will prevent this from happening of course. Line 2 will cause a compile time error.

In general, if Foo is a subtype (subclass or subinterface) of Bar, and G is some generic type declaration, it is not the case that G<Foo> is a subtype of G<Bar>. This is probably the hardest thing you need to learn about generics, because it goes against our deeply held intuitions.

We should not assume that collections never change. Our instinct may lead us to think of them as immutable, but that assumption is not safe.

For example, if the department of motor vehicles supplies a list of drivers to the census bureau, this seems reasonable. We think that a List<Driver> is a List<Person>, assuming that Driver is a subtype of Person. In fact, what is being passed is a copy of the registry of drivers. Otherwise, the census bureau could add new people who are not drivers into the list, corrupting the DMV's records.


To cope with this sort of situation, it's useful to consider more flexible generic types. The rules we've seen so far are quite restrictive.

Wildcards
Consider the problem of writing a routine that prints out all the elements in a collection. Here's how you might write it in an older version of the language (i.e., a pre-5.0 release):

void printCollection(Collection c) {
    Iterator i = c.iterator();
    for (int k = 0; k < c.size(); k++) {
        System.out.println(i.next());
    }
}

And here is a naive attempt at writing it using generics (and the new for loop syntax):

void printCollection(Collection<Object> c) {
    for (Object e : c) {
        System.out.println(e);
    }
}

The problem is that this new version is much less useful than the old one. Whereas the old code could be called with any kind of collection as a parameter, the new code only takes Collection<Object>, which, as we've just demonstrated, is not a supertype of all kinds of collections!

So what is the supertype of all kinds of collections? It's written Collection<?> (pronounced "collection of unknown"), that is, a collection whose element type matches anything. It's called a wildcard type for obvious reasons. We can write:

void printCollection(Collection<?> c) {
    for (Object e : c) {
        System.out.println(e);
    }
}

and now, we can call it with any type of collection. Notice that inside printCollection(), we can still read elements from c and give them type Object. This is always safe, since whatever the actual type of the collection, it does contain objects. It isn't safe to add arbitrary objects to it, however:

Collection<?> c = new ArrayList<String>();
c.add(new Object()); // Compile time error

Since we don't know what the element type of c stands for, we cannot add objects to it. The add() method takes arguments of type E, the element type of the collection. When the actual type parameter is ?, it stands for some unknown type. Any parameter we pass to add would have to be a subtype of this unknown type. Since we don't know what type that is, we cannot pass anything in. The sole exception is null, which is a member of every type.

On the other hand, given a List<?>, we can call get() and make use of the result. The result type is an unknown type, but we always know that it is an object. It is therefore safe to assign the result of get() to a variable of type Object or pass it as a parameter where the type Object is expected.

Bounded Wildcards
Consider a simple drawing application that can draw shapes such as rectangles and circles. To represent these shapes within the program, you could define a class hierarchy such as this:

public abstract class Shape {
    public abstract void draw(Canvas c);
}

public class Circle extends Shape {
    private int x, y, radius;
    public void draw(Canvas c) { ... }
}

public class Rectangle extends Shape {
    private int x, y, width, height;
    public void draw(Canvas c) { ... }
}

These classes can be drawn on a canvas:

public class Canvas {
    public void draw(Shape s) {
        s.draw(this);
    }
}

Any drawing will typically contain a number of shapes. Assuming that they are represented as a list, it would be convenient to have a method in Canvas that draws them all:

public void drawAll(List<Shape> shapes) {
    for (Shape s : shapes) {
        s.draw(this);
    }
}

Now, the type rules say that drawAll() can only be called on lists of exactly Shape: it cannot, for instance, be called on a List<Circle>. That is unfortunate, since all the method does is read shapes from the list, so it could just as well be called on a List<Circle>. What we really want is for the method to accept a list of any kind of shape:

public void drawAll(List<? extends Shape> shapes) {
    ...
}

There is a small but very important difference here: we have replaced the type List<Shape> with List<? extends Shape>. Now drawAll() will accept lists of any subclass of Shape, so we can now call it on a List<Circle> if we want.


List<? extends Shape> is an example of a bounded wildcard. The ? stands for an unknown type, just like the wildcards we saw earlier. However, in this case, we know that this unknown type is in fact a subtype of Shape. (Note: It could be Shape itself, or some subclass; it need not literally extend Shape.) We say that Shape is the upper bound of the wildcard.

There is, as usual, a price to be paid for the flexibility of using wildcards. That price is that it is now illegal to write into shapes in the body of the method. For instance, this is not allowed:

public void addRectangle(List<? extends Shape> shapes) {
    shapes.add(0, new Rectangle()); // Compile-time error!
}

You should be able to figure out why the code above is disallowed. The type of the second parameter to shapes.add() is ? extends Shape--an unknown subtype of Shape. Since we don't know what type it is, we don't know if it is a supertype of Rectangle; it might or might not be such a supertype, so it isn't safe to pass a Rectangle there.

Bounded wildcards are just what one needs to handle the example of the DMV passing its data to the census bureau. Our example assumes that the data is represented by mapping from names (represented as strings) to people (represented by reference types such as Person or its subtypes, such as Driver). Map<K,V> is an example of a generic type that takes two type arguments, representing the keys and values of the map.

Again, note the naming convention for formal type parameters--K for keys and V for values.

public class Census {
    public static void addRegistry(Map<String, ? extends Person> registry) { }
}
...

Map<String, Driver> allDrivers = ... ;
Census.addRegistry(allDrivers);

JMX

Java Management Extensions provides a standard way of managing resources such as applications, devices, and services.

The Java Management Extensions (JMX) trail provides an introduction to the JMX technology, which is included in the Java Platform, Standard Edition (Java SE platform). This trail presents examples of how to use the most important features of the JMX technology.

Overview of the JMX Technology

The Java Management Extensions (JMX) technology is a standard part of the Java Platform, Standard Edition (Java SE platform). The JMX technology was added to the platform in the Java 2 Platform, Standard Edition (J2SE) 5.0 release.

The JMX technology provides a simple, standard way of managing resources such as applications, devices, and services. Because the JMX technology is dynamic, you can use it to monitor and manage resources as they are created, installed and implemented. You can also use the JMX technology to monitor and manage the Java Virtual Machine (Java VM).

The JMX specification defines the architecture, design patterns, APIs, and services in the Java programming language for management and monitoring of applications and networks.

Using the JMX technology, a given resource is instrumented by one or more Java objects known as Managed Beans, or MBeans. These MBeans are registered in a core-managed object server, known as an MBean server. The MBean server acts as a management agent and can run on most devices that have been enabled for the Java programming language.

The specifications define JMX agents that you use to manage any resources that have been correctly configured for management. A JMX agent consists of an MBean server, in which MBeans are registered, and a set of services for handling the MBeans. In this way, JMX agents directly control resources and make them available to remote management applications.

The way in which resources are instrumented is completely independent from the management infrastructure. Resources can therefore be rendered manageable regardless of how their management applications are implemented.

The JMX technology defines standard connectors (known as JMX connectors) that enable you to access JMX agents from remote management applications. JMX connectors using different protocols provide the same management interface. Consequently, a management application can manage resources transparently, regardless of the communication protocol used. JMX agents can also be used by systems or applications that are not compliant with the JMX specification, as long as those systems or applications support JMX agents.

Why Use the JMX Technology?

The JMX technology provides developers with a flexible means to instrument Java technology-based applications (Java applications), create smart agents, implement distributed management middleware and managers, and smoothly integrate these solutions into existing management and monitoring systems.


The JMX technology enables Java applications to be managed without heavy investment. A JMX technology-based agent (JMX agent) can run on most Java technology-enabled devices. Consequently, Java applications can become manageable with little impact on their design. A Java application needs only to embed a managed object server and make some of its functionality available as one or several managed beans (MBeans) registered in the object server. That is all it takes to benefit from the management infrastructure.

The JMX technology provides a standard way to manage Java applications, systems, and networks. For example, the Java Platform, Enterprise Edition (Java EE) 5 Application Server conforms to the JMX architecture and consequently can be managed by using JMX technology.

The JMX technology can be used for out-of-the-box management of the Java VM. The Java Virtual Machine (Java VM) is highly instrumented using the JMX technology. You can start a JMX agent to access the built-in Java VM instrumentation, and thereby monitor and manage a Java VM remotely.

The JMX technology provides a scalable, dynamic management architecture. Every JMX agent service is an independent module that can be plugged into the management agent, depending on the requirements. This component-based approach means that JMX solutions can scale from small-footprint devices to large telecommunications switches and beyond. The JMX specification provides a set of core agent services. Additional services can be developed and dynamically loaded, unloaded, or updated in the management infrastructure.

The JMX technology leverages existing standard Java technologies. Whenever needed, the JMX specification references existing Java specifications, for example, the Java Naming and Directory Interface (JNDI) API.

The JMX technology-based applications (JMX applications) can be created from a NetBeans IDE module. You can obtain a module from the NetBeans Update Center (select Tools -> Update Center in the NetBeans interface) that enables you to create JMX applications by using the NetBeans IDE. This reduces the cost of development of JMX applications.

The JMX technology integrates with existing management solutions and emerging technologies. The JMX APIs are open interfaces that any management system vendor can implement. JMX solutions can use lookup and discovery services and protocols such as Jini network technology and the Service Location Protocol (SLP).

Architecture of the JMX Technology
The JMX technology can be divided into three levels, as follows:

* Instrumentation
* JMX agent
* Remote management


Instrumentation

To manage resources using the JMX technology, you must first instrument the resources in the Java programming language. You use Java objects known as MBeans to implement the access to the resources' instrumentation. MBeans must follow the design patterns and interfaces defined in the JMX specification. Doing so ensures that all MBeans provide managed resource instrumentation in a standardized way. In addition to standard MBeans, the JMX specification also defines a special type of MBean called an MXBean. An MXBean is an MBean that references only a pre-defined set of data types. Other types of MBean exist, but this trail will concentrate on standard MBeans and MXBeans.

Once a resource has been instrumented by MBeans, it can be managed through a JMX agent. MBeans do not require knowledge of the JMX agent with which they will operate.

MBeans are designed to be flexible, simple, and easy to implement. Developers of applications, systems, and networks can make their products manageable in a standard way without having to understand or invest in complex management systems. Existing resources can be made manageable with minimum effort.

In addition, the instrumentation level of the JMX specification provides a notification mechanism. This mechanism enables MBeans to generate and propagate notification events to components of the other levels.

JMX Agent

A JMX technology-based agent (JMX agent) is a standard management agent that directly controls resources and makes them available to remote management applications. JMX agents are usually located on the same machine as the resources they control, but this arrangement is not a requirement.

The core component of a JMX agent is the MBean server, a managed object server in which MBeans are registered. A JMX agent also includes a set of services to manage MBeans, and at least one communications adaptor or connector to allow access by a management application.

When you implement a JMX agent, you do not need to know the semantics or functions of the resources that it will manage. In fact, a JMX agent does not even need to know which resources it will serve because any resource instrumented in compliance with the JMX specification can use any JMX agent that offers the services that the resource requires. Similarly, the JMX agent does not need to know the functions of the management applications that will access it.


Remote Management

JMX technology instrumentation can be accessed in many different ways, either through existing management protocols such as the Simple Network Management Protocol (SNMP) or through proprietary protocols. The MBean server relies on protocol adaptors and connectors to make a JMX agent accessible from management applications outside the agent's Java Virtual Machine (Java VM).

Each adaptor provides a view through a specific protocol of all MBeans that are registered in the MBean server. For example, an HTML adaptor could display an MBean in a browser.

Connectors provide a manager-side interface that handles the communication between manager and JMX agent. Each connector provides the same remote management interface through a different protocol. When a remote management application uses this interface, it can connect to a JMX agent transparently through the network, regardless of the protocol. The JMX technology provides a standard solution for exporting JMX technology instrumentation to remote applications based on Java Remote Method Invocation (Java RMI).

Monitoring and Management of the Java Virtual Machine

The JMX technology can also be used to monitor and manage the Java virtual machine (Java VM).

The Java VM has built-in instrumentation that enables you to monitor and manage it by using the JMX technology. These built-in management utilities are often referred to as out-of-the-box management tools for the Java VM. To monitor and manage different aspects of the Java VM, the Java VM includes a platform MBean server and special MXBeans for use by management applications that conform to the JMX specification.

Platform MXBeans and the Platform MBean Server

The platform MXBeans are a set of MXBeans that is provided with the Java SE platform for monitoring and managing the Java VM and other components of the Java Runtime Environment (JRE). Each platform MXBean encapsulates a part of Java VM functionality, such as the class-loading system, just-in-time (JIT) compilation system, garbage collector, and so on. These MXBeans can be displayed and interacted with by using a monitoring and management tool that complies with the JMX specification, to enable you to monitor and manage these different VM functionalities. One such monitoring and management tool is the Java SE platform's JConsole graphical user interface (GUI).


The Java SE platform provides a standard platform MBean server in which these platform MXBeans are registered. The platform MBean server can also register any other MBeans you wish to create.
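To make the idea concrete, here is a minimal sketch of a standard MBean registered in the platform MBean server. The Hello/HelloMBean names and the ObjectName are illustrative assumptions, not values taken from this document, and the two types would normally live in separate source files.

// HelloMBean.java -- the management interface (by convention, the class name plus "MBean")
public interface HelloMBean {
    void sayHello();
    int add(int x, int y);
}

// Hello.java -- the MBean implementation, registered with the platform MBean server
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class Hello implements HelloMBean {
    public void sayHello() { System.out.println("hello, world"); }
    public int add(int x, int y) { return x + y; }

    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=Hello");
        mbs.registerMBean(new Hello(), name);
        System.out.println("Hello MBean registered; attach JConsole and open the MBeans tab to see it.");
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so a management tool can connect
    }
}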

JConsole

The Java SE platform includes the JConsole monitoring and management tool, which complies with the JMX specification. JConsole uses the extensive instrumentation of the Java VM (the platform MXBeans) to provide information about the performance and resource consumption of applications that are running on the Java platform.

Out-of-the-Box Management in Action

Because standard monitoring and management utilities that implement the JMX technology are built into the Java SE platform, you can see the out-of-the-box JMX technology in action without having to write a single line of JMX API code. You can do so by launching a Java application and then monitoring it by using JConsole.

Monitoring an Application by Using JConsole

This procedure shows how to monitor the Notepad Java application. This procedure assumes that you are running the Java SE 6 platform.

1. Start the Notepad Java application, by using the following command in a terminal window:

java -jar jdk_home/demo/jfc/Notepad/Notepad.jar

Where jdk_home is the directory in which the Java Development Kit (JDK) is installed.

2. Once Notepad has opened, in a different terminal window, start JConsole by using the following command:

jconsole

A New Connection dialog box is displayed.

3. In the New Connection dialog box, select Notepad.jar from the Local Process list, and click the Connect button.

JConsole opens and connects itself to the Notepad.jar process. When JConsole opens, you are presented with an overview of monitoring and management information related to Notepad. For example, you can view the amount of heap memory the application is consuming, the number of threads the application is currently running, and how much central processing unit (CPU) capacity the application is consuming.

4. Click the different JConsole tabs.

Each tab presents more detailed information about the different areas of functionality of the Java VM in which Notepad is running. All the information presented is obtained from the various JMX technology MXBeans mentioned in this trail. All the platform MXBeans can be displayed in the MBeans tab. The MBeans tab is examined in the next section of this trail.

5. To close JConsole, select Connection -> Exit.

Security

Java™ Security Overview

1 Introduction

The Java™ platform was designed with a strong emphasis on security. At its core, the Java language itself is type-safe and provides automatic garbage collection, enhancing the robustness of application code. A secure class loading and verification mechanism ensures that only legitimate Java code is executed.

The initial version of the Java platform created a safe environment for running potentially untrusted code, such as Java applets downloaded from a public network. As the platform has grown and widened its range of deployment, the Java security architecture has correspondingly evolved to support an increasing set of services. Today the architecture includes a large set of application programming interfaces (APIs), tools, and implementations of commonly-used security algorithms, mechanisms, and protocols. This provides the developer a comprehensive security framework for writing applications, and also provides the user or administrator a set of tools to securely manage applications.

The Java security APIs span a wide range of areas. Cryptographic and public key infrastructure (PKI) interfaces provide the underlying basis for developing secure applications. Interfaces for performing authentication and access control enable applications to guard against unauthorized access to protected resources.

The APIs allow for multiple interoperable implementations of algorithms and other security services. Services are implemented in providers, which are plugged into the Java platform via a standard interface that makes it easy for applications to obtain security services without having to know anything about their implementations. This allows developers to focus on how to integrate security into their applications, rather than on how to actually implement complex security mechanisms.

The Java platform includes a number of providers that implement a core set of security services. It also allows for additional custom providers to be installed. This enables developers to extend the platform with new security mechanisms.

This paper gives a broad overview of security in the Java platform, from secure language features to the security APIs, tools, and built-in provider services, highlighting key packages and classes where applicable. Note that this paper is based on Java™ SE version 6.

2 Java Language Security and Bytecode Verification

The Java language is designed to be type-safe and easy to use. It provides automatic memory management, garbage collection, and range-checking on arrays. This reduces the overall programming burden placed on developers, leading to fewer subtle programming errors and to safer, more robust code.

In addition, the Java language defines different access modifiers that can be assigned to Java classes, methods, and fields, enabling developers to restrict access to their class implementations as appropriate. Specifically, the language defines four distinct access levels: private, protected, public, and, if unspecified, package. The most open access specifier is public; access is allowed to anyone. The most restrictive modifier is private; access is not allowed outside the particular class in which the private member (a method, for example) is defined. The protected modifier allows access to any subclass, or to other classes within the same package. Package-level access allows access only to classes within the same package.

A compiler translates Java programs into a machine-independent bytecode representation. A bytecode verifier is invoked to ensure that only legitimate bytecodes are executed in the Java runtime. It checks that the bytecodes conform to the Java Language Specification and do not violate Java language rules or namespace restrictions. The verifier also checks for memory management violations, stack underflows or overflows, and illegal data typecasts. Once bytecodes have been verified, the Java runtime prepares them for execution.

3 Basic Security Architecture

The Java platform defines a set of APIs spanning major security areas, including cryptography, public key infrastructure, authentication, secure communication, and access control. These APIs allow developers to easily integrate security into their application code. They were designed around the following principles:


1. Implementation independence

Applications do not need to implement security themselves. Rather, they can request security services from the Java platform. Security services are implemented in providers (see below), which are plugged into the Java platform via a standard interface. An application may rely on multiple independent providers for security functionality.

2. Implementation interoperability

Providers are interoperable across applications. Specifically, an application is not bound to a specific provider, and a provider is not bound to a specific application.

3. Algorithm extensibility

The Java platform includes a number of built-in providers that implement a basic set of security services that are widely used today. However, some applications may rely on emerging standards not yet implemented, or on proprietary services. The Java platform supports the installation of custom providers that implement such services.

Security Providers

The java.security.Provider class encapsulates the notion of a security provider in the Java platform. It specifies the provider's name and lists the security services it implements. Multiple providers may be configured at the same time, and are listed in order of preference. When a security service is requested, the highest priority provider that implements that service is selected.

Applications rely on the relevant getInstance method to obtain a security service from an underlying provider. For example, message digest creation represents one type of service available from providers. (Section 4 discusses message digests and other cryptographic services.) An application invokes the getInstance method in the java.security.MessageDigest class to obtain an implementation of a specific message digest algorithm, such as MD5.

MessageDigest md = MessageDigest.getInstance("MD5");

The program may optionally request an implementation from a specific provider, by indicating the provider name, as in the following:

MessageDigest md = MessageDigest.getInstance("MD5", "ProviderC");
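Continuing the example, a sketch of computing and printing a digest follows; the input string is arbitrary.

import java.security.MessageDigest;

public class DigestDemo {
    public static void main(String[] args) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest("hello".getBytes("UTF-8"));

        // Render the digest bytes as a hexadecimal string.
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        System.out.println("MD5(\"hello\") = " + hex);
    }
}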


Figures 1 and 2 illustrate these options for requesting an MD5 message digest implementation. Both figures show three providers that implement message digest algorithms. The providers are ordered by preference from left to right (1-3). In Figure 1, an application requests an MD5 algorithm implementation without specifying a provider name. The providers are searched in preference order and the implementation from the first provider supplying that particular algorithm, ProviderB, is returned. In Figure 2, the application requests the MD5 algorithm implementation from a specific provider, ProviderC. This time the implementation from that provider is returned, even though a provider with a higher preference order, ProviderB, also supplies an MD5 implementation.

Figure 1: Provider searching
Figure 2: Specific provider requested

The Java platform implementation from Sun Microsystems includes a number of pre-configured default providers that implement a basic set of security services that can be used by applications. Note that other vendor implementations of the Java platform may include different sets of providers that encapsulate vendor-specific sets of security services. When this paper mentions built-in default providers, it is referencing those available in Sun's implementation.

The sections below on the various security areas (cryptography, authentication, etc.) each include descriptions of the relevant services supplied by the default providers. A table in Appendix C summarizes all of the default providers.
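A short sketch for enumerating the providers configured in the running JRE, in preference order, is shown below; the exact list printed depends on the vendor's implementation.

import java.security.Provider;
import java.security.Security;

public class ListProviders {
    public static void main(String[] args) {
        Provider[] providers = Security.getProviders(); // returned in preference order
        for (int i = 0; i < providers.length; i++) {
            System.out.println((i + 1) + ": " + providers[i].getName()
                    + " " + providers[i].getVersion());
        }
    }
}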

File Locations

Certain aspects of Java security mentioned in this paper, including the configuration of providers, may be customized by setting security properties. You may set security properties statically in the security properties file, which by default is the java.security file in the lib/security directory of the directory where the Java™ Runtime Environment (JRE) is installed. Security properties may also be set dynamically by calling appropriate methods of the Security class (in the java.security package).

The tools and commands mentioned in this paper are all in the ~jre/bin directory, where ~jre stands for the directory in which the JRE is installed. The cacerts file mentioned in Section 5 is in ~jre/lib/security.

4 Cryptography

The Java cryptography architecture is a framework for accessing and developing cryptographic functionality for the Java platform. It includes APIs for a large variety of cryptographic services, including


* Message digest algorithms
* Digital signature algorithms
* Symmetric bulk encryption
* Symmetric stream encryption
* Asymmetric encryption
* Password-based encryption (PBE)
* Elliptic Curve Cryptography (ECC)
* Key agreement algorithms
* Key generators
* Message Authentication Codes (MACs)
* (Pseudo-)random number generators

For historical (export control) reasons, the cryptography APIs are organized into two distinct packages. The java.security package contains classes that are not subject to export controls (like Signature and MessageDigest). The javax.crypto package contains classes that are subject to export controls (like Cipher and KeyAgreement).

The cryptographic interfaces are provider-based, allowing for multiple and interoperable cryptography implementations. Some providers may perform cryptographic operations in software; others may perform the operations on a hardware token (for example, on a smartcard device or on a hardware cryptographic accelerator). Providers that implement export-controlled services must be digitally signed.

The Java platform includes built-in providers for many of the most commonly used cryptographic algorithms, including the RSA and DSA signature algorithms, the DES, AES, and ARCFOUR encryption algorithms, the MD5 and SHA-1 message digest algorithms, and the Diffie-Hellman key agreement algorithm. These default providers implement cryptographic algorithms in Java code.

The Java platform also includes a built-in provider that acts as a bridge to a native PKCS#11 (v2.x) token. This provider, named SunPKCS11, allows Java applications to seamlessly access cryptographic services located on PKCS#11-compliant tokens.

5 Public Key Infrastructure

Public Key Infrastructure (PKI) is a term used for a framework that enables secure exchange of information based on public key cryptography. It allows identities (of people, organizations, etc.) to be bound to digital certificates and provides a means of verifying the authenticity of certificates. PKI encompasses keys, certificates, public key encryption, and trusted Certification Authorities (CAs) who generate and digitally sign certificates.

The Java platform includes API and provider support for X.509 digital certificates and certificate revocation lists (CRLs), as well as PKIX-compliant certification path building and validation. The classes related to PKI are located in the java.security and java.security.cert packages.

Key and Certificate Storage

The Java platform provides for long-term persistent storage of cryptographic keys and certificates via key and certificate stores. Specifically, the java.security.KeyStore class represents a key store, a secure repository of cryptographic keys and/or trusted certificates (to be used, for example, during certification path validation), and the java.security.cert.CertStore class represents a certificate store, a public and potentially vast repository of unrelated and typically untrusted certificates. A CertStore may also store CRLs.

KeyStore and CertStore implementations are distinguished by types. The Java platform includes the standard PKCS11 and PKCS12 key store types (whose implementations are compliant with the corresponding PKCS specifications from RSA Security), as well as a proprietary file-based key store type called JKS (which stands for "Java Key Store").

The Java platform includes a special built-in JKS key store, cacerts, that contains a number of certificates for well-known, trusted CAs. The keytool documentation (see the security features documentation link in Section 9) lists the certificates included in cacerts.

The SunPKCS11 provider mentioned in the "Cryptography" section (Section 4) includes a PKCS11 KeyStore implementation. This means that keys and certificates residing in secure hardware (such as a smartcard) can be accessed and used by Java applications via the KeyStore API. Note that smartcard keys may not be permitted to leave the device. In such cases, the java.security.Key object reference returned by the KeyStore API may simply be a reference to the key (that is, it would not contain the actual key material). Such a Key object can only be used to perform cryptographic operations on the device where the actual key resides.

The Java platform also includes an LDAP certificate store type (for accessing certificates stored in an LDAP directory), as well as an in-memory Collection certificate store type (for accessing certificates managed in a java.util.Collection object).

PKI Tools

There are two built-in tools for working with keys, certificates, and key stores:

keytool is used to create and manage key stores. It can:

Create public/private key pairs
Display, import, and export X.509 v1, v2, and v3 certificates stored as files
Create self-signed certificates
Issue certificate (PKCS#10) requests to be sent to CAs
Import certificate replies (obtained from the CAs to which certificate requests were sent)
Designate public key certificates as trusted

The jarsigner tool is used to sign JAR files, or to verify signatures on signed JAR files. The Java ARchive (JAR) file format enables the bundling of multiple files into a single file. Typically a JAR file contains the class files and auxiliary resources associated with applets and applications. When you want to digitally sign code, you first use keytool to generate or import appropriate keys and certificates into your key store (if they are not there already), then use the jar tool to place the code in a JAR file, and finally use the jarsigner tool to sign the JAR file. The jarsigner tool accesses a key store to find any keys and certificates needed to sign a JAR file or to verify the signature of a signed JAR file. Note: jarsigner can optionally generate signatures that include a timestamp. Systems (such as Java Plug-in) that verify JAR file signatures can check the timestamp and accept a JAR file that was signed while the signing certificate was valid rather than requiring the certificate to be current. (Certificates typically expire annually, and it is not reasonable to expect JAR file creators to re-sign deployed JAR files annually.)
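A typical signing workflow with these tools might look like the following sketch (the key store name, alias, and file names are hypothetical):

keytool -genkey -alias signkey -keyalg RSA -keystore mystore.jks
jar cf MyApp.jar MyApp.class
jarsigner -keystore mystore.jks MyApp.jar signkey
jarsigner -verify MyApp.jar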

6 Authentication

Authentication is the process of determining the identity of a user. In the context of the Java runtime environment, it is the process of identifying the user of an executing Java program. In certain cases, this process may rely on the services described in the "Cryptography" section (Section 4).

The Java platform provides APIs that enable an application to perform user authentication via pluggable login modules. Applications call into the LoginContext class (in the javax.security.auth.login package), which in turn references a configuration. The configuration specifies which login module (an implementation of the javax.security.auth.spi.LoginModule interface) is to be used to perform the actual authentication.

Since applications solely talk to the standard LoginContext API, they can remain independent from the underlying plug-in modules. New or updated modules can be plugged in for an application without having to modify the application itself. Figure 3 illustrates the independence between applications and underlying login modules:

Figure 3 Authentication login modules plugging into the authentication framework
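A minimal sketch of the calling pattern follows; the configuration entry name "Sample" and the choice of callback handler are assumptions, not prescribed by the framework:

// "Sample" must match an entry in the JAAS login configuration in effect.
javax.security.auth.login.LoginContext lc =
    new javax.security.auth.login.LoginContext("Sample",
        new com.sun.security.auth.callback.TextCallbackHandler());
lc.login();                                             // invokes the configured LoginModule(s)
javax.security.auth.Subject subject = lc.getSubject();  // the authenticated user
// ... later ...
lc.logout();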


It is important to note that although login modules are pluggable components that can be configured into the Java platform, they are not plugged in via security Providers. Therefore, they do not follow the Provider searching model described in Section 3. Instead, as is shown in the above diagram, login modules are administered by their own unique configuration.

The Java platform provides the following built-in LoginModules, all in the com.sun.security.auth.module package:

Krb5LoginModule for authentication using Kerberos protocols
JndiLoginModule for username/password authentication using LDAP or NIS databases
KeyStoreLoginModule for logging into any type of key store, including a PKCS#11 token key store

Authentication can also be achieved during the process of establishing a secure communication channel between two peers. The Java platform provides implementations of a number of standard communication protocols, which are discussed in the following section.

7 Secure Communication

The data that travels across a network can be accessed by someone who is not the intended recipient. When the data includes private information, such as passwords and credit card numbers, steps must be taken to make the data unintelligible to unauthorized parties. It is also important to ensure that you are sending the data to the appropriate party, and that the data has not been modified, either intentionally or unintentionally, during transport.

Cryptography forms the basis for secure communication; it is described in Section 4. The Java platform also provides API support and provider implementations for a number of standard secure communication protocols.

SSL/TLS

The Java platform provides APIs and an implementation of the SSL and TLS protocols that includes functionality for data encryption, message integrity, server authentication, and optional client authentication. Applications can use SSL/TLS to provide for the secure passage of data between two peers over any application protocol, such as HTTP on top of TCP/IP.

The javax.net.ssl.SSLSocket class represents a network socket that encapsulates SSL/TLS support on top of a normal stream socket (java.net.Socket). Some applications might want to use alternate data transport abstractions (e.g., New I/O); the javax.net.ssl.SSLEngine class is available to produce and consume SSL/TLS packets.

The Java platform also includes APIs that support the notion of pluggable (provider-based) key managers and trust managers. A key manager is encapsulated by the javax.net.ssl.KeyManager class, and manages the keys used to perform authentication. A trust manager is encapsulated by the TrustManager class (in the same package), and makes decisions about who to trust based on certificates in the key store it manages.
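As an illustrative sketch (the host name and port are hypothetical; exception handling omitted), a client can obtain an SSL/TLS socket from the default SSLSocketFactory, which uses the platform's default key manager and trust manager:

// Uses the JRE default SSLContext, key manager, and trust manager.
javax.net.ssl.SSLSocketFactory factory =
    (javax.net.ssl.SSLSocketFactory) javax.net.ssl.SSLSocketFactory.getDefault();
javax.net.ssl.SSLSocket socket =
    (javax.net.ssl.SSLSocket) factory.createSocket("www.example.com", 443);
socket.startHandshake();   // optional; the handshake also starts on the first read/write
java.io.OutputStream out = socket.getOutputStream();
out.write("GET / HTTP/1.0\r\n\r\n".getBytes("US-ASCII"));
out.flush();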

SASL

Simple Authentication and Security Layer (SASL) is an Internet standard that specifies a protocol for authentication and optional establishment of a security layer between client and server applications. SASL defines how authentication data is to be exchanged, but does not itself specify the contents of that data. It is a framework into which specific authentication mechanisms that specify the contents and semantics of the authentication data can fit. There are a number of standard SASL mechanisms defined by the Internet community for various security levels and deployment scenarios.

The Java SASL API defines classes and interfaces for applications that use SASL mechanisms. It is defined to be mechanism-neutral; an application that uses the API need not be hardwired into using any particular SASL mechanism. Applications can select the mechanism to use based on desired security features. The API supports both client and server applications. The javax.security.sasl.Sasl class is used to create SaslClient and SaslServer objects.
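The following sketch (the mechanism, protocol, and server name are hypothetical) creates a SaslClient; the returned object then drives the challenge/response exchange with the server:

// Hypothetical values; a real application negotiates these with its server.
String[] mechanisms = { "DIGEST-MD5" };
javax.security.sasl.SaslClient client = javax.security.sasl.Sasl.createSaslClient(
        mechanisms,
        null,                   // authorization id
        "ldap",                 // protocol
        "server.example.com",   // server name
        null,                   // properties
        null);                  // callback handler (needed by mechanisms that prompt for credentials)
if (client != null && client.hasInitialResponse()) {
    byte[] response = client.evaluateChallenge(new byte[0]);
    // send response to the server, then feed server challenges back into evaluateChallenge()
}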

SASL mechanism implementations are supplied in provider packages. Each provider may support one or more SASL mechanisms and is registered and invoked via the standard provider architecture.

The Java platform includes a built-in provider that implements the following SASL mechanisms:

CRAM-MD5, DIGEST-MD5, EXTERNAL, GSSAPI, and PLAIN client mechanisms
CRAM-MD5, DIGEST-MD5, and GSSAPI server mechanisms

GSS-API and Kerberos

The Java platform contains an API with the Java language bindings for the Generic Security Service Application Programming Interface (GSS-API). GSS-API offers application programmers uniform access to security services atop a variety of underlying security mechanisms. The Java GSS-API currently requires use of a Kerberos v5 mechanism, and the Java platform includes a built-in implementation of this mechanism. At this time, it is not possible to plug in additional mechanisms. Note: The Krb5LoginModule mentioned in Section 6 can be used in conjunction with the GSS Kerberos mechanism.

Before two applications can use the Java GSS-API to securely exchange messages between them, they must establish a joint security context. The context encapsulates shared state information that might include, for example, cryptographic keys. Both applications create and use an org.ietf.jgss.GSSContext object to establish and maintain the shared information that makes up the security context. Once a security context has been established, it can be used to prepare secure messages for exchange.

The Java GSS APIs are in the org.ietf.jgss package. The Java platform also defines basic Kerberos classes, like KerberosPrincipal and KerberosTicket, which are located in the javax.security.auth.kerberos package.
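A minimal initiator-side sketch (the service name is hypothetical, and the token exchange loop with the acceptor is elided; exception handling omitted):

org.ietf.jgss.GSSManager manager = org.ietf.jgss.GSSManager.getInstance();
org.ietf.jgss.Oid krb5Mechanism = new org.ietf.jgss.Oid("1.2.840.113554.1.2.2");
org.ietf.jgss.GSSName serverName =
    manager.createName("host@server.example.com", org.ietf.jgss.GSSName.NT_HOSTBASED_SERVICE);
org.ietf.jgss.GSSContext context = manager.createContext(
    serverName, krb5Mechanism, null, org.ietf.jgss.GSSContext.DEFAULT_LIFETIME);
context.requestMutualAuth(true);
byte[] token = context.initSecContext(new byte[0], 0, 0);
// send token to the peer and feed its reply back into initSecContext()
// until context.isEstablished() returns true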

8 Access Control

The access control architecture in the Java platform protects access to sensitive resources (for example, local files) or sensitive application code (for example, methods in a class). All access control decisions are mediated by a security manager, represented by the java.lang.SecurityManager class. A SecurityManager must be installed into the Java runtime in order to activate the access control checks.

Java applets and Java™ Web Start applications are automatically run with a SecurityManager installed. However, local applications executed via the java command are by default not run with a SecurityManager installed. In order to run local applications with a SecurityManager, either the application itself must programmatically set one via the setSecurityManager method (in the java.lang.System class), or java must be invoked with a -Djava.security.manager argument on the command line.
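For example (an illustrative sketch; MyApp is a hypothetical class name):

// From the command line:
//   java -Djava.security.manager MyApp
// Or programmatically, early in the application's startup code:
System.setSecurityManager(new SecurityManager());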

Permissions

When Java code is loaded by a class loader into the Java runtime, the class loader automatically associates the following information with that code:

Where the code was loaded from
Who signed the code (if anyone)
Default permissions granted to the code

This information is associated with the code regardless of whether the code is downloaded over an untrusted network (e.g., an applet) or loaded from the filesystem (e.g., a local application). The location from which the code was loaded is represented by a URL, the code signer is represented by the signer's certificate chain, and default permissions are represented by java.security.Permission objects.

The default permissions automatically granted to downloaded code include the ability to make network connections back to the host from which it originated. The default permissions automatically granted to code loaded from the local filesystem include the ability to read files from the directory it came from, and also from subdirectories of that directory.

Note that the identity of the user executing the code is not available at class loading time. It is the responsibility of application code to authenticate the end user if necessary (for example, as described in Section 6). Once the user has been authenticated, the application can dynamically associate that user with executing code by invoking the doAs method in the javax.security.auth.Subject class.
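As a sketch of that association (the LoginContext variable and the action body are illustrative, not from the text), code can be run on behalf of an authenticated Subject as follows:

// subject would typically come from an authenticated LoginContext (see Section 6)
javax.security.auth.Subject subject = loginContext.getSubject();
Object home = javax.security.auth.Subject.doAs(subject,
    new java.security.PrivilegedAction<Object>() {
        public Object run() {
            // code here is checked against the permissions granted to the
            // executing code and to the authenticated user's principals
            return System.getProperty("user.home");
        }
    });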

Policy

As mentioned earlier, a limited set of default permissions are granted to code by class loaders. Administrators have the ability to flexibly manage additional code permissions via a security policy.

The Java platform encapsulates the notion of a security policy in the java.security.Policy class. There is only one Policy object installed into the Java runtime at any given time. The basic responsibility of the Policy object is to determine whether access to a protected resource is permitted to code (characterized by where it was loaded from, who signed it, and who is executing it). How a Policy object makes this determination is implementation-dependent. For example, it may consult a database containing authorization data, or it may contact another service.

The Java platform includes a default Policy implementation that reads its authorization data from one or more ASCII (UTF-8) files configured in the security properties file. These policy files contain the exact sets of permissions granted to code: specifically, the exact sets of permissions granted to code loaded from particular locations, signed by particular entities, and executing as particular users. The policy entries in each file must conform to a documented proprietary syntax, and may be composed via a simple text editor or the graphical policytool utility.
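An illustrative policy entry in that syntax (the code base URL and signer alias are hypothetical) might read:

grant signedBy "dev", codeBase "file:/home/apps/example/-" {
    permission java.io.FilePermission "/tmp/abc", "read";
    permission java.net.SocketPermission "*.example.com", "connect";
};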

Access Control Enforcement

The Java runtime keeps track of the sequence of Java calls that are made as a program executes. When access to a protected resource is requested, the entire call stack, by default, is evaluated to determine whether the requested access is permitted.


As mentioned earlier, resources are protected by the SecurityManager. Security-sensitive code in the Java platform and in applications protects access to resources via code like the following:

SecurityManager sm = System.getSecurityManager();
if (sm != null) {
    sm.checkPermission(perm);
}

where perm is the Permission object that corresponds to the requested access. For example, if an attempt is made to read the file /tmp/abc, the permission may be constructed as follows:

Permission perm = new java.io.FilePermission("/tmp/abc", "read");

The default implementation of SecurityManager delegates its decision to the java.security.AccessController implementation. The AccessController traverses the call stack, passing to the installed security Policy each code element in the stack, along with the requested permission (for example, the FilePermission in the above example). The Policy determines whether the requested access is granted, based on the permissions configured by the administrator. If access is not granted, the AccessController throws a java.lang.SecurityException.

Figure 4 illustrates access control enforcement. In this particular example, there are initially two elements on the call stack, ClassA and ClassB. ClassA invokes a method in ClassB, which then attempts to access the file /tmp/abc by creating an instance of java.io.FileInputStream. The FileInputStream constructor creates a FilePermission, perm, as shown above, and then passes perm to the SecurityManager's checkPermission method. In this particular case, only the permissions for ClassA and ClassB need to be checked, because all system code, including FileInputStream, SecurityManager, and AccessController, automatically receives all permissions.

In this example, ClassA and ClassB have different code characteristics: they come from different locations and have different signers. Each may have been granted a different set of permissions. The AccessController only grants access to the requested file if the Policy indicates that both classes have been granted the required FilePermission.

Figure 4 Controlling access to resources

9 For More Information


Detailed documentation for all the Java SE 6 security features mentioned in this paper can be found at

http://java.sun.com/javase/6/docs/guide/security/index.html

Additional Java security documentation can be found online at

http://java.sun.com/security/

and in the book Inside Java 2 Platform Security, Second Edition (Addison-Wesley). See

http://java.sun.com/docs/books/security/index.html

Note: Historically, as new types of security services were added to the Java platform (sometimes initially as extensions), various acronyms were used to refer to them. Since these acronyms are still in use in the Java security documentation, here is an explanation of what they represent: JSSE (Java™ Secure Socket Extension) refers to the SSL-related services described in Section 7, JCE (Java™ Cryptography Extension) refers to cryptographic services (Section 4), and JAAS (Java™ Authentication and Authorization Service) refers to the authentication and user-based access control services described in Sections 6 and 8, respectively.

Appendix A Classes Summary

Table 1 summarizes the names, packages, and usage of the Java security classes and interfaces mentioned in this paper.

Table 1 Key Java security packages and classes

Package    Class/Interface Name    Usage

com.sun.security.auth.module

JndiLoginModule Performs username/password authentication using LDAP or NIS database

KeyStoreLoginModule Performs authentication based on key store login

Krb5LoginModule Performs authentication using Kerberos protocols


java.lang SecurityException Indicates a security violation

SecurityManager Mediates all access control decisions

System Installs the SecurityManager

java.security AccessController Called by default implementation of SecurityManager to make access control decisions

Key Represents a cryptographic key

KeyStore Represents a repository of keys and trusted certificates

MessageDigest Represents a message digest

Permission Represents access to a particular resource

Policy Encapsulates the security policy

Provider Encapsulates security service implementations

Security Manages security providers and security properties

Signature Creates and verifies digital signatures

java.security.cert Certificate Represents a public key certificate

CertStore Represents a repository of unrelated and typically untrusted certificates

javax.crypto Cipher Performs encryption and decryption


KeyAgreement Performs a key exchange

javax.net.ssl KeyManager Manages keys used to perform SSL/TLS authentication

SSLEngine Produces/consumes SSL/TLS packets, allowing the application freedom to choose a transport mechanism

SSLSocket Represents a network socket that encapsulates SSL/TLS support on top of a normal stream socket

TrustManager Makes decisions about who to trust in SSL/TLS interactions (for example, based on trusted certificates in key stores)

javax.security.auth Subject Represents a user

javax.security.auth.kerberos

KerberosPrincipal Represents a Kerberos principal

KerberosTicket Represents a Kerberos ticket

javax.security.auth.login LoginContext Supports pluggable authentication

javax.security.auth.spi LoginModule Implements a specific authentication mechanism

javax.security.sasl Sasl Creates SaslClient and SaslServer objects

SaslClient Performs SASL authentication as a client

SaslServer Performs SASL authentication as a server

org.ietf.jgss GSSContext Encapsulates a GSS-API security context and provides the security services available via the context


Appendix B Tools Summary

Table 2 summarizes the tools mentioned in this paper.

Table 2 Java security tools

Tool Usage

jar Creates Java Archive (JAR) files

jarsigner Signs and verifies signatures on JAR files

keytool Creates and manages key stores

policytool Creates and edits policy files for use with default Policy implementation

There are also three Kerberos-related tools that are shipped with the Java platform for Windows. Equivalent functionality is provided in tools of the same name that are automatically part of the Solaris and Linux operating environments. Table 3 summarizes the Kerberos tools.

Table 3 Kerberos-related tools

Tool Usage

kinit Obtains and caches Kerberos ticket-granting tickets

klist Lists entries in the local Kerberos credentials cache and key table

ktab Manages the names and service keys stored in the local Kerberos key table

Appendix C Built-in Providers


The Java platform implementation from Sun Microsystems includes a number of built-in provider packages. For details, see the Java™ Cryptography Architecture Sun Providers Documentation.

Java™ Cryptography Architecture Sun Providers Documentation
for Java™ Platform Standard Edition 6

Introduction
The SunPKCS11 Provider
The SUN Provider
The SunRsaSign Provider
The SunJSSE Provider
The SunJCE Provider
The SunJGSS Provider
The SunSASL Provider
The XMLDSig Provider
The SunPCSC Provider
The SunMSCAPI Provider

Note: The Standard Names Documentation contains more information about the standard names used in this document.

Introduction

The Java platform defines a set of APIs spanning major security areas, including cryptography, public key infrastructure, authentication, secure communication, and access control. These APIs allow developers to easily integrate security mechanisms into their application code. The Java Cryptography Architecture (JCA) and its Provider Architecture are core concepts of the Java Development Kit (JDK). It is assumed that readers have a solid understanding of this architecture.

This document describes the technical details of the providers shipped as part of Sun's Java Environment.

Reminder: Cryptographic implementations in the Sun JDK are distributed through several different providers ("Sun", "SunJSSE", "SunJCE", "SunRsaSign"), both for historical reasons and to group the types of services provided. General purpose applications SHOULD NOT request cryptographic services from specific providers. That is:

getInstance("...", "SunJCE");  // not recommended

vs.

getInstance("...");  // recommended

Otherwise, applications are tied to specific providers which may not be available on other Java implementations. They also might not be able to take advantage of available optimized providers (for example, hardware accelerators via PKCS11 or native OS implementations such as Microsoft's MSCAPI) that have a higher preference order than the specific requested provider.
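For instance, a provider-neutral request such as the following sketch (the transformation and data are illustrative; exception handling omitted) lets the framework select the most-preferred provider that supports the requested service:

// Let the provider framework choose the implementation (for example SunJCE,
// or a PKCS11/MSCAPI-backed provider if one is configured with higher preference).
javax.crypto.KeyGenerator kg = javax.crypto.KeyGenerator.getInstance("AES");
javax.crypto.SecretKey key = kg.generateKey();
javax.crypto.Cipher cipher = javax.crypto.Cipher.getInstance("AES/CBC/PKCS5Padding");
cipher.init(javax.crypto.Cipher.ENCRYPT_MODE, key);
byte[] ciphertext = cipher.doFinal("plaintext".getBytes("UTF-8"));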

The SunPKCS11 Provider

The Cryptographic Token Interface Standard (PKCS#11) provides native programming interfaces to cryptographic mechanisms, such as hardware cryptographic accelerators and Smart Cards. When properly configured, the SunPKCS11 provider enables applications to use the standard JCA/JCE APIs to access native PKCS#11 libraries. The SunPKCS11 provider itself does not contain cryptographic functionality; it is simply a conduit between the Java environment and the native PKCS#11 providers. The Java PKCS#11 Reference Guide has a much more detailed treatment of this provider.

The SUN Provider

JDK 1.1 introduced the Provider architecture. The first JDK provider was named SUN, and contained two types of cryptographic services (MessageDigests and Signatures). In later releases, other mechanisms were added (SecureRandom number generators, KeyPairGenerators, KeyFactorys, etc.).

United States export regulations in effect at the time placed significant restrictions on the type of cryptographic functionality that could be made available internationally in the JDK. For this reason, the SUN provider has historically contained cryptographic engines that did not directly encrypt or decrypt data.

The following algorithms are available in the SUN provider:

Engine Algorithm Name(s)

AlgorithmParameterGenerator DSA

AlgorithmParameters DSA

CertificateFactory X.509

CertPathBuilder PKIX

CertPathValidator PKIX

CertStore Collection LDAP

Configuration JavaLoginConfig

KeyFactory DSA

KeyPairGenerator DSA

KeyStore JKS

MessageDigest

MD2 MD5 SHA-1 SHA-256 SHA-384 SHA-512

Policy JavaPolicy

SecureRandom SHA1PRNG

Signature NONEwithDSA SHA1withDSA

Keysize Restrictions

The SUN provider uses the following default keysizes (in bits) and enforces the following restrictions:

KeyPairGenerator

Alg. Name

Default Keysize

Restrictions/Comments

DSA 1024 Keysize must be a multiple of 64, ranging from 512 to 1024 (inclusive).

AlgorithmParameterGenerator

Alg. Name

Default Keysize

Restrictions/Comments

DSA 1024 Keysize must be a multiple of 64, ranging from 512 to 1024 (inclusive).

CertificateFactory/CertPathBuilder/CertPathValidator/CertStore implementations

Additional details on the SUN provider implementations for CertificateFactory, CertPathBuilder, CertPathValidator and CertStore are documented in Appendix B of the PKI Programmer's Guide.

The SunRsaSign Provider

The SunRsaSign provider was introduced in JDK 1.3 as an enhanced replacement for the RSA signatures in the SunJSSE provider.

The following algorithms are available in the SunRsaSign provider:

Engine Algorithm Name(s)

KeyFactory RSA

KeyPairGenerator RSA

Signature

MD2withRSA MD5withRSA SHA1withRSA SHA256withRSA SHA384withRSA SHA512withRSA

Keysize Restrictions

The SunRsaSign provider uses the following default keysizes (in bits) and enforces the following restrictions:

KeyPairGenerator

Alg. Name

Default Keysize

Restrictions/Comments

RSA 1024 Keysize must range between 512 and 65536 bits, the latter of which is unnecessarily large.

The SunJSSE Provider

The Java Secure Socket Extension (JSSE) was originally released as a separate "Optional Package" (also briefly known as a "Standard Extension"), and was available for JDK 1.2.x and 1.3.x. The SunJSSE provider was introduced as part of this release.

In earlier JDK releases, there were no RSA signature providers available in the JDK, therefore SunJSSE had to provide its own RSA implementation in order to use commonly available RSA-based certificates. JDK 5 introduced the SunRsaSign provider, which provides all the functionality (and more) of the SunJSSE provider. Applications targeted at JDK 5.0 and higher should request instances of the SunRsaSign provider instead. For backwards-compatibility, the RSA algorithms are still available through this provider, but are actually implemented in the SunRsaSign provider.

The following algorithms are available in the SunJSSE provider:

Engine Algorithm Name(s)

KeyFactory RSA

KeyManagerFactory SunX509

KeyPairGenerator RSA

KeyStore PKCS12

Signature MD2withRSA MD5withRSA SHA1withRSA

SSLContext SSLv3 TLSv1


TrustManagerFactory PKIX

The SunJSSE provider also supports the following protocol parameters:

Protocol

SSLv3

TLSv1

SSLv2Hello

SunJSSE supports a large number of ciphersuites. The table below shows the ciphersuites supported by SunJSSE in their default preference order and the release in which they were introduced.

Cipher Suite    Available Since

SSL_RSA_WITH_RC4_128_MD5    before 1.4.2
SSL_RSA_WITH_RC4_128_SHA    before 1.4.2
TLS_RSA_WITH_AES_128_CBC_SHA    1.4.2
TLS_RSA_WITH_AES_256_CBC_SHA    1.4.2
TLS_ECDH_ECDSA_WITH_RC4_128_SHA    Java SE 6
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA    Java SE 6
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA    Java SE 6
TLS_ECDH_RSA_WITH_RC4_128_SHA    Java SE 6
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA    Java SE 6
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA    Java SE 6
TLS_ECDHE_ECDSA_WITH_RC4_128_SHA    Java SE 6
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA    Java SE 6
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA    Java SE 6
TLS_ECDHE_RSA_WITH_RC4_128_SHA    Java SE 6
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA    Java SE 6
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA    Java SE 6
TLS_DHE_RSA_WITH_AES_128_CBC_SHA    1.4.2
TLS_DHE_RSA_WITH_AES_256_CBC_SHA    1.4.2
TLS_DHE_DSS_WITH_AES_128_CBC_SHA    1.4.2
TLS_DHE_DSS_WITH_AES_256_CBC_SHA    1.4.2
SSL_RSA_WITH_3DES_EDE_CBC_SHA    before 1.4.2
TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA    Java SE 6
TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA    Java SE 6
TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA    Java SE 6
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA    Java SE 6
SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA    1.4.2
SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA    before 1.4.2
SSL_RSA_WITH_DES_CBC_SHA    before 1.4.2
SSL_DHE_RSA_WITH_DES_CBC_SHA    1.4.2
SSL_DHE_DSS_WITH_DES_CBC_SHA    before 1.4.2
SSL_RSA_EXPORT_WITH_RC4_40_MD5    before 1.4.2
SSL_RSA_EXPORT_WITH_DES40_CBC_SHA    1.4.2
SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA    1.4.2
SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA    before 1.4.2
SSL_RSA_WITH_NULL_MD5    before 1.4.2
SSL_RSA_WITH_NULL_SHA    before 1.4.2
TLS_ECDH_ECDSA_WITH_NULL_SHA    Java SE 6
TLS_ECDH_RSA_WITH_NULL_SHA    Java SE 6
TLS_ECDHE_ECDSA_WITH_NULL_SHA    Java SE 6
TLS_ECDHE_RSA_WITH_NULL_SHA    Java SE 6
SSL_DH_anon_WITH_RC4_128_MD5    before 1.4.2
TLS_DH_anon_WITH_AES_128_CBC_SHA    1.4.2
TLS_DH_anon_WITH_AES_256_CBC_SHA    1.4.2
SSL_DH_anon_WITH_3DES_EDE_CBC_SHA    before 1.4.2
SSL_DH_anon_WITH_DES_CBC_SHA    before 1.4.2
TLS_ECDH_anon_WITH_RC4_128_SHA    Java SE 6
TLS_ECDH_anon_WITH_AES_128_CBC_SHA    Java SE 6
TLS_ECDH_anon_WITH_AES_256_CBC_SHA    Java SE 6
TLS_ECDH_anon_WITH_3DES_EDE_CBC_SHA    Java SE 6
SSL_DH_anon_EXPORT_WITH_RC4_40_MD5    before 1.4.2
SSL_DH_anon_EXPORT_WITH_DES40_CBC_SHA    before 1.4.2
TLS_ECDH_anon_WITH_NULL_SHA    Java SE 6
TLS_KRB5_WITH_RC4_128_SHA    J2SE 5
TLS_KRB5_WITH_RC4_128_MD5    J2SE 5
TLS_KRB5_WITH_3DES_EDE_CBC_SHA    J2SE 5
TLS_KRB5_WITH_3DES_EDE_CBC_MD5    J2SE 5
TLS_KRB5_WITH_DES_CBC_SHA    J2SE 5
TLS_KRB5_WITH_DES_CBC_MD5    J2SE 5
TLS_KRB5_EXPORT_WITH_RC4_40_SHA    J2SE 5
TLS_KRB5_EXPORT_WITH_RC4_40_MD5    J2SE 5
TLS_KRB5_EXPORT_WITH_DES_CBC_40_SHA    J2SE 5
TLS_KRB5_EXPORT_WITH_DES_CBC_40_MD5    J2SE 5

Ciphersuites that use AES_256 require installation of the JCE Unlimited Strength Jurisdiction Policy Files. See the Java SE Download Page.

Ciphersuites that use Elliptic Curve Cryptography (ECDSA, ECDH, ECDHE, ECDH_anon) require that a JCE crypto provider with the following properties be installed:

implements ECC as defined by the classes and interfaces in the packages java.security.spec and java.security.interfaces. The getAlgorithm() method of elliptic curve key objects must return the String "EC"

supports the Signature algorithms SHA1withECDSA and NONEwithECDSA, the KeyAgreement algorithm ECDH, and a KeyPairGenerator and a KeyFactory for algorithm EC. If one of these algorithms is missing, SunJSSE will not allow EC ciphersuites to be used.

the crypto provider should support all the SECG curves referenced in RFC 4492 specification, section 5.1.1 (see also appendix A). In certificates, points should be encoded using the uncompressed form and curves should be encoded using the namedCurve choice, i.e. using an object identifier. If these requirements are not met, EC ciphersuites may not be negotiated correctly.

The SunJCE Provider

As described briefly in The SUN Provider, US export regulations at the time restricted the type of cryptographic functionality that could be made available in the JDK. A separate API and reference implementation was developed that allowed applications to encrypt/decrypt data. The Java Cryptographic Extension (JCE) was released as a separate "Optional Package" (also briefly known as a "Standard Extension"), and was available for JDK 1.2.x and 1.3.x. During the development of JDK 1.4, regulations were relaxed enough that JCE (and SunJSSE) could be bundled as part of the JDK.

The following algorithms are available in the SunJCE provider:


Engine    Algorithm Name(s)

AlgorithmParameterGenerator    DiffieHellman

AlgorithmParameters    AES, Blowfish, DES, DESede, DiffieHellman, OAEP, PBEWithMD5AndDES, PBEWithMD5AndTripleDES, PBEWithSHA1AndDESede, PBEWithSHA1AndRC2_40, RC2

Cipher (Alg. Name / Modes / Paddings):

AES    ECB, CBC, PCBC, CTR, CTS, CFB, CFB8..CFB128, OFB, OFB8..OFB128    NOPADDING, PKCS5PADDING, ISO10126PADDING

AESWrap    ECB    NOPADDING

ARCFOUR    ECB    NOPADDING

Blowfish, DES, DESede, RC2    ECB, CBC, PCBC, CTR, CTS, CFB, CFB8..CFB64, OFB, OFB8..OFB64    NOPADDING, PKCS5PADDING, ISO10126PADDING

DESedeWrap    CBC    NOPADDING

PBEWithMD5AndDES, PBEWithMD5AndTripleDES (see Footnote 1), PBEWithSHA1AndDESede, PBEWithSHA1AndRC2_40    CBC    PKCS5Padding

RSA    ECB    NOPADDING, PKCS1PADDING, OAEPWITHMD5ANDMGF1PADDING, OAEPWITHSHA1ANDMGF1PADDING, OAEPWITHSHA-1ANDMGF1PADDING, OAEPWITHSHA-256ANDMGF1PADDING, OAEPWITHSHA-384ANDMGF1PADDING, OAEPWITHSHA-512ANDMGF1PADDING

Footnote 1: PBEWithMD5AndTripleDES is a proprietary algorithm that has not been standardized.

KeyAgreement    DiffieHellman

KeyFactory    DiffieHellman

KeyGenerator    AES, ARCFOUR, Blowfish, DES, DESede, HmacMD5, HmacSHA1, HmacSHA256, HmacSHA384, HmacSHA512, RC2

KeyPairGenerator    DiffieHellman

KeyStore    JCEKS

Mac    HmacMD5, HmacSHA1, HmacSHA256, HmacSHA384, HmacSHA512

SecretKeyFactory    DES, DESede, PBEWithMD5AndDES, PBEWithMD5AndTripleDES, PBEWithSHA1AndDESede, PBEWithSHA1AndRC2_40, PBKDF2WithHmacSHA1

Keysize Restrictions

The SunJCE provider uses the following default keysizes (in bits) and enforces the following restrictions:

KeyGenerator

Alg. Name Default Keysize

Restrictions/Comments

AES 128 Keysize must be equal to 128, 192, or 256.

ARCFOUR (RC4)    128    Keysize must range between 40 and 1024 (inclusive).

Blowfish 128 Keysize must be a multiple of 8, ranging from 32 to 448 (inclusive).

DES 56 Keysize must be equal to 56.

DESede (Triple DES)    168    Keysize must be equal to 112 or 168. A keysize of 112 will generate a Triple DES key with 2 intermediate keys, and a keysize of 168 will generate a Triple DES key with 3 intermediate keys. Due to the "Meet-In-The-Middle" problem, even though 112 or 168 bits of key material are used, the effective keysize is 80 or 112 bits respectively.

HmacMD5 512 No keysize restriction.

HmacSHA1 512 No keysize restriction.

HmacSHA256 256 No keysize restriction.

HmacSHA384 384 No keysize restriction.

HmacSHA512 512 No keysize restriction.

RC2 128 Keysize must range between 40 and 1024 (inclusive).

NOTE: The various Password-Based Encryption (PBE) algorithms use different mechanisms to generate key data; the resulting keysize ultimately depends on the targeted Cipher algorithm. For example, "PBEWithMD5AndDES" will always generate 56-bit keys.
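As an illustration of the PBE pattern (the password, salt, and iteration count are hypothetical; exception handling omitted), the key is derived from a password and then used with a matching Cipher:

char[] password = "secret".toCharArray();            // hypothetical
byte[] salt = { 1, 2, 3, 4, 5, 6, 7, 8 };            // hypothetical 8-byte salt
int iterationCount = 1000;                           // hypothetical

javax.crypto.spec.PBEKeySpec keySpec = new javax.crypto.spec.PBEKeySpec(password);
javax.crypto.SecretKeyFactory skf =
    javax.crypto.SecretKeyFactory.getInstance("PBEWithMD5AndDES");
javax.crypto.SecretKey pbeKey = skf.generateSecret(keySpec);

javax.crypto.spec.PBEParameterSpec paramSpec =
    new javax.crypto.spec.PBEParameterSpec(salt, iterationCount);
javax.crypto.Cipher cipher = javax.crypto.Cipher.getInstance("PBEWithMD5AndDES");
cipher.init(javax.crypto.Cipher.ENCRYPT_MODE, pbeKey, paramSpec);
byte[] ciphertext = cipher.doFinal("data to protect".getBytes("UTF-8"));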

KeyPairGenerator

Alg. Name

Default Keysize

Restrictions/Comments

Diffie-Hellman (DH)    1024    Keysize must be a multiple of 64, ranging from 512 to 1024 (inclusive).

AlgorithmParameterGenerator


Alg. Name

Default Keysize

Restrictions/Comments

Diffie-Hellman (DH)    1024    Keysize must be a multiple of 64, ranging from 512 to 1024 (inclusive).

DSA 1024 Keysize must be a multiple of 64, ranging from 512 to 1024 (inclusive).

The SunJGSS Provider

The following algorithms are available in the SunJGSS provider:

OID Name

1.2.840.113554.1.2.2 Kerberos v5

1.3.6.1.5.5.2 SPNEGO

The SunSASL Provider

The following algorithms are available in the SunSASL provider:

Engine Algorithm Name(s)

SaslClient

CRAM-MD5 DIGEST-MD5 EXTERNAL GSSAPI PLAIN

SaslServer CRAM-MD5 DIGEST-MD5 GSSAPI

The XMLDSig Provider

The following algorithms are available in the XMLDSig provider:

Engine Algorithm Name(s)

KeyInfoFactory DOM


TransformService
http://www.w3.org/TR/2001/REC-xml-c14n-20010315 - (CanonicalizationMethod.INCLUSIVE)
http://www.w3.org/TR/2001/REC-xml-c14n-20010315#WithComments - (CanonicalizationMethod.INCLUSIVE_WITH_COMMENTS)
http://www.w3.org/2001/10/xml-exc-c14n# - (CanonicalizationMethod.EXCLUSIVE)
http://www.w3.org/2001/10/xml-exc-c14n#WithComments - (CanonicalizationMethod.EXCLUSIVE_WITH_COMMENTS)
http://www.w3.org/2000/09/xmldsig#base64 - (Transform.BASE64)
http://www.w3.org/2000/09/xmldsig#enveloped-signature - (Transform.ENVELOPED)
http://www.w3.org/TR/1999/REC-xpath-19991116 - (Transform.XPATH)
http://www.w3.org/2002/06/xmldsig-filter2 - (Transform.XPATH2)
http://www.w3.org/TR/1999/REC-xslt-19991116 - (Transform.XSLT)

XMLSignatureFactory DOM

The SunPCSC Provider

The SunPCSC provider enables applications to use the Java Smart Card I/O API to interact with the PC/SC Smart Card stack of the underlying operating system. On some operating systems, it may be necessary to enable and configure the PC/SC stack before it is usable. Consult your operating system documentation for details.

On Solaris and Linux platforms, SunPCSC accesses the PC/SC stack via the libpcsclite.so library. It looks for this library in the directories /usr/$LIBISA and /usr/local/$LIBISA, where $LIBISA is expanded to lib on 32-bit platforms, lib/64 on 64-bit Solaris platforms, and lib64 on 64-bit Linux platforms. The system property sun.security.smartcardio.library may also be set to the full filename of an alternate libpcsclite.so implementation. On Windows platforms, SunPCSC always calls into winscard.dll and no Java-level configuration is necessary or possible.

If PC/SC is available on the host platform, the SunPCSC implementation can be obtained via TerminalFactory.getDefault() and TerminalFactory.getInstance("PC/SC"). If PC/SC is not available or not correctly configured, a getInstance() call will fail with a NoSuchAlgorithmException and getDefault() will return a JRE built-in implementation that does not support any terminals.
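A short sketch of that usage (no particular terminal or card is assumed; exception handling omitted):

// Obtain the PC/SC-backed factory explicitly; getDefault() could be used instead.
javax.smartcardio.TerminalFactory factory =
    javax.smartcardio.TerminalFactory.getInstance("PC/SC", null);
java.util.List<javax.smartcardio.CardTerminal> terminals = factory.terminals().list();
for (javax.smartcardio.CardTerminal terminal : terminals) {
    System.out.println("Terminal: " + terminal.getName()
        + ", card present: " + terminal.isCardPresent());
}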

The following algorithms are available in the SunPCSC provider:

Engine Algorithm Name(s)

TerminalFactory PC/SC

The SunMSCAPI Provider

The SunMSCAPI provider enables applications to use the standard JCA/JCE APIs to access the native cryptographic libraries, certificate stores, and key containers on the Microsoft Windows platform. The SunMSCAPI provider itself does not contain cryptographic functionality; it is simply a conduit between the Java environment and the native cryptographic services on Windows.

The following algorithms are available in the SunMSCAPI provider:

Engine Algorithm Name(s)

Cipher    RSA (RSA/ECB/PKCS1Padding only)

KeyPairGenerator RSA

KeyStore

Windows-MY

The keystore type that identifies the native Microsoft Windows MY keystore. It contains the user's personal certificates and associated private keys.

Windows-ROOT

The keystore type that identifies the native Microsoft Windows ROOT keystore. It contains the certificates of Root certificate authorities and other self-signed trusted certificates.

SecureRandom

Windows-PRNG

The name of the native pseudo-random number generation (PRNG) algorithm.


Signature MD2withRSA MD5withRSA SHA1withRSA

Keysize Restrictions

The SunMSCAPI provider uses the following default keysizes (in bits) and enforces the following restrictions:

KeyGenerator

Alg. Name

Default Keysize

Restrictions/Comments

RSA 1024 Keysize ranges from 384 bits to 16,384 bits (depending on the underlying Microsoft Windows cryptographic service provider).
