
TECHNOLOGY · INNOVATION · BUSINESS

An Independent Publication not affiliated with Oracle Corporation

Oracle Connect

Jan 2019 Issue

Oracle Connect Editorial Board

Sai Penumuru, President, AIOUG

Editorial Team
Harish Panduranga Rao
Veeratteshwaran Sridhar
Chandan Tanwani
Sai Pradeep Vattem
Justin Michael Raj

Contributors

Harish Panduranga Rao

Ashish Harbhajanka
Balavignesh Arumugam
Bhanu Chander Goshika
Chris Couture
Ketan Gandhi
Kishore Genikala
Solomon Raya

Contact us at [email protected]. Send your articles to [email protected].

TABLE OF CONTENTS

Sangam 2018 Highlights - 03
AME Integrated Approvals in EBS R12.2 Order Management - 06
Developing a Cohesive User Experience - 11
Multi Period Accounting for Payables - 14
Why to Adopt Oracle HCM Cloud Application - 30
Oracle Self Service Integration Cloud Service - 37
Oracle RAC Server Pools - 47
Troubleshoot Database Performance - 53
AIOUG Events in Jan 2019 - 58
Upcoming Events - 59

About AIOUG

AIOUG is a non-profit organization started by like-minded users who felt such a community was needed in India, where the Oracle user base is enormous. The idea of the group is to share what Oracle users have learned from using Oracle technology over the years with fellow users who have similar interests.

AIOUG provides Oracle technology and database professionals an opportunity to enhance their productivity and influence the quality, usability and support of Oracle technology. AIOUG is composed of Oracle professionals helping fellow IT professionals develop solutions to their business challenges.

AIOUG is registered as a Society under Karnataka Societies Registration Act, 1960 vide Society Number SOR-SJR-35/07-08 dated 21st September, 2007.

Sangam 2018 Highlights

8-JUL BANGALORE

Vipin Samar, VP, Oracle

Penny Avril, VP, Oracle

Photo highlights: audience, awards, the SANGAM18 team, selfies and fun at Sangam.


AME Integrated Approvals for Outbound Sales Orders in R12.2 EBS Order Management
Kishore Genikala & Solomon Raya, Oracle India Pvt Ltd.

Abstract

In Oracle EBS release 12.2.2, AME (Approval Management Engine) integration with Order Management was limited to Return Orders and Quotes only. As part of the latest Oracle EBS release, 12.2.8, AME integrated approvals have been extended to outbound Sales Orders.

A new seeded root workflow for the Order Header, "Order Flow - Generic with Booking Approval", has been supplied; it invokes the AME approval flow before the order is booked.

This paper provides an insight into the business requirements, a solution overview, the key features of Sales Order booking approvals, the seeded workflow changes to the existing AME approvals, the key setups in Order Management, the AME approval hierarchy setup steps, and the process flow.

I. Introduction

Limitations of Current Approval Process

Prior to Oracle EBS release 12.2.2, only the negotiation flow and the RMA flow in Order Management (OM) supported an approval hierarchy, which could be defined using Transaction Types in OM. This approach, however, was limited to a static list of approvers.

R12.2.X New Enhancements

In release 12.2.2, Order Management was integrated with Oracle Approvals Management (AME) to support dynamic determination of the approver list for Quotes, Sales Agreements and Returns (RMA). The following seeded workflows were provided with the required integration:

Quote workflow “Negotiation Flow – Generic with Approval”

Order Header workflow "Order Flow - Mixed or Return with Approval"

In release 12.2.8, a new feature has been added for approval of all Sales Orders at booking using AME. It is common across various order types and line types, such as:

Orders consisting of outbound lines only, return lines only or mix of outbound & return lines

Drop ship and back to back orders

Models (ATO, PTO and Hybrid) and Kits

Bill Only, Bill Only with Inventory Interface

The following seeded workflows were delivered as part of this enhancement:

Order Header workflow "Order Flow - Generic with Booking Approval"

Sub process "Book Order - Manual with AME Approval"

This is a simplified, common approval process across order types that helps customers improve productivity and reduce errors. It also centralizes approval policies, improving audit and accountability.


EBS Integration with AME

AME Capabilities

OM flows leverage the following key capabilities of AME to perform approvals of outbound Sales Orders, Returns, Quotes and Sales Agreements:

Use seeded or user defined Conditions and Attributes to create AME rules

Determine approvers based on the expanded criteria

Perform Serial or Parallel Approvals

Implement the options like “First Responder Wins” and “Consensus” in Parallel Approvals

II. Setup Steps

Order Management Setup

Access the OM system parameters from the responsibility Order Management Super User > Setup > System Parameters > Values.

Define a value for the new system parameter "Treat AME Exception As". It can take any of three values (the default value is NULL):

Approval: Treat AME Exception as Approved

Rejection: Treat AME Exception as Rejection

Null: Treat AME Exception as Rejection

Navigate to responsibility “Order Management Super User” Setup Transaction Type Define the Sales Order Transaction Type as follows in order to enable AME Integration for a particular transaction type:

Select the option “Use Approvals Management Engine”.

Enter Fulfilment Flow “Order Flow - Generic with Booking Approval”

Enter Sales Document Type “Sales Order”

Order Category can be any – Order, Mixed or Return

AME Setup

The purpose of Oracle Approvals Management (AME) is to define approval rules that determine the approval processes for Oracle applications.

An application that uses AME to govern its transactions' approval processes is termed an integrating application. An integrating application may divide its transactions into several categories where each category requires a distinct set of approval rules. Each set of rules is called a transaction type.

An approval rule is a business rule that helps determine a transaction's approval process. Rules are constructed from conditions and actions. The approval rule's if part consists of zero or more conditions, and its then part consists of one or more actions. A condition consists of a business variable (in AME, an attribute) and a set of attribute values, any one of which makes the condition true. An action tells AME to modify a transaction's approval process in some fashion.


Given below are the AME Transaction Types introduced for OM integration. These are used to associate business rules for generating Approval hierarchy in AME.

Order Management Negotiation Approval (OENEG)

Order Management Return Approval (OERMA)

Order Management Sales Agreement Approval (OEBSA)

To define the necessary AME approval rules, navigate to the responsibility "Approvals Management Business Analyst", select the transaction type "Order Management Return Approval" and click Setup.

Go to the Attributes tab to see all the seeded attributes defined for the transaction type. New attributes can be created here as per business requirement.
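As an illustration of what a custom header-level attribute could look like (the attribute name is hypothetical; for the OM integration the transaction id passed to AME is the sales order HEADER_ID, and the usage query must return exactly one row, as the troubleshooting section later notes):

-- Hypothetical custom attribute XX_ORDER_CURRENCY (illustrative only)
-- :transactionId is the AME bind for the transaction id (the order HEADER_ID here)
SELECT ooh.transactional_curr_code
  FROM oe_order_headers_all ooh
 WHERE ooh.header_id = :transactionId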

Go to the Conditions tab and create new qualifying conditions as needed.

Go to the Action Types tab and create new or use existing action types as needed.

Go to the Approver Groups tab and create or update groups as needed.

Now, navigate to Rules tab and create a new rule using the conditions, action type and action already defined.

This AME rule will determine the qualifying conditions for an order and also the action type and action to derive the list of approvers.

III. Process Flow

The below diagram shows the basic process flow of the functionality.

Approval Actions – Status Changes

When the Book action is initiated, the order is sent for approval based on the AME setup. The order status is updated to "Pending Internal Approval", indicating that the order has been sent for approval. At the same time, the status of the order lines remains "Entered", awaiting booking approval of the order.

If the approval is rejected, the status of the order is changed to “Review Required”. User can then change the order and send it for approval again.

Once the order is approved, it is pushed ahead for booking and the order status changes to "Booked".

If the order is approved but does not get booked for any reason, the order workflow moves to the Book-Eligible activity and the order status is updated to "Entered". When the order is progressed again for booking, it is not sent for approval a second time.

Demo Business Case

Let us assume the following organization hierarchy for the purpose of this demo. “Pat Stock” will be the user submitting the Order for approval. The first level approver is “Jonathan Smith” and the second level approver is “Casey Brown”.


Case I

The order is submitted for approval for the first time. Let us assume the following rule has been set up in AME: for the customer "Bigmart", with Order Amount greater than $1,000 and Order Category "Order" (outbound lines), approval is required from two levels of the manager hierarchy in serial mode.

Log in to EBS as "Pat Stock", navigate to the responsibility "Order Management Super User" and create an order with Customer: "Bigmart", Order Type: Standard-AME and Order Amount: $1,500. Save the order.

The order status is now reflected as "Entered". Initiate the "Book" action; the order status changes to "Pending Internal Approval". When the user initiates the "Book" action, the approval process is triggered.

In AME, the list of approvers is generated. In this case, the approvers are “Jonathan Smith” and “Casey Brown”.

Notifications are sent sequentially to the approvers. Approval is requested from the first level in the hierarchy “Jonathan Smith”.

Now, login as “Jonathan Smith” and “Approve” the Order. When you login to EBS as “Jonathan

Smith” on the home page, find the approval notification with type “OM AME Approval” from “Pat Stock” along with order details on the subject in the worklist.

After the first level has approved the Order, approval is requested from the next level in the hierarchy. In our example, the next level approver is “Casey Brown”.

Now, login as “Casey Brown” and “Approve” the Order After completing two levels of approval the order header status now changes to “Booked”

Case II

Create another order similar to Case I, but without specifying the payment terms at the header and line level. After the Order is submitted for approval, complete the two levels of approval as discussed in Case I.

As the order has completed approval, the Book-Eligible activity attempts to book the order, but booking fails due to the missing payment terms information and the status changes to "Entered".

Now, update the missing payment terms information on the order and Initiate “Book” action. In this case, the order will not be resent for approval as it has already obtained the necessary approvals and the status changes to ‘Booked’ (without any re-approval).

Workflow Status Monitor

At any point after the user initiates the "Book" action, the workflow status can be monitored from the Tools > "Workflow Status" option on the Order Header.

IV. Objects Modified/Added

Workflow

The "Order Flow - Generic with Booking Approval" process is added to the OM Order Header item type. It includes the sub process "Book Order - Manual with AME Approval", which integrates with AME.

Changed Programs

“Purge Order Management workflow” and “Retry Activities in Error” concurrent programs are modified to include a new Item Type “OM AME Approval”.

AME

“Order Management Return Approval” Transaction Type in AME is modified for outbound sales orders functionality.

New attributes Order Category and Booked are added in AME, along with the existing header-level attributes (Customer, Customer Category, Order Type and Order Amount) and line-level attributes (Line Type, Item, Quantity, Line Amount and Line Discount Percent).


V. Diagnostics and Troubleshooting

Whenever a user gets an unexpected result while using the fulfilment workflow "Order Flow - Generic with Booking Approval", a complete Order Management debug log file, along with the output of the concurrent request "Diagnostics Order Information" for the sales order, is required to debug the issue.

Order Management Debug file - Note 121054.1

Diagnostics Order Information - Note 1326539.1

Navigate to the responsibility "Approvals Management Business Analyst", select the transaction type (Order Management Return Approval), and run a Real Transaction Test via the Test Workbench by entering the sales order HEADER_ID as the transaction id; then review the attribute values, conditions and actions in AME.
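If only the order number is known, the HEADER_ID can be looked up with a simple query (a sketch, run from the APPS schema; OE_ORDER_HEADERS_ALL is the standard OM header table):

-- Look up the HEADER_ID (and current status) for a given sales order number
SELECT header_id, flow_status_code
  FROM oe_order_headers_all
 WHERE order_number = &order_number;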

VI. Common Setup Issues

Issue: AME rules are defined, but documents are not sent for approval.
Cause: The conditions are most likely not defined in the installed base language.
Resolution: Always define conditions in the base language.

Issue: Order booked without proper approval.
Cause: Supervisors are not defined for "Chain of Authority" approval rules.
Resolution: Make sure that supervisors are defined in the HR module.

Issue: Order booked without sending any approval notifications.
Cause: The "Use AME" option is checked in the Transaction Type form, but valid rules are not set up in AME.
Resolution: Make sure to define appropriate rules in AME.

Issue: Notifications are not sent; an exception is raised in AME and the transaction gets booked or rejected as per the system parameter "Treat AME Exception As".
Cause: Issues with custom attributes and associated SQL exceptions such as No Data Found, Too Many Rows Returned, etc.
Resolution: Make sure that all the SQLs associated with custom attributes related to OM are valid.

Issue: Notifications are not sent; an exception is raised in AME and the transaction gets booked or rejected as per the system parameter "Treat AME Exception As".
Cause: An Approver Group is defined in AME with a list of individual approvers, but some of the approvers are not valid employees.
Resolution: Make sure that all the approvers defined in user-defined Approver Groups are valid.

Kishore Genikala is a Senior Principal Software Engineer, EBS Mfg. COE, at Oracle India Private Limited. He has 17 years of techno-functional experience in EBS applications and 3 years of domain experience in the manufacturing industry.

https://www.linkedin.com/in/kishorgenikala/

Solomon Raya is a certified Oracle Inventory Specialist Engineer, currently with Oracle India Private Limited as a Principal Software Engineer. He has around 17 years of experience spanning EBS applications in the Manufacturing & Retail domains.


Developing a Cohesive User Experience
Chris Couture, CherryRoad Technologies Inc.

We are blessed with many options to develop our solutions. But, in the end, the user must be able to use the solution in context of other solutions - whether out of the box from Oracle or something another developer built. How do we - developers, admins, architects, innovators - exploit the capability of our selected platform and language without creating a technical minefield of disassociated, inconsistent, solutions for that user?

The Cost and ROI of User Experience

Collectively, we who implement, support and maintain enterprise applications tend to downplay the financial aspects of user experience. There are costs and returns, but we rarely see them because actual money moves nowhere, whether lost or gained. When we do see headlines like…

“Avon Pulls Plug on $125 Million SAP Project”

… we tend to think in terms of scope creep, poor project management, no leadership buy-in. We can’t fathom pulling the plug on a massive project simply because the users’ experiences were so bad that key pieces of functionality were unusable. (But, yes, that’s what happened!)

Calculating Cost and ROI

There are several readily-available calculators used to show the efficiency gain, the return on investment, of a well-done user experience. HumanFactors.org has the most comprehensive, and I’ve used it several times. However, in a pinch, I like my own simple approach. I call it, “Chris’ $100 per Hour Rule”.

It’s a simple rule. I believe that if I get a random group of task-qualified humans together, on average an hour of their individual time is worth around $100 USD. I also think that me trying to calculate actual cost per hour when I’m doing rough guesses is not worth my time, and $100 is an easy figure with which to work. I like simple! (For those who want to fine tune it, they can make it two people hours for $100, or whatever works for their own mathematical leaning.) Lastly, given that people tend to lag if they are interrupted, rather than easily switch from one task to another like machines, I don’t split time. One hour. $100. Simple.

So, take any process; assume it will take two hours to complete. Imagine this process is made much easier by an intuitive user interface, a flow that fits what the user does, conventions that keep the user on task rather than guessing. Easier by an hour’s time. We save $100.

Now, make the user experience clunky. The process that took two hours requires a manual on-hand to move through a cumbersome interface across three modules and several unwieldy navigations. Clunky by an hour’s time. We spend $100.

User Cost Per Hour | Base Process Time | Base Cost per Process per User | Time Saved or Spent due to UX | Adjusted Process Cost | Saved or Spent per Process per User
$100 | 2 hours | $200 | Good UX: -1 hour | $100 | ($100)
$100 | 2 hours | $200 | Poor UX: +1 hour | $300 | $100

Now, multiply the Adjusted Process Cost by the number of process executions and users involved. The cost or savings grows quickly. Use my simple rule, HumanFactors.org calculators, or any of the other calculators and you’ll come to the same point – the user experience has a financial impact, whether good or bad, on our organization.
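As a purely illustrative calculation: if 50 users each execute that two-hour process 100 times a year, the one-hour UX difference per execution works out to 50 × 100 × $100 = $500,000 a year, either saved with the good experience or spent with the clunky one.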


Challenges of Enterprise UX

Now, can we go about wildly constructing the most amazing user experience imaginable? Yes! But, can we do that freely in the world of enterprise applications? Not quite so simple. We need to respect that enterprise applications are built by vendors who develop the features, tools, and functionality to execute standard business processes in an expected way, across multiple clients in multiple industries. We are not building our HR application from the ground up. Our Finance system was built by someone else – heck, it may have moved from our facility to a private cloud or even to a vendor SaaS cloud.

On top of this – sometimes literally – we have many Platform as a Service offerings, whether from our back-end vendor (such as Oracle) or from a complementary platform like Amazon, Google, or a host of others. These tools are powerful, but what we do with that power still impacts our organization's bottom line. $100 up or $100 down?

Constraints

Constraints of developing a cohesive user experience in an enterprise application are well known. The application does what it does, and customers can use the delivered tools to somewhat tailor the experience. While we may like the experience a React framework affords, our enterprise application likely won’t let us just drop in React and reshape the user interface. We may prefer coding with Python, but our application uses its native tool set and IDE, so we take a class and learn. This restriction on our developer freedom is the trade-off for buying a purpose-built enterprise application.

Building Around the Constraints

But, but, but… we have PaaS! We can build with our tool of choice, in record time, and create wonderful experiences for our users. $100 bills flow magically into our hands!

Not quite.

True, we can build with the tool of our choice, and all the modern PaaS offerings let us reduce our build-to-deploy timelines significantly. Alas, there is no magic. We still need to consider what the user does to build a cohesive experience.

Building in a Silo

Let’s consider three developers building solutions for our Cloud HCM application. Our developers are:

- Siva, who has been an Oracle guy for the last decade and prefers Java. He is quite competent with Oracle Java Cloud Service.

- Akshaya, who has a strong leaning towards PHP. She is well-versed in multiple platforms but her go-to is Google Cloud.

- Divya, who develops front-ends with various JavaScript frameworks. She’s comfortable with React and has been building within Azure as of late.

Three developers, three different tools, three different platforms – plus Cloud HCM. One set of users will use these various solutions to do business every day, week, month, quarter, year.

What happens far too often is each of the well-intended developers builds their solution in a silo, leaning on their own skills and the advantages of their chosen tool/platform. Thus, Siva’s Java-based solution is robust and powerful behind the scenes, but the front-end is rigid and boxy. Akshaya’s PHP solution is well-balanced but has periodic quirks in the integrations between Google and Oracle. (Oh, and Akshaya is our only PHP expert; if she leaves on holiday, we pray nothing breaks until she returns!) Divya’s front end is quite intuitive - in fact it is so well done it makes Cloud HCM feel archaic. Alas, it suffers in back-end performance such that the otherwise modern interactions are slow.


The user’s journey:

User Interface | User Expectation | User Experience
Cloud HCM | "It just works." | "What I have come to expect from this Cloud stuff."
Siva's Java | "It just works." | "Fast, but hard to navigate."
Akshaya's PHP | "It just works." | "It's sort of like HCM but sort of not. Has weird error messages once in a while."
Divya's JavaScript | "It just works." | "Looks nice but takes forever to load the next piece."

Making a Cohesive User Experience

Ironically, the “magic” of making a cohesive user experience is neither magic nor complicated. Follow these three simple guidelines.

1. Focus on the user. Engage your users, understand their stories and what they do. Conduct usability testing. Operationalize your user engagement. Take your focus away from technology-first and put the user in the center of your approach. You can still pick your tool of choice but let that be secondary to supporting users executing their stories.

2. Extend behaviors. While each tool will tempt you with “all the cool things you can do” your goal is to blur the lines between the enterprise application and your custom solution. Keep it consistent. Observe the interactions, navigation approaches, non-text cues, and other patterns native to the enterprise applications. Build with those in mind. The result may not be a technical masterpiece. The user, though, will have a consistent experience as they move seamlessly through their work unaware of where your solution begins or ends.

3. Separate presentation from content. Learn to use style sheets so you can refactor your front-end without needing to recode the back-end. Cascading Style Sheets are immensely powerful, when used properly.

Spend your $100 hours wisely developing cohesive user experiences!

Resources

Here are several resources you can use to enhance your UX capabilities.

Oracle Cloud trial services - https://cloud.oracle.com get your hands on the tools and start exploring.

Nielsen Norman Group - https://nngroup.com

UX Professionals Group on LinkedIn – https://www.linkedin.com/groups/38178/

UX Collective – https://uxdesign.cc

Humanfactors.com ROI calculators: http://humanfactors.com/coolstuff/roi.asp

Chris Couture is an enterprise strategist, helping clients orient to deliver user-centric solutions which exploit the true capabilities of Oracle technologies. During his twenty-plus year career, he has traversed multiple Oracle technologies including PeopleSoft, Oracle Cloud SaaS, and PaaS, melding these with modern user interfaces in browsers, mobile platforms, voice and IoT devices. Certified in both User Experience and Oracle technologies, Chris brings a diverse history of client experience and industry innovation to clients as he collaborates to shape IT visions through the eyes of end users with a multitude of needs.

https://www.linkedin.com/pub/chris-couture/4/38a/58a

https://twitter.com/couturex


Multi Period Accounting for Payables
Ketan Gandhi, Cognizant

This article is a case study on implementing R12 Multi Period Accounting (MPA) for Payables for a customer, one of the largest banks in the United Arab Emirates. We used the Encumbrance Accrual Subledger Accounting method, along with a custom Subledger Accounting source, to automate the recognition of prepaid AP invoices for item and non-recoverable tax lines and to account the transactions, showing encumbrance accrual and recognition accounting over multiple periods.

The customer was using a custom process to amortize prepaid expenses. It required entering the prepayment type, start date, end date, and amortization amount in the AP invoice header DFF. In GL, amortization journal entries were created manually on the first day of every month, with the recognition amount per period calculated assuming 360 days in a year. Reconciling AP to GL for these transactions at month end proved challenging. Multi Period Accounting (MPA) gave the customer a uniform, scalable solution with reporting that meets the business requirements and provides granular detail from Subledger to GL. It streamlined the AP to GL reconciliation process and improved data accuracy by leveraging MPA standard reports and reducing manual intervention. The existing custom process could not have been implemented in Cloud.

Two use cases were in scope. Premises insurance premium is paid yearly: it is an advance payment of insurance premium for 12 months; the customer prepays the insurance premium on 1st March every year and expenses it from March to February. Rent for the ATM & CDM network is also paid yearly: it is an advance payment of rent for 12 months; the customer prepays the rent on 20th June every year and expenses it from June to May. To map these use cases, the configuration below was done in R12.

Configuration Steps

1. Define Payables Lookups
2. Define Invoice Line Descriptive Flexfields (DFF) segments
3. Define SLA: Custom Source
4. Define Account Derivation Rules (ADR)
5. Define Journal Line Types (JLT)
6. Define Journal Lines Definitions (JLD)
7. Define Application Accounting Definitions (AAD)
8. Define Subledger Accounting Method (SLAM)
9. Attach Subledger Accounting Method (SLAM) to the ledger definition

1. Define Payables Lookups

Navigation: Payables Responsibility > Setup > Lookups > Payables

In the Payables Lookups window, set up the prepayment category codes and the GL accounts for the prepayment categories that have to be amortized.
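For illustration, using the categories from the use cases later in this article, the lookup entries might look like the following (the lookup type name is hypothetical, and exactly where each account is held, for example in the lookup Tag or a DFF, is an implementation choice; the account numbers are the ones used in the use cases):

Lookup type: XX_AP_PREPAID_CATEGORY (hypothetical name)
AP0010 - Insurance Premium - prepaid account 251360, expense account 740725
AP0001 - Premises Rent - prepaid account 251395, expense account 723025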


2. Define Invoice Line Descriptive Flexfields (DFF) Segments

Navigation: Payables Responsibility > Setup > Flexfields > Descriptive > Segments

Set up the Invoice Line descriptive flexfield segment to capture the prepayment category that the purchasing category's GL account belongs to.

AP invoice line form personalization is done to map the purchasing category values and expense GL accounts to the Payables lookups, so that, based on the purchasing category selected at the invoice line, the system populates the corresponding Payables lookup code in the invoice line DFF field. Additionally, form personalization at the AP invoice line is done to default the invoice distribution prepaid GL account for prepayment purchasing categories.


3. Define SLA: Custom Source

Navigation: Payables Responsibility > Setup > Accounting Setups > Subledger Accounting Setup > Accounting Methods Builder > Sources > Custom Sources

Set up the SLA custom source using a PL/SQL function that derives the natural account for the prepayment category code captured in the Invoice Line DFF field for the purchasing category.
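A minimal sketch of such a function is shown below. The function name, the lookup type, the DFF column (ATTRIBUTE1) and the use of the lookup TAG column to hold the expense natural account are all assumptions for illustration; the real signature must match how the custom source is registered in the Accounting Methods Builder.

-- Illustrative only: derive the expense natural account for the prepayment
-- category code held in the invoice line DFF (ATTRIBUTE1 here is an assumption;
-- use whichever attribute column the DFF segment is mapped to).
CREATE OR REPLACE FUNCTION xx_mpa_get_expense_account (p_invoice_line_id IN NUMBER)
  RETURN VARCHAR2
IS
  l_account VARCHAR2(25);
BEGIN
  SELECT flv.tag                          -- expense natural account stored against the lookup
    INTO l_account
    FROM ap_invoice_lines_all ail,
         fnd_lookup_values    flv
   WHERE ail.invoice_line_id = p_invoice_line_id
     AND flv.lookup_type     = 'XX_AP_PREPAID_CATEGORY'
     AND flv.lookup_code     = ail.attribute1
     AND flv.language        = USERENV('LANG');
  RETURN l_account;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RETURN NULL;
END xx_mpa_get_expense_account;
/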

4. Define Account Derivation Rule (ADR)

Navigation: Payables Responsibility > Setup > Accounting Setups > Subledger Accounting Setup > Accounting Methods Builder > Journal Entry Setups > Account Derivation Rules

Set up the Account Derivation Rule using the SLA custom source to pass the natural account segment to the recognition journal line type when the Create Accounting program generates recognition entries in the respective periods.


5. Define Journal Line Type(JLT)

Navigation: Payables Responsibility > Setup > Accounting Setups > Subledger Accounting Setup > Accounting Methods Builder > Journal Entry Setups > Journal Line Types

Query the seeded JLT

- Application: Payables

- Event Class: Invoices

- Accounting Class: Item Expense

- Line Type Code: AP_ITEM_EXPENSE_INV

Copy seeded JLT and setup the custom Accrual Journal Line Type and Recognition Journal Line Type for Item Expense Accounting Class

Query the seeded JLT

- Application: Payables

- Event Class: Invoices

- Accounting Class: Non-Recoverable Tax

- Line Type Code: AP_NON_RECOV_TAX_INV

Copy seeded JLT and setup the custom Accrual Journal Line Type and Recognition Journal Line Type for Non-Recoverable Tax Accounting Class.

- The Accrual Journal Line Type captures the prepaid expense account, and the Recognition Journal Line Type stores the actual expense account in each period of recognition.


6. Define Journal Lines Definition(JLD)

Navigation: Payables Responsibility > Setup > Accounting Setups > Subledger Accounting Setup > Accounting Methods Builder > Methods and Definitions > Journal Lines Definitions

Query the seeded JLD

- Application: Payables

- Event Class: Invoices

- Definition Code: ACCRUAL_INVOICES_ALL

Copy the seeded JLD and set up a custom JLD. Disable the Oracle seeded Journal Line Types 'Item Expense' and 'Non-Recoverable Tax' so that the system does not derive two JLTs for accounting based on the same conditions. Attach the custom AP Item Expense Accrual and Recognition JLTs and the AP Non-Recoverable Tax Accrual and Recognition JLTs.


7. Define Application Accounting Definitions(AAD)

Navigation: Payables Responsibility > Setup > Accounting Setups > Subledger Accounting Setup > Accounting Methods Builder > Methods and Definitions > Application Accounting Definition

Query the seeded AAD

- Application: Payables

- Definition Code: AP_ENC_ACCRUAL

Copy the seeded AAD and set up a custom AAD. For the Invoices event class, delete the Oracle seeded Journal Lines Definition 'Accrual, Invoices All', attach the custom Journal Lines Definition set up in step 6, and then click the 'Validate' button to validate all the event classes.


8. Define Subledger Accounting Method (SLAM)

Navigation: Payables Responsibility > Setup > Accounting Setups > Subledger Accounting Setup > Accounting Methods Builder > Methods and Definitions > Subledger Accounting Methods

Query the seeded SLAM

- Method Code: ENCUMBRANCE_ACCRUAL

- Method Name: Encumbrance Accrual


Copy the seeded SLAM and set up a custom accounting method. For the application 'Payables', delete the Oracle seeded Application Accounting Definition 'Encumbrance Accrual' and attach the custom AAD set up in step 7.

9. Attach Subledger Accounting Method (SLAM) to Ledger Definition

Attach the custom Encumbrance Subledger Accounting method setup in step # 8 to Ledger.

Use Case 1

Premises Insurance Premium is paid yearly. It’s an advance payment of insurance for 12 months. The customer prepays the insurance premium of AED 16,000 on 01st Mar,2018. Insurance premium payment is expensed from Mar-18 to Feb-19.

Invoice Date: 10-MAR-18 GL Date: 10-MAR-18, Invoice amount: AED 16,800, Non Recoverable Tax 5%.

At the AP invoice line, for the purchasing category 'Financial services. Insurance services Bank Premises', the Prepaid Insurance account 251360 defaults as the distribution account. Enter the Deferred Start Date: 01-MAR-2018 and Deferred End Date: 28-FEB-2019; at the Invoice Line DFF, the system defaults the Insurance Premium lookup code 'AP0010'.

The ‘Create Accounting’ request in Payables for 31-MAR-18 generates the following accounting entries in Multiperiod Accounting and Accrual Reversal Report.

Invoice Accounting

Account code combinations Dr/Cr Amount Description

21010.000000.251360.00000.000000 Dr AED 16,000 Prepaid Insurance

21010.000000.251360.00000.000000 Dr AED 800 Non Recoverable Tax

21010.000000.362250.00000.000000 Cr AED 16,800 Reserve for Encumbrance

21010.000000.251360.00000.000000 Dr AED 16,000 Prepaid Insurance

21010.000000.251360.00000.000000 Dr AED 800 Non Recoverable Tax

21010.000000.364160.00000.000000 Cr AED 16,000 Liability

21010.000000.364160.00000.000000 Cr AED 800 Liability


21010.000000.362250.00000.000000 Dr AED 16,800 Reserve for Encumbrance

21010.000000.251360.00000.000000 Cr AED 16,000 Prepaid Insurance

21010.000000.251360.00000.000000 Cr AED 800 Non Recoverable Tax

Following are the accounting entries for recognizing the insurance premium of AED 16,000 on a prorated basis over the respective periods. For the months MAR-18 to JAN-19 the system takes 30 days to calculate the amount, as the Proration Type is set to 360 Days in the Journal Lines Definition setup, and for FEB-19 the system takes the actual 28 days.
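To see where the period amounts come from: the recognizable days are 11 × 30 + 28 = 358, so the daily rate is AED 16,000 / 358 ≈ AED 44.69; each 30-day period therefore recognizes about 44.69 × 30 = AED 1,340.78, and the remaining AED 1,251.42 is recognized in FEB-19.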

GL Date: 01-MAR-18 to GL Date: 01-JAN-19

Account code combinations Dr/Cr Per month 11 month Description

21010.000000.740725.00000.000000 Dr AED 1,340.78 AED 14,748.58 Actual expense for Insurance Premium

21010.000000.251360.00000.000000 Cr AED 1,340.78 AED 14,748.58 Reversal of Prepaid Insurance

GL Date: 01-FEB-19

Account code combinations Dr/Cr Amount Description

21010.000000.740725.00000.000000 Dr AED 1,251.42 Actual expense for Insurance Premium

21010.000000.251360.00000.000000 Cr AED 1,251.42 Reversal of Prepaid Insurance

For the Non Recoverable Tax account line, enter the Deferred Start Date: 01-MAR-2018 and Deferred End Date: 28-FEB-2019, and at the AP line DFF enter the Insurance Premium lookup code 'AP0010'.

Following are the accounting entries for recognizing the non-recoverable tax of AED 800 on a prorated basis over the respective periods. For the months MAR-18 to JAN-19 the system takes 30 days to calculate the amount, as the Proration Type is set to 360 Days in the Journal Lines Definition setup, and for FEB-19 the system takes the actual 28 days.

GL Date: 01-MAR-18 to GL Date: 01-JAN-19

Account code combinations Dr/Cr Per month 11 months Description

21010.000000.740725.00000.000000 Dr AED 67.04 AED 737.44 Actual expense for Insurance Premium

21010.000000.251360.00000.000000 Cr AED 67.04 AED 737.44 Reversal of Non Recoverable Tax


GL Date: 01-FEB-19

Account code combinations Dr/Cr Amount Description

21010.000000.740725.00000.000000 Dr AED 62.56 Actual expense for Insurance Premium

21010.000000.251360.00000.000000 Cr AED 62.56 Reversal of Non Recoverable Tax

The 'Complete Multiperiod Accounting' request creates the multi-period journals with status 'Final' for MAR-18, while the journals for APR-18 to FEB-19 show status 'Incomplete'. Run this request, or schedule it, in the subsequent months from APR-18 to FEB-19; it changes the journal status from 'Incomplete' to 'Complete' for the period for which it is run, and generates the Complete Multiperiod Accounting and Accrual Reversal Detail report as output.

Use Case 2

Rent for ATM & CDM Network is paid yearly. It’s an advance payment of rent for 12 months. The customer prepays the Rent of AED 9,600 on 20th June, 2018. The Rent payment is expensed from June-18 to May-19.

Invoice Date: 25-JUN-18, GL Date: 28-JUN-18, Invoice amount: AED 10,080, Non Recoverable Tax 5%.

At the AP invoice line, for the purchasing category 'Properties & facilities Real estate rent. ATM & CDM Network', the Prepaid Rent account 251395 defaults as the distribution account. Enter the Deferred Start Date: 20-JUN-2018 and Deferred End Date: 19-MAY-2019; at the Invoice Line DFF, the system defaults the Premises Rent lookup code 'AP0001'.

The ‘Create Accounting’ request in Payables for 30-JUN-18 generates the following accounting entries in Multiperiod Accounting and Accrual Reversal Report.

Invoice Accounting

Account code combinations Dr/Cr Amount Description

21010.000000.251395.00000.000000 Dr AED 9,600 Prepaid Rent

21010.000000.251395.00000.000000 Dr AED 480 Non Recoverable Tax

21010.000000.362250.00000.000000 Cr AED 10,080 Reserve for Encumbrance

21010.000000.251395.00000.000000 Dr AED 9,600 Prepaid Rent

21010.000000.251395.00000.000000 Dr AED 480 Non Recoverable Tax

21010.000000.364160.00000.000000 Cr AED 9,600 Liability

21010.000000.364160.00000.000000 Cr AED 480 Liability

21010.000000.362250.00000.000000 Dr AED 10,080 Reserve for Encumbrance

21010.000000.251395.00000.000000 Cr AED 9,600 Prepaid Rent

21010.000000.251395.00000.000000 Cr AED 480 Non Recoverable Tax

Following are the accounting entries for recognizing the premises rent of AED 9,600 on a prorated basis over the respective periods. For JUN-18 the system takes 11 days to calculate the amount, from JUL-18 to APR-19 it takes 30 days per month, and for MAY-19 it takes the actual 19 days.

GL Date: 01-JUN-18

Account code combinations Dr/Cr Amount Description

21010.000000.723025.00000.000000 Dr AED 320 Actual expense for Premises Rent


21010.000000.251395.00000.000000 Cr AED 320 Reversal of Prepaid Rent

GL Date: 01-JUL-18 to 30-APR-19

Account code combinations Dr/Cr Per month 10 months Description

21010.000000.723025.00000.000000 Dr AED 872.73 AED 8,727.27 Actual expense for Premises Rent

21010.000000.251395.00000.000000 Cr AED 872.73 AED 8,727.27 Reversal of Premises Rent

GL Date: 01-MAY-19

Account code combinations Dr/Cr Amount Description

21010.000000.723025.00000.000000 Dr AED 552.73 Actual expense for Premises Rent

21010.000000.251395.00000.000000 Cr AED 552.73 Reversal of Prepaid Rent

At AP Invoice line for Non Recoverable Tax account code combination, enter the ‘Deferred Start Date: 20-JUN-2018’ and ‘Deferred End Date: 19-MAY-2019’ and at Line DFF enter Premises Rent Lookup code ‘AP0001’.

Following are the accounting entries for recognizing the non-recoverable tax of AED 480 on a prorated basis over the respective periods. For JUN-18 the system takes 11 days to calculate the amount, from JUL-18 to APR-19 it takes 30 days per month, and for MAY-19 it takes 19 days.

GL Date: 01-JUN-18

Account code combinations Dr/Cr Amount Description

21010.000000.723025.00000.000000 Dr AED 16 Actual expense for Premises Rent

21010.000000.251395.00000.000000 Cr AED 16 Reversal of Non Recoverable Tax

GL Date: 01-JUL-18 to 30-APR-19

Account code combinations Dr/Cr Per month 10 month Description

21010.000000.723025.00000.000000 Dr AED 43.64 AED 436.36 Actual expense for Premises Rent

21010.000000.251395.00000.000000 Cr AED 43.64 AED 436.36 Reversal of Non Recoverable Tax

GL Date: 01-MAY-19

Account code combinations Dr/Cr Amount Description

21010.000000.723025.00000.000000 Dr AED 27.64 Actual expense for Premises Rent

21010.000000.251395.00000.000000 Cr AED 27.64 Reversal of Non Recoverable Tax

The 'Complete Multiperiod Accounting' request creates the multiperiod journals with status 'Final' for JUN-18, while the journals for JUL-18 to MAY-19 show status 'Incomplete'. Run this request, or schedule it, in the subsequent months from JUL-18 to MAY-19; it changes the journal status from 'Incomplete' to 'Complete' for the period for which it is run, and generates the Complete Multiperiod Accounting and Accrual Reversal Detail report as output.

In conclusion, this article presented the setups required to implement R12 Multi Period Accounting (MPA) for Payables using the Encumbrance Accrual Subledger Accounting method. The solution automated the recognition of prepaid AP invoices for item and non-recoverable tax lines, and their accounting over multiple periods.

Ketan Gandhi is an Oracle EBS Applications and Fusion Financials Functional Principal Consultant with 15 years of Oracle Applications and 5 years of finance domain experience. He has executed several full live implementation cycles of Oracle EBS Financials for Order to Cash, Procure to Pay, and the Projects Suite in Release 11i and Release 12. Ketan has done two full live implementations of Oracle Fusion Financials Applications Ver. 11.1.1.5 and Ver. 11.8.1.0. He has worked on assessment, implementation, re-implementation, rollout, testing and support projects. He has extensive experience in delivering Oracle enterprise solutions to the Banking and Financial Services, Insurance, Telecom, Manufacturing, Logistics, Energy and Consumer Products industries.


Why to Adopt Oracle HCM Cloud Application
Ashish Harbhajanka, Accenture Solutions India Pvt Ltd.

Introduction

Fusion, as the word suggests, stands for a culmination, mix or combination. In the context of Oracle, Fusion refers to Fusion Applications. So why Fusion Applications? Before that, let's understand what Fusion Applications is.

Fusion Applications is a new Oracle product that caters to ERP needs. But we already have so many ERP applications, some of the most popular being EBS (E-Business Suite), PeopleSoft and JD Edwards, to name a few. So why a new ERP application to cater to business needs? The answer is:

Fusion Applications is an attempt (and, for that matter, a very popular and widely accepted one) by Oracle to take the best features from EBS and PeopleSoft and make life simpler for business users and implementation consultants (both functional and technical). Let's get into more detail.

We would categorize this discussion into following broad categories namely:

Business Reasons (why the business should adopt Fusion)

Functional Reasons (why a functional consultant should adopt Fusion)

Technical Reasons (why a technical consultant should adopt Fusion)

1. Business Reasons

- Easy to use

- Better Look and Feel

- Better Functionality

- Better User Experience

2. Functional Reasons

i) Configuration is simpler (Most configurations from ‘Setup and Maintenance’)


Most of the setups start with Manage% (refer screenshot below)

ii) Configuration is based on Train Stop Models (First Step guides you to second step and so on)

iii) Allows what-if scenarios (Manage Enterprise Structures allows multiple combinations but loads just one)

The screenshots below show the steps of creating an enterprise structure (and then viewing the Technical Summary Report). The configuration would not get loaded but would be used for comparative analysis. This is an example of a what-if scenario using the Enterprise Structure Configurator (ESC).

Step 1: Manage Enterprise


Step 2: Manage Divisions

Step 3: Manage Legal Entities

Step 4: Create Business Units

Step 5: Manage Business Units

Step 6: Manage Reference Data Sets

Step 7: Manage Business Unit Set Assignment


Step 8: Manage Location Reference Set

Interview Results

Management Reporting Structure

Technical Summary Report


Report Screenshots


We can configure multiple such enterprise structures and compare them before loading one, giving us what-if scenarios. This feature is not supported in the older legacy systems (EBS / PeopleSoft).

3. Technical Reasons

The broad responsibilities of a technical resource on any ERP implementation are:

1. Inbound integration (HDL from Release 10 makes life simpler, with data loads at the click of a button)
2. Reporting (via BIP / OTBI)
3. Outbound integration (HCM Extracts)

All the above three are very easy to implement in Fusion HCM.
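For a flavour of the reporting side, a BIP data model is usually just a SQL query. A minimal sketch against commonly used Fusion HCM tables (verify the table and column names against your release) could be:

-- Sample BIP data-model query: people and their current global names
SELECT papf.person_number,
       ppnf.full_name
  FROM per_all_people_f   papf,
       per_person_names_f ppnf
 WHERE ppnf.person_id = papf.person_id
   AND ppnf.name_type = 'GLOBAL'
   AND TRUNC(SYSDATE) BETWEEN papf.effective_start_date AND papf.effective_end_date
   AND TRUNC(SYSDATE) BETWEEN ppnf.effective_start_date AND ppnf.effective_end_date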

Ashish Harbhajanka is a Solution Architect with 13 years of experience in the HCM domain. He has published around 150+ blog posts and authored half a dozen books on Oracle HCM Cloud.

https://apps2fusion.com/at/ashish-harbhajanka


Oracle Self Service Integration Cloud Service
Harish Panduranga Rao, AIOUG Evangelist

Introduction

In the era of the Internet, IT is going through a major transformation. With the boom of Artificial Intelligence and Machine Learning, every day we see innovative ideas that help businesses and people alike. Devices can communicate with each other, and digital assistants are available to perform tasks for humans. Data is accessed many times through different mediums. The cloud era has begun: every component is available on demand and ready for use in minutes, and the cloud marketplace has enabled many startups to design their own cloud apps and make them available to customers. The days are gone when software was written in only one programming language or available only on certain platforms. It becomes a real challenge for IT teams and the business to manage the huge volume of data, requests and so on. A service provider in any industry is always available over the Internet through websites and social media, and usually owns a mobile application as well. To work across these different channels, integration plays a key role, and the impact of digitization brings the need for self service.

The Need for Self Service Integration

As digitization happens across the world, the need for integration has come into the picture. Industries need to adopt different strategies to run their business.

Data volume, variety and velocity have changed tremendously. Traditional integration techniques no longer perform well and businesses are not able to cope with the change. With IoT, cloud and automation, the need for businesses to integrate solutions has become vital. This is where self service integration comes into the picture: it helps business applications talk to each other through different mechanisms and easily automates tasks like sending a notification, emailing a report or triggering an action based on need.

Let us take the example of an online ticketing system for booking movie tickets. A customer uses the website or a mobile app to log in to the XYZ system, views the portal, chooses a favourite movie or play, checks ticket availability, selects the timing and number of seats, makes the payment and checks out. Once the payment is received, an immediate notification has to be sent to the customer. If the booking is successful, the movie tickets are sent via email and SMS, and a confirmation of the customer's ticket and payment details is also sent to the booking company. If these details had to be entered manually and sent by typing out an SMS or writing an email, it would be a difficult task. This is where the need for self service comes into the picture: the provider can create a self service integration with a trigger and an action; if the payment is successful, a notification is sent to both the customer and the company. The email and SMS are sent through the trigger and action, which makes the whole process quite simple.

Oracle Self Service Integration Cloud Service

This article talks about Oracle’s Cloud Solution – Oracle Self Service Integration Cloud Service which helps IT and business evolve.

Oracle Self Service Integration (SSI) Cloud Service helps customers and businesses create and integrate their cloud apps in an efficient way. Oracle SSI provides a set of recipes that are ready for use, and a new recipe can also be created for your own business use case. The service also gives you the option to create your own cloud app instances and definitions. Any cloud application may expose its own REST API; Oracle SSI uses the REST API and exposes only the data that business users require. Security is also put in place by authorizing only the users relevant to that particular business model. Oracle SSI is very simple and easy to use.

Oracle SSI provides inbuilt cloud apps like Gmail, Slack, Facebook, Eventbrite, Google Drive, Calendar and many other useful apps for integration.

Key Concepts

courtesy: https://docs.oracle.com/en/cloud/paas/self-service-integration-cloud/ossug/what-is-recipe-action.html

1. Recipes

An Oracle SSI recipe lets you specify a trigger and its corresponding action. Oracle SSI provides a lot of public, ready-to-use recipes that can be used for common business use cases. The recipe is served hot, i.e. ready to use, and needs no development; it automates the action requested by the user.

Oracle SSI also provides an option to create your own recipes. Recipes offer many options such as if-then-else conditions, for loops, multiple actions, and the use of functions.

2. Trigger

A trigger is an event which is configured in Oracle SSI. A trigger is nothing but a set of conditions or an activity. Eg., Upload a file to Dropbox

3. Action

An action is the actual execution of a task based on the initiation performed by the trigger. When a trigger condition is met, the action is executed. Every recipe contains a trigger and its corresponding action.

E.g., when a file gets uploaded, send a notification email.

4. Cloud App Definitions

Oracle SSI provides several cloud applications which can be used in recipes. The service also gives you the option to create your own app definitions and instances. When you create your own app definition, the following are required:

1. Cloud app Name

2. App URL - REST API

3. Template File

You also have the option to specify Basic Template or API File or use a post man collection file.

5. Cloud App Instances

A cloud app instance is used to define the URL, authentication, authorization and API keys for a particular cloud application. Authentication options include OAuth 2.0, Basic and No Auth. You also have the option to use the same cloud app account or create a separate account to run the app and perform actions.

Now let us see how to provision an Oracle Self Service Integration Cloud Service instance and create our own recipe.


Scenario:

We are going to create a small, quick recipe using Google Drive and Gmail. The use case is simple and straightforward: when a user creates a folder in Google Drive, the administrator receives an email notification. Note: we will only cover recipes in this article.

Steps

1. Log in to your Oracle Cloud account using your credentials.

2. Go to Services and click on Oracle Self-Service Integration. Use Create Instance to provision an Oracle SSI instance.

3. Enter the instance details and click the Create button to create the SSI instance.


4. Open the Oracle SSI Service Console

5. Select "Create a Recipe" to create your own recipe. Enter the recipe name, specify the trigger and select the application.


6. As discussed earlier, there are a lot of applications integrated with Oracle SSI Cloud; select one depending on your use case. In this scenario we are going to select Google Drive.

7. Configure Trigger by authorizing your Gmail Account to use Google Drive.


8. Specify the Event.

9. Configure your action. The action is to send an email, so authorize access to the Gmail account.


10. View your Recipes once the Trigger and Action are completed.

11. Activate the Recipe by turning it ON.


Testing

1. Go to Google Drive and create a folder “TestSSI”.

2. Run the Recipe using the “Run Now” option and the job is started.

3. Verify the Action by checking the Gmail inbox.


We have seen how to create our own recipe and have completed our small use case. Oracle SSI also offers some pre-built recipes for public usage. Click Public Recipes to use the existing templates.

To create your own App Definition navigate to Cloud Apps and click on “Create a Cloud App Definition”


To summarize, Oracle SSI is a very simple integration service that can be used for our own business use cases. The public recipes and cloud app definitions can be really helpful, and through its authorization and authentication features, Oracle SSI is definitely the tool for your enterprise.

I hope this article has given you an overview of how to spin up your own instance in Oracle Cloud and explore its various features. We will focus on creating our own cloud app instances and definitions in the future.

Harish Panduranga Rao is an Oracle DBA with 10+ years of experience. He is a technology enthusiast and a speaker who is keen to learn and explore. He has experience in various domains like Banking, Health Care and Logistics. He is associated with the AIOUG Chennai Chapter and has presented on various topics such as In-Memory, Big Data, RAC, Oracle 12c Multitenant, Oracle Cloud, Exadata, DevOps and Machine Learning.

Twitter: @pharishdba http://dbachanakya.blogspot.com/ https://www.linkedin.com/in/harish-panduranga-rao-5a33196/


Oracle RAC Server Pools
Bhanu Chander Goshika, Technical Evangelist

The traditional way of managing a database on clusterware is called an admin-managed database. Here, every instance of the database is tightly coupled with a specific node in the cluster. When we create an admin-managed database, we explicitly define which instance should start on which node. For example, in a 3-node RAC we have configured instance TESTDB1 to start on Node 1 and instance TESTDB2 to start on Node 2, and there is no instance on Node 3. In this setup every instance is pinned to a specific node: TESTDB1 is pinned to Node 1 and TESTDB2 to Node 2. If Node 2 fails for some reason, I cannot get instance TESTDB2 running on Node 3 dynamically, even though that node is idle.

Hard coupling of Instances with Nodes

srvctl add instance -d TESTDB -i TESTDB1 -n node1

srvctl add instance -d TESTDB -i TESTDB2 -n node2

The ideology of the grid architecture is to make resources available wherever and whenever necessary, and these resources should be allowed to move freely within the cluster with minimal or no DBA intervention. With an admin-managed database setup we are not able to achieve this because of the hard coupling of instances with nodes.

Scaling up nodes based on business needs is not an easy task and is time consuming. If we want to add a new node to the existing setup, the DBA has to explicitly create the undo tablespace and redo log thread for the instance on the new node and register the instance. If the database has services with a Preferred and Available instance configuration, then these services need to be modified manually to make use of the newly added nodes.

In any organization the clusterware setup is sized for peak load, but this peak is seen only weekly, monthly or quarterly depending on the type of business. For example, an application may run month-end jobs at the end of every month, which creates load on the servers, and the clusterware setup has been built with 4 nodes to meet this load. On normal days we would only need two nodes, so we are wasting resources, in terms of per-core licensing cost, for the additional two servers. We could have saved this cost had we been able to configure capacity on demand: production could run with two nodes and a lower environment with two nodes on normal days, and at month end the lower environment's nodes could be reassigned to production to meet the processing load of the month-end jobs.

The above problems have been addressed with the introduction of server pools in 11gR2, by moving from admin-managed databases to policy-managed databases. In a policy-managed database any instance can come up on any node within the server pool: from the previous example, instance TESTDB1 can be started on Node 3 and TESTDB2 can be started on Node 1. When a node is added to or removed from the server pool, database instances are started and stopped dynamically on that node.
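For contrast with the hard-coupled commands shown earlier, a policy-managed database is registered against a server pool rather than against named nodes. A rough sketch (the pool name, sizes and paths are illustrative):

srvctl add srvpool -serverpool prod_pool -min 2 -max 3 -importance 100
srvctl add database -db TESTDB -oraclehome /u01/app/oracle/product/12.2.0.1/db_1 -serverpool prod_pool
srvctl status serverpool -serverpool prod_pool

Oracle Clusterware then decides which nodes in prod_pool run the TESTDB instances.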

With all the discussion so far, we are ready for the actual topic. Without waiting any more, let's tighten our belts and start our journey into server pools.

Using server pools, we can logically divide a larger cluster into smaller pools, and each pool can have its own attributes. By default, two pools are created after clusterware installation:

- Generic Pool

- Free Pool


Important Attributes of Pools

- IMPORTANCE: The value ranges from 0 to 1000; the higher the value, the higher the importance. This attribute is used to determine how to reconfigure the server pools when a node joins or leaves the cluster. A higher-importance pool attracts servers from lower-importance pools to meet its minimum required servers.

- MAX_SIZE: The maximum number of servers a server pool can contain. A value of -1 indicates unlimited.

- MIN_SIZE: The minimum size of a server pool. If the number of servers in the pool falls below this value, clusterware automatically moves servers from lower-priority pools into this pool to maintain the minimum.

- ACL (owner:user:rwx,pgrp:group:rwx,other::r--): Defines the owner of the server pool and the privileges granted to operating system users and groups.

- Generic Pool: Servers in this pool host admin-managed databases. This is an internally managed pool, and modification of its attributes is not possible. When clusterware starts an admin-managed database, a child pool is added under the Generic pool. For example, I have a 3-node RAC with an admin-managed database CDBRAC with two instances, one on node 1 and the other on node 2. A pool with the database name is created and added to the Generic pool as a child pool.

[root@ora12cr2n1 bin]# ./crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=ora12cr2n1 ora12cr2n2 ora12cr2n3
NAME=Generic
ACTIVE_SERVERS=

Let us register the database and instances with clusterware

[oracle@ora12cr2n1 ~]$ srvctl add database -db cdbrac -oraclehome /u01/app/oracle/product/12.2.0.1/db_1 -spfile +DATA/CDBRAC/PARAMETERFILE/spfile.284.973160435 -pwfile +DATA/CDBRAC/PASSWORD/pwdcdbrac.258.973159573
[oracle@ora12cr2n1 ~]$ srvctl add instance -d CDBRAC -i CDBRAC1 -n ora12cr2n1
[oracle@ora12cr2n1 ~]$ srvctl add instance -d CDBRAC -i CDBRAC2 -n ora12cr2n2
[oracle@ora12cr2n1 ~]$ srvctl config database -d cdbrac
Database unique name: cdbrac
Database name:
Oracle home: /u01/app/oracle/product/12.2.0.1/db_1
Oracle user: oracle
Spfile: +DATA/CDBRAC/PARAMETERFILE/spfile.284.973160435
Password file: +DATA/CDBRAC/PASSWORD/pwdcdbrac.258.973159573
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups:
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances: CDBRAC1,CDBRAC2
Configured nodes: ora12cr2n1,ora12cr2n2
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed


From the output below it is evident that a new child pool, named after the database, was created under the parent Generic pool.

[root@ora12cr2n1 bin]# ./crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=ora12cr2n3
NAME=Generic
ACTIVE_SERVERS=ora12cr2n1 ora12cr2n2
NAME=ora.cdbrac
ACTIVE_SERVERS=ora12cr2n1 ora12cr2n2

FREE POOL: Servers that are not assigned to any pool are part of this pool. When clusterware starts, all servers are assigned to this pool, and from here they are assigned to the Generic pool or to user-defined pools as databases start.

Let us try to unregister the database from clusterware.

[oracle@ora12cr2n1 ~]$ srvctl remove database -d cdbrac
Remove the database cdbrac? (y/[n]) y
[oracle@ora12cr2n1 ~]$

Now, we can see that all servers are assigned to FREE POOL.

[root@ora12cr2n1 bin]# ./crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=ora12cr2n1 ora12cr2n2 ora12cr2n3
NAME=Generic
ACTIVE_SERVERS=

Let us create a server pool called "spool1" with min=2, max=2 and importance=100.

[oracle@ora12cr2n1 ~]$ srvctl add srvpool -serverpool spool1 -min 2 -max 2 -importance 100
[oracle@ora12cr2n1 ~]$ srvctl status srvpool
Server pool name: Free
Active servers count: 1
Server pool name: Generic
Active servers count: 0
Server pool name: spool1
Active servers count: 2
[oracle@ora12cr2n1 ~]$

Now we can see three pools: FREE, GENERIC and the user-created pool (spool1). From the output below we can see that two servers are assigned to the user-created pool and the leftover server is in the Free pool.

[root@ora12cr2n1 bin]# ./crsctl status srvpool
NAME=Free
ACTIVE_SERVERS=ora12cr2n3
NAME=Generic
ACTIVE_SERVERS=
NAME=ora.spool1
ACTIVE_SERVERS=ora12cr2n1 ora12cr2n2

We can see the complete configuration details of the server pool using the following command.

[root@ora12cr2n1 bin]# ./crsctl status srvpool ora.spool1 -f
NAME=ora.spool1
IMPORTANCE=100
MIN_SIZE=2
MAX_SIZE=2
SERVER_NAMES=
PARENT_POOLS=
EXCLUSIVE_POOLS=
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
SERVER_CATEGORY=ora.hub.category
ACTIVE_SERVERS=ora12cr2n1 ora12cr2n2

Let’s try to create another server pool "spool2" with -min 2 -max 2 -importance 200.

[oracle@ora12cr2n1 ~]$ srvctl add srvpool -serverpool spool2 -min 2 -max 2 -importance 200

We have only one server in the Free pool, yet for spool2 we have defined a minimum of 2 servers. Where does the second server come from? The IMPORTANCE attribute plays its role here: the importance of spool2 is higher than that of spool1, so a server is pulled from spool1 and given to spool2.

[root@ora12cr2n1 bin]# ./crsctl status srvpool ora.spool2 -f
NAME=ora.spool2
IMPORTANCE=200
MIN_SIZE=2
MAX_SIZE=2
SERVER_NAMES=
PARENT_POOLS=
EXCLUSIVE_POOLS=
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
SERVER_CATEGORY=ora.hub.category
ACTIVE_SERVERS=ora12cr2n1 ora12cr2n3

Here we can see that spool2 now has two servers.

[root@ora12cr2n1 bin]# ./crsctl status srvpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=
NAME=ora.spool1
ACTIVE_SERVERS=ora12cr2n2
NAME=ora.spool2
ACTIVE_SERVERS=ora12cr2n1 ora12cr2n3

Let's increase the importance of "spool1" and see what happens. Nothing strange: a server is pulled from spool2 and given back to spool1.

[oracle@ora12cr2n1 ~]$ srvctl modify srvpool -serverpool spool1 -importance 300
[root@ora12cr2n1 bin]# ./crsctl status srvpool ora.spool1 -f
NAME=ora.spool1
IMPORTANCE=300
MIN_SIZE=2
MAX_SIZE=2
SERVER_NAMES=
PARENT_POOLS=
EXCLUSIVE_POOLS=
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
SERVER_CATEGORY=ora.hub.category
ACTIVE_SERVERS=ora12cr2n1 ora12cr2n2
[root@ora12cr2n1 bin]# ./crsctl status srvpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=
NAME=ora.spool1
ACTIVE_SERVERS=ora12cr2n1 ora12cr2n2
NAME=ora.spool2
ACTIVE_SERVERS=ora12cr2n3

Servers are distributed among the pools in order of importance, first to meet each pool's minimum number of servers and then, again in order of importance, to fill pools up to their maximum.
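Besides this automatic movement driven by importance, a server can also be moved between pools on demand. Below is a hedged sketch, assuming the 12c srvctl relocate server syntax; the server and pool names (node3, prod_pool) are hypothetical, and clusterware still enforces each pool's MIN_SIZE and MAX_SIZE settings:

# hypothetical example: hand a named server to a named pool on demand
[oracle@ora12cr2n1 ~]$ srvctl relocate server -servers "node3" -serverpool prod_pool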

Now let's create a policy-managed database. In this example I am converting my previous admin-managed database CDBRAC into a policy-managed database.

[oracle@ora12cr2n1 ~]$ srvctl add database -db cdbrac -oraclehome /u01/app/oracle/product/12.2.0.1/db_1 -spfile +DATA/CDBRAC/PARAMETERFILE/spfile.284.973160435 -pwfile +DATA/CDBRAC/PASSWORD/pwdcdbrac.258.973159573 -serverpool spool1
[oracle@ora12cr2n1 ~]$ srvctl config database -d cdbrac
Database unique name: cdbrac
Database name:
Oracle home: /u01/app/oracle/product/12.2.0.1/db_1
Oracle user: oracle
Spfile: +DATA/CDBRAC/PARAMETERFILE/spfile.284.973160435
Password file: +DATA/CDBRAC/PASSWORD/pwdcdbrac.258.973159573
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: spool1
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances:
Configured nodes:
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is policy managed

We are not going to add instances here as we do for admin-managed databases, where we pin each instance to a particular node. Instances come up on the nodes of the server pool; in this setup any instance can come up on any node in the pool. The instance name for a policy-managed database is <dbname>_<instance_num>, e.g. CDBRAC_2. In the output below we can see that instance cdbrac_2 started on node 1 and cdbrac_1 started on node 2.

[oracle@ora12cr2n1 ~]$ ps -ef|grep smon
oracle 4482 1 0 06:07 ? 00:00:00 asm_smon_+ASM1
oracle 8190 1 0 07:45 ? 00:00:00 ora_smon_cdbrac_2
[oracle@ora12cr2n1 ~]$ srvctl status database -d cdbrac
Instance cdbrac_1 is running on node ora12cr2n2
Instance cdbrac_2 is running on node ora12cr2n1
[oracle@ora12cr2n1 ~]$

Server pool spool1 has both its minimum and maximum set to 2 servers, so two instances have started. If we bring down node 2, the server pool falls below its minimum required servers, so it has to grab a server from either the Free pool or another lower-priority pool, and the instance should automatically start on the node newly added to this server pool.

From the output below I can see one server in the Free pool and two in spool1.

[root@ora12cr2n1 bin]# ./crsctl status srvpool
NAME=Free
ACTIVE_SERVERS=ora12cr2n3
NAME=Generic
ACTIVE_SERVERS=
NAME=ora.spool1
ACTIVE_SERVERS=ora12cr2n1 ora12cr2n2

We have three nodes active now.

[root@ora12cr2n1 bin]# ./olsnodes
ora12cr2n1
ora12cr2n2
ora12cr2n3

After stopping node 2, we see that node 3 has become part of server pool spool1 and there are no free servers left in the Free pool.

[root@ora12cr2n1 bin]# ./crsctl status srvpool
NAME=Free
ACTIVE_SERVERS=
NAME=Generic
ACTIVE_SERVERS=
NAME=ora.spool1
ACTIVE_SERVERS=ora12cr2n1 ora12cr2n3

Let us see whether the database has started automatically on Node 3 or not.

[oracle@ora12cr2n1 ~]$ srvctl status database -d cdbrac
Instance cdbrac_2 is running on node ora12cr2n1
Instance cdbrac_1 is running on node ora12cr2n3

Here we go: the database instance has been started on the new node. As DBAs we did not add any new redo log threads or undo tablespaces when the node was added or removed; clusterware manages everything on its own.
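To see this elasticity from the other direction, here is a hedged sketch of scaling the same pool out and back in, assuming all three nodes are online again; the expectation is that clusterware starts an extra instance when the pool grows and stops it when the pool shrinks:

# grow spool1 so a third server (and therefore a third instance) can join it
[oracle@ora12cr2n1 ~]$ srvctl modify srvpool -serverpool spool1 -min 3 -max 3
[oracle@ora12cr2n1 ~]$ srvctl status database -d cdbrac

# shrink it back; the extra instance is stopped and the server is released from the pool
[oracle@ora12cr2n1 ~]$ srvctl modify srvpool -serverpool spool1 -min 2 -max 2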

Bhanu Chander Goshika is an Oracle Database Administrator with over 10 years of experience in Banking and other domains. He has sound knowledge of Oracle RAC, Performance Tuning, PL/SQL, Oracle Cloud, Shell Scripting and MySQL. He is passionate about learning new things and experimenting with technology, and has received many awards for his work throughout his career.

https://www.linkedin.com/in/bhanu-chander-83530225/


Effective Tools to Troubleshoot Database Performance Balavignesh Arumugam , Oracle India Pvt. Ltd

Overview

"Performance Hub" & "SQL Details" are new tools from Oracle Database 12c. The Performance Hub is a new feature in Oracle Enterprise Manager Database Express (EM Express) from 12c that provides a new consolidated view of all performance data for a given time range. DBAs can generate and check the active real time performance hub report that will have summary information on overall IO, CPU and Memory usage by the DB and would give clear picture using HTML interface. The Performance Hub Report is very useful to diagnose the performance problems effectively using the holistic information it contains. SQL Details, on the other hand, displays information for a given sql_id aggregated across all executions in a given time range. The SQLs may have been executed multiple times, and may have multiple plans. Clicking the SQL_ID under Activity tab of Performance hub report displays the SQL details page for that particular SQL_ID.

Performance Hub

- The Performance Hub Report can be generated both in Single-Instance Database and RAC.

- The report can be generated in real-time or historical mode depending upon the range of time interval.

- The Performance Hub report can also be generated manually as a database active report, without using EM Express, by running the perfhubrpt.sql script or the DBMS_PERF package shown in the examples below.

Features

1. Performance Hub Report provides a single unified view of DB performance.
2. New interactive report for analysing AWR data.
3. It is RAC, Exadata and Multitenant aware.
4. The Performance Hub report can be used to view both historical and real-time data.
5. Supports historical view of SQL Monitoring reports.

How to Generate Performance Reports?

The DBMS_PERF package from 12c provides an interface for generating database performance active reports powered by the Enterprise Manager UI.

REPORT_PERFHUB function – Performance Hub Report
REPORT_SQL function – SQL Details Report

Example 1 (Performance Hub using Script):

SQL> conn / as sysdba
SQL> @?/rdbms/admin/perfhubrpt.sql
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~  PERFHUB ACTIVE REPORT                                               ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~  This script will generate PerfHub active report according to        ~
~  users input. The script will prompt users for the                   ~
~  following information:                                              ~
~    (1) report level: basic, typical or all                           ~
~    (2) dbid                                                          ~
~    (3) instance id                                                   ~
~    (4) selected time range                                           ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Specify the report level
~~~~~~~~~~~~~~~~~~~~~~~~
* Please enter basic, typical or all
* Report level "basic"   - include tab contents but no further details
* Report level "typical" - include tab contents with details for top SQL statements
* Report level "all"     - include tab contents with details for all SQL statements
Please enter report level [typical]:            <<< Select typical or all
Using typical for report level

Available Databases and Instances.              <<< Select the Database
The database with * is current database
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      DB Id    Inst Num DB Name      Instance
------------ -------- ------------ ------------
* 1975797631        1 X12102       x12102

Specify the database ID
~~~~~~~~~~~~~~~~~~~~~~~
Please enter database ID [1975797631]:
Using 1975797631 for database ID

Specify the Instance Number
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Single Instance Database. Please press Enter

Default Start Time      Default End Time        AWR Start Time
-------------------     -------------------     -------------------
08/16/2018 23:35:49     08/16/2018 23:40:49     08/08/2018 16:00:57

Specify selected time range
~~~~~~~~~~~~~~~~~~~~~~~~~~~
* If the selected time range is in the past hour, report data will be
*   retrieved from V$ views.
* If the selected time range is over 1 hour ago, report data will be retrieved from AWR.
* If the dbid selected is not for the current database, only AWR data is available.
*
* Valid input formats:
*   To specify absolute time:
*     [mm/dd[/yyyy]] hh24:mi[:ss]
*     Examples: 02/23/14 14:30:15
*               02/23 14:30:15
*               14:30:15
*               14:30
*   To specify relative time: (start with '-' sign)
*     -[hh24:]mi
*     Examples: -1:15 (SYSDATE - 1 Hr 15 Mins)
*               -25   (SYSDATE - 25 Mins)
Please enter start time [08/16/2018 23:35:49]: -1:00   <<< Specify start time as -1:00 (sysdate - 1 hr)
Please enter end time [08/16/2018 23:40:49]:            <<< Leave it null to generate the real-time report for the last 1 hour
Generating report for 08/16/2018 22:40:49 - 08/16/2018 23:40:49

Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
Please enter report name [perfhub_rt_08162340.html]:
Generating report perfhub_rt_08162340.html ...
Report written to perfhub_rt_08162340.html


Example 2 (Performance Hub using DBMS_PERF):

set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool perfhub_realtime_active.html
select dbms_perf.report_perfhub(is_realtime=>1,type=>'active') from dual;
spool off

Real-time mode

In real-time mode, performance data is retrieved from in-memory views. The default selection is the past 5 minutes.

Historical mode

In historical mode, data is retrieved from the Automatic Workload Repository (AWR). The user can select any time period, provided there is sufficient data in AWR; the default selected time range is one day.
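For example, here is a minimal sketch of generating a historical Performance Hub active report for a specific window; it assumes the selected_start_time/selected_end_time parameters of DBMS_PERF.REPORT_PERFHUB are used to pick the window and that AWR data exists for that range:

set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool perfhub_history_active.html
-- is_realtime=>0 switches the report to AWR-based (historical) data
select dbms_perf.report_perfhub(
         is_realtime         => 0,
         selected_start_time => to_date('16-AUG-2018 09:00','DD-MON-YYYY HH24:MI'),
         selected_end_time   => to_date('16-AUG-2018 10:00','DD-MON-YYYY HH24:MI'),
         type                => 'active')
  from dual;
spool off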

Details

The Performance Hub organizes performance data by dividing it into different tabs. Each tab addresses a specific aspect of database performance. Details for each tab can be found below:

Summary - this tab is available in both realtime and historical mode. In realtime mode, this tab shows metrics data that gives an overview of system performance in terms of Host Resource Consumption (CPU, I/O and Memory), and Average Active Sessions. In historical mode, the tab displays system performance in terms of resource consumption, average active sessions, and load profile information.

Activity - this tab displays ASH Analytics, and is available in both realtime and historical mode.

Workload - this tab is available in both realtime and historical mode, and shows metric information about the workload profile, such as call rates, logon rate and the number of sessions. In addition, the tab also displays the Top SQL for the selected time range. In realtime mode, the tab displays Top SQL only by DB time, but in historical mode, the user can also display Top SQL by other metrics, such as CPU time or Executions. Clicking a SQL_ID displays the SQL Details page with more information about that SQL statement.

RAC - this tab is only available in RAC. It displays RAC-specific metrics such as the number of global cache blocks received, and the average block latency.

Monitored SQL - this tab displays Monitored Executions of SQL, PL/SQL and Database Operations, and is available in both realtime and historical mode.

ADDM - this tab is available in both realtime and historical mode. It displays ADDM and Automatic Real Time ADDM reports.

Current ADDM Findings - this tab is available only in realtime mode, and displays a realtime analysis of system performance for the past 5 minutes.

Database Time - this tab is available in historical mode only, and enables you to view wait events by category for various metrics, and time statistics for various metrics, for the selected time period.

Resources - this tab is available in historical mode only, and enables you to view operating system resource usage statistics, I/O resource usage statistics, and memory usage statistics for the selected time period.

System Statistics - this tab is available in historical mode only, and enables you to view database statistics by value, per transaction, or per second for the selected time period.

SQL Details

SQL Details, on the other hand, displays information for a given sql_id aggregated across all executions in a given time range. The SQLs may have been executed multiple times, and may have multiple plans.

Example 3 (SQL Details):

set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool sql_details.html
select dbms_perf.report_sql(sql_id=>'9vkyyg1xj6fgc') from dual;
spool off


The SQL Details performance report contains the following tabs:

Summary - This tab contains an overview of the SQL statement with key attributes like the SQL text, user name, sessions executing it, and related information. It also contains a Plans tab which shows statistics and activity for each distinct plan for this SQL statement found in memory and in the AWR.

Activity - This tab shows activity broken down by wait classes for this SQL statement. The data used for this chart is fetched from Active Session History (ASH).

Execution Statistics - This tab shows statistics and activity for each distinct plan for this statement along with a graphical and tabular representation of the plan.

Monitored SQL - All executions of this SQL statement that were monitored by Real-time SQL Monitoring are listed in this tab.

Plan Control - This tab shows information about SQL Profiles and SQL Plan Baselines if they exist for this SQL statement.

Historical Statistics - This tab is available only in Historical mode. It contains statistics, such as number of executions, number of I/Os, rows processed, and other information produced over time for different execution plans. This information is retrieved from AWR.
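To reach the Historical Statistics tab from a script rather than from EM Express, a hedged sketch of running the SQL Details report in historical mode is shown below; it assumes DBMS_PERF.REPORT_SQL accepts the same is_realtime and selected time-range parameters as REPORT_PERFHUB, and reuses the sql_id from Example 3:

set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool sql_details_history.html
-- is_realtime=>0 pulls the SQL statistics from AWR for the last day
select dbms_perf.report_sql(
         sql_id              => '9vkyyg1xj6fgc',
         is_realtime         => 0,
         selected_start_time => sysdate - 1,
         selected_end_time   => sysdate,
         type                => 'active')
  from dual;
spool off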

REMEMBER: Use of Performance Hub and SQL Details requires the Oracle Diagnostics and Tuning Pack licenses.

Refer https://docs.oracle.com/cd/E55822_01/DBLIC/options.htm#DBLIC170

Conclusion:

We have discussed effective tools, powered by the EM UI from 12c, for diagnosing database performance:

1. Performance Hub
2. SQL Details

Performance Hub is RAC, Exadata and Multitenant aware and can be used to view both historical and real-time data. The SQL Details report is an interactive, Enterprise Manager-based report that displays details about a SQL statement, including the SQL text, top activity by various dimensions, CPU and wait activity over time, key SQL statistics, and execution plans.

Balavignesh Arumugam has been with Oracle for more than 12 years, working on database performance. He is regarded as an Advanced Resolution Engineer within the Product Support for Performance team in the India Support Center. He has received many excellence awards for customer and technical excellence during his tenure, and has presented many papers in the performance area (mutexes, plan stability, adaptive plans, etc.) at previous AIOUG Sangam events.

AIOUG Events - Jan 2019

19 Jan 2019 - Bengaluru - Exadata Day

19 Jan 2019 - Chennai - TechDay

09 Feb 2019 - Hyderabad

Upcoming Events...

10 Feb 2019 - Mumbai

02 Feb 2019 - Pune


24 Feb 2019 - Vizag

16 Mar 2019 - Hyderabad

23 Feb 2019 - Gujarat

https://www.facebook.com/aioug/

@AIOUG · @aioug_chennai · @aioug_pune · @aioug_hyderabad

https://www.linkedin.com/company/all-india-oracle-users-group-aioug/

While you're anxiously waiting for our next magazine, connect with us on:

@aioug_gujarat · @aioug_bengaluru · @aioug_mumbai · @aioug_NorthIn

Submit your articles at [email protected]

aioug.