Virtualizing Oracle 11g/R2 RAC Database on Oracle VM: Methods and Tips

Oracle VM provides a solution for server partitioning and consolidation that improves resource utilization and achieves greater flexibility and high availability. Combined with RAC technology, it provides a grid-ready architecture for consolidating the data center infrastructure. This article covers the steps to configure an infrastructure based on Oracle VM 2.2 and Oracle 11g R2 RAC, focusing on the tips and tricks of network and shared storage configuration on the multiple layers of the virtual infrastructure that support an 11g R2 RAC database.

Introduction

Oracle Virtual Machine (Oracle VM) is an Oracle software-based server virtualization solution. Oracle VM provides a fully certified virtualization environment to run the entire application stack, including Oracle Database, middleware, and applications. With Oracle VM, we can achieve the following architectural benefits from its virtualization infrastructure:
- Server partitioning allows consolidating multiple applications onto one or a few physical servers. A physical server can be partitioned into several virtual machines (VMs), each with its own OS and applications. These VMs run independently without interfering with each other, and we can assign a number of virtual CPUs and an amount of virtual memory to each VM.
- Reduction in software licensing cost: customers pay for software licenses according to the number of virtual CPUs used by the VM instead of all the CPUs of the physical server.
- Simplification of application provisioning through VM templates that contain a pre-installed and pre-configured OS and applications. By using VM templates, deployments can skip the OS and application installation and configuration process.
- High availability through two features: live migration and failover. A VM can be failed over to another physical server in case the physical server that runs the VM fails. Live migration allows a manual migration of a VM from one physical server to another while the guest VM stays online.

By combining Oracle VM with RAC technology, we can provide a grid-ready architecture to consolidate the data center infrastructure. This grid-ready architecture can also take advantage of Oracle sub-capacity licensing, which allows customers to license RAC based on the number of virtual CPUs of the guest VM running the RAC node instead of the CPUs in the physical server. For a development or test environment, we can build a virtual cluster with multiple RAC VMs running on a single physical machine (Figure 1). For a production environment, we can run multiple RAC databases in an Oracle grid structure where each RAC VM runs on a separate physical machine (Figure 2). Any issue with an instance, such as a node eviction, will not impact other database instances running on the same physical server because they run in their own OS and virtual machine.

Figure 1: Development/test RAC on VMs

Figure 2: Oracle grid based on Oracle VM

Oracle VM Architecture Overview

The Oracle VM 2.2 architecture consists of several components: the Oracle VM Server (OVM Server), the guest VMs, and the VM Manager.
- The OVM Server is based on the open source Xen hypervisor, which provides the virtualization environment to run multiple domains (OS plus applications) on a physical server. As shown in Figure 3, the OVM Server is installed directly on bare metal hardware to support multiple domains. The special domain dom0 provides administrative functions such as networking and storage.
- All other domains on a VM server, called guest VMs, domUs, or simply VMs, run applications. All the networking and storage I/O of the guest VMs goes through dom0.
- A VM server pool groups a number of physical servers to provide resources. All the VM servers in the pool access common shared storage, and all the VM images of the pool are stored on that shared storage. A VM can be migrated or failed over to another VM server in the same pool.
- The VM Manager provides a graphical user interface to manage the VM infrastructure. With the VM Manager, you can manage VM server pools and VMs as well as their resources.

Figure 3: Oracle VM Architecture

Virtualizing Oracle RAC Database with Oracle VM

In Oracle Real Application Clusters (RAC), multiple Oracle database instances run on multiple hosts and access a single database. These hosts are connected by a high speed interconnect network. Oracle Clusterware provides the base infrastructure for the hosts to communicate with each other, and Oracle Cache Fusion technology handles database node synchronization to allow transactions to execute simultaneously against the single database on the shared storage. When we deploy an Oracle RAC database on the Oracle VM virtual infrastructure, each database host (node) is a guest VM. Figure 4 below shows the architecture of a two-node 11g R2 RAC database running on VMs.

Figure 4: 11g R2 RAC architecture on Oracle VM

VMs rely on the underlying virtual infrastructure for resources such as CPUs, memory, networks, and shared storage to run the Oracle RAC database. Figure 5 shows an implementation example of such a virtualization infrastructure, which includes the following components:
- Shared storage attached to the physical servers and made accessible to the VMs for VM images as well as Oracle database files.
- OVM Servers installed and running on bare metal server hardware.
- Oracle VM Manager, which provides the GUI for managing the virtual environment.
- Oracle VMs that will be configured as hosts for the Oracle RAC nodes.

Figure 5: Components and architecture of virtualizing Oracle RAC on Oracle VM

The rest of the article examines the configuration of such a virtual infrastructure and how to deploy an Oracle RAC database on it.

Configuring Oracle VM Infrastructure

Oracle VM Server Installation

An OVM Server can be installed manually or provisioned automatically by the Oracle Enterprise Manager Provisioning Pack using industry standard PXE boot technology. For this paper, we manually installed the OVM Servers on bare metal servers. The steps include:
- Prepare the local disk and enable virtualization in the BIOS.
- Install Oracle VM Server 2.2.
- Reduce the dom0 memory by editing the kernel line in /boot/grub/menu.lst: kernel /xen-64bit.gz dom0_mem=1024M
- Ensure the VM agent is working with the command: service ovs-agent status
A sketch of the last two settings follows.
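The dom0 memory cap and the agent check look like this in practice. This is a minimal sketch: the title and root lines in menu.lst are illustrative, and only the dom0_mem parameter on the kernel line is the actual change.

# /boot/grub/menu.lst (excerpt, illustrative entry): append dom0_mem to the Xen kernel line
title Oracle VM Server 2.2
    root (hd0,0)
    kernel /xen-64bit.gz dom0_mem=1024M
    # leave the existing module lines for vmlinuz and initrd as installed

# After a reboot, confirm the Oracle VM agent is running
service ovs-agent status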

Oracle VM Server Network Configuration

With the updated version of the underlying Xen hypervisor technology, the OVM Server also includes a Linux kernel that runs as dom0 to manage one or more domU guest virtual machines. Oracle VM uses Xen bridges to provide networking for the guest VMs running in domU. A bridge functions as a virtual switch presented to the guest VMs. With six network interface cards (NICs) installed in our example servers, by default each of the six physical NICs in dom0 is associated with a Xen bridge that can be presented to guest VMs in domU, as shown in Figure 6:

Figure 6: Default Xen bridge configuration

To meet the requirements for running Oracle RAC, we need at least one NIC for the public network, two bonded NICs for the private interconnect, and two NICs for the iSCSI storage connections. The default configuration can be modified to the customized configuration shown in Figure 7:
- eth0 is associated with Xen bridge xenbr0, which backs the virtual public network interface eth0 of the VMs.
- eth2 and eth3 are used for storage connections through the iSCSI switches. Since all of the guest VMs' I/O operations are handled by dom0, there is no need to expose eth2 and eth3 to the guest VMs, so no Xen bridge is needed for them.
- eth4 and eth5 are bonded as bond0, on which Xen bridge xenbr1 is based. The virtual network interface eth1 in each guest VM is built on xenbr1 to provide the private interconnect between guest VMs, carrying the Clusterware heartbeat and RAC node synchronization traffic.


Figure 7: VM Server and Guest VM Networking Configuration

The following steps implement this network configuration by customizing the default Xen bridge and network configuration from the VM Server installation:

1. Shut down the default Xen bridge configuration:
/etc/xen/scripts/network-bridges stop

2. Make sure there is no Xen bridge shown by the command: brctl show

3. Modify /etc/xen/xend-config.sxp, changing the line (network-script network-bridges) to (network-script network-bridges-dummy)

4. Edit /etc/xen/scripts/network-bridges-dummy to include only the following two lines:
#!/bin/sh
/bin/true

5. Configure the network interfaces, the bonding, and the Xen bridges xenbr0 and xenbr1 by editing the following network scripts in /etc/sysconfig/network-scripts. (The bonding driver module configuration assumed for bond0 is sketched after this step list.)

Public network and xenbr0:

ifcfg-eth0:
DEVICE=eth0
BOOTPROTO=none
HWADDR=00:25:64:FB:08:2C
ONBOOT=yes
BRIDGE=xenbr0

ifcfg-xenbr0:
DEVICE=xenbr0
BOOTPROTO=none
TYPE=Bridge
IPADDR=155.16.9.91
ONBOOT=yes

iSCSI network interfaces:

ifcfg-eth2:
DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:1b:21:39:a8:cd
IPADDR=10.16.7.1
NETMASK=255.255.255.0
USERCTL=no

ifcfg-eth3:
DEVICE=eth3
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:1b:21:39:a9:74
IPADDR=10.16.7.2
NETMASK=255.255.255.0
USERCTL=no

Private interconnect interfaces, bond0, and xenbr1:

ifcfg-eth4:
DEVICE=eth4
BOOTPROTO=none
HWADDR=00:25:64:FB:08:30
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
USERCTL=no

ifcfg-eth5:
DEVICE=eth5
BOOTPROTO=none
HWADDR=00:15:17:0d:00:1b
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
USERCTL=no

ifcfg-bond0:
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=xenbr1

ifcfg-xenbr1:
DEVICE=xenbr1
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=none
IPADDR=192.168.9.91

6. Restart the network service: service network restart

7. Check the Xen bridge configuration:
brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.002219d1ded0       no              eth0
xenbr1          8000.002219d1ded2       no              bond0
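The bonded interface defined in step 5 also needs the Linux bonding driver declared in dom0. The following /etc/modprobe.conf entries are a hedged sketch; the mode and miimon values are assumptions chosen for illustration, not values taken from this setup.

# /etc/modprobe.conf (dom0): declare the bonding driver for bond0
alias bond0 bonding
options bonding miimon=100 mode=active-backup

# load the driver and re-run the network scripts
modprobe bonding
service network restart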

Oracle VM Server Storage Configuration

The next task is to prepare storage access on the VM servers as well as the guest VMs. The shared storage is used for two purposes:
- OVS repositories that store all the resources such as VM images, VM templates, and local or shared virtual disks.
- Storage volumes for the database. For better performance of an Oracle RAC production database, it is recommended to attach physical volumes to the virtual machines as the virtual disks for the Oracle database.

For example, we created the following volumes in the iSCSI SAN storage. The OVS volume holds the OVS repository; the remaining volumes (owidata1-5 and owifra1-2 for the database, OCR1-5 for OCR and voting disks) will be attached to the guest VMs as shared virtual disks for the 11g R2 Clusterware and RAC database.

Volumes        Usage               Mount point / disk group
OVS            OVS repository      /OVS
Owidata1-5     OWI DB data         ASM diskgroup
Owifra1-2      OWI DB FRA          ASM diskgroup
OCR1-OCR5      OCR / voting disks  ASM diskgroup

To access these iSCSI volumes from dom0, two network interfaces (eth2 and eth3) are configured in dom0 for the iSCSI connections:

- Create an iSCSI interface for each NIC:
iscsiadm -m iface -I ieth2 --op=new
iscsiadm -m iface -I ieth3 --op=new

- Correlate each iSCSI interface with its NIC device:
iscsiadm -m iface -I ieth2 --op=update -n iface.net_ifacename -v eth2
iscsiadm -m iface -I ieth3 --op=update -n iface.net_ifacename -v eth3

- Discover the iSCSI targets:
iscsiadm -m discovery -t st -p 10.16.7.15 --interface=ieth2 --interface=ieth3

- Log in to the iSCSI targets:
iscsiadm -m node -p 10.16.7.15 --login

Now each volume is accessible through two paths, one per iSCSI network interface. The Linux device-mapper multipath is configured to create a multipath alias for each volume. The multipath configuration file /etc/multipath.conf maps the iSCSI device ID (WWID) of each volume to a multipath alias name, for example:

multipaths {
    multipath {
        wwid  36090a068b0bc14240abe94020000204e
        alias ovs
    }
}
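The WWID used in the multipaths stanza can be read from dom0 for each volume. Here is a sketch using the RHEL 5 style tools shipped with OVM 2.2; the sdb device name is a placeholder for whichever SCSI disk the iSCSI session exposes.

# list the SCSI disks attached through the iSCSI sessions
iscsiadm -m session -P 3 | grep "Attached scsi disk"
# print the WWID of one of those disks (RHEL 5 syntax)
/sbin/scsi_id -g -u -s /block/sdb
# add one multipath { wwid ...; alias ... } stanza per volume
# (ovs, ocr1-5, owidata1-5, owifra1-2) to /etc/multipath.conf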

Restart multipath service: service multipathd restart

To check the multipath devices:
ls /dev/mapper/*
ocr1 ocr2 ocr3 ocr4 ocr5 ovs owidata1 owidata2 owidata3 owidata4 owidata5 owifra1 owifra2

Create the OVM Repository on the Shared Storage

Now we can create the OVM server's OVS repository on the multipath device /dev/mapper/ovs in the following steps:
- Create an OCFS2 cluster file system on the OVS volume (a sample /etc/ocfs2/cluster.conf is sketched after these steps):
  o Configure the o2cb service on each node: service o2cb configure

  o Create the partition /dev/mapper/ovsp1 on the volume: fdisk /dev/mapper/ovs
  o Create the OCFS2 file system on the partition: mkfs.ocfs2 -T datafiles -N 8 -L "OVS" /dev/mapper/ovsp1

- Create the new VM repository on the shared storage volume /dev/mapper/ovs:
  o Delete the default local repository: /opt/ovs-agent-2.3/utils/repos.py -d /dev/sda3

  o Create the new VM repository on the shared storage: /opt/ovs-agent-2.3/utils/repos.py -n /dev/mapper/ovsp1

  o Get the UUID (universally unique identifier) of the storage repository:
    /opt/ovs-agent-2.3/utils/repos.py -l
    [    ] 226b143f-9579-4c66-adc2-2def917e97e3 => /dev/mapper/ovsp1

  o Make the newly created repository the cluster root repository:
    /opt/ovs-agent-2.3/utils/repos.py -r 226b143f-9579-4c66-adc2-2def917e97e3
    [ R ] 226b143f-9579-4c66-adc2-2def917e97e3 => /dev/mapper/ovsp1
    /opt/ovs-agent-2.3/utils/repos.py -l
    [ R ] 226b143f-9579-4c66-adc2-2def917e97e3 => /dev/mapper/ovsp1
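As noted above, the o2cb service reads its cluster definition from /etc/ocfs2/cluster.conf, which must be identical on every OVM server in the pool. The sketch below uses the dom0 public IP 155.16.9.91 from the earlier ifcfg-xenbr0 file for the first server; the second server's IP address and the short host names owivs1/owivs2 are assumptions for illustration.

cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 155.16.9.91
        number = 0
        name = owivs1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 155.16.9.92
        number = 1
        name = owivs2
        cluster = ocfs2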

Configure Oracle VM Manager

Oracle VM Manager is a web-based GUI console that simplifies VM management. Once the network, shared storage, and OVS repository on the VM servers are configured, the rest of the virtual infrastructure configuration and management tasks are done through the VM Manager GUI. This includes actions such as managing VM servers, VM server pools, and VMs. Oracle VM Manager can be installed either on a separate physical server running Red Hat or Oracle Linux, or inside an Oracle VM; it cannot be installed on an OVM server. If you run the VM Manager in a VM, that VM has to be managed through the Oracle VM Manager Command Line Interface (CLI). To install the OVM Manager, download the OVM Manager 2.2 software and start the installer as the oracle user:
sh ./runInstaller.sh

Select option 1 and follow the prompts to finish the installation process. Because the VM Manager communicates with each VM server through the VM agent, make sure the VM agent is running on each VM server using the command:
service ovs-agent status

Virtual Infrastructure Configuration through Oracle VM Manager

Once the OVM Manager is in place, we can complete the following configuration tasks through the Oracle VM GUI.

Create a VM Server Pool

Log in to the OVM Manager to create a VM server pool. At least one OVM server is required to create the VM server pool, and more OVM servers can be added to it later. Figure 8 shows the GUI page that creates the VM server pool owi_pool with the first OVM server owivs1.us.dell.com. You can enable or disable the high availability option of the VM server pool.

Figure 8: OVM Manager server pool creation

A VM server pool is a collection of computing resources such as CPUs, memory, and storage. A VM server pool can be scaled out by adding additional OVM servers that bring more CPUs, memory, and storage. To add an additional OVM server to the VM server pool, you first need to configure it with access to the shared storage and network using the same steps described in the previous section, and then add the new OVM server to the pool. Figure 9 shows how to add a new VM server, owvs2.us.dell.com, to the VM server pool owi_pool.

Figure 9: Adding a VM server to the VM server pool

OVS Repository

After creating a VM server pool and adding an OVM server to the pool, you can see the following OVS repository file system on the OVM server:

Figure 10: OVS repository mount point

The real mount point for this OVS repository follows the pattern /var/ovs/mount/XXXXXXXXXXXXXXXXXX. The soft link /OVS is created for this mount point so that we can simply use 'cd /OVS' to reach the OVS repository. Inside /OVS we see the following directories: /OVS/running_pool stores all the VM images, including the local virtual disk images where each VM keeps its guest OS and applications; /OVS/seed_pool stores all the VM templates. You need to copy a VM template into this directory and import it into the VM Manager before you can use it. Figure 11 shows four directories under /OVS/running_pool, each belonging to one VM.

Figure 11: Virtual machine image directories in OVS repository.
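A quick check from dom0 ties these paths together. The running_pool and seed_pool directories are the ones discussed above; the other entries shown are typical of an OVM 2.2 repository and should be treated as illustrative.

cd /OVS
ls
iso_pool  publish_pool  running_pool  seed_pool  sharedDisk
ls running_pool    # one sub-directory per VM, as in Figure 11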

Creating and Configuring Virtual Machines

With the OVM infrastructure completed, we are ready to create Oracle VMs using a VM template. Here we show how to create VMs based on 64-bit Oracle Enterprise Linux 5 Update 5 (OEL 5U5) as hosts for the Oracle RAC database nodes:

1. Import the OEL VM template OVM_EL5U5_X86_64_PVM_10GB. This template was downloaded from the Oracle E-Delivery site, copied to the /OVS/seed_pool directory, and then imported into the Oracle VM Manager.

2. Create the VMs with the OEL 5U5 VM template. Start the VM creation process by selecting the option 'Create virtual machine based on virtual machine template' and follow the GUI steps, providing the information about the VM (Figures 12 and 13).

Figure 12: Starting VM creation using the VM template

Figure 13: Two VMs created as the Oracle RAC hosts

3. Configure the VM virtual network. The two VM virtual network interfaces are connected to the two Xen bridges xenbr0 and xenbr1, which act as virtual switches for the virtual network interfaces eth0 and eth1 of the VMs: virtual interface vif0 (eth0) for the public network is on xenbr0, and virtual interface vif1 (eth1) for the private network is on xenbr1. The virtual network interfaces are configured as part of the VM creation, as shown in Figure 14. This also generates the following two entries in the VM configuration file vm.cfg:

vif = ['bridge=xenbr0,mac=00:16:3E:36:8A:53,type=netfront',
       'bridge=xenbr1,mac=00:16:3E:3A:8F:CC,type=netfront',
]

Figure 14: The virtual network configuration

4. Attach virtual disks to the VMs. These virtual disks are for the OCR / voting disks and the RAC database. To achieve better storage I/O performance, it is recommended to attach the device partitions on the VM servers directly to the virtual machines. This attachment is implemented by adding the corresponding storage entries to the vm.cfg file (note the lines starting with phy:):

disk = ['file:/var/ovs/mount/226B143F95794C66ADC22DEF917E97E3/running_pool/30_owi2/System.img,xvda,w!',
'phy:/dev/mapper/ocr1p1,xvdc,w!',
'phy:/dev/mapper/ocr2p1,xvdd,w!',
'phy:/dev/mapper/ocr3p1,xvde,w!',
'phy:/dev/mapper/ocr4p1,xvdf,w!',
'phy:/dev/mapper/ocr5p1,xvdg,w!',
'phy:/dev/mapper/owidata1p1,xvdh,w!',
'phy:/dev/mapper/owidata2,xvdi,w!',
'phy:/dev/mapper/owidata3,xvdj,w!',
'phy:/dev/mapper/owidata4,xvdk,w!',
'phy:/dev/mapper/owidata5,xvdl,w!',
'phy:/dev/mapper/owifra1p1,xvdm,w!',
'phy:/dev/mapper/owifra2,xvdn,w!',
]

With these mappings in place, the virtual disk partitions are visible inside the VM, as illustrated in Figure 15.
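A simple way to verify the attachments from inside a guest VM after it boots (a sketch; the xvd* device names follow the vm.cfg entries above):

# inside the guest VM
cat /proc/partitions | grep xvd
ls -l /dev/xvd*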

Figure 15: Virtual disks in the VM for 11g R2 Clusterware and RAC database

Because the VM template provides only 12 GB for the entire OS disk, we need to allocate additional virtual disk space to the VMs for local file systems as well as swap space. Figures 16 and 17 show how to allocate disk space from the OVS repository to the VMs.

Figure 16: Adding additional local disk space to a VM

Figure 17: Virtual disks for local storage

As a result, a new virtual disk /dev/xvdp is added to the VMs. This virtual disk is partitioned with the command:
fdisk /dev/xvdp

and made into swap space with:
mkswap /dev/xvdp1
swapon /dev/xvdp1
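To make the new swap space persist across guest reboots, an /etc/fstab entry along these lines can be added inside the VM (a minimal sketch):

# /etc/fstab (guest VM): activate the swap partition at boot
/dev/xvdp1   swap   swap   defaults   0 0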

The virtual infrastructure now comprises the following components and is ready for the Oracle 11g R2 RAC deployment (Figure 18):
- OVM servers and a VM server pool
- Shared storage, networks, and Xen bridges
- VMs running OEL 5U5 with shared virtual storage and virtual public / private networks

Figure 18: Virtual infrastructure for Oracle 11g R2 RAC

Configuring Oracle 11g R2 RAC in VMs

So far we have discussed how the virtual infrastructure provides the VMs with computing resources such as virtual CPUs, memory, networks, and storage. With these resources, Oracle VM gives the OS and applications a platform similar to that of a physical machine. This similarity also applies to deploying and operating an Oracle 11g R2 RAC database. We now give an overview of the Oracle 11g R2 RAC deployment steps, highlighting the specifics of the VM environment; for detailed 11g R2 RAC configuration, refer to references [6, 7]. Besides the manual deployment method, other options are the Oracle RAC VM template based deployment and the Oracle EM provisioning procedure; refer to references [2] and [3] for details. The deployment steps are:
- Prepare the VMs to meet the 11g R2 RAC requirements
- Install the 11g R2 Grid Infrastructure software
- Install the 11g R2 RAC database software
- Create the 11g R2 RAC database

Preparing the VMs to Meet the 11g R2 RAC Requirements

The steps are very similar to those for regular physical machines: creating users, adding the required Linux kernel settings, RPMs, ntpd configuration, resource limit settings, and so on; refer to the Oracle Grid Infrastructure Installation Guide 11g Release 2 for Linux, E17212-08 [4]. For Oracle VMs, the additional steps are:

1. Make sure to disable the Linux firewall service, which is enabled by default in the VM template:


service iptables stop
chkconfig iptables off

2. Two virtual network interfaces, eth0 for the public network and eth1 for the private interconnect, are configured on top of xenbr0 and xenbr1, as shown in Figure 20.

3. Provide shared storage for the OCR / voting disks, database files, and optionally a flash recovery area, as shown in Figure 17. To partition the virtual disks, use the fdisk command, for example:
fdisk /dev/xvdc
to create partition /dev/xvdc1.

4. Optionally, create the ASM disks on these virtual devices:
service oracleasm createdisk OCR1 /dev/xvdc1
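The createdisk command assumes the ASMLib packages are installed and the driver is configured in each VM. A minimal sketch of that preparation follows; the grid owner and asmadmin group are typical choices, not values taken from this setup.

# run as root on every RAC VM
service oracleasm configure        # interactive: set owner (e.g. grid), group (e.g. asmadmin), enable on boot
service oracleasm createdisk OCR1 /dev/xvdc1
# on the other RAC node, rescan to pick up disks created elsewhere
service oracleasm scandisks
service oracleasm listdisks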

Installing the 11g R2 Grid Infrastructure

The process of configuring the 11g R2 RAC database starts with the 11g R2 Grid Infrastructure (GI) installation on the two VMs. The installed GI includes Clusterware and ASM. This step creates an ASM diskgroup on the ASM disks OCR1 through OCR5 to store the OCR and voting disks, and it requires the two virtual network interfaces for the public and private networks respectively.
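Before launching the GI installer it is worth running the cluster verification utility in pre-crsinst mode. The following is a sketch that assumes the two VM host names used later in this article and the unzipped GI installation media as the current directory:

# run as the grid infrastructure software owner from the unzipped GI media
./runcluvfy.sh stage -pre crsinst -n owirac1,owirac2 -fixup -verbose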

Figure 19: 11gR2 Grid Infrastructure Installation


Install the Oracle RAC Software

The 11g R2 RAC Oracle software can be installed either on local storage of each VM or in a shared ACFS Oracle Home. If a local disk is used, we need to create a local virtual disk for each VM using space from the OVS repository, similar to how we created the local disk for the swap space. If we decide to use a shared ACFS Oracle Home, we need to prepare shared storage on the OVM servers, such as an iSCSI volume called acfs, and attach it to the VMs by adding the entry 'phy:/dev/mapper/acfs,xvdb,w!' to the virtual disk list in the vm.cfg file shown earlier. This maps the /dev/mapper/acfs device to the /dev/xvdb virtual disk in the VMs. We then partition /dev/xvdb with:
fdisk /dev/xvdb

and create the ASM disk ORAHOME using the command:
service oracleasm createdisk ORAHOME /dev/xvdb1

The ACFS cluster file system for the shared Oracle Home is then created using the asmca utility, in the following steps:

Figure 20: ACFS shared ORACLE_HOME

Before installing the 11g R2 RAC software, it is recommended to run the cluster verification utility to ensure all the requirements are met:
$ORACLE_HOME/bin/cluvfy stage -pre dbinst -fixup -n owirac1,owirac2 -r 11gR2 -osdba dba -verbose

To install Oracle 11g R2 RAC, run the runInstaller utility as the oracle user, select the two VM hosts as the RAC nodes, and choose the ORACLE_HOME location for the RAC software. Complete the installation by going through all the steps of the runInstaller workflow.

Create the Oracle 11g R2 RAC Database

Before creating the Oracle database, make sure you create the ASM diskgroups for the database on the virtual disks. As the grid user, run the asmca utility to create the DATA and FRA ASM diskgroups:

Figure 21: Creating the DATA and FRA diskgroups for the RAC database

Run the verification utility to ensure all the requirements are met for the RAC database configuration:
$ORACLE_HOME/bin/cluvfy stage -pre dbcfg -fixup -n owirac1,owirac2 -d $ORACLE_HOME -verbose

As the oracle user, run the dbca utility to create the database with the appropriate selections: choose Oracle Real Application Clusters database, specify the two RAC node names, select +DATA for the database area and +FRA for the flash recovery area, and specify the database configuration parameter values. With a successful database creation, we have an 11g R2 RAC database configured on two Oracle VMs, built on the virtualization infrastructure provided by Oracle VM 2.2 and Oracle VM Manager 2.2 with an iSCSI SAN as the shared storage.
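For a repeatable build, the same dbca selections can be scripted in silent mode. This is a hedged sketch only; the database name owidb and the password placeholders are invented for illustration:

# run as the oracle user on the first RAC node
dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName owidb -sid owidb \
  -nodelist owirac1,owirac2 \
  -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
  -sysPassword <sys_password> -systemPassword <system_password>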

References
1. Oracle Real Application Clusters in Oracle VM Environments, an Oracle white paper, June 2010.
2. Power of the New Oracle RAC 11g Release 2 Oracle VM Templates, Saar Maoz and Philip Newlan, Oracle RAC SIG web seminar presentation, November 23, 2010.
3. Oracle RAC on Oracle VM Automated Provisioning with Enterprise Manager 11g, Kai Yu and Akanksha Sheoran, Oracle OpenWorld 2010, September 19, 2010, Session ID S316318.
4. Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux, E17212-08.
5. Oracle Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux and UNIX, E17214-07.
6. Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2), E16794-08.
7. Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2), E16795-08.

About the Author

Kai Yu is a senior Oracle solutions engineer and architect in the Dell Oracle Solutions Engineering lab. He has worked with Oracle technology since 1995, and his areas of expertise include Oracle RAC, Oracle VM, and Oracle E-Business Suite. He is an Oracle ACE Director, has published more than 15 Oracle white papers, and has given more than 40 technical presentations at major Oracle conferences worldwide, including Oracle OpenWorld 2006-2011, Collaborate 08-11, UKOUG, Scotland OUG, Ottawa OUG, OTN Latin America and APAC conference tours, and IOUG webcasts. Kai has served as the president of the IOUG Oracle RAC SIG and on the IOUG Virtualization SIG founding committee, and is currently the webinar chair of the IOUG RAC SIG and the IOUG Virtualization SIG. In April 2011, Kai won the 2011 OAUG Innovator of the Year Award. Kai holds an M.S. in Computer Science from the University of Wyoming. Kai's Oracle blog is at http://kyuoracleblog.wordpress.com/

Author Contact
Name: Kai Yu
E-mail: [email protected]
Phone: 512-728-6343
Address: Dell Inc., One Dell Way, Round Rock, TX 78648