Tuesday, December 29, 2009

Configuring ASM in a Windows environment

1) With asmtool

Follow the steps below to create an ASM disk group on your local machine.

1) Create the disks with asmtool

D:\>mkdir asmdisks
D:\>cd asmdisks
D:\asmdisks>asmtool -create D:\asmdisks\disk1 1024
D:\asmdisks>asmtool -create D:\asmdisks\disk2 1024
D:\asmdisks>asmtool -create D:\asmdisks\disk3 1024

Note: - the size is specified in MB.

Now you have three disks of 1024 MB each, which can be used to create an ASM disk group.

2) Create ASM instance: -

a) Configure Cluster Synchronization Service: -

C:\oracle\product\10.2.0\db_1\BIN>localconfig add

Step 1: stopping local CSS stack
Step 2: deleting OCR repository
Step 3: creating new OCR repository
successfully accumulated necessary OCR keys.
Creating OCR keys for user 'hostname', privgrp ' ' . .
Operation successful.
Step 4: creating new CSS service
successfully created local CSS service
successfully reset location of CSS setup

b) Create init file: -

Open Notepad, enter the following parameters, and save the file as "C:\oracle\product\10.2.0\db_1\database\init+ASM.ora":

INSTANCE_TYPE=ASM
DB_UNIQUE_NAME=+ASM
LARGE_POOL_SIZE=8M
ASM_DISKSTRING='D:\asmdisks\*'
_ASM_ALLOW_ONLY_RAW_DISKS=FALSE

c) Create service and password file
oradim will create an ASM instance and start it automatically.
D:\> orapwd file=C:\oracle\product\10.2.0\db_1\database\PWD+ASM.ora password=asm
D:\> oradim -NEW -ASMSID +ASM -STARTMODE auto

3) Starting ASM instance: -

a) Create an SPFILE from the PFILE, add the ASM diskgroup parameter, and you are all set to use ASM.
SQL> create spfile from pfile;
SQL> startup nomount;

b) Create an ASM disk group: -

SQL> select path, mount_status from v$asm_disk;
PATH                    MOUNT_STATUS
----------------------- -------------------
D:\ASMDISKS\DISK1       CLOSED
D:\ASMDISKS\DISK3       CLOSED
D:\ASMDISKS\DISK2       CLOSED

SQL> create diskgroup data external redundancy disk
2 'D:\ASMDISKS\DISK1',
3 'D:\ASMDISKS\DISK2',
4 'D:\ASMDISKS\DISK3'
5 /
Diskgroup created.

SQL> alter system set asm_diskgroups=data scope=spfile;
SQL> shutdown immediate
SQL> startup

ASM instance started
Total System Global Area 83886080 bytes
Fixed Size 1247420 bytes
Variable Size 57472836 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted

Now you can go ahead and run DBCA to create a database; on step 6 of 13 you can choose Automatic Storage Management as your storage option.
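Once a database has been created on ASM, a quick way to confirm the new disk group is usable is to place a small tablespace on it (a minimal sketch; the tablespace name asm_test and the 100M size are just examples):

SQL> create tablespace asm_test datafile '+DATA' size 100M;
SQL> select file_name from dba_data_files where tablespace_name = 'ASM_TEST';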

Monday, December 21, 2009

SYSRESV Utility

This utility was the result of enhancement request 566223 [BUG: 566223]:

"Currently, many Oracle applications determine whether a particular instance is up by checking for the presence of the sgadef file. We should provide a separate utility for this purpose instead, based on the contents of the SGA."

The sysresv utility included with Oracle 8.1.5 and above provides instance status (and OS resources used) for specified ORACLE_SIDs. This utility is especially useful when multiple instances are running. OS resources can be removed using this utility if the specified instance is detected to be dead.

This utility may be useful when an instance has crashed or was aborted and memory and semaphores related to this instance were not cleaned up automatically. This utility is also helpful in determining which instance is running. The sysresv utility, located in $ORACLE_HOME/bin, can be used from locations other than $ORACLE_HOME/bin.

Point your environment to the instance of interest before using sysresv.
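For example, on UNIX you would typically set the environment like this before running sysresv (the ORACLE_HOME path and SID below are only placeholders, chosen to match the first example further down):

$ ORACLE_HOME=/u02/app/oracle/product/8.1.7; export ORACLE_HOME
$ ORACLE_SID=R817; export ORACLE_SID
$ PATH=$ORACLE_HOME/bin:$PATH; export PATH
$ sysresv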

NOTE: (9i 64-bit):

Must set up as follows to get sysresv to work.

Oracle 9.X.X (64Bit) on Solaris (64Bit) OS
- Set LD_LIBRARY_PATH=$ORACLE_HOME/lib32
- Set LD_LIBRARY_PATH_64=$ORACLE_HOME/lib

Usage:

sysresv:
usage :
sysresv [-if] [-d (on/off)] [-l sid1 (sid2) ...]
-i : Prompt before removing ipc resources for each sid
-f : Remove ipc resources silently, over rides -i option
-d (on/off) : List ipc resources for each sid if on
-l sid1 (sid2) .. : apply sysresv to each sid

Default: sysresv -d on -l $ORACLE_SID
Note: ipc resources are attempted to be deleted for a sid only if there is no currently running instance with that sid.


Examples:

Instance is not running:

/u02/app/oracle/product/8.1.7 $ sysresv
IPC Resources for ORACLE_SID "R817":
Shared Memory
ID KEY
No shared memory segments used
Semaphores:
ID KEY
No semaphore resources used
Oracle Instance not alive for sid "R817"

Instance is running:

/u03/app/oracle/product/8.1.6 $ sysresv

IPC Resources for ORACLE_SID "X816" :
Shared Memory:
ID KEY
16437 0xe4efa8dc
Semaphores:
ID KEY
12320802 0x09d48346
Oracle Instance alive for sid "X816"

Attempting to remove memory and semaphores using sysresv when Oracle detects an instance is running:

/u03/app/oracle/product/8.1.6 $ sysresv -f
IPC Resources for ORACLE_SID "X816" :
Shared Memory:
ID KEY
16437 0xe4efa8dc
Semaphores:
ID KEY
12320802 0x09d48346
Oracle Instance alive for sid "X816"
SYSRESV-005: Warning
Instance maybe alive - aborting remove for sid "X816"

Removing IPC resources:

[Sysresv shows memory and semaphores exist but Oracle determines the instance is not alive. Cleanup is needed.]

/u03/app/oracle/product/8.1.6 $ sysresv
IPC Resources for ORACLE_SID "X816" :
Shared Memory:
ID KEY
16837 0xe4efa8dc
Semaphores:
ID KEY
12714018 0x09d48346
Oracle Instance not alive for sid "X816"

Removing IPC resources using sysresv:

/u03/app/oracle/product/8.1.6 $ sysresv -i

IPC Resources for ORACLE_SID "X816" :
Shared Memory
ID KEY
No shared memory segments used
Semaphores:
ID KEY
No semaphore resources used

Oracle Instance not alive for sid "X816"
Remove ipc resources for sid "X816" (y/n)?y
Done removing ipc resources for sid "X816"
/u03/app/oracle/product/8.1.6

Verify the resources were removed:

/u03/app/oracle/product/8.1.6 $ sysresv

IPC Resources for ORACLE_SID "X816" :
Shared Memory
ID KEY
No shared memory segments used
Semaphores:
ID KEY
No semaphore resources used
Oracle Instance not alive for sid "X816"

If you need to remove memory segments manually (for example, because sysresv detects the instance as alive and refuses to remove them):

$ ipcrm -m (memid)

Where (memid) is the memory id shown in the sysresv output.

Example:
$ ipcrm -m 16437

If you need to remove semaphores manually (for example, because sysresv detects the instance as alive and refuses to remove them):

$ ipcrm -s (semid)

where (semid) is the semaphore id shown in the sysresv output.

Example:
$ ipcrm -s 12320802

Note 115235.1
Resolving ORA-7279 or ORA-27146 errors when starting instance

Problem Description
-------------------

When attempting to start an instance, you receive the following error:

ORA-7279: spcre: semget error, unable to get first semaphore set

This error may possibly be accompanied by one or more of the following
additional errors:

ORA-600: internal error code, arguments [SKGMBUSY]
SVR4 error: 28: no space left on device

If you are using version 8.1.5 or 8.1.6, you receive the following error instead of the ORA-7279 when attempting to start an instance:

ORA-27146: post/wait initialization failed

With 8.1.7, instance startup fails with ORA-3113.

These errors happen whenever attempting to start an instance, which includes, but is not limited to, creating a new database during an install, creating a new database with an existing install, starting an existing database which had the initSID.ora parameter PROCESSES changed, or starting an existing database which was either shutdown incorrectly or crashed.


Solution Description

The problem is that there are not enough semaphores to start the instance. This can be resolved based on one of the two scenarios below:

Scenario 1
If no new instances have been created and you are trying to start an existing database which was shutdown incorrectly or crashed, the semaphores from the crashed instance may still be allocated and will need to be removed.

Step 1: -

Before removing semaphores, you should be certain that there are no running instances using them. First check for running instances using the following command:
$ ps -ef | grep -v grep | grep pmon
If there are no pmon processes running, then there should be no semaphores owned by the oracle user, and any semaphores owned by the oracle user can safely be removed.

See steps 3 and 4 for the commands to list and remove semaphores.

Step 2: -

If you have multiple instances running, you will need to either shut them down or determine which semaphores are being used by each instance. It is recommended that you shut down all of the instances, but if this is not possible, then follow the instructions below.

A) If the running instances are version 8.1.x or higher, set your environment to that of the running instance, then run the following command:
% $ORACLE_HOME/bin/sysresv
This will show you the semaphore ids being used by this instance. Make a note of them and repeat the sysresv command for each running instance.
B) If you are not running version 8.1.x or higher, use the following commands to get a listing of semaphores:
$ svrmgrl
SVRMGR> connect internal
SVRMGR> oradebug ipc

Repeat these commands for each running instance making note of the "Semaphore identifiers" for each.

Step 3)

Now get a list of all of the allocated semaphores on the system using the following command:

% ipcs -sb | grep -v grep | grep (oracle)
where (oracle) is the name of the oracle user.

Compare the output of the ipcs command with the notes you made from the sysresv or oradebug ipc command, and remove any semaphores that do not belong to a running instance.

Step 4)
You can remove semaphores using the following command:
% ipcrm -s (semid)
where (semid) is the semaphore id shown in the second column of the ipcs output titled ID.

Step 5)
Now try to start your instance again.


Scenario 2
----------
You are trying to start a new instance, or an existing instance that had the initSID.ora parameter PROCESSES changed. You will need to raise the kernel parameter SEMMNS.

Set the kernel parameter SEMMNS to double the sum of all of the initSID.ora PROCESSES parameters on the system.

For example, if you have 5 instances with the initSID.ora PROCESSES parameter set to 25, 50, 100, 200, and 250, then set the kernel parameter SEMMNS equal to 1250.

((25 + 50 + 100 + 200 + 250) * 2) = 1250
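On Solaris, for example, the corresponding /etc/system entry would look like the line below (this is only an illustration of the 1250 value computed above; the file and parameter names differ on other platforms):

set semsys:seminfo_semmns=1250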

After making changes to the kernel parameter SEMMNS, you will be required to reboot the system. Depending on your operating system, you may also be required to rebuild the kernel. For more information on changing kernel parameters, please consult your OS documentation and/or your OS vendor.

Tuesday, November 17, 2009

Oracle Merge Statements

Syntax: -
MERGE <hint> INTO <table_name>
USING <table_view_or_query>
ON (<condition>)
WHEN MATCHED THEN <update_clause>
DELETE <where_clause>
WHEN NOT MATCHED THEN <insert_clause>
[LOG ERRORS <log_errors_clause> REJECT LIMIT <integer | UNLIMITED>];

Ex: -

CREATE TABLE employee (employee_id NUMBER(5),first_name VARCHAR2(20),last_name VARCHAR2(20),dept_no NUMBER(2),salary NUMBER(10));
INSERT INTO employee VALUES (1, 'Dan', 'Morgan', 10, 100000);
INSERT INTO employee VALUES (2, 'Helen', 'Lofstrom', 20, 100000);
INSERT INTO employee VALUES (3, 'Akiko', 'Toyota', 20, 50000);
INSERT INTO employee VALUES (4, 'Jackie', 'Stough', 20, 40000);
INSERT INTO employee VALUES (5, 'Richard', 'Foote', 20, 70000);
INSERT INTO employee VALUES (6, 'Joe', 'Johnson', 20, 30000);
INSERT INTO employee VALUES (7, 'Clark', 'Urling', 20, 90000);

CREATE TABLE bonuses (employee_id NUMBER, bonus NUMBER DEFAULT 100);
INSERT INTO bonuses (employee_id) VALUES (1);
INSERT INTO bonuses (employee_id) VALUES (2);
INSERT INTO bonuses (employee_id) VALUES (4);
INSERT INTO bonuses (employee_id) VALUES (6);
INSERT INTO bonuses (employee_id) VALUES (7);
COMMIT;

SELECT * FROM employee;
SELECT * FROM bonuses;

MERGE INTO bonuses b
USING (SELECT employee_id, salary, dept_no FROM employee WHERE dept_no = 20) e
ON (b.employee_id = e.employee_id)
WHEN MATCHED THEN
UPDATE SET b.bonus = e.salary * 0.1
DELETE WHERE (e.salary <> 40000);

SELECT * FROM bonuses;
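The example above exercises only the WHEN MATCHED branch. A minimal sketch of the same statement with a WHEN NOT MATCHED insert branch added (the 5% bonus for unmatched rows is purely illustrative) would look like this:

MERGE INTO bonuses b
USING (SELECT employee_id, salary FROM employee WHERE dept_no = 20) e
ON (b.employee_id = e.employee_id)
WHEN MATCHED THEN
UPDATE SET b.bonus = e.salary * 0.1
WHEN NOT MATCHED THEN
INSERT (b.employee_id, b.bonus)
VALUES (e.employee_id, e.salary * 0.05);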

Tuesday, November 10, 2009

ORA-00600 Metalink Notes

ORA-600 -- Look at these Metalink notes for help
ORA-600 Errors 1 to 3000
Note 138300.1 "ORA-600 [105]"
Note 263295.1 "ORA-600 [106]"
Note 28104.1 "ORA-600 [504]"
Note 138871.1 "ORA-600 [510]"
Note 138888.1 "ORA-600 [525]"
Note 138939.1 "ORA-600 [711]"
Note 39308.1 "ORA-600 [723]"
Note 31056.1 "ORA-600 [729]"
Note 131490.1 "ORA-600 [733]"
Note 217860.1 "ORA-600 [1013]"
Note 138123.1 "ORA-600 [1100]"
Note 41767.1 "ORA-600 [1113]"
Note 40514.1 "ORA-600 [1114]"
Note 137262.1 "ORA-600 [1115]"
Note 66387.1 "ORA-600 [1158]"
Note 138354.1 "ORA-600 [1236]"
Note 28045.1 "ORA-600 [2103]"
Note 28929.1 "ORA-600 [2662]"
Note 31057.1 "ORA-600 [2845]"
Note 138733.1 "ORA-600 [2865]"


ORA-600 Errors 3001 to 6000
Note 30866.1 "ORA-600 [3020]"
Note 93665.1 "ORA-600 [3668]"
Note 47456.1 "ORA-600 [4000]"
Note 96642.1 "ORA-600 [4036]"
Note 43914.1 "ORA-600 [4137]"
Note 28226.1 "ORA-600 [4146]"
Note 39282.1 "ORA-600 [4193]"
Note 39283.1 "ORA-600 [4194]"
Note 138822.1 "ORA-600 [4400]"
Note 29702.1 "ORA-600 [4414]"
Note 138836.1 "ORA-600 [4454]"
Note 39553.1 "ORA-600 [4511]"
Note 73455.1 "ORA-600 [4512]"
Note 27955.1 "ORA-600 [4519]"
Note 204536.1 "ORA-600 [4820]"
Note 41840.1 "ORA-600 [4882]"


ORA-600 Errors 6001 to 9000
Note 47449.1 "ORA-600 [6002]"
Note 116552.1 "ORA-600 [6034]"
Note 40640.1 "ORA-600 [6101]"
Note 99300.1 "ORA-600 [6122]"
Note 138913.1 "ORA-600 [6193]"
Note 39399.1 "ORA-600 [6731]"
Note 41719.1 "ORA-600 [6856]"


ORA-600 Errors 9001 to 15000
Note 138325.1 "ORA-600 [12209]"
Note 33174.1 "ORA-600 [12235]"
Note 138332.1 "ORA-600 [12261]"
Note 138340.1 "ORA-600 [12304]"
Note 35928.1 "ORA-600 [12333]"
Note 28229.1 "ORA-600 [12700]"
Note 67496.1 "ORA-600 [13009]"
Note 28185.1 "ORA-600 [13013]"


ORA-600 Errors 15001 to 17000
Note 138428.1 "ORA-600 [15160]"
Note 138430.1 "ORA-600 [15201]"
Note 138431.1 "ORA-600 [15203]"
Note 131186.1 "ORA-600 [15212]"
Note 137266.1 "ORA-600 [15419]"
Note 216277.1 "ORA-600 [15456]"
Note 138457.1 "ORA-600 [15709]"
Note 67490.1 "ORA-600 [15851]"
Note 76528.1 "ORA-600 [15868]"
Note 138499.1 "ORA-600 [16201]"
Note 106607.1 "ORA-600 [16365]"
Note 138523.1 "ORA-600 [16515]"
Note 138526.1 "ORA-600 [16607]"

ORA-600 Errors 17001 to 30000
Note 138537.1 "ORA-600 [17003]"
Note 138541.1 "ORA-600 [17012]"
Note 41472.1 "ORA-600 [17034]"
Note 138554.1 "ORA-600 [17059]"
Note 39616.1 "ORA-600 [17069]"
Note 29616.1 "ORA-600 [17090]"
Note 138565.1 "ORA-600 [17099]"
Note 47411.1 "ORA-600 [17112]"
Note 39453.1 "ORA-600 [17113]"
Note 34782.1 "ORA-600 [17114]"
Note 138576.1 "ORA-600 [17128]"
Note 138580.1 "ORA-600 [17147]"
Note 34781.1 "ORA-600 [17148]"
Note 138586.1 "ORA-600 [17172]"
Note 263110.1 "ORA-600 [17175]"
Note 34779.1 "ORA-600 [17182]"
Note 45725.1 "ORA-600 [17271]"
Note 138597.1 "ORA-600 [17274]"
Note 134139.1 "ORA-600 [17280]"
Note 39361.1 "ORA-600 [17281]"
Note 138602.1 "ORA-600 [17285]"
Note 138621.1 "ORA-600 [17585]"
Note 138640.1 "ORA-600 [18209]"
Note 216278.1 "ORA-600 [18261]"
Note 138678.1 "ORA-600 [20084]"
Note 100073.1 "ORA-600 [25012]"


ORA-600 Errors kc
Note 138981.1 "ORA-600 [kcbgcur_2]"
Note 70097.1 "ORA-600 [kcbgcur_3]"
Note 114058.1 "ORA-600 [kcbgcur_9]"
Note 138990.1 "ORA-600 [kcbgtcr_4]"
Note 138991.1 "ORA-600 [kcbgtcr_5]"
Note 261264.1 "ORA-600 [kcbgtcr]"
Note 248874.1 "ORA-600 [kcbgtcr_6]"
Note 233612.1 "ORA-600 [kcbgtcr_12]"
Note 204512.1 "ORA-600 [kcbnew_3]"
Note 216104.1 "ORA-600 [kcbrbo1]"
Note 139011.1 "ORA-600 [kcbzpb_1]"
Note 139012.1 "ORA-600 [kcbzpb_2]"
Note 229467.1 "ORA-600 [kcbzwb_4]"
Note 139013.1 "ORA-600 [kccsbck_first]"
Note 216108.1 "ORA-600 [kcllcu_0]"
Note 263225.1 "ORA-600 [kclwcrs_15]"
Note 76434.1 "ORA-600 [kcoapl_blkchk]"
Note 248718.1 "ORA-600 [kcratr1_lostwrt]"


ORA-600 Errors kd to kw
Note 139037.1 "ORA-600 [kdddgb2]"
Note 139042.1 "ORA-600 [kdisle:nrows]"
Note 139051.1 "ORA-600 [kghalo2]"
Note 139052.1 "ORA-600 [kghasp1]"
Note 139066.1 "ORA-600 [kghxhdr1]"
Note 229809.1 "ORA-600 [kgliep_1]"
Note 66501.1 "ORA-600 [kkrqtab2]"
Note 139095.1 "ORA-600 [kkslgop1]"
Note 139116.1 "ORA-600 [kohdtf048]"
Note 264061.1 "ORA-600 [kqludp2]"
Note 139162.1 "ORA-600 [kssrmp1]"
Note 247822.1 "ORA-600 [ksmals]"
Note 139153.1 "ORA-600 [ksmguard2]"
Note 233864.1 "ORA-600 [kteuproptime-2]"
Note 139180.1 "ORA-600 [ktsgsp5]"
Note 139193.1 "ORA-600 [kttmupd_segment-]"
Note 228480.1 "ORA-600 [kwqitnmptme:read]"
Note 228480.1 "ORA-600 [kwqitnmptme:ready]"
Note 228364.1 "ORA-600 [kwqitnmptme:wait]"


ORA-600 Errors q to z
Note 248095.1 "ORA-600 [qctcte1]"
Note 216273.1 "ORA-600 [qctstc2o1]"
Note 209363.1 "ORA-600 [qerrmofbu1]"
Note 237598.1 "ORA-600 [qertqtableallocate2]"
Note 226887.1 "ORA-600 [qkagby4]"
Note 222876.1 "ORA-600 [qmxiunppacked2]"
Note 244365.1 "ORA-600 [rworupo.1]"
Note 139263.1 "ORA-600 [srsget_1]"
Note 260951.1 "ORA-600 [ttcgcshnd]"
Note 216452.1 "ORA-600 [ttcgcshnd-1]"
Note 216453.1 "ORA-600 [ttcgcshnd-2]"

Monday, November 9, 2009

Table Partitioning

Nowadays enterprises run databases of hundreds of gigabytes in size. These databases are known as Very Large Databases (VLDB). From Oracle version 8.0, Oracle has provided the feature of table partitioning, i.e. you can partition a table according to some criteria. For example, suppose you have a SALES table with columns YEAR, PRODUCT, and AMT.

Suppose this table contains millions of records, but all the records belong to four years only, i.e. 1991, 1992, 1993 and 1994, and most of the time you are concerned with only one year, i.e. you give queries like the following

select sum(amt) from sales where year=1991;

select product,sum(amt) from sales where year=1992 Group by product;

Now whenever you give queries like this, Oracle will search the whole table. If you partition this table according to year, then performance improves, since Oracle will scan only a single partition instead of the whole table.

CREATING PARTITION TABLES

To create a partitioned table, give the following statement:

create table sales (year number(4),
product varchar2(10),
amt number(10,2))
partition by range (year)
(partition p1 values less than (1992) tablespace u1,
partition p2 values less than (1993) tablespace u2,
partition p3 values less than (1994) tablespace u3,
partition p4 values less than (1995) tablespace u4,
partition p5 values less than (MAXVALUE) tablespace u5);

In the above example the sales table is created with 5 partitions. Partition p1 will contain rows of year 1991 and it will be stored in tablespace u1. Partition p2 will contain rows of year 1992 and it will be stored in tablespace u2. Similarly p3 and p4.
In the above example, if you don't specify the partition p5 with VALUES LESS THAN (MAXVALUE), then you will not be able to insert any row with year above 1994.
Although not required, you can place partitions in different tablespaces. If you place partitions in different tablespaces, then you can isolate problems due to failures, as only a particular partition will be unavailable and the rest of the partitions will still be available.

In the above example the table is partitioned by range.
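As a quick illustration of the benefit (using the partition names defined above), the first query below lets the optimizer prune the scan to a single partition, and the second reads one partition explicitly:

select sum(amt) from sales where year=1991; -- pruned to partition p1
select * from sales partition (p2); -- reads only partition p2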

In Oracle you can partition a table by
*Range Partitioning
*Hash Partitioning
*List Partitioning
*Composite Partitioning

Range Partitioning: -
This type of partitioning is useful when dealing with data that has logical ranges into which it can be distributed; for example, the value of year. Performance is best when the data is evenly distributed across the range.

Hash partitioning: -
Use hash partitioning if your data does not easily lend itself to range partitioning, but you would like to partition for performance and manageability reasons. Hash partitioning provides a method of evenly distributing data across a specified number of partitions. Rows are mapped into partitions based on a hash value of the partitioning key

The following example creates a hash-partitioned table. The partitioning column is partno; four partitions are created and assigned system-generated names, and they are placed in four named tablespaces (tab1, tab2, ...).

CREATE TABLE products
(partno NUMBER,
description VARCHAR2 (60))
PARTITION BY HASH (partno)
PARTITIONS 4
STORE IN (tab1, tab2, tab3, tab4);

List Partitioning: -
Use list partitioning when you require explicit control over how rows map to partitions. You can specify a list of discrete values for the partitioning column in the description for each partition. This is different from range partitioning, where a range of values is associated with a partition, and from hash partitioning, where the user has no control of the row to partition mapping.

List partitioning allows unordered and unrelated sets of data to be grouped and organized together very naturally

The following example creates a table with list partitioning

Create table customers (custcode number(5),
Name varchar2(20),
Addr varchar2(40),
City varchar2(20),
Bal number(10,2))
Partition by list (city)
(Partition north_India values ('DELHI','CHANDIGARH'),
Partition east_India values ('KOLKOTA','PATNA'),
Partition south_India values ('HYDERABAD','BANGALORE','CHENNAI'),
Partition west_India values ('BOMBAY','GOA'));
When a row is inserted into the above table, Oracle compares the value of the CITY column against each partition's value list and stores the row in the matching partition.

COMPOSITE PARTITIONING: -
Composite partitioning partitions data using the range method, and within each partition, sub-partitions it using the hash method. Composite partitions are ideal for both historical data and striping, and provide improved manageability of range partitioning and data placement, as well as the parallelism advantages of hash partitioning.

When creating composite partitions, you specify the following:

* Partitioning method: range
* Partitioning column(s)
* Partition descriptions identifying partition bounds
* Sub-partitioning method: hash
* Sub-partitioning column(s)
* Number of sub-partitions for each partition or descriptions of subpartitions

The following statement creates a composite-partitioned table. In this example, three range partitions are created, each containing eight subpartitions. Because the subpartitions are not named, system generated names are assigned, but the
STORE IN clause distributes them across the 4 specified tablespaces (tab1, ...,tab4).

CREATE TABLE PRODUCTS (partno NUMBER,
description VARCHAR(32),
costprice NUMBER)
PARTITION BY RANGE (partno)
SUBPARTITION BY HASH(description)
SUBPARTITIONS 8 STORE IN (tab1, tab2, tab3, tab4)
(PARTITION p1 VALUES LESS THAN (100),
PARTITION p2 VALUES LESS THAN (200),
PARTITION p3 VALUES LESS THAN (MAXVALUE));

ALTERING PARTITION TABLES: -

To add a partition
You can add a new partition only at the "high" end (the point after the last existing partition). To add a partition at the beginning or in the middle of a table, use the SPLIT PARTITION clause.

For example to add a partition to sales table give the following command.
alter table sales add partition p6 values less than (1996);

To add a partition to a Hash Partition table give the following command.
Alter table products add partition;

Then Oracle adds a new partition whose name is system generated, and it is created in the default tablespace. To add a partition with a user-defined name and in a tablespace you specify, give the following command.

Alter table products add partition p5 tablespace u5;

To add a partition to a List partition table give the following command.
alter table customers add partition central_India values ('BHOPAL','NAGPUR');

Any value in the set of literal values that describe the partition(s) being added must not exist in any of the other partitions of the table.

Coalescing Partitions: -

Coalescing partitions is a way of reducing the number of partitions in a hash-partitioned table, or the number of sub-partitions in a composite-partitioned table. When a hash partition is coalesced, its contents are redistributed into one or more remaining partitions determined by the hash function. The specific partition that is coalesced is selected by Oracle, and is dropped after its contents have been redistributed.

To coalesce a hash partition give the following statement.
Alter table products coalesce partition;

This reduces by one the number of partitions in the table products.

DROPPING PARTITIONS

To drop a partition from a range-partitioned, list-partitioned, or composite-partitioned table, give the following command.
Alter table sales drop partition p5;

If you have created a global index on the table, then after dropping the partition you have to rebuild the global index by giving the following statement.

Alter index sales_ind rebuild;

To avoid rebuilding of indexes after dropping of the partitions you can also first delete all the records and then drop
the partition like this

Delete from sales where year=1994;

Alter table sales drop partition p4;

This method is most appropriate for small tables, or for large tables when the partition being dropped contains a small percentage of the total data in the table.

Another method of dropping a partition is to give the following statement.

ALTER TABLE sales DROP PARTITION p5 UPDATE GLOBAL INDEXES;

This causes the global index to be updated at the time the partition is dropped.

Exchanging a Range, Hash, or List Partition: -

To exchange a partition of a range, hash, or list-partitioned table with a nonpartitioned table, or the reverse, use the ALTER TABLE ... EXCHANGE PARTITION statement. An example of converting a partition into a nonpartitioned table follows. In this example, table stocks can be range, hash, or list partitioned.

ALTER TABLE stocks EXCHANGE PARTITION p3 WITH stock_table_3;

Merging Partitions: -
Use the ALTER TABLE ... MERGE PARTITIONS statement to merge the contents of two partitions into one partition. The two original partitions are dropped, as are any corresponding local indexes.
You cannot use this statement for a hash-partitioned table or for hash subpartitions of a composite-partitioned table.
You can only merge two adjacent partitions; you cannot merge non-adjacent partitions.
For example, to merge the partitions p2 and p3 into one partition p23, give the following statement.

Alter table sales merge partitions p2, p3 into partition p23;

Modifying Partitions: Adding Values

Use the MODIFY PARTITION ... ADD VALUES clause of the ALTER TABLE statement to extend the value list of an existing partition. Literal values being added must not have been included in any other partition's value list. The partition value list for any corresponding local index partition is correspondingly extended, and any global index, or global or local index partitions, remain usable.

The following statement adds a new set of cities ('KOCHI', 'MANGALORE') to an existing partition list.

ALTER TABLE customers
MODIFY PARTITION south_india
ADD VALUES ('KOCHI', 'MANGALORE');

Modifying Partitions: Dropping Values

Use the MODIFY PARTITION ... DROP VALUES clause of the ALTER TABLE statement to remove literal values from the value list of an existing partition. The statement is always executed with validation, meaning that it checks to see if any rows exist in the partition that correspond to the set of values being dropped. If any such rows are found then Oracle returns an error message and the operation fails. When necessary, use a DELETE statement to delete corresponding rows from the table before attempting to drop values.
You cannot drop all literal values from the value list describing the partition. You must use the ALTER TABLE ... DROP PARTITION statement instead.
The partition value list for any corresponding local index partition reflects the new value list, and any global index, or global or local index partitions, remain usable.
The statement below drops a set of cities ('KOCHI' and 'MANGALORE') from an existing partition value list.

ALTER TABLE customers
MODIFY PARTITION south_india
DROP VALUES ('KOCHI','MANGALORE');

SPLITTING PARTITIONS: -
You can split a single partition into two partitions. For example, to split the partition p5 of the sales table into two partitions at year 1996, give the following command.

Alter table sales split partition p5 at (1996)
into (Partition p6, Partition p7);

TRUNCATING PARTITION: -
Truncating a partition will delete all rows from the partition.
To truncate a partition give the following statement

Alter table sales truncate partition p5;

LISTING INFORMATION ABOUT PARTITION TABLES: -

To see how many partitioned tables are there in your schema give the following statement
Select * from user_part_tables;

To see partition-level information
Select * from user_tab_partitions;
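Two other standard dictionary views that are often useful here show the partitioning key columns and any subpartitions:

Select * from user_part_key_columns;
Select * from user_tab_subpartitions;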

Friday, November 6, 2009

Oracle Background processes

You can see the Oracle background processes with this queries:
* select * from v$session where type ='BACKGROUND';
* select name,description from v$bgprocess;

Here are some of the most important Oracle background processes:

ARCH - (Optional) Archive process writes filled redo logs to the archive log location(s). In RAC, the various ARCH processes can be utilized to ensure that copies of the archived redo logs for each instance are available to the other instances in the RAC setup should they be needed for recovery.

CJQ - Job Queue Process (CJQ) - Used for the job scheduler. The job scheduler includes a main program (the coordinator) and slave programs that the coordinator executes. The parameter job_queue_processes controls how many parallel job scheduler jobs can be executed at one time.

CKPT - Checkpoint process writes checkpoint information to control files and data file headers.

CTWR - This is a new process Change Tracking Writer (CTWR) which works with the new block changed tracking features in 10g for fast RMAN incremental backups.

CQJ0 - Job queue controller process wakes up periodically and checks the job log. If a job is due, it spawns Jnnnn processes to handle jobs.

DBWR - Database Writer or Dirty Buffer Writer process is responsible for writing dirty buffers from the database block cache to the database data files. Generally, DBWR writes blocks back to the data files at checkpoints, or when the cache is full and space has to be made for more blocks (writes are not tied to commits). The possible multiple DBWR processes in RAC must be coordinated through the locking and global cache processes to ensure efficient processing is accomplished.

FMON - The database communicates with the mapping libraries provided by storage vendors through an external non-Oracle Database process that is spawned by a background process called FMON. FMON is responsible for managing the mapping information. When you specify the FILE_MAPPING initialization parameter for mapping data files to physical devices on a storage subsystem, then the FMON process is spawned.

LGWR - Log Writer process is responsible for writing the log buffers out to the redo logs. In RAC, each RAC instance has its own LGWR process that maintains that instance’s thread of redo logs.

LMON - Lock Manager process

MMON - The Oracle 10g background process to collect statistics for the Automatic Workload Repository (AWR).

MMNL - This process performs frequent and lightweight manageability-related tasks, such as session history capture and metrics computation.

MMAN - is used for internal database tasks that manage the automatic shared memory. MMAN serves as the SGA Memory Broker and coordinates the sizing of the memory components.

PMON - Process Monitor process recovers failed process resources. If MTS (also called Shared Server Architecture) is being utilized, PMON monitors and restarts any failed dispatcher or server processes. In RAC, PMON’s role as service registration agent is particularly important.

Pnnn - (Optional) Parallel Query Slaves are started and stopped as needed to participate in parallel query operations.

RBAL - This process coordinates rebalance activity for disk groups in an Automatic Storage Management instance.

SMON - System Monitor process recovers after instance failure and monitors temporary segments and extents. SMON in a non-failed instance can also perform failed-instance recovery for another failed RAC instance.

WMON - The "wakeup" monitor process


Data Guard/Streams/replication Background processes

DMON - The Data Guard Broker process.

SNP - The snapshot process.

MRP - Managed recovery process - For Data Guard, the background process that applies archived redo log to the standby database.

ORBn - performs the actual rebalance data extent movements in an Automatic Storage Management instance. There can be many of these at a time, called ORB0, ORB1, and so forth.

OSMB - is present in a database instance using an Automatic Storage Management disk group. It communicates with the Automatic Storage Management instance.

RFS - Remote File Server process - In Data Guard, the remote file server process on the standby database receives archived redo logs from the primary database.

QMN - Queue Monitor Process (QMNn) - Used to manage Oracle Streams Advanced Queuing.


Oracle Real Application Clusters (RAC) Background Processes


The following are the additional processes spawned for supporting the multi-instance coordination:

DIAG: Diagnosability Daemon – Monitors the health of the instance and captures the data for instance process failures.

LCKx - This process manages the global enqueue requests and the cross-instance broadcast. Workload is automatically shared and balanced when there are multiple Global Cache Service Processes (LMSx).
LMON - The Global Enqueue Service Monitor (LMON) monitors the entire cluster to manage the global enqueues and the resources. LMON manages instance and process failures and the associated recovery for the Global Cache Service (GCS) and Global Enqueue Service (GES). In particular, LMON handles the part of recovery associated with global resources. LMON-provided services are also known as cluster group services (CGS)

LMDx - The Global Enqueue Service Daemon (LMD) is the lock agent process that manages enqueue manager service requests for Global Cache Service enqueues to control access to global enqueues and resources. The LMD process also handles deadlock detection and remote enqueue requests. Remote resource requests are the requests originating from another instance.

LMSx - The Global Cache Service Processes (LMSx) are the processes that handle remote Global Cache Service (GCS) messages. Real Application Clusters software provides for up to 10 Global Cache Service Processes. The number of LMSx varies depending on the amount of messaging traffic among nodes in the cluster.

The LMSx handles the acquisition interrupt and blocking interrupt requests from the remote instances for Global Cache Service resources. For cross-instance consistent read requests, the LMSx will create a consistent read version of the block and send it to the requesting instance. The LMSx also controls the flow of messages to remote instances.

The LMSn processes handle the blocking interrupts from the remote instance for the Global Cache Service resources by:

*Managing the resource requests and cross-instance call operations for the shared resources.
*Building a list of invalid lock elements and validating the lock elements during recovery.
*Handling global lock deadlock detection and monitoring for lock conversion timeouts.

Friday, October 16, 2009

ORA errors

ORA-27101: shared memory realm does not exist tips

Oracle docs note this about ORA-27101:
ORA-27101: shared memory realm does not exist
Cause: Unable to locate shared memory realm
Action: Verify that the realm is accessible

MetaLink offers valuable information regarding ORA-27101. ORA-27101 is often thrown along with ORA-01034 during database installation. ORA-27101 is also seen after installation is complete, when attempting to use SQL*Plus.

To resolve this issue with ORA-27101, you should be sure that ORACLE_HOME and ORACLE_SID are correct, and ORACLE_HOME should not have a trailing slash. Here is the MetaLink example to accompany the ORA-27101 resolution instructions:

To check for a trailing slash:

$ echo $ORACLE_HOME
/app/oracle/817/

Note the trailing slash; remove it, reset ORACLE_HOME, and retry the connection.

-----------------------------------------------------------------------------------------------------
ORA-04030: out of process memory when trying to allocate nn bytes

Oracle Tips by Burleson Consulting

On an auto execute of an external job I get the error "ORA-04030: out of process memory when trying to allocate nn bytes".
Answer:
This can be caused by a shortage of RAM on a dedicated (non shared server) environment and by not setting kernel parms correctly.
RAM shortage - Depending on your release of Oracle, you have several options:

* Increase pga_aggregate_target (see the example after this list)
* Decrease sort_area_size and/or hash_area_size
* Move to multi-threaded server (a.k.a. MTS or shared servers)
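For the first option, a minimal example would be (the 500M figure is purely illustrative; size it for your workload):

SQL> show parameter pga_aggregate_target
SQL> alter system set pga_aggregate_target=500M;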

Kernel Parms - (Karpinski, Joseph E) In installing Oracle 10G we ran into a number of ORA-4030 issues.

We changed the setting of “ulimit” as follows to resolve the issue:

ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194302
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) 4096
-----------------------------------------------------------------------------------------------------
ORA-12519: TNS:no appropriate service handler found tips

Oracle docs note this about ORA-12519:

ORA-12519: TNS:no appropriate service handler found

Cause: The listener could not find any available service handlers that are appropriate for the client connection.

Action: Run "lsnrctl services" to ensure that the instance(s) have registered with the listener, and are accepting connections.

Oracle MetaLink shares information regarding ORA-12519 which can occur following a change to MTS. What has commonly happened here is that, after moving from a dedicated server to a shared server (MTS), ORA-12519 is thrown even though you have correctly changed all of the necessary parameters.

ORA-12519 occurs here because the listener has not been properly set to the default port and the PMON automatic listener registration fails with ORA-12519. You may also notice with this ORA-12519 error that the RDA LOCAL_LISTENER reads NULL.

To resolve ORA-12519, register the listener manually after the MTS change if you still wish to use non-default ports.
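A minimal sketch of that manual registration, assuming a listener on a non-default port 1522 on host myhost (both are placeholders for your own values):

SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=myhost)(PORT=1522))';
SQL> alter system register;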

Also, on Oracle DBA Forums, there is information referring to ORA-12519. Here, it is pointed out that ORA-12519 can be caused by low "processes" values, which can be resolved by increasing the DB parameter "parallel_max_servers". Also, ORA-12519 can be thrown because of DB and client versions which do not match.
______________________________________________________________________
Oracle 10g Release 1 - Listener problem: -

Add the parameter below to the listener.ora file:

SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER=OFF

Tuesday, September 15, 2009

Using SSH and SCP without a password

From time to time you may want to write scripts that will allow you to copy files to a server, or login, without being prompted for passwords. This can make them simpler to write and also prevents you from having to embed the password in your code.

SCP has a feature that allows you to do this. You no longer have to worry about prying eyes seeing your passwords nor worrying about your script breaking when someone changes the password. You can configure SSH to do this by generating and installing data transfer encryption keys that are tied to the IP addresses of the two servers. The servers then use these pre-installed keys to authenticate one another for each file transfer. As you may expect, this feature doesn't work well with computers with IP addresses that periodically change, such as those obtained via DHCP.

There are some security risks though. The feature is automatically applied to SSH as well. Someone could use your account to log in to the target server by entering the username alone. It is therefore best to implement this using unprivileged accounts on both the source and target servers.

The example that follows enables this feature in one direction (from server bigboy to server smallfry) and only uses the unprivileged account called filecopy.

Configuration: Client Side
Here are the steps you need to do on the computer that acts as the SSH client:
1) Generate your SSH encryption key pair for the filecopy account. Press the Enter key each time you are prompted for a password to be associated with the keys. (Do not enter a password.)

[filecopy@bigboy filecopy]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key
(/filecopy/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in
/filecopy/.ssh/id_dsa.
Your public key has been saved in
/filecopy/.ssh/id_dsa.pub.
The key fingerprint is:
1e:73:59:96:25:93:3f:8b:50:39:81:9e:e3:4a:a8:aa
filecopy@bigboy
[filecopy@bigboy filecopy]#

2) These key files are stored in the .ssh subdirectory of your home directory. View the contents of that directory. The file named id_dsa is your private key, and id_dsa.pub is the public key that you will be sharing with your target server. Versions other than RedHat/Fedora may use different filenames; use the SSH man pages to verify this.

[filecopy@bigboy filecopy]# cd ~/.ssh
[filecopy@bigboy filecopy]# ls
id_dsa id_dsa.pub known_hosts
[filecopy@bigboy .ssh]#

3) Copy only the public key to the home directory of the account to which you will be sending the file.

[filecopy@bigboy .ssh]# scp id_dsa.pub filecopy@smallfry:public-key.tmp

Now, on to the server side of the operation.

Configuration - Server Side

Here are the steps you need to do on the computer that will act as the SSH server.

1) Log into smallfry as user filecopy. Create an .ssh subdirectory in your home directory and then go to it with cd.

[filecopy@smallfry filecopy]# ls
public-key.tmp
[filecopy@smallfry filecopy]# mkdir .ssh
[filecopy@smallfry filecopy]# chmod 700 .ssh
[filecopy@smallfry filecopy]# cd .ssh

2) Append the public-key.tmp file to the end of the authorized_keys file using the >> append redirector with the cat command. The authorized_keys file contains a listing of all the public keys from machines that are allowed to connect to your Smallfry account without a password. Versions other than RedHat/Fedora may use different filenames; use the SSH man pages to verify this.

[filecopy@smallfry .ssh]# cat ~/public-key.tmp >> authorized_keys
[filecopy@smallfry .ssh]# rm ~/public-key.tmp
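At this point you should be able to log in and copy files from bigboy to smallfry without a password prompt. A quick check follows (the file /tmp/testfile is just an illustrative name; depending on your sshd configuration you may also need to tighten permissions with chmod 600 authorized_keys):

[filecopy@bigboy filecopy]# ssh filecopy@smallfry hostname
[filecopy@bigboy filecopy]# scp /tmp/testfile filecopy@smallfry:/tmp/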

Saturday, September 12, 2009

Oracle SDU and TDU parameters in listener.ora and tnsnames.ora

The session data unit (SDU) specifies the size of the packets to send over the network. The maximum transmission unit (MTU) is a fixed value that depends on the actual network implementation used. Ideally, SDU should not surpass the size of the MTU. Oracle recommends that SDU be set equal to the MTU. The tnsnames.ora and listener.ora files house the SDU and TDU parameters.

The TDU value is the default packet size used within Oracle*Net to group data together. The default value for both SDU and TDU is 2,048 bytes and the maximum value is 32,767 bytes. The TDU parameter should ideally be a multiple of the SDU parameter. For SDU and TDU, the following guidelines apply:

On fast network connections such as T1 or T3 lines, SDU and TDU should be set equal to the MTU for the network. On standard Ethernet networks, the default MTU size is 1,514 bytes. On standard token ring networks, the default MTU size is 4,202 bytes.

If the users are connecting via dial-up modem lines, it may be desirable to set SDU and TDU to smaller values in consideration of the frequent resends that occur over modem connections.

The mts_dispatchers parameter must also be set with the proper MTU/TDU configuration if a Multi-Threaded Server (MTS) is used.

As a rule, SDU should not be set greater than the TDU, because network resources will be wasted by shipping empty space in each packet.

This brief review of network-related parameters was intended to provide an introduction to the scope and complexity of network tuning. It is important to understand that Oracle*Net is simply a layer in the OSI model that sits above the network-specific protocol stack, and therefore virtually all network tuning is external to Oracle.

Ex: -

Doc ID: Note:71921.1
Subject: ORA-12151, ORA-12571 errors on Windows NT
Type: BULLETIN
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 09-JUN-1999
Last Revision Date: 07-FEB-2002

PURPOSE

Give an overview of what to adjust and verify in case of ORA-12151
and ORA-12571 errors

SCOPE AND APPLICATION

This note applies to anyone who is facing intermittent SQL*Net read and
write errors while working on NT.

==============================================================================


ORA-12151 and ORA-12571 errors on Windows NT

Intermittent SQL*Net TCP/IP read and write errors are sometimes encountered
on NT. The underlying reason for these errors is a synchronization error
in the TCP/IP layer on NT. To help prevent this kind of error, a
few things can be adjusted to help the synchronization:

1. TCP.NODELAY parameter
This parameter is to be added in the "PROTOCOL.ORA" file in the
"NETWORK\\ADMIN" directory.

In most cases, TCP/IP info sent across the network is buffered
until at least a complete network packet can be sent. This means
that in certain cases commands are not issued directly, but are kept
buffered until some other info can be sent as well.
This has the potential to generate timeouts and errors.
To avoid this, this delay can be switched off.

tcp.nodelay = yes

2. Disabling AUTOMATIC_IPC on Clients
On client PC's, checking for IPC connections is pointless as there
is never a database installed on them. So, in order to save some
time during the connections phase, set AUTOMATIC_IPC=OFF in the
"SQLNET.ORA" file.

3. "NAMES.DIRECTORY_PATH" to force using "TNSNAMES.ORA" or "ONAMES"
If you have a fixed environment, it's best to specify this in the
"SQLNET.ORA" file. The parameter "NAMES.DIRECTORY_PATH" specifies how
the TNS resolving takes place.

By default, if this parameter is not present - the SQL*Net layer
will first check if there is a Names server anywhere on the network,
and will afterwards start checking for a "TNSNAMES.ORA" file.

If you have only a "TNSNAMES.ORA" file, and no Oracle Names installed,
it is best to specify this parameter to skip the Names searching in
order to speed up the TNS resolving time.

The value of this parameter is a comma separated list, with as
possible values TNSNAMES (for "TNSNAMES.ORA" searching) and "ONAMES"
(for Oracle Names searching).
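For example, to force resolution through the "TNSNAMES.ORA" file only, the "SQLNET.ORA" entry would be the single line shown below:

NAMES.DIRECTORY_PATH = (TNSNAMES)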

4. TCP/IP timeouts on NT
The default retransmission count on NT is 5 before it detects that
the network is down. With the value of 5, the actual timeout is
approximately 15 seconds.

This default value can be easily increased to a higher value.
In the registry, change the following value:

HKEY_LOCAL_MACHINE
System
CurrentControlSet
Services
TCP/IP
Parameters
TcpMaxDataRetransmissions REG_DWORD "number"

This parameter is not present in the registry by default. This
means that the first time, this parameter will need to be added to
this registry key.

This parameter can be useful on both client and server. Suggested
course of action is to first add this parameter on the machine
generating the SQL*Net errors, and if the problem persists, also
include the parameter in the registry of the other machine.

5. TCP/IP keepalive on NT
KEEPALIVE is an extension to TCP/IP which enables the closing of
dead connections which are no longer being used.

Problems can occur when the server does not close a connection
after a PC client process has disappeared. This typically happens
when a PC user switches off or reboots their machine while still
connected to Oracle. Note that this is not an Oracle problem, but
a limitation of TCP/IP, which has no way of knowing for sure
whether a remote connection has disappeared.

This feature is enabled by default on NT. Problems can occur, however,
if the timeout period is too short for a heavily used or slow
network. In those conditions, the KEEPALIVE registry value can be
used to specify a KEEPALIVE value before a connection gets cut.

HKEY_LOCAL_MACHINE
System
CurrentControlSet
Services
TCP/IP
Parameters
KeepAlive REG_DWORD "number"

A value of '10' is a common value specified for this variable.

Again, this parameter can be useful on both client and server.
Start with the machine generating the error, and if needed, also add
it on the machine on the other side.

6. TCP/IP timeouts on Windows 95
The same parameter can also be specified on Windows 95. It has the
same functionality, only the location of the parameter in the
registry is different.

HKEY_LOCAL_MACHINE
System
CurrentControlSet
Services
Winsock
Parameters
TcpMaxDataRetransmissions REG_DWORD "number"

This parameter is not present in the registry by default. This
means that the first time, this parameter will need to be added to
this registry key.

The purpose and behavior of the parameter is the same on the Windows 95
and Windows 98, as on the Windows NT platform.

7. SDU & TDU parameters
Part of this problem is the sequence of information that gets transmitted.
If there are disruptions in the sequence, the errors ORA-12151 and
ORA-12571 can also appear, alerting the application that not all information
has been sent across the network successfully.

The sequence of information is determined by the amount of data the program
is sending, and the actual size the protocol can send across the network
at a time.

The more data the program wants to send in one 'go', the more sequences and
transport packet split-ups will have to be made.

By default, SQL*Net is using an SDU (Session Data Unit) of 2048 bytes (2Kb)
and a TDU (Transport Data Unit) of 32768 (32Kb) bytes.
On standard Ethernet connections, with no special modifications made, the
SDU is 1500 bytes, while the TDU is 8760 bytes.

With these values, each data request made by SQL*Net will have to be split
up in several smaller packets to actually be able to transmit.

Therefore, to minimize the creation of additional packets, it is advised, in
case of these errors, to synchronize the SDU and TDU parameters at the
SQL*Net level with those of the actual network topology used.

These SDU and TDU parameters have to be specified at both the client and
the server configuration files:

TNSNAMES.ORA:
-------------
ORCL.WORLD =
(DESCRIPTION =
(SDU=1500)
(TDU=8760)
(ADDRESS_LIST =
(ADDRESS =(PROTOCOL=TCP)(Host=foobar)(Port=1521))
)
(CONNECT_DATA =
(SID = ORCL)
)
)

LISTENER.ORA:
-------------
SID_DESC_LISTENER =
(SID_LIST =
(SID_DESC =
(SDU = 1500)
(TDU = 8760)
(SID_NAME = ORCL)
)
)

For more information about the SDU and TDU parameters, see
Note 44694.1: SQL*Net Packet Sizes (SDU & TDU Parameters)

8. Setting a new TDU size on Windows NT
You can modify the TDU size on NT, via the TcpWindowSize parameter:

HKEY_LOCAL_MACHINE
System
CurrentControlSet
Services
Tcpip
Parameters
TcpWindowSize REG_DWORD "number"


Additional information about the NT network parameters on NT:
-------------------------------------------------------------
Q120642: TCP/IP & NBT Configuration Parameters for Windows NT
http://support.microsoft.com/support/kb/articles/Q120/6/42.asp

Q140375: Default MTU Size for Different Network Topology
http://support.microsoft.com/support/kb/articles/Q140/3/75.asp

ORA-27102: out of memory Error on Solaris 10

Symptom: -
As part of a database tuning effort you increase the SGA/PGA sizes, and Oracle greets you with an ORA-27102: out of memory error message. The system had enough free memory to serve the needs of Oracle.
SQL> startup
ORA-27102: out of memory
SVR4 Error: 22: Invalid argument

Diagnosis: -
$ oerr ORA 27102
27102, 00000, "out of memory"
// *Cause: Out of memory
// *Action: Consult the trace file for details

Not so helpful. Let's look at the alert log for some clues.
% tail -2 alert.log
WARNING: EINVAL creating segment of size 0x000000028a006000
fix shm parameters in /etc/system or equivalent

Oracle is trying to create a 10G shared memory segment (the size depends on the SGA/PGA sizes), but the operating system (Solaris in this example) responded with an invalid argument (EINVAL) error message. There is a little hint about setting shm parameters in /etc/system.

Prior to Solaris 10, the shmsys:shminfo_shmmax parameter had to be set in /etc/system to the maximum shared memory segment size that can be created. 8 MB is the default value on Solaris 9 and prior versions, whereas one quarter of the physical memory is the default on Solaris 10 and later. On a Solaris 10 (or later) system, it can be verified as shown below:

% prtconf | grep Mem
Memory size: 32760 Megabytes

% id -p
uid=59008(oracle) gid=10001(dba) projid=3(default)

% prctl -n project.max-shm-memory -i project 3
project: 3: default
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 7.84GB - deny -
system 16.0EB max deny -

Now it is clear that the system is using the default value of roughly 8G (one quarter of the 32G of physical memory) in this scenario, whereas the application (Oracle) is trying to create a memory segment (10G) larger than that. Hence the failure.

So, the solution is to configure the system with a value large enough for the shared segment being created, so Oracle succeeds in starting up the database instance.

On Solaris 9 and prior releases, it can be done by adding the following line to /etc/system, followed by a reboot for the system to pick up the new value.

set shminfo_shmmax = 0x000000028a006000

However, the shminfo_shmmax parameter was obsoleted with the release of Solaris 10, and Sun doesn't recommend setting this parameter in /etc/system even though it works as expected.

On Solaris 10 and later, this value can be changed dynamically on a per-project basis with the help of the resource control facilities. This is how we do it on Solaris 10 and later:

% prctl -n project.max-shm-memory -r -v 10G -i project 3

% prctl -n project.max-shm-memory -i project 3
project: 3: default
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 10.0GB - deny -
system 16.0EB max deny -


Note that changes made with the prctl command on a running system are temporary, and will be lost when the system is rebooted. To make the changes permanent, create a project with the projadd command as shown below:

$ projadd -p 102 -c 'eBS benchmark' -U oracle -G dba -K 'project.max-shm-memory=(privileged,10G,deny)' OASB


Finally make sure the project is created with projects -l or cat /etc/project commands.
$ projects -l
...
...
OASB
projid : 102
comment: "eBS benchmark"
users : oracle
groups : dba
attribs: project.max-shm-memory=(privileged,10737418240,deny)

$ cat /etc/project
...
...
OASB:102:eBS benchmark:oracle:dba:project.max-shm-memory=(privileged,10737418240,deny)

With these changes, Oracle would start the database up normally.

SQL> startup
ORACLE instance started.

Total System Global Area 1.0905E+10 bytes
Fixed Size 1316080 bytes
Variable Size 4429966096 bytes
Database Buffers 6442450944 bytes
Redo Buffers 31457280 bytes
Database mounted.
Database opened.

Oracle 10g Installation on Solaris10

1. /etc/hosts file must contain a fully qualified name for the server:
(ip-address) (fully-qualified-machine-name) (machine-name)

2. Set Kernel Parameters
In previous versions of Solaris, kernel parameters were amended by adding entries to the "/etc/system" file, followed by a system reboot; on Solaris 10 these settings are handled with resource control projects instead, as shown in the following steps.

set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=256
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmni=100

3. As the root user, issue the following command.
#projadd oracle

4. Append the following line to the "/etc/user_attr" file.
oracle::::project=oracle

If you've performed a default installation, it is likely that the only kernel parameter you need to alter is "max-shm-memory". To check the current value issue the following command.
# prctl -n project.max-shm-memory -i project oracle

project: 100: oracle
NAME PRIVILEGE VALUE FLAG ACTION RECIPIENT
project.max-shm-memory
privileged 254MB - deny -
system 16.0EB max deny -

To reset this value, make sure at least one session is logged in as the oracle user, then from the root user issue the following commands.

# prctl -n project.max-shm-memory -v 4gb -r -i project oracle
# projmod -s -K "project.max-shm-memory=(priv,4gb,deny)" oracle

The first dynamically resets the value, while the second makes changes to the "/etc/project" file so the value is persistent between reboots.
# cat /etc/project
system:0::::
user.root:1::::
noproject:2::::
default:3::::
group.staff:10::::
oracle:100::::project.max-shm-memory=(priv,4294967296,deny)

5. The Oracle installer seems incapable of recognising kernel parameters set using resource control projects, but if you ignore the warnings the installation completes successfully.
Setup
Add the "SUNWi1cs" and "SUNWi15cs" packages using the "pkgadd" command.

# pkgadd -d /cdrom/sol_10_106_x86/Solaris_10/Product SUNWi1cs SUNWi15cs
(or) copy those package directories from the CD to a local disk and run pkgadd from there:
# pkgadd -d . SUNWi1cs
# pkgadd -d . SUNWi15cs
Processing package instance from

X11 ISO8859-1 Codeset Support(i386) 2.0,REV=2004.10.17.15.04
Copyright 2004 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.

This appears to be an attempt to install the same architecture and
version of a package which is already installed. This installation
will attempt to overwrite this package.

Using as the package base directory.
## Processing package information.
## Processing system information.
16 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWi1cs> [y,n,?] y

Installing X11 ISO8859-1 Codeset Support as <SUNWi1cs>
## Installing part 1 of 1.
Installation of <SUNWi1cs> was successful.
Processing package instance from
X11 ISO8859-15 Codeset Support(i386) 2.0,REV=2004.10.17.15.04
Copyright 2004 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
This appears to be an attempt to install the same architecture and
version of a package which is already installed. This installation
will attempt to overwrite this package.
Using as the package base directory.
## Processing package information.
## Processing system information.
21 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.
Do you want to continue with the installation of <SUNWi15cs> [y,n,?] y
Installing X11 ISO8859-15 Codeset Support as <SUNWi15cs>
## Installing part 1 of 1.
Installation of <SUNWi15cs> was successful.

6. Create the new groups and users:
#groupadd oinstall
#groupadd dba
#groupadd oper
#useradd -g oinstall -G dba -d /export/home/oracle oracle
#mkdir /export/home/oracle
#chown oracle:oinstall /export/home/oracle
#passwd -r files oracle

7. Create the directories in which the Oracle software will be installed:
#mkdir -p /u01/app/oracle/product/10.2.0/db_1
#chown -R oracle:oinstall /u01

8. Login as the oracle user and add the following lines at the end of the .profile file, making sure you have set the correct ORACLE_BASE value:

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
# Select the appropriate ORACLE_BASE
#ORACLE_BASE=/export/home/oracle; export ORACLE_BASE
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_SID=TSH1; export ORACLE_SID
PATH=$ORACLE_HOME/bin:$PATH; export PATH

9. Installation
Log into the oracle user. If you are using X emulation then set the DISPLAY environmental variable:

$DISPLAY=:0.0; export DISPLAY

10. Start the Oracle Universal Installer (OUI) by issuing the following command in the database directory:
$./runInstaller

To check whether the kernel is 32-bit or 64-bit: -
#isainfo -kv
To check the RAM size: -
#prtconf | head -3 | grep Mem

Wednesday, September 9, 2009

MEMORY_TARGET in Oracle 11g

Memory management in Oracle 11g is impressive. Up to Oracle 9i, one of the more tedious tasks was defining the memory parameters (SGA and PGA), and there were rules of thumb to follow for each parameter. Now the DBA can sit back and relax, since Oracle has automated memory management completely in 11g.

A quick look at older versions:

Oracle 8i: –
db_block_buffer
shared_pool_size
large_pool_size
java_pool_size
sort_area_size
PGA-related parameters, etc.

Oracle 9i: - introduced PGA_AGGREGATE_TARGET.

Oracle 10g: - Continued this trend by automating SGA management using the SGA_TARGET and PGA_AGGREGATE_TARGET parameters.

Oracle 11g: - Fully automated using MEMORY_TARGET

From Oracle 10g onwards this was largely automated, but you still had to work out values for the two parameters SGA_TARGET and PGA_AGGREGATE_TARGET. Now you can take a long breath and leave it alone, because in 11g both of these parameters are auto-tuned.

In Oracle 10g I have seen many bugs related to setting SGA_TARGET; some people work around them by falling back to manual SGA settings.

Automatic Memory Management (AMM) is only supported on the major platforms: AIX, Linux, Solaris, Windows, and HP-UX.

There are 2 initialization parameters for AMM configuration:

MEMORY_MAX_TARGET: - The maximum size MEMORY_TARGET can be increased to without an instance restart (similar to SGA_MAX_SIZE in earlier versions). If MEMORY_MAX_TARGET is not specified, it defaults to the MEMORY_TARGET setting.

Using AMM, the SGA_TARGET and PGA_AGGREGATE_TARGET act as minimum size settings for their respective memory areas.

MEMORY_TARGET: - The total shared memory available for Oracle to use when dynamically controlling the SGA and PGA. This parameter is dynamic, so the total amount of memory available to Oracle can be increased or decreased, provided it does not exceed the MEMORY_MAX_TARGET limit.
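To check the current settings from SQL*Plus (a quick sanity check):

SQL> show parameter memory_target
SQL> show parameter memory_max_target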

Automatic Memory Management Setup: -
DBCA also allows you to configure automatic memory management during database creation.

We can also set the appropriate MEMORY_TARGET and MEMORY_MAX_TARGET initialization parameters manually, either before creating the database or afterwards.

Enabling automatic memory management on a system that didn't previously use it is a simple task.

Use the following calculation:

MEMORY_TARGET = SGA_TARGET + GREATEST(PGA_AGGREGATE_TARGET, "maximum PGA allocated")
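As a rough sketch of gathering the inputs (the 2G/1G/2G figures below are purely illustrative), the "maximum PGA allocated" statistic comes from V$PGASTAT:

SQL> show parameter sga_target
SQL> show parameter pga_aggregate_target
SQL> select round(value/1024/1024) "Max PGA (MB)"
  2  from v$pgastat
  3  where name = 'maximum PGA allocated';

-- Example: SGA_TARGET = 2G, PGA_AGGREGATE_TARGET = 1G, maximum PGA allocated = 2G
-- MEMORY_TARGET = 2G + GREATEST(1G, 2G) = 4G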

Assuming our required setting was 4G, we might issue the following statements.

$ sqlplus "/ as sysdba"

Set the static parameter. Leave some room for possible future growth without restart.

ALTER SYSTEM SET MEMORY_MAX_TARGET=5G SCOPE=SPFILE;

Set the dynamic parameters to zero, giving Oracle full control over both areas.

ALTER SYSTEM SET PGA_AGGREGATE_TARGET=0 SCOPE=SPFILE;
ALTER SYSTEM SET SGA_TARGET=0 SCOPE=SPFILE;
ALTER SYSTEM SET MEMORY_TARGET=4G SCOPE=SPFILE;

Shutdown and restart the instance: -
SHUTDOWN IMMEDIATE;
STARTUP;

Once the database is restarted, the MEMORY_TARGET parameter can be amended as required without a further instance restart.

ALTER SYSTEM SET MEMORY_TARGET=4G SCOPE=BOTH;

Dictionary View Information: -

V$MEMORY_TARGET_ADVICE
V$MEMORY_CURRENT_RESIZE_OPS
V$MEMORY_RESIZE_OPS
V$MEMORY_DYNAMIC_COMPONENTS
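For example, the advice view can be queried like this to see the estimated effect of different MEMORY_TARGET sizes:

SQL> select memory_size, memory_size_factor, estd_db_time_factor
  2  from v$memory_target_advice
  3  order by memory_size;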

You can also configure AMM and get a lot of information through Enterprise Manager, with graphs, etc.

Tuesday, September 8, 2009

Oracle 11g Installation on Linux 5

This article covers the installation of Oracle Database 11g on Enterprise Linux 5 and CentOS 5. The server needs a minimum of 2 GB of RAM for the installation, and swap should be double the physical RAM.
Set the Linux Kernel Parameters:
What is Shared Memory?
Shared memory allows processes to access common structures and data by placing them in shared memory segments. It's the fastest form of IPC (Interprocess Communication) available since no kernel involvement occurs when data is passed between the processes. In fact, data does not need to be copied between the processes.
Oracle uses shared memory segments for the SGA (Shared Global Area), which is an area of memory shared by all Oracle background and foreground processes. The size of the SGA has a major impact on Oracle's performance, since it holds the database buffer cache and much more.
Setting SHMMAX Parameter
This parameter defines the maximum size in bytes for a shared memory segment. Since the SGA is comprised of shared memory, SHMMAX can potentially limit the size of the SGA.
Setting SHMMNI Parameter
This parameter sets the maximum number of shared memory segments system wide.
Setting SHMALL Parameter
This parameter sets the total amount of shared memory in bytes that can be used at one time on the system.
The SEMMSL Parameter
This parameter defines the maximum number of semaphores per semaphore set.
Oracle recommends setting SEMMSL to the value of the largest PROCESSES init.ora parameter of any database on the Linux system.
The SEMMNI Parameter
This parameter defines the maximum number of semaphore sets in the entire Linux system.
The SEMMNS Parameter
This parameter defines the total number of semaphores (not semaphore sets) in the entire Linux system.
The SEMOPM Parameter
This parameter defines the maximum number of semaphore operations that can be performed per semop(2) system call.
Setting File Handles
The maximum number of file handles denotes the maximum number of open files that you can have on the Linux system.
Setting System Wide Limit for File Handles
The value in /proc/sys/fs/file-max sets the maximum number of file handles or open files that the Linux kernel will allocate. When you get error messages about running out of file handles, then you might want to raise this limit.
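Before changing anything, the current values can be checked (a quick sketch for a typical Linux box):

# /sbin/sysctl kernel.shmmax kernel.shmall kernel.shmmni kernel.sem fs.file-max
# ipcs -lm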
Change the parameters:

#vi /etc/sysctl.conf
Add the following:
kernel.shmmax = 2147483648
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536 # 512 * PROCESSES
net.core.rmem_default=4194304
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=262144
net.ipv4.ip_local_port_range = 1024 65000
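To make the kernel pick up the new values without a reboot, reload the settings from the file:

# /sbin/sysctl -p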

#vi /etc/hosts
IP-address Machinename.domainname Hostname

#vi /etc/security/limits.conf
Add or change /etc/security/limits.conf file:
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384

#vi /etc/pam.d/login
session required pam_limits.so
session required /lib/security/pam_limits.so
Very important step: SELinux has to be disabled:

#vi /etc/selinux/config
Change or add
SELINUX=disabled

The following packages (RPMs) are required. From Enterprise Linux Disk 1:
Insert the first CD, mount it, and change to the Server directory (cd /media/cdrom/Server):
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
cd /
eject

Insert the Disk 2 CD, mount it, and change to the Server directory (cd /media/cdrom/Server):
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-devel-0.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject

Insert the Disk 3 CD, mount it, and change to the Server directory (cd /media/cdrom/Server):
rpm -Uvh sysstat-7.*
cd /
eject
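Once all three CDs have been processed, a quick check that the packages are present (package names may vary slightly by release) is:

# rpm -q binutils elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel \
gcc gcc-c++ libaio libaio-devel libgcc libstdc++ libstdc++-devel \
compat-libstdc++-33 make sysstat unixODBC unixODBC-devel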

Create user and groups: -
Adding Groups: -
groupadd oinstall
groupadd dba
groupadd oper
groupadd asmadmin

Adding user and assign the group: -
useradd -g oinstall -G dba,oper,asmadmin oracle

Add password for Oracle user
#passwd oracle

Note that the "asmadmin" group is used for ASM administration.
Make the directories in which the Oracle binaries will be installed:

#mkdir -p /data1/app/oracle/product/11/db_1
#chown -R oracle:oinstall /data1
#chmod -R 777 /data1

#vi /home/oracle/.bash_profile
Add the following: -
ORACLE_HOSTNAME=machinename.domainname; export ORACLE_HOSTNAME
ORACLE_BASE=/data1/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11/db_1; export ORACLE_HOME
ORACLE_SID=orcl; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

Log in as the oracle user and run the installation steps:
Set the display first
$export DISPLAY=hostname:0.0
Go to the installation Disk1 directory
$cd Disk1
$./runInstaller
For the rest, follow the GUI provided by Oracle.

Wednesday, August 19, 2009

How to kill a locking session: -

1) Create a .sh file to automate the job through crontab: -
kill.sh: -

# .bash_profile
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
PATH=${PATH}:$ORACLE_HOME/bin; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

cd $ORACLE_HOME/kill
sqlplus system/fopsdb2008@fopsfat1 @/u01/app/oracle/product/10.2.0/db_1/kill/kill.sql
exit

2) Create a .sql file to get the OS process IDs of the blocking sessions: -
kill.sql: -

set heading off
set echo off
spool /u01/app/oracle/product/10.2.0/db_1/kill/kill_in.sh
select 'kill -9 '|| vp.spid from v$session vs, v$process vp
where vs.paddr = vp.addr
and vs.sid in (select s1.sid from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1 and l1.id2 = l2.id2);
spool off

select name from v$database;
!chmod 777 /u01/app/oracle/product/10.2.0/db_1/kill/kill_in.sh
!export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
!cd /u01/app/oracle/product/10.2.0/db_1/kill
!pwd
!./kill_in.sh
Exit

3) Make a crontab entry to automate this job every minute: -
crontab -e: -
* * * * * /u01/app/oracle/product/10.2.0/db_1/kill/kill.sh > /dev/null 2>&1
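As a gentler alternative (a sketch, not part of the original scripts), the blocking sessions could be killed inside the database with ALTER SYSTEM KILL SESSION rather than an OS-level kill -9; a query like the following generates the statements:

select distinct 'alter system kill session '''||s1.sid||','||s1.serial#||''' immediate;'
from v$session s1, v$lock l1, v$lock l2
where s1.sid = l1.sid
and l1.block = 1
and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2;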

Monday, August 17, 2009

Creating Synonyms and Granting permissions to another user: -

CREATE OR REPLACE PROCEDURE CREATE_SYNONYMS_GRANTS(P_Owners varchar2) IS
  cursor obj_crs is
    select owner, object_name, object_type
      from all_objects
     where owner = upper(P_Owners)
       and upper(object_type) in ('TABLE','VIEW','PROCEDURE','FUNCTION','SEQUENCE','SYNONYM','TYPE','PACKAGE');
  obj_REC           obj_CRS%ROWTYPE;
  l_create_synonyms varchar2(500);
  l_grants_objects  varchar2(500);
BEGIN
  FOR obj_REC IN obj_CRS LOOP
    -- create a synonym <OWNER>_USER.<object> pointing at <OWNER>.<object>
    begin
      l_create_synonyms := 'create synonym '||obj_REC.owner||'_USER.'||obj_REC.object_name||' for '||obj_REC.owner||'.'||obj_REC.object_name;
      dbms_output.put_line(l_create_synonyms||';');
      EXECUTE IMMEDIATE l_create_synonyms;
    exception
      when others then
        dbms_output.put_line(SQLERRM);
    end;

    -- grant the appropriate privileges on the underlying object to <OWNER>_USER
    begin
      CASE obj_REC.object_type
        WHEN 'TABLE'     THEN l_grants_objects := 'grant select, update, delete, insert on '||obj_REC.owner||'.'||obj_REC.object_name||' to '||obj_REC.owner||'_user';
        WHEN 'VIEW'      THEN l_grants_objects := 'grant select, update, delete, insert on '||obj_REC.owner||'.'||obj_REC.object_name||' to '||obj_REC.owner||'_user';
        WHEN 'PROCEDURE' THEN l_grants_objects := 'GRANT EXECUTE ON '||obj_REC.owner||'.'||obj_REC.object_name||' to '||obj_REC.owner||'_user';
        WHEN 'FUNCTION'  THEN l_grants_objects := 'GRANT EXECUTE ON '||obj_REC.owner||'.'||obj_REC.object_name||' to '||obj_REC.owner||'_user';
        WHEN 'PACKAGE'   THEN l_grants_objects := 'GRANT EXECUTE ON '||obj_REC.owner||'.'||obj_REC.object_name||' to '||obj_REC.owner||'_user';
        WHEN 'SEQUENCE'  THEN l_grants_objects := 'GRANT SELECT, ALTER ON '||obj_REC.owner||'.'||obj_REC.object_name||' to '||obj_REC.owner||'_user';
        WHEN 'TYPE'      THEN l_grants_objects := 'GRANT EXECUTE ON '||obj_REC.owner||'.'||obj_REC.object_name||' to '||obj_REC.owner||'_user';
        ELSE l_grants_objects := NULL;   -- e.g. SYNONYM: no direct grant needed
      END CASE;
      IF l_grants_objects IS NOT NULL THEN
        dbms_output.put_line(l_grants_objects||';');
        EXECUTE IMMEDIATE l_grants_objects;
      END IF;
    exception
      when others then
        dbms_output.put_line(SQLERRM);
    end;
  end loop;
End;
/

Ex: -
exec CREATE_SYNONYMS_GRANTS ('OBJECTS_OWNER_NAME');
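To see the DDL that the procedure generates via dbms_output, enable server output in SQL*Plus before running it:

SQL> set serveroutput on size 1000000
SQL> exec CREATE_SYNONYMS_GRANTS ('OBJECTS_OWNER_NAME');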

Tuesday, August 11, 2009

Crontab - Quick reference :

Setting up cronjobs in Unix and Solaris

Cron is a Unix/Solaris utility that allows tasks to be run automatically in the background at regular intervals by the cron daemon. These tasks are often termed cron jobs in Unix/Solaris.

Crontab (CRON TABle) is a file which contains the schedule of cron entries to be run at specified times.

1. Crontab Restrictions

You can execute crontab if your name appears in the file /usr/lib/cron/cron.allow. If that file does not exist, you can use crontab if your name does not appear in the file /usr/lib/cron/cron.deny.

If only cron.deny exists and is empty, all users can use crontab. If neither file exists, only the root user can use crontab. The allow/deny files consist of one user name per line.

2. Crontab Commands

export EDITOR=vi Specify the editor used to open your crontab file.

crontab -e Edit your crontab file, or create one if it doesn't already exist.

crontab -l Display your crontab file.

crontab -r Remove your crontab file.

crontab -v Display the last time you edited your crontab file. (This option is only available on a few systems.)

3. Crontab file

Crontab syntax :-

A crontab file has five fields for specifying day, date and time, followed by the command to be run at that interval.

* * * * * command to be executed
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +------- month (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- min (0 - 59)

A * in a value field means all legal values for that column (the ranges shown above).

The value column can have a * or a list of elements separated by commas. An element is either a number in the ranges shown above or two numbers in the range separated by a hyphen (meaning an inclusive range).

Note: The specification of days can be made in two fields: day of month and day of week. If both are specified in an entry, they are cumulative, meaning the command runs whenever either field matches (see the example below).
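For example, an entry like the following (hypothetical script path) runs at noon on the 1st of every month and also at noon every Monday:

0 12 1 * 1 /home/someuser/scripts/report.sh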

4. Crontab Example

A line in crontab file like below removes the tmp files from /home/someuser/tmp each day at 6:30 PM.

30 18 * * * rm /home/someuser/tmp/*

Changing the parameter values as below will cause this command to run at different time schedule below :

30 0 1 1,6,12 * -- 00:30 Hrs on 1st of Jan, June & Dec.

0 20 * 10 1-5 --8.00 PM every weekday (Mon-Fri) only in Oct.

0 0 1,10,15 * * -- midnight on 1st ,10th & 15th of month

5,10 0 10 * 1 -- At 00:05 and 00:10 every Monday and on the 10th of every month

Note : If you inadvertently enter the crontab command with no argument(s), do not attempt to get out with Control-d. This removes all entries in your crontab file. Instead, exit with Control-c.

5. Crontab Environment

Cron invokes the command from the user's HOME directory with the shell (/usr/bin/sh).

cron supplies a default environment for every shell, defining:

HOME=user's-home-directory

LOGNAME=user's-login-id

PATH=/usr/bin:/usr/sbin:.

SHELL=/usr/bin/sh

Users who desire to have their .profile executed must explicitly do so in the crontab entry or in a script called by the entry.

6. Disable Email

By default, cron sends an email to the user account executing the cron job. If this is not needed, put the following at the end of the cron job line.

>/dev/null 2>&1

7. Generate log file

To collect the cron execution log in a file:

30 18 * * * rm /home/someuser/tmp/* > /home/someuser/cronlogs/clean_tmp_dir.log
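To also capture error messages in the same log file, redirect stderr as well:

30 18 * * * rm /home/someuser/tmp/* > /home/someuser/cronlogs/clean_tmp_dir.log 2>&1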

8. Next Steps

This article covered a significant aspect of system administration: setting up cron jobs. Unix administration involves many different tasks; some of them are covered on this website, but there are many areas not covered here.

The following books are useful references to have on your bookshelf if you are involved in Unix system administration:

Essential System Administration, Third Edition by Æleen Frisch

Solaris Operating Environment Boot Camp by David Rhodes, Dominic Butler


Tuesday, July 21, 2009

CREATING ASM INSTANCE USING SATA HARD DISK IN LINUX

Applies to: Linux Kernel - Version: 1.0 to 2.0 Linux x86-64 Linux x86

Goal

ASMLib is a support library for the Automatic Storage Management feature of Oracle Database 10g. This document is a set of tips for installing the library and its supporting driver. The complete installation guide is part of the Oracle Database 10g Documentation.

This document describes the steps required to install the Linux-specific ASM library and its associated driver. This library is provided to enable ASM I/O to Linux disks without the limitations of the standard Unix I/O API. The steps below must be performed by the system administrator.

Solution

Locating the ASMLib Packages

The ASMLib software is available from the Oracle Technology Network. Go to the ASMLib download page and follow the link for your platform: http://www.oracle.com/technology/tech/linux/asmlib/index.html
Select the proper version for your OS.
You will see 4-6 packages for your Linux platform. The oracleasmlib package provides the actual ASM library. The oracleasm-support package provides the utilities used to get the ASM driver up and running. Both of these packages need to be installed.

The remaining packages provide the kernel driver for the ASM library. Each package provides the driver for a different kernel. You must install the appropriate package for the kernel you are running. Use the "uname -r" command to determine the version of the kernel. The oracleasm kernel driver package will have that version string in its name. For example, if you were running Red Hat Enterprise Linux 4 AS, and the kernel you were using was the 2.6.9-5.0.5.ELsmp kernel, you would choose the oracleasm-2.6.9-5.0.5-ELsmp package.

Installing the ASMLib Packages

So, to install these packages on RHEL4 on an Intel x86 machine, you might use the command:

rpm -Uvh oracleasm-support-2.0.0-1.i386.rpm \
oracleasmlib-2.0.0-1.i386.rpm \
oracleasm-2.6.9-5.0.5-ELsmp-2.0.0-1.i686.rpm


If you were on a different machine, the package names would be slightly different, replacing 'i686' with the appropriate architecture. Use the package names relevant for your distribution.

NOTE: Distributions with the Linux 2.4 kernel still use the 1.0 kernel driver, while distributions based on the Linux 2.6 kernel use the 2.0 kernel driver. All distributions use version 2.0 of the support and library packages. See Note 394954.1 for more information.


Once the command completes, ASMLib is now installed on the system.

Making the ASM Driver Available

Now that the ASMLib software is installed, a few steps have to be taken by the system administrator to make the ASM driver available. The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script, /etc/init.d/oracleasm.

Run the /etc/init.d/oracleasm script with the 'configure' option. It will ask for the user and group that default to owning the ASM driver access point. If the database was running as the 'oracle' user and the 'dba' group, the output would look like this:

[root@ca-test1 /]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration [ OK ]
Creating /dev/oracleasm mount point [ OK ]
Loading module "oracleasm" [ OK ]
Mounting ASMlib driver filesystem [ OK ]
Scanning system for ASM disks [ OK ]


This should load the oracleasm.o driver module and mount the ASM driver filesystem. By selecting enabled = 'y' during the configuration, the system will always load the module and mount the filesystem on boot.

The automatic start can be enabled or disabled with the 'enable' and 'disable' options to /etc/init.d/oracleasm:

[root@ca-test1 /]# /etc/init.d/oracleasm disable
Writing Oracle ASM library driver configuration [ OK ]
Unmounting ASMlib driver filesystem [ OK ]
Unloading module "oracleasm" [ OK ]

[root@ca-test1 /]# /etc/init.d/oracleasm enable
Writing Oracle ASM library driver configuration [ OK ]
Loading module "oracleasm" [ OK ]
Mounting ASMlib driver filesystem [ OK ]
Scanning system for ASM disks [ OK ]


Creating/Deleting/Querying/Scanning ASM Disks

The system administrator has one last task. Every disk that ASMLib is going to be accessing needs to be made available. This is accomplished by creating an ASM disk. The /etc/init.d/oracleasm script is again used for this task:

[root@ca-test1 /]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdg1
Creating Oracle ASM disk "VOL1" [ OK ]


Disk names are ASCII capital letters, numbers, and underscores. They must start with a letter.

Disks that are no longer used by ASM can be unmarked as well:

[root@ca-test1 /]# /etc/init.d/oracleasm deletedisk VOL1
Deleting Oracle ASM disk "VOL1" [ OK ]


Any operating system disk can be queried to see if it is used by ASM:

[root@ca-test1 /]# /etc/init.d/oracleasm querydisk /dev/sdg1
Checking if device "/dev/sdg1" is an Oracle ASM disk [ OK ]
[root@ca-test1 /]# /etc/init.d/oracleasm querydisk /dev/sdh1
Checking if device "/dev/sdh1" is an Oracle ASM disk [FAILED]


Existing disks can be listed and queried:

[root@ca-test1 /]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
[root@ca-test1 /]# /etc/init.d/oracleasm querydisk VOL1
Checking for ASM disk "VOL1" [ OK ]

When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the 'createdisk' command on one node, and then run 'scandisks' on every other node:

[root@ca-test1 /]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks [ OK ]

ASMLib uses discovery strings to determine what disks ASM is asking for. The generic Linux ASMLib uses glob strings. The string must be prefixed with "ORCL:". Disks are specified by name. A disk created with the name "VOL1" can be discovered in ASM via the discovery string "ORCL:VOL1". Similarly, all disks that start with the string "VOL" can be queried with the discovery string "ORCL:VOL*".

Disks cannot be discovered with path names in the discovery string. If the prefix is missing, the generic Linux ASMLib will ignore the discovery string completely, expecting that it is intended for a different ASMLib. The only exception is the empty string (""), which is considered a full wildcard. This is precisely equivalent to the discovery string "ORCL:*".
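For example, the ASM instance's discovery string could be set like this (a sketch; adjust the pattern to your own disk names):

SQL> alter system set asm_diskstring='ORCL:VOL*' scope=both;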

NOTE: Once you mark your disks with Linux ASMLib, Oracle Database 10g R1 (10.1) OUI will not be able to discover your disks. It is recommended that you complete a Software Only install and then use DBCA to create your database (or use the custom install).


3) Doc ID 249992.1

After this, before running DBCA to create the database instance, use these commands.

Say you have two disks, "/dev/sdf" and "/dev/sdg".

Determine whether those devices are already bound as raw devices:

$ /usr/bin/raw -qa

If not, add entries for the devices in /etc/sysconfig/rawdevices, e.g.:

/dev/raw/raw1 /dev/sdf

/dev/raw/raw2 /dev/sdg

Set the owner, group and permissions on the device file for each raw device:

$ chown oracle:dba /dev/raw/rawN
$ chmod 660 /dev/raw/rawN

Bind disk devices to raw devices:

$ service rawdevices restart

So from DBCA you can see the devices "raw1" and "raw2".

4) Before you run DBCA (or when you run it), it will ask you to install the cluster software. This can be done by running the localconfig script in $ORACLE_HOME/bin:

./localconfig add

crsctl check crs - checks the viability of the CRS stack

crsctl check cssd - checks the viability of CSS

crsctl check crsd - checks the viability of CRS

crsctl check evmd - checks the viability of EVM

$ asmcmd - to manage ASM files through the OS.
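For example (assuming the environment points at the ASM instance and a diskgroup named DATA exists):

$ export ORACLE_SID=+ASM
$ asmcmd lsdg - lists diskgroups and their free space
$ asmcmd ls +DATA - lists files under the DATA diskgroup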

5) Then run DBCA. If you get the error "can't copy /tmp/oratab to /etc/oratab", recreate oratab by running root.sh.

Other useful sites for RAC and ASM:

http://www.oracledba.org/10g/asm/