SUSE installation media:
SLE-12-Server-DVD-x86_64-GM-DVD1.iso
1. Disable the firewall:
suse02:~ # systemctl list-unit-files|grep fire
SuSEfirewall2.service enabled
SuSEfirewall2_init.service enabled
SuSEfirewall2_setup.service enabled
suse02:~ # service SuSEfirewall2_setup stop
suse02:~ # chkconfig SuSEfirewall2_setup off
suse02:~ # service SuSEfirewall2_init stop
suse02:~ # chkconfig SuSEfirewall2_init off
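On SLES 12 the same can be done natively through systemd rather than the legacy service/chkconfig wrappers; a minimal sketch using the unit names listed above:
systemctl stop SuSEfirewall2_setup.service SuSEfirewall2_init.service
systemctl disable SuSEfirewall2_setup.service SuSEfirewall2_init.service SuSEfirewall2.service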
2. Adjust the SSH configuration so that the root user can log in remotely:
suse02:~ # vi /etc/ssh/sshd_config
suse02:~ # service sshd restart
suse02:~ # grep PasswordAuthentication /etc/ssh/sshd_config
PasswordAuthentication yes
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication, then enable this but set PasswordAuthentication
suse02:~ # grep PermitRootLogin /etc/ssh/sshd_config
PermitRootLogin yes
# the setting of "PermitRootLogin without-password".
suse02:~ #
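In summary, the edit consists of setting the following two options in /etc/ssh/sshd_config and restarting sshd (a sketch; loosen these only as far as the local security policy allows):
# in /etc/ssh/sshd_config:
#   PermitRootLogin yes
#   PasswordAuthentication yes
systemctl restart sshd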
3. Adjust the host name and IP address using the YaST2 graphical interface.
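A non-GUI sketch of the same host name and address changes (assumes the wicked network framework used by SLES 12; the file name and values below are examples for suse01):
hostnamectl set-hostname suse01
# /etc/sysconfig/network/ifcfg-eth0 would contain, for example:
#   BOOTPROTO='static'
#   IPADDR='192.168.1.11/24'
wicked ifreload all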
4. Edit the /etc/hosts file:
suse01:~ # vi /etc/hosts
suse01:~ # cat /etc/hosts
#
# hosts This file describes a number of hostname-to-address
# mappings for the TCP/IP subsystem. It is mostly
# used at boot time, when no name servers are running.
# On small systems, this file can be used instead of a
# "named" name server.
# Syntax:
#
# IP-Address Full-Qualified-Hostname Short-Hostname
#
127.0.0.1 localhost
# special IPv6 addresses
::1 localhost ipv6-localhost ipv6-loopback
fe00::0 ipv6-localnet
ff00::0 ipv6-mcastprefix
ff02::1 ipv6-allnodes
ff02::2 ipv6-allrouters
ff02::3 ipv6-allhosts
10.10.5.11 suse01-priv suse01-priv.abc.com
192.168.1.11 suse01 suse01.abc.com
192.168.1.13 suse01-vip suse01-vip.abc.com
10.10.5.12 suse02-priv suse02-priv.abc.com
192.168.1.12 suse02 suse02.abc.com
192.168.1.14 suse02-vip suse02-vip.abc.com
192.168.1.15 suse-scan suse-scan.abc.com
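A quick reachability check of the public and private names after editing /etc/hosts (a sketch; run on both nodes):
for h in suse01 suse02 suse01-priv suse02-priv; do
    ping -c 1 "$h"
done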
5. Check the required system packages:
suse02:~ # rpm -q binutils gcc gcc-32bit gcc-c++ glibc glibc-32bit glibc-devel glibc-devel-32bit ksh libaio libaio-32bit libaio-devel libaio-devel-32bit libstdc++33 libstdc++33-32bit libstdc++43 libstdc++43-32bit libstdc++43-devel libstdc++43-devel-32bit libgcc43 libstdc++-devel make sysstat unixODBC unixODBC-devel unixODBC-32bit unixODBC-devel-32bit libcap1
binutils-2.24-2.165.x86_64
gcc-4.8-6.189.x86_64
gcc-32bit-4.8-6.189.x86_64
gcc-c++-4.8-6.189.x86_64
glibc-2.19-17.72.x86_64
glibc-32bit-2.19-17.72.x86_64
glibc-devel-2.19-17.72.x86_64
glibc-devel-32bit-2.19-17.72.x86_64
package ksh is not installed
package libaio is not installed
package libaio-32bit is not installed
libaio-devel-0.3.109-17.15.x86_64
package libaio-devel-32bit is not installed
package libstdc++33 is not installed
package libstdc++33-32bit is not installed
package libstdc++43 is not installed
package libstdc++43-32bit is not installed
package libstdc++43-devel is not installed
package libstdc++43-devel-32bit is not installed
package libgcc43 is not installed
libstdc++-devel-4.8-6.189.x86_64
make-4.0-2.107.x86_64
sysstat-10.2.1-1.11.x86_64
package unixODBC is not installed
package unixODBC-devel is not installed
package unixODBC-32bit is not installed
package unixODBC-devel-32bit is not installed
package libcap1 is not installed
The "default RPMs" base pattern of a default Linux installation may already include some of the packages required by the installation guide:
01) binutils-2.21.1-0.7.25 (x86_64)
02) glibc-2.11.3-17.31.1 (x86_64)
03) ksh-93u-0.6.1 (x86_64)
04) libaio-0.3.109-0.1.46 (x86_64)
05) libstdc++33-3.3.3-11.9 (x86_64)
06) libstdc++33-32bit-3.3.3-11.9 (x86_64)
07) libstdc++46-4.6.1-20110701-0.13.9 (x86_64)
08) libgcc46-4.6.1-20110701-0.13.9 (x86_64)
09) make-3.81 (x86_64)
01) gcc-4.3-62.198 (x86_64)
02) gcc-c++-4.3-62.198 (x86_64)
03) glibc-devel-2.11.3-17.31.1 (x86_64)
04) libaio-devel-0.3.109-0.1.46 (x86_64)
05) libstdc++43-devel-4.3.4-20091019-0.22.17 (x86_64)
06) sysstat-8.1.5-7.32.1 (x86_64)
07) libcap1-1.10-6.10 (x86_64)
Install the missing packages with YaST2.
SuSE SLES 12 certified with Oracle Database 12.1.0.2
By Mike Dietrich-Oracle on Jan 21, 2016
Puh ... I've got many mails over several months asking about the current status of certification of SuSE SLES12 for Oracle Database 12.1.0.2. It took a while - and I believe it was not in our hands. But anyhow ... finally ...
SuSE Enterprise Linux SLES12 is now certified with Oracle Database 12.1.0.2
See Release Notes for additional package requirements
Minimum kernel version: 3.12.49-11-default
Minimum PATCHLEVEL: 1
Additional Notes
Edit CV_ASSUME_DISTID=SUSE11 parameter in database/stage/cvu/cv/admin/cvu_config & grid/stage/cvu/cv/admin/cvu_config
Apply Patch 20737462 to address CVU issues relating to lack of reference data
Install libcap1 (libcap2 libraries are installed by default); i.e. libcap1-1.10-59.61.x86_64 & libcap1-32bit-1.10-59.61.x86_64
ksh is replaced by mksh; e.g. mksh-50-2.13.x86_64
libaio has been renamed to libaio1 (i.e. libaio1-0.3.109-17.15.x86_64); ensure that libaio1 is installed
Note: OUI may be invoked with -ignoreSysPrereqs to temporarily workaround ongoing CVU check failures
I had a SuSE Linux running on my previous laptop as dual-boot for quite a while. And I still like SuSE way more than any other Linux distributions potentially because of the fact that it was the Linux I started developing some basic Unix skills. I picked up my first Linux at the S.u.S.E. "headquarters" near Fürth Hauptbahnhof in 1994. I used to live just a few kilometers away and the version 0.9 a friend had given to me on a bunch of 3.5'' floppies had a disk failure. I believe the entire package did cost DM 19,90 by then - today roughly 10 Euro when you don't consider inflation - and was distributed on floppy disks. The reason for me to buy it was simply that I had no clue about Linux - but SuSE had a book delivered with the distribution.
This is a distribution I had purchased later on as well - they've had good discounts for students by then.
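Based on the certification notes above, the SLES 12 equivalents of the missing packages can be installed with zypper; a sketch (the package names follow the notes above and may vary with the service pack and registered repositories):
zypper install mksh libaio1 libcap1 libcap1-32bit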
6. Confirm that the multipath software is installed:
suse01:~ # rpm -qa | grep device
device-mapper-1.02.78-21.7.x86_64
device-mapper-32bit-1.02.78-21.7.x86_64
libimobiledevice4-1.1.5-4.92.x86_64
suse02:/mnt/suse/x86_64 # cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf
suse02:/mnt/suse/x86_64 # service multipathd status
multipathd.service - Device-Mapper Multipath Device Controller
Loaded: loaded (/usr/lib/systemd/system/multipathd.service; disabled)
Active: inactive (dead)
suse02:/mnt/suse/x86_64 # service multipath start
service: no such service multipath
suse02:/mnt/suse/x86_64 # service multipathd start
suse02:/mnt/suse/x86_64 # service multipathd status
multipathd.service - Device-Mapper Multipath Device Controller
Loaded: loaded (/usr/lib/systemd/system/multipathd.service; disabled)
Active: active (running) since Mon 2016-07-04 09:51:11 EDT; 7s ago
Process: 16294 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
Main PID: 16300 (multipathd)
Status: "running"
CGroup: /system.slice/multipathd.service
└─16300 /sbin/multipathd -d -s
suse02:/mnt/suse/x86_64 # chkconfig multipathd on
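The systemd-native equivalent of the service/chkconfig calls above (sketch):
systemctl enable multipathd.service
systemctl start multipathd.service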
7. Remove the orarun package:
suse02:/mnt/suse/x86_64 # env | grep ORA
ORACLE_SID=orcl
ORACLE_BASE=/opt/oracle
ORACLE_HOME=/opt/oracle/product/12cR1/db
suse02:/mnt/suse/x86_64 # rpm -qa | grep orarun
orarun-2.0-12.7.x86_64
suse02:/mnt/suse/x86_64 # rpm -ef orarun-2.0-12.7.x86_64
error: Failed dependencies:
orarun is needed by (installed) patterns-sles-oracle_server-12-58.8.x86_64
suse02:/mnt/suse/x86_64 # rpm -e orarun-2.0-12.7.x86_64 patterns-sles-oracle_server-12-58.8.x86_64
suse02:/mnt/suse/x86_64 # rpm -qa | grep orarun
Log in again so that the environment variables above no longer take effect:
suse01:~ # exit
logout
Connection closed by foreign host.
Disconnected from remote host(192.168.1.11:22) at 21:58:04.
Type `help' to learn how to use Xshell prompt.
[c:\~]$ ssh 192.168.1.11
Connecting to 192.168.1.11:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.
Last login: Mon Jul 4 09:11:23 2016 from 192.168.1.1
suse01:~ # env | grep ORA
8. Remove the user and groups created by the orarun package:
suse02:~ # id oracle
uid=489(oracle) gid=488(oinstall) groups=487(DBA),488(oinstall)
suse02:~ # id grid
id: grid: no such user
suse02:~ # groupdel oinstall
groupdel: cannot remove the primary group of user 'oracle'
suse02:~ # userdel oracle
no crontab for oracle
suse02:~ # groupdel oinstall
suse02:~ # groupdel dba
suse02:~ #
9. Check the iSCSI services:
suse01:~ # systemctl list-unit-files | grep scsi
iscsi.service enabled
iscsid.service disabled
iscsiuio.service disabled
iscsid.socket enabled
iscsiuio.socket disabled
suse01:~ # service iscsi status
iscsi.service - Login and scanning of iSCSI devices
Loaded: loaded (/usr/lib/systemd/system/iscsi.service; enabled)
Active: active (exited) since Mon 2016-07-04 22:18:24 EDT; 2h 40min ago
Docs: man:iscsiadm(8)
man:iscsid(8)
Main PID: 2155 (code=exited, status=21)
CGroup: /system.slice/iscsi.service
Jul 04 22:18:24 suse01 iscsiadm[2155]: iscsiadm: No records found
suse01:~ # rpm -qa | grep scsi
yast2-iscsi-client-3.1.17-1.29.noarch
lsscsi-0.27-4.17.x86_64
open-iscsi-2.0.873-20.4.x86_64
yast2-iscsi-lio-server-3.1.11-1.24.noarch
iscsiuio-0.7.8.2-20.4.x86_64
suse01:~ # chkconfig iscsid on
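The systemd-native equivalent, using the unit names listed by systemctl above (sketch):
systemctl enable iscsid.service iscsi.service
systemctl start iscsid.service iscsi.service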
Openfiler:
[root@localhost ~]# ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:50:56:22:DF:7E
inet addr:192.168.1.38 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::250:56ff:fe22:df7e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8375 errors:0 dropped:0 overruns:0 frame:0
TX packets:7921 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:769108 (751.0 Kb) TX bytes:4309701 (4.1 Mb)
Interrupt:19 Base address:0x2000
eth1 Link encap:Ethernet HWaddr 00:0C:29:14:C1:EF
inet addr:192.168.1.39 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe14:c1ef/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:16034 errors:0 dropped:0 overruns:0 frame:0
TX packets:303 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1718910 (1.6 Mb) TX bytes:10022 (9.7 Kb)
Interrupt:19 Base address:0x2080
suse01:~ # iscsiadm -m discovery -t sendtargets -p 192.168.1.38
192.168.1.38:3260,1 iqn.2006-01.com.openfiler:tsn.930df3d90ade
192.168.1.38:3260,1 iqn.2006-01.com.openfiler:tsn.c3770d9359b8
192.168.1.38:3260,1 iqn.2006-01.com.openfiler:tsn.5f68d3c86d9b
192.168.1.38:3260,1 iqn.2006-01.com.openfiler:tsn.3d7ee5761ee1
192.168.1.38:3260,1 iqn.2006-01.com.openfiler:tsn.ea0e29267c4a
suse01:~ # iscsiadm -m discovery -t sendtargets -p 192.168.1.39
192.168.1.39:3260,1 iqn.2006-01.com.openfiler:tsn.930df3d90ade
192.168.1.39:3260,1 iqn.2006-01.com.openfiler:tsn.c3770d9359b8
192.168.1.39:3260,1 iqn.2006-01.com.openfiler:tsn.5f68d3c86d9b
192.168.1.39:3260,1 iqn.2006-01.com.openfiler:tsn.3d7ee5761ee1
192.168.1.39:3260,1 iqn.2006-01.com.openfiler:tsn.ea0e29267c4a
YaST iSCSI initiator configuration: Applications > System Tools > YaST > Network Services > iSCSI Initiator
Service tab: set the service to start when booting
Enter the IP address of the target and port 3260 for discovery
The discovered targets are not yet connected; select each one, log in, and change the start-up mode to Automatic
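A command-line sketch equivalent to the YaST steps above (applies to every discovered node record):
iscsiadm -m node --loginall=all                             # log in to all discovered targets
iscsiadm -m node --op=update -n node.startup -v automatic   # reconnect them automatically when the iscsi service starts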
suse01:~ # lsscsi
[1:0:0:0] cd/dvd NECVMWar VMware SATA CD01 1.00 /dev/sr0
[30:0:0:0] disk VMware, VMware Virtual S 1.0 /dev/sda
[33:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdb
[34:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdc
[35:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdd
[36:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sde
[37:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdf
[38:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdg
[39:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdh
[40:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdi
[41:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdj
[42:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdk
suse01:~ # multipath -ll
14f504e46494c4500556f4d5667442d6a5047582d66563453 dm-4 OPNFILER,VIRTUAL-DISK
size=5.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 42:0:0:0 sdk 8:160 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 41:0:0:0 sdj 8:144 active ready running
14f504e46494c45004557397433312d547036322d69775544 dm-3 OPNFILER,VIRTUAL-DISK
size=5.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 40:0:0:0 sdi 8:128 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 39:0:0:0 sdh 8:112 active ready running
14f504e46494c45004d635370786b2d7a364f422d6c474843 dm-1 OPNFILER,VIRTUAL-DISK
size=11G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 36:0:0:0 sde 8:64 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 35:0:0:0 sdd 8:48 active ready running
14f504e46494c450065464a726b4d2d74356e432d544a3565 dm-2 OPNFILER,VIRTUAL-DISK
size=5.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 38:0:0:0 sdg 8:96 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 37:0:0:0 sdf 8:80 active ready running
14f504e46494c4500594c463138612d656b46652d63505566 dm-0 OPNFILER,VIRTUAL-DISK
size=4.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 33:0:0:0 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 34:0:0:0 sdc 8:32 active ready running
suse01:~ #
Run shutdown -h now and power the node back on to verify that the configuration survives a reboot.
[c:\~]$ ssh 192.168.1.12
Connecting to 192.168.1.12:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.
Last login: Tue Jul 5 05:36:35 2016 from 192.168.1.1
suse02:~ # lsscsi
[1:0:0:0] cd/dvd NECVMWar VMware SATA CD01 1.00 /dev/sr0
[30:0:0:0] disk VMware, VMware Virtual S 1.0 /dev/sda
[33:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdb
[34:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdc
[35:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdd
[36:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sde
[37:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdf
[38:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdg
[39:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdh
[40:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdi
[41:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdj
[42:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdk
suse02:~ # service iscsi status
iscsi.service - Login and scanning of iSCSI devices
Loaded: loaded (/usr/lib/systemd/system/iscsi.service; enabled)
Active: active (exited) since Tue 2016-07-05 05:48:14 EDT; 1min 3s ago
Docs: man:iscsiadm(8)
man:iscsid(8)
Process: 1730 ExecStart=/sbin/iscsiadm -m node --loginall=automatic (code=exited, status=0/SUCCESS)
Main PID: 1730 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/iscsi.service
Jul 05 05:48:14 suse02 iscsiadm[1730]: Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.930df3d90ade, portal: 192.168.1.38,3260] successful.
Jul 05 05:48:14 suse02 iscsiadm[1730]: Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.930df3d90ade, portal: 192.168.1.39,3260] successful.
Jul 05 05:48:14 suse02 iscsiadm[1730]: Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.c3770d9359b8, portal: 192.168.1.38,3260] successful.
Jul 05 05:48:14 suse02 iscsiadm[1730]: Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.c3770d9359b8, portal: 192.168.1.39,3260] successful.
Jul 05 05:48:14 suse02 iscsiadm[1730]: Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5f68d3c86d9b, portal: 192.168.1.38,3260] successful.
Jul 05 05:48:14 suse02 iscsiadm[1730]: Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5f68d3c86d9b, portal: 192.168.1.39,3260] successful.
Jul 05 05:48:14 suse02 iscsiadm[1730]: Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.3d7ee5761ee1, portal: 192.168.1.38,3260] successful.
Jul 05 05:48:14 suse02 iscsiadm[1730]: Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.3d7ee5761ee1, portal: 192.168.1.39,3260] successful.
Jul 05 05:48:14 suse02 iscsiadm[1730]: Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.ea0e29267c4a, portal: 192.168.1.38,3260] successful.
Jul 05 05:48:14 suse02 iscsiadm[1730]: Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.ea0e29267c4a, portal: 192.168.1.39,3260] successful.
udev example (udev is not used in this installation; this is only an illustration. This installation combines device-mapper with ASMLib, but the steps below can also be used to look up the WWIDs of the shared disks):
node2:/etc # cat /etc/scsi_id.config
options=--whitelisted --replace-whitespace
suse02:~ # for i in b c d e f g h i j k; do echo "KERNEL==\"sd*\", SUBSYSTEM==\"block\", PROGRAM==\"/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" ; done
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c4500594c463138612d656b46652d63505566", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c4500594c463138612d656b46652d63505566", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45004d635370786b2d7a364f422d6c474843", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45004d635370786b2d7a364f422d6c474843", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c450065464a726b4d2d74356e432d544a3565", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c450065464a726b4d2d74356e432d544a3565", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45004557397433312d547036322d69775544", NAME="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45004557397433312d547036322d69775544", NAME="asm-diski", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c4500556f4d5667442d6a5047582d66563453", NAME="asm-diskj", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c4500556f4d5667442d6a5047582d66563453", NAME="asm-diskk", OWNER="grid", GROUP="asmadmin", MODE="0660"
suse02:~ # cat /etc/scsi_id.config
cat: /etc/scsi_id.config: No such file or directory
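If the udev approach were used, the generated rules would be saved to a rules file and udev reloaded; a sketch (the file name is only an example):
# the rules printed above would go into e.g. /etc/udev/rules.d/99-oracle-asmdevices.rules
udevadm control --reload-rules
udevadm trigger --type=devices --action=change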
For this test the disks aggregated by the multipath software are used directly, without udev and without ASMLib.
From the output above, first confirm the disk sizes:
suse02:~ # fdisk -l | grep dev
Disk /dev/sda: 65 GiB, 69793218560 bytes, 136314880 sectors
/dev/sda1 2048 4208639 4206592 2G 82 Linux swap / Solaris
/dev/sda2 * 4208640 57608191 53399552 25.5G 83 Linux
/dev/sda3 57608192 136314879 78706688 37.5G 83 Linux
Disk /dev/sdb: 4 GiB, 4261412864 bytes, 8323072 sectors
Disk /dev/sdc: 4 GiB, 4261412864 bytes, 8323072 sectors
Disk /dev/sde: 11 GiB, 11811160064 bytes, 23068672 sectors
Disk /dev/sdd: 11 GiB, 11811160064 bytes, 23068672 sectors
Disk /dev/sdf: 5 GiB, 5368709120 bytes, 10485760 sectors
Disk /dev/sdg: 5 GiB, 5368709120 bytes, 10485760 sectors
Disk /dev/mapper/14f504e46494c4500594c463138612d656b46652d63505566: 4 GiB, 4261412864 bytes, 8323072 sectors
Disk /dev/sdh: 5 GiB, 5368709120 bytes, 10485760 sectors
Disk /dev/sdi: 5 GiB, 5368709120 bytes, 10485760 sectors
Disk /dev/sdj: 5 GiB, 5368709120 bytes, 10485760 sectors
Disk /dev/sdk: 5 GiB, 5368709120 bytes, 10485760 sectors
Disk /dev/mapper/14f504e46494c45004d635370786b2d7a364f422d6c474843: 11 GiB, 11811160064 bytes, 23068672 sectors
Disk /dev/mapper/14f504e46494c450065464a726b4d2d74356e432d544a3565: 5 GiB, 5368709120 bytes, 10485760 sectors
Disk /dev/mapper/14f504e46494c45004557397433312d547036322d69775544: 5 GiB, 5368709120 bytes, 10485760 sectors
Disk /dev/mapper/14f504e46494c4500556f4d5667442d6a5047582d66563453: 5 GiB, 5368709120 bytes, 10485760 sectors
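To cross-check which multipath WWID corresponds to which size before assigning aliases, the device sizes can also be read directly; a sketch:
for d in /dev/mapper/14f504e46494c45*; do
    printf '%s  %s bytes\n' "$d" "$(blockdev --getsize64 "$d")"
done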
Re-edit the multipath.conf file:
node2:# vi /etc/multipath.conf
node2:/dev/mapper # cat /etc/multipath.conf
##
## This is a template multipath-tools configuration file
## Uncomment the lines relevant to your environment
##
defaults {
udev_dir /dev
polling_interval 10
path_selector "round-robin 0"
path_grouping_policy multibus
getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
prio const
path_checker directio
rr_min_io 100
flush_on_last_del no
max_fds 8192
rr_weight priorities
failback immediate
no_path_retry fail
queue_without_daemon no
user_friendly_names no
# See /usr/share/doc/packages/device-mapper/12-dm-permissions.rules
# to set mode/uid/gid.
}
blacklist {
# wwid 26353900f02796769
# devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "sda"
# devnode "^hd[a-z][[0-9]*]"
# device {
# vendor DEC.*
# product MSA[15]00
}
#blacklist_exceptions {
# devnode "^dasd[c-d]+[0-9]*"
# wwid "IBM.75000000092461.4d00.34"
#}
multipaths {
multipath {
wwid 14f504e46494c4500594c463138612d656b46652d63505566
alias ocrvote1
path_grouping_policy multibus
path_selector "round-robin 0"
failback manual
rr_weight priorities
no_path_retry 5
rr_min_io 100
}
multipath {
wwid 14f504e46494c450065464a726b4d2d74356e432d544a3565
alias ocrvote2
path_grouping_policy multibus
path_selector "round-robin 0"
failback manual
rr_weight priorities
no_path_retry 5
rr_min_io 100
}
multipath {
wwid 14f504e46494c4500556f4d5667442d6a5047582d66563453
alias ocrvote3
path_grouping_policy multibus
path_selector "round-robin 0"
failback manual
rr_weight priorities
no_path_retry 5
rr_min_io 100
}
multipath {
wwid 14f504e46494c45004d635370786b2d7a364f422d6c474843
alias data
path_grouping_policy multibus
path_selector "round-robin 0"
failback manual
rr_weight priorities
no_path_retry 5
rr_min_io 100
}
multipath {
wwid 14f504e46494c45004557397433312d547036322d69775544
alias arch
path_grouping_policy multibus
path_selector "round-robin 0"
failback manual
rr_weight priorities
no_path_retry 5
rr_min_io 100
}
}
/etc/init.d/multipathd restart
suse01:~ # vi /etc/multipath.conf
suse01:~ # service multipathd restart
suse01:~ # multipath -ll
Jul 05 06:06:31 | multipath.conf +6, invalid keyword: udev_dir
arch (14f504e46494c45004557397433312d547036322d69775544) dm-3 OPNFILER,VIRTUAL-DISK
size=5.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 39:0:0:0 sdh 8:112 active ready running
`- 40:0:0:0 sdi 8:128 active ready running
ocrvote3 (14f504e46494c4500556f4d5667442d6a5047582d66563453) dm-4 OPNFILER,VIRTUAL-DISK
size=5.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 41:0:0:0 sdk 8:160 active ready running
`- 42:0:0:0 sdj 8:144 active ready running
data (14f504e46494c45004d635370786b2d7a364f422d6c474843) dm-1 OPNFILER,VIRTUAL-DISK
size=11G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 35:0:0:0 sdd 8:48 active ready running
`- 36:0:0:0 sde 8:64 active ready running
ocrvote2 (14f504e46494c450065464a726b4d2d74356e432d544a3565) dm-2 OPNFILER,VIRTUAL-DISK
size=5.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 37:0:0:0 sdf 8:80 active ready running
`- 38:0:0:0 sdg 8:96 active ready running
ocrvote1 (14f504e46494c4500594c463138612d656b46652d63505566) dm-0 OPNFILER,VIRTUAL-DISK
size=4.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 33:0:0:0 sdb 8:16 active ready running
`- 34:0:0:0 sdc 8:32 active ready running
suse01:/etc/init.d # cd /dev/mapper
suse01:/dev/mapper # ls -l
total 0
brw-r----- 1 root disk 254, 3 Jul 5 06:06 arch
crw------- 1 root root 10, 236 Jul 5 2016 control
brw-r----- 1 root disk 254, 1 Jul 5 06:06 data
brw-r----- 1 root disk 254, 0 Jul 5 06:06 ocrvote1
brw-r----- 1 root disk 254, 2 Jul 5 06:06 ocrvote2
brw-r----- 1 root disk 254, 4 Jul 5 06:06 ocrvote3
Create the users and groups:
groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
groupadd -g 1300 dba
groupadd -g 1301 oper
useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid
useradd -m -u 1101 -g oinstall -G dba,oper,asmdba,asmadmin -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
suse02:~ # id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1200(asmadmin),1201(asmdba),1300(dba),1301(oper),1000(oinstall)
suse02:~ # id grid
uid=1100(grid) gid=1000(oinstall) groups=1200(asmadmin),1201(asmdba),1202(asmoper),1300(dba),1000(oinstall)
suse02:~ # passwd grid
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: password updated successfully
suse02:~ # passwd oracle
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: password updated successfully
Edit the kernel parameter file:
vi /etc/sysctl.conf
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.rp_filter = 1
fs.inotify.max_user_watches = 65536
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
vm.hugetlb_shm_group = 1000
sysctl -p
vi /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
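A quick check that the limits take effect for new sessions of both owners (a sketch, assuming bash login shells):
su - grid -c 'ulimit -n; ulimit -u'
su - oracle -c 'ulimit -n; ulimit -u'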
Set the user passwords:
passwd oracle
passwd grid
Set up the .profile files for the users:
oracle user .profile
ORACLE_SID=orcl1; export ORACLE_SID
ORACLE_UNQNAME=zwc; export ORACLE_UNQNAME
JAVA_HOME=/usr/local/java; export JAVA_HOME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
ORACLE_PATH=/u01/app/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK; export NLS_LANG
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
#alias sqlplus="rlwrap sqlplus"
#alias rman="rlwrap rman"
#alias asmcmd="rlwrap asmcmd"
alias base="cd $ORACLE_BASE"
alias home="cd $ORACLE_HOME"
grid user .profile
ORACLE_SID=+ASM1; export ORACLE_SID
JAVA_HOME=/usr/local/java; export JAVA_HOME
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
ORACLE_PATH=/u01/app/oracle/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK; export NLS_LANG
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
#alias sqlplus="rlwrap sqlplus"
#alias rman="rlwrap rman"
#alias asmcmd="rlwrap asmcmd"
alias base="cd $ORACLE_BASE"
alias home="cd $ORACLE_HOME"
source .profile
Edit the boot-time script that adjusts the shared-disk permissions:
node2:/etc/init.d # cat after.local
#!/bin/sh
#
# OCR disks 11gR1
chown grid:asmadmin /dev/mapper/ocrvote1
chown grid:asmadmin /dev/mapper/ocrvote2
chown grid:asmadmin /dev/mapper/ocrvote3
chown grid:asmadmin /dev/mapper/data
chown grid:asmadmin /dev/mapper/arch
chmod 0660 /dev/mapper/ocrvote1
chmod 0660 /dev/mapper/ocrvote2
chmod 0660 /dev/mapper/ocrvote3
chmod 0660 /dev/mapper/data
chmod 0660 /dev/mapper/arch
node2:/etc/init.d # chmod +x after.local
node2:/etc/init.d # ls -l after.local
-rwxr-xr-x 1 root root 225 Dec 18 19:02 after.local
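A manual run can confirm that the script sets the intended ownership and mode before relying on it at boot (sketch):
sh /etc/init.d/after.local
ls -l /dev/mapper/ocrvote1 /dev/mapper/ocrvote2 /dev/mapper/ocrvote3 /dev/mapper/data /dev/mapper/arch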
Check whether the ASMLib prerequisite packages shipped with the system are installed:
node1:~/Desktop # rpm -qa | grep asm
plasma-theme-aya-4.3.5-0.3.30
oracleasm-support-2.1.7-1.SLE11
libasm1-0.152-4.7.86
oracleasm-kmp-trace-2.0.5_3.0.13_0.27-7.24.59
plasma-addons-4.3.5-0.1.70
oracleasm-2.0.5-7.24.59
oracleasm-kmp-default-2.0.5_3.0.13_0.27-7.24.59
oracleasmlib-2.0.4-1.SLE11
oracleasm-kmp-xen-2.0.5_3.0.13_0.27-7.24.59
plasmoid-quickaccess-0.8.1-2.1.98
The ASMLib setup for this test is recorded below (in the end the multipath devices were used directly, as noted above).
Install the ASMLib-related packages:
Shipped with the system:
oracleasm-kmp-default-2.0.8_k3.12.28_4-42.12.x86_64
suse01:~ # rpm -qa | grep oracle
oracleasm-support-2.1.8-1.SLE12.x86_64
oracleasm-kmp-default-2.0.8_k3.12.28_4-42.12.x86_64
patterns-sles-oracle_server-32bit-12-58.8.x86_64
suse01:~ # /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: done
Scanning the system for Oracle ASMLib disks: done
Downloaded from the Oracle website:
SLES 11 itself ships some of the support packages required by ASMLib; in the YaST software manager, search for the keyword "oracle" and select them all to install.
Then install the Oracle ASMLib packages:
oracleasm-support-2.1.7-1.SLE11
oracleasmlib-2.0.4-1.SLE11
After the installation:
node1:~ # rpm -qa | grep oracle
oracleasm-2.0.5-7.24.59
oracleasm-kmp-default-2.0.5_3.0.13_0.27-7.24.59
oracleasm-support-2.1.7-1.SLE11
oracleasmlib-2.0.4-1.SLE11
Configure the boot-time adjustment of the shared-disk permissions by editing /etc/init.d/after.local:
node2:/dev # cd /etc/init.d/
node2:/etc/init.d # cat after.local
#!/bin/sh
chown grid:asmadmin /dev/mapper/data
chown grid:asmadmin /dev/mapper/ocr
chown grid:asmadmin /dev/mapper/arch
chmod 0660 /dev/mapper/data
chmod 0660 /dev/mapper/ocr
chmod 0660 /dev/mapper/arch
Partition the shared disks:
fdisk /dev/mapper/ocr
fdisk /dev/mapper/data
fdisk /dev/mapper/arch
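fdisk is interactive; as a non-interactive alternative, parted could be used instead (a sketch, assuming a single whole-disk primary partition per device):
for d in ocr data arch; do
    parted -s /dev/mapper/$d mklabel msdos
    parted -s /dev/mapper/$d mkpart primary 1MiB 100%
done
# if the *_part1 mappings do not appear automatically, refresh them, e.g.:
kpartx -a /dev/mapper/ocr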
List the device mappings after partitioning:
node1:/dev/mapper # ls -l
total 0
lrwxrwxrwx 1 root root 7 Dec 18 13:34 arch -> ../dm-1
lrwxrwxrwx 1 root root 7 Dec 18 13:34 arch_part1 -> ../dm-5
crw-rw---- 1 root root 10, 236 Dec 18 13:30 control
lrwxrwxrwx 1 root root 7 Dec 18 13:34 data -> ../dm-0
lrwxrwxrwx 1 root root 7 Dec 18 13:34 data_part1 -> ../dm-4
lrwxrwxrwx 1 root root 7 Dec 18 13:33 ocr -> ../dm-2
lrwxrwxrwx 1 root root 7 Dec 18 13:33 ocr_part1 -> ../dm-3
Create the ASM disks:
Run /etc/init.d/oracleasm configure and answer the prompts: grid, asmadmin, y, y.
node1:/dev/mapper # /etc/init.d/oracleasm createdisk ocr /dev/mapper/ocr_part1
Marking disk "ocr" as an ASM disk: done
node1:/dev/mapper # /etc/init.d/oracleasm createdisk data /dev/mapper/data_part1
Marking disk "data" as an ASM disk: done
node1:/dev/mapper # /etc/init.d/oracleasm createdisk arch /dev/mapper/arch_part1
Marking disk "arch" as an ASM disk: done
node1:/dev/mapper # /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: done
node1:/dev/mapper # /etc/init.d/oracleasm listdisks
ARCH
DATA
OCR
done
Edit the /etc/sysconfig/oracleasm file:
node1:/dev # more /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver. It is generated
# By running /etc/init.d/oracleasm configure. Please use that method
# to modify this file
#
# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true
# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=grid
# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=asmadmin
# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning (changed to dm)
ORACLEASM_SCANORDER="dm"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan (changed to sd)
ORACLEASM_SCANEXCLUDE="sd"
Restart after editing:
node1:/dev # vi /etc/sysconfig/oracleasm
node1:/dev # /etc/init.d/oracleasm restart
Dropping Oracle ASMLib disks: done
Shutting down the Oracle ASMLib driver: done
Initializing the Oracle ASMLib driver:
At this point the system is essentially ready for installation; the installer screenshots and steps are omitted.
/etc/init.d/oracleasm deletedisk data reports an error unless it is executed by the root user.
Check with cat /etc/oratab.
For the iSCSI configuration, refer to the official SUSE storage administration documentation:
Configuring iSCSI Initiator
The iSCSI initiator, also called an iSCSI client, can be used to connect to any iSCSI target. This is not restricted to the iSCSI target solution explained in Section 14.2, Setting Up an iSCSI Target. The configuration of iSCSI initiator involves two major steps: the discovery of available iSCSI targets and the setup of an iSCSI session. Both can be done with YaST.
Section 14.3.1, Using YaST for the iSCSI Initiator Configuration
Section 14.3.2, Setting Up the iSCSI Initiator Manually
Section 14.3.3, The iSCSI Client Databases
14.3.1 Using YaST for the iSCSI Initiator Configuration
The iSCSI Initiator Overview in YaST is divided into three tabs:
Service: The Service tab can be used to enable the iSCSI initiator at boot time. It also offers to set a unique Initiator Name and an iSNS server to use for the discovery. The default port for iSNS is 3205.
Connected Targets: The Connected Targets tab gives an overview of the currently connected iSCSI targets. Like the Discovered Targets tab, it also gives the option to add new targets to the system.
On this page, you can select a target device, then toggle the start-up setting for each iSCSI target device:
Automatic: This option is used for iSCSI targets that are to be connected when the iSCSI service itself starts up. This is the typical configuration.
Onboot: This option is used for iSCSI targets that are to be connected during boot; that is, when root (/) is on iSCSI. As such, the iSCSI target device will be evaluated from the initrd on server boots.
Discovered Targets: Discovered Targets provides the possibility of manually discovering iSCSI targets in the network.
Configuring the iSCSI Initiator
Discovering iSCSI Targets by Using iSNS
Discovering iSCSI Targets Manually
Setting the Start-up Preference for iSCSI Target Devices
Configuring the iSCSI Initiator
Launch YaST as the root user.
Select Network Services > iSCSI Initiator (you can also use the yast2 iscsi-client command).
YaST opens to the iSCSI Initiator Overview page with the Service tab selected.
In the Service Start area, select one of the following:
When booting: Automatically start the initiator service on subsequent server reboots.
Manually (default): Start the service manually.
Specify or verify the Initiator Name.
Specify a well-formed iSCSI qualified name (IQN) for the iSCSI initiator on this server. The initiator name must be globally unique on your network. The IQN uses the following general format:
iqn.yyyy-mm.com.mycompany:n1:n2
where n1 and n2 are alphanumeric characters. For example:
iqn.1996-04.de.suse:01:9c83a3e15f64
The Initiator Name is automatically completed with the corresponding value from the /etc/iscsi/initiatorname.iscsi file on the server.
If the server has iBFT (iSCSI Boot Firmware Table) support, the Initiator Name is completed with the corresponding value in the iBFT, and you are not able to change the initiator name in this interface. Use the BIOS Setup to modify it instead. The iBFT is a block of information containing various parameters useful to the iSCSI boot process, including the iSCSI target and initiator descriptions for the server.
Use either of the following methods to discover iSCSI targets on the network.
iSNS: To use iSNS (Internet Storage Name Service) for discovering iSCSI targets, continue with Discovering iSCSI Targets by Using iSNS.
Discovered Targets: To discover iSCSI target devices manually, continue with Discovering iSCSI Targets Manually.
Discovering iSCSI Targets by Using iSNS
Before you can use this option, you must have already installed and configured an iSNS server in your environment. For information, see Section 13.0, iSNS for Linux.
In YaST, select iSCSI Initiator, then select the Service tab.
Specify the IP address of the iSNS server and port.
The default port is 3205.
On the iSCSI Initiator Overview page, click Finish to save and apply your changes.
Discovering iSCSI Targets Manually
Repeat the following process for each of the iSCSI target servers that you want to access from the server where you are setting up the iSCSI initiator.
In YaST, select iSCSI Initiator, then select the Discovered Targets tab.
Click Discovery to open the iSCSI Initiator Discovery dialog box.
Enter the IP address and change the port if needed. IPv6 addresses are supported.
The default port is 3260.
If authentication is required, deselect No Authentication, then specify the credentials for Incoming or Outgoing authentication.
Click Next to start the discovery and connect to the iSCSI target server.
If credentials are required, after a successful discovery, use Login to activate the target.
You are prompted for authentication credentials to use the selected iSCSI target.
Click Next to finish the configuration.
If everything went well, the target now appears in Connected Targets.
The virtual iSCSI device is now available.
On the iSCSI Initiator Overview page, click Finish to save and apply your changes.
You can find the local device path for the iSCSI target device by using the lsscsi command:
lsscsi
[1:0:0:0] disk IET VIRTUAL-DISK 0 /dev/sda
Setting the Start-up Preference for iSCSI Target Devices
In YaST, select iSCSI Initiator, then select the Connected Targets tab to view a list of the iSCSI target devices that are currently connected to the server.
Select the iSCSI target device that you want to manage.
Click Toggle Start-Up to modify the setting:
Automatic: This option is used for iSCSI targets that are to be connected when the iSCSI service itself starts up. This is the typical configuration.
Onboot: This option is used for iSCSI targets that are to be connected during boot; that is, when root (/) is on iSCSI. As such, the iSCSI target device will be evaluated from the initrd on server boots.
Click Finish to save and apply your changes.
14.3.2 Setting Up the iSCSI Initiator Manually
Both the discovery and the configuration of iSCSI connections require a running iscsid. When running the discovery the first time, the internal database of the iSCSI initiator is created in the directory /var/lib/open-iscsi.
If your discovery is password protected, provide the authentication information to iscsid. Because the internal database does not exist when doing the first discovery, it cannot be used at this time. Instead, the configuration file /etc/iscsid.conf must be edited to provide the information. To add your password information for the discovery, add the following lines to the end of /etc/iscsid.conf:
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = <username>
discovery.sendtargets.auth.password = <password>
The discovery stores all received values in an internal persistent database. In addition, it displays all detected targets. Run this discovery with the following command:
iscsiadm -m discovery --type=st --portal=<targetip>
The output should look like the following:
10.44.171.99:3260,1 iqn.2006-02.com.example.iserv:systems
To discover the available targets on an iSNS server, use the following command:
iscsiadm --mode discovery --type isns --portal <targetip>
For each target defined on the iSCSI target, one line appears. For more information about the stored data, see Section 14.3.3, The iSCSI Client Databases.
The special --login option of iscsiadm creates all needed devices:
iscsiadm -m node -n iqn.2006-02.com.example.iserv:systems --login
The newly generated devices show up in the output of lsscsi and can now be accessed by mount.
14.3.3 The iSCSI Client Databases
All information that was discovered by the iSCSI initiator is stored in two database files that reside in /var/lib/open-iscsi. There is one database for the discovery of targets and one for the discovered nodes. When accessing a database, you first must select if you want to get your data from the discovery or from the node database. Do this with the -m discovery and -m node parameters of iscsiadm. Using iscsiadm just with one of these parameters gives an overview of the stored records:
iscsiadm -m discovery
10.44.171.99:3260,1 iqn.2006-02.com.example.iserv:systems
The target name in this example is iqn.2006-02.com.example.iserv:systems. This name is needed for all actions that relate to this special data set. To examine the content of the data record with the ID iqn.2006-02.com.example.iserv:systems, use the following command:
iscsiadm -m node --targetname iqn.2006-02.com.example.iserv:systems
node.name = iqn.2006-02.com.example.iserv:systems
node.transport_name = tcp
node.tpgt = 1
node.active_conn = 1
node.startup = manual
node.session.initial_cmdsn = 0
node.session.reopen_max = 32
node.session.auth.authmethod = CHAP
node.session.auth.username = joe
node.session.auth.password = ********
node.session.auth.username_in = <empty>
node.session.auth.password_in = <empty>
node.session.timeo.replacement_timeout = 0
node.session.err_timeo.abort_timeout = 10
node.session.err_timeo.reset_timeout = 30
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
....
To edit the value of one of these variables, use the command iscsiadm with the update operation. For example, if you want iscsid to log in to the iSCSI target when it initializes, set the variable node.startup to the value automatic:
iscsiadm -m node -n iqn.2006-02.com.example.iserv:systems -p ip:port --op=update --name=node.startup --value=automatic
Remove obsolete data sets with the delete operation. If the target iqn.2006-02.com.example.iserv:systems is no longer a valid record, delete this record with the following command:
iscsiadm -m node -n iqn.2006-02.com.example.iserv:systems -p ip:port --op=delete
IMPORTANT: Use this option with caution because it deletes the record without any additional confirmation prompt.
To get a list of all discovered targets, run the iscsiadm -m node command.
Other issues handled:
Set the passwords for the oracle and grid users.
Install Java.
Adjust the host names to remove the domain suffix.
Installing Oracle RAC 11.2.0.4 on RHEL 7: root.sh fails with "ohasd failed to start"
Error message:
[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@zjdb1 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2015-05-23 23:37:45.460:
[client(13782)]CRS-2101:The OLR was formatted using version 3.
Cause:
RHEL 7 uses systemd rather than initd to run and restart processes, while root.sh starts the ohasd process through the traditional initd mechanism.
Workaround:
On RHEL 7, ohasd must be set up as a service before the root.sh script is run.
The steps are as follows:
1. As root, create the service file:
#touch /usr/lib/systemd/system/ohas.service
#chmod 777 /usr/lib/systemd/system/ohas.service
2. Add the following content to the newly created ohas.service file:
[root@rac1 init.d]# cat /usr/lib/systemd/system/ohas.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target
[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple
Restart=always
[Install]
WantedBy=multi-user.target
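Note that systemd does not interpret shell redirections in ExecStart, so the ">/dev/null 2>&1 Type=simple" tail above is passed to init.ohasd as extra arguments (visible in the status output further below). A cleaner variant of the same unit, as a sketch:
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
Type=simple
ExecStart=/bin/sh /etc/init.d/init.ohasd run
Restart=always

[Install]
WantedBy=multi-user.target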
3. As root, run the following commands:
systemctl daemon-reload
systemctl enable ohas.service
systemctl start ohas.service
4. Check the service status:
[root@rac1 init.d]# systemctl status ohas.service
ohas.service - Oracle High Availability Services
Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled)
Active: failed (Result: start-limit) since Fri 2015-09-11 16:07:32 CST; 1s ago
Process: 5734 ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple (code=exited, status=203/EXEC)
Main PID: 5734 (code=exited, status=203/EXEC)
Sep 11 16:07:32 rac1 systemd[1]: Starting Oracle High Availability Services...
Sep 11 16:07:32 rac1 systemd[1]: Started Oracle High Availability Services.
Sep 11 16:07:32 rac1 systemd[1]: ohas.service: main process exited, code=exited, status=203/EXEC
Sep 11 16:07:32 rac1 systemd[1]: Unit ohas.service entered failed state.
Sep 11 16:07:32 rac1 systemd[1]: ohas.service holdoff time over, scheduling restart.
Sep 11 16:07:32 rac1 systemd[1]: Stopping Oracle High Availability Services...
Sep 11 16:07:32 rac1 systemd[1]: Starting Oracle High Availability Services...
Sep 11 16:07:32 rac1 systemd[1]: ohas.service start request repeated too quickly, refusing to start.
Sep 11 16:07:32 rac1 systemd[1]: Failed to start Oracle High Availability Services.
Sep 11 16:07:32 rac1 systemd[1]: Unit ohas.service entered failed state.
At this point the status is failed because the /etc/init.d/init.ohasd file does not exist yet.
root.sh can now be run and will no longer report the "ohasd failed to start" error.
If the "ohasd failed to start" error still occurs, it may be because ohas.service was not started immediately after root.sh created init.ohasd. In that case, use the following approach:
While root.sh is running, keep checking /etc/init.d until the init.ohasd file appears, then immediately start ohas.service manually with: systemctl start ohas.service
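A small shell sketch of that manual workaround (run as root in a second terminal while root.sh is executing):
until [ -x /etc/init.d/init.ohasd ]; do
    sleep 1
done
systemctl start ohas.service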
[root@rac1 init.d]# systemctl status ohas.service
ohas.service - Oracle High Availability Services
Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled)
Active: active (running) since Fri 2015-09-11 16:09:05 CST; 3s ago
Main PID: 6000 (init.ohasd)
CGroup: /system.slice/ohas.service
6000 /bin/sh /etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple
6026 /bin/sleep 10
Sep 11 16:09:05 rac1 systemd[1]: Starting Oracle High Availability Services...
Sep 11 16:09:05 rac1 systemd[1]: Started Oracle High Availability Services.
Sep 11 16:09:05 rac1 su[6020]: (to grid) root on none
suse01:~ # /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
suse01:~ # /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'suse01'
CRS-2676: Start of 'ora.mdnsd' on 'suse01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'suse01'
CRS-2676: Start of 'ora.gpnpd' on 'suse01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'suse01'
CRS-2672: Attempting to start 'ora.gipcd' on 'suse01'
CRS-2676: Start of 'ora.cssdmonitor' on 'suse01' succeeded
CRS-2676: Start of 'ora.gipcd' on 'suse01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'suse01'
CRS-2672: Attempting to start 'ora.diskmon' on 'suse01'
CRS-2676: Start of 'ora.diskmon' on 'suse01' succeeded
CRS-2676: Start of 'ora.cssd' on 'suse01' succeeded
ASM created and started successfully.
Disk Group OCRVOTE created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk a3528be968ab4f1abf43574e2a81857f.
Successful addition of voting disk 6357b7756b484f48bf7bceb1e7b78e82.
Successful addition of voting disk 88228b6b3f664faebf6a09ee189412fd.
Successfully replaced voting disk group with +OCRVOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE a3528be968ab4f1abf43574e2a81857f (/dev/mapper/ocrvote1) [OCRVOTE]
2. ONLINE 6357b7756b484f48bf7bceb1e7b78e82 (/dev/mapper/ocrvote2) [OCRVOTE]
3. ONLINE 88228b6b3f664faebf6a09ee189412fd (/dev/mapper/ocrvote3) [OCRVOTE]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'suse01'
CRS-2676: Start of 'ora.asm' on 'suse01' succeeded
CRS-2672: Attempting to start 'ora.OCRVOTE.dg' on 'suse01'
CRS-2676: Start of 'ora.OCRVOTE.dg' on 'suse01' succeeded
Preparing packages...
cvuqdisk-1.0.9-1.x86_64
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
suse02:~ # /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
suse02:~ # /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node suse01, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Preparing packages...
cvuqdisk-1.0.9-1.x86_64
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
root.sh runs up to this point:
suse02:~ # /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab
In another terminal, immediately:
suse02:~ # cd /etc/init.d
suse02:/etc/init.d # ls -l | grep ohas
suse02:/etc/init.d # ls -l | grep ohasd
(the command is repeated until the init.ohasd and ohasd scripts appear)
-rwxr-xr-x 1 root root 8782 Jul 5 10:38 init.ohasd
-rwxr-xr-x 1 root root 7034 Jul 5 10:38 ohasd
suse02:/etc/init.d # systemctl status ohas.service
ohas.service - Oracle High Availability Services
Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled)
Active: failed (Result: start-limit) since Tue 2016-07-05 10:07:31 EDT; 31min ago
Main PID: 2380 (code=exited, status=203/EXEC)
Jul 05 10:07:31 suse02 systemd[1]: Failed to start Oracle High Availability Services.
suse02:/etc/init.d # systemctl start ohas.service
suse02:/etc/init.d # systemctl status ohas.service
ohas.service - Oracle High Availability Services
Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled)
Active: active (running) since Tue 2016-07-05 10:39:00 EDT; 2s ago
Main PID: 18038 (init.ohasd)
CGroup: /system.slice/ohas.service
└─18038 /bin/sh /etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple
Then wait for root.sh to finish.