<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>dbtan 谈DB</title>
	<atom:link href="https://dbtan.com/feed" rel="self" type="application/rss+xml" />
	<link>https://dbtan.com</link>
	<description>dbtan 的生活、Oracle 及 Linux 等的学习笔记、观点。      说，永远易于做！</description>
	<lastBuildDate>Fri, 09 Aug 2024 02:50:15 +0000</lastBuildDate>
	<language>zh-Hans</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>我的半飞秒近视手术纪实</title>
		<link>https://dbtan.com/2021/06/my-fs-lasik.html</link>
					<comments>https://dbtan.com/2021/06/my-fs-lasik.html#respond</comments>
		
		<dc:creator><![CDATA[dbtan]]></dc:creator>
		<pubDate>Thu, 24 Jun 2021 10:18:51 +0000</pubDate>
				<category><![CDATA[半飞秒近视手术]]></category>
		<category><![CDATA[生活]]></category>
		<category><![CDATA[FS-LASIK]]></category>
		<category><![CDATA[“半飞秒”激光手术]]></category>
		<category><![CDATA[北医三院]]></category>
		<guid isPermaLink="false">https://dbtan.com/?p=450</guid>

					<description><![CDATA[我的半飞秒近视手术纪实 写在前面 从骑行开始 从去年夏天开始“恢复减肥”，是从买了个小米体脂称“受刺激”开始的 [&#8230;]]]></description>
										<content:encoded><![CDATA[<h2>我的半飞秒近视手术纪实</h2>
<h3>写在前面</h3>
<h4>从骑行开始</h4>
<p>从去年夏天开始“恢复减肥”，是从买了个小米体脂秤“受刺激”开始的。连接小米运动APP后，里面竟然有个“跑分项” -- 「比XX%的人轻」，我的跑分结果是「比2%的人轻」<img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f494.png" alt="💔" class="wp-smiley" style="height: 1em; max-height: 1em;" />...  必须得减肥了！体重太大，为避免损伤膝盖，选择了骑车。说干就干，从阳台角落把几年前淘来的战车 -- 美利达挑战者360 重新组装好，开始骑行吧。</p>
<p>幸好有几个同事也喜欢骑行，每个月找一个周末还能一起去骑个“小长途”（全程80+km）。为了能跟上“小长途”，每个周末我都自己加练骑行60km。</p>
<p>坚持骑车8个月左右，再加上控制饮食，我已经「比38%的人轻」了。与同事和他的骑行群友一起参加了骑行群组织的新手活动 -- 戒松戒骑行，“戒台寺-松树岭-戒台寺” 全程110+km。这是我的第一次100+km骑行，而且“戒松戒”的上坡是 <strong>骑上去的！</strong> 虽说上坡速度只有10~12km/h，但也是骑上去的，没有下来推车。</p>
<p>但是，同事抱怨我总是紧急变线，跟车太近容易出危险，没法在我后面借风跟骑。其实，我“紧急变线”是因为太累了，只能低头看地猛蹬，都没有看沿路的风景。低头看地，再加上我高度近视，戴着流汗下滑的近视眼镜，也就看个5米左右，发现障碍也就只能“紧急变线”了。</p>
<p>有点儿想要摘掉近视眼镜。等我变强了，你们就都跟我后面，由我作为破风手。哈哈哈~</p>
<h4>三十六岁，我想看清楚世界。</h4>
<p>从小学三、四年级就得了近视。从最开始的75/100度近视，到后来800度近视+150度散光。手术之前近视已经有26年左右了，做手术经过了5个月左右的心理建设（了解 --> 查资料 --> 纠结 --> 下定决心）。</p>
<p>最终选择了在北医三院做近视眼手术。一是，同仁医院得排队3~4个月，好不容易下定决心做手术，要等那么久，怕中途犹豫放弃了。二是，听说北医三院排队快，检查合格后一两周就能做手术；再有就是北医三院离家近，走路20分钟吧。</p>
<p>术后一个月，基本上是脱离手机、电脑等3C产品，靠听播客、听郭德纲相声度过。目前恢复良好，视力1.2/1.2。之后的3、6、12个月的复查，到时更新。此文仅献给有做近视手术打算的朋友参考，我对近视手术的态度是不鼓励也不反对，只希望对有想法要做手术的朋友有所帮助。</p>
<h3>近视手术纪实</h3>
<h4>第一次 术前 周四 上午 8:00</h4>
<p>决定去做近视手术之前，你可能已经查了攻略，想看陈跃国大夫，但是这时候不要着急，第一次去只能检查，所以不需要花100/300元挂陈大夫的号。可以早起去“111室”，跟大夫说是打算做近视手术，今天是第一次来。大夫给了个表格，填写防疫相关信息和个人基本信息。填写完交给大夫，大夫就给开了好几张检查单，缴费后先在门口刷一下（排检查的队），然后去门诊楼拿散瞳的药。所有检查项分布在3个地方：</p>
<ol>
<li>第四检查室（121室）检查<code>超声角膜测厚仪与眼前节分析仪</code></li>
<li>特殊检查室（112室）检查<code>角膜地形图</code></li>
<li>第二区检查室 <code>眼睛“拍照”</code></li>
</ol>
<p>先去第四检查室（121室）检查<code>超声角膜测厚仪与眼前节分析仪</code>；查完之后去特殊检查室（112室）检查两项角膜地形图。之后，大夫会叫你滴散瞳药水，每10分钟一次；滴到第二次时去第二区检查室做<code>眼睛“拍照”</code>，滴够四次再回特殊检查室（112室）接着做角膜地形图。以上结果都拿到之后，回到111室，大夫会带你进里面的小屋检查。由于<code>眼睛“拍照”</code>时，大夫对我的右眼“拍照”了2次（其中一张特写），有3位大夫轮流对照<code>眼睛“拍照”</code>查看了我的右眼，经确认没有问题。当时3位大夫轮流查看右眼时，我还是挺紧张的，生怕右眼有问题，不光做不了手术，还再查出点儿眼病来... 随后迅速冷静了下来，想想如果真检查出什么问题，尽早治疗避免恶化也是好的。短短1-2分钟，从紧张到自我安慰，做了飞速的思想斗争。检查完，大夫跟我说，可以在“半飞秒”或者“晶体植入”中二选一，让我拿着病历（检查结果）去让陈大夫看看可不可以做。还是带着忐忑的心情到眼科大楼2层陈大夫诊室，给陈大夫看了检查结果，陈大夫也用裂隙灯看了看，说没问题，“半飞秒”或者“晶体植入”二选一，我心里的“大石”终于放下。我本想做“全飞秒”，伤口小、恢复快，但大夫说我的近视度数太深还有散光，做不了“全飞秒”。我希望尽快手术，陈大夫说最快下周三手术，并开了2种术前眼药水：术前3天开始点，4次/天，两种之间隔5-10分钟。</p>
<ol>
<li>可乐必妥 0.5% 左氧氟沙星滴眼液（1支）</li>
<li>普南扑灵 普拉洛芬滴眼液（1支）</li>
</ol>
<p>预约了下周三陈大夫的手术，需要在手术前一天再去检查。这时候就可以走了。</p>
<p><img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/FS-LASIK/%E5%B7%A6%E6%B0%A7%E6%B0%9F%E6%B2%99%E6%98%9F%E6%BB%B4%E7%9C%BC%E6%B6%B2_%E6%99%AE%E6%8B%89%E6%B4%9B%E8%8A%AC%E6%BB%B4%E7%9C%BC%E6%B6%B2.jpg" alt="左氧氟沙星滴眼液_普拉洛芬滴眼液" style="zoom:25%;" /></p>
<p><img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/FS-LASIK/%E6%A3%80%E6%B5%8B%E6%8A%A5%E5%91%8A.jpg" alt="检测报告" /></p>
<h4>第二次 术前 周二 上午 8:00</h4>
<p>先去“111室”，取到病历，加挂了陈大夫的特需号300元，在主楼8层。</p>
<p>跟陈大夫确认了做“半飞秒”手术，确定了手术时间：周三（明天）上午8:00。</p>
<p>还需要做2项术前检查：</p>
<ol>
<li>拍胸片：拍完不用取片子。</li>
<li>核酸检查：鼻咽拭子。</li>
</ol>
<p>做完检查，就可以回家了，不用等检查结果。</p>
<blockquote><p>
  手机下载APP--“线上医疗服务”，登录后 “首页”-“查询报告”，就可以在“检查报告”和“检验报告”中分别查到“胸片”和“核酸”的结果了。</p>
<p>  <img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/FS-LASIK/APP1-4281539.jpg" alt="APP1" style="zoom:25%;" /></p>
<p>  <img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/FS-LASIK/APP2.jpg" alt="APP2" style="zoom:25%;" /></p>
<p>  <img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/FS-LASIK/APP3.jpg" alt="APP3" style="zoom:25%;" />
</p></blockquote>
<p>下午 15:33 接到“111室”电话，手术时间明天下午2点。</p>
<h4>第三次 手术当天 周三 下午 14:00</h4>
<p>先去“111室”，大夫给开了“手术费”单子和眼药水（潇莱威（羧甲基纤维素钠滴眼液）2支），交费取药回到“111室”，把眼药水交给大夫后，就可以在门口排队准备手术了，排到16点多才轮到。</p>
<p>进入手术室（简称：一层门），穿“鞋套”、戴“帽子”、穿“手术服”后，有3-4个人坐沙发上等着。大家都已经把眼镜摘了，有的把眼镜拿在手里，有的已经把眼镜交给家属留在手术室外面了。大家都挺紧张，坐在沙发上闭目养神。等了15分钟左右，大夫打开“推拉门”，按顺序叫人进去，要把手机等电子设备留在外面（一层门内）。进入“推拉门”（简称：二层门），大夫会给清洗眼睛：</p>
<ol>
<li>洗眼睛。大夫给个小盒让自己拿着，贴在眼睛下面接水，大夫<em>可能是拿个小水壶</em>给冲洗眼睛（之所以用“可能”，是因为当时已经摘了眼镜，什么也看不清...），这个过程有点儿像公园喷泉里流水冲洗转动的石球。之前还真没体验过洗眼，开始有点儿紧张，冲着冲着感觉还挺舒服挺爽的。</li>
<li>眼部消毒。洗完眼睛，大夫用棉花沾“消毒液”擦拭眼部。消毒后，手就不能再碰脸，且不能睁眼了。</li>
<li>点“红色眼药水”。眼部消毒后，大夫给点了“红色眼药水”，是促进凝血、防止出血用的。之后就闭眼休息，等待叫名字再睁眼，进入真正手术室（简称：三层门）。</li>
</ol>
<p>大夫叫到名字，确认生日，进入真正手术室（简称：三层门）。躺上手术台，大夫给点“麻药”（2次）几分钟后开始手术。手术台是电动床（可以前后左右移动）。因为我做的是“半飞秒”激光手术需要2台设备操作，所以只要躺上手术台，直到手术结束都不需要自行移动。</p>
<blockquote><p>
  “半飞秒”激光手术：先由飞秒激光设备制作掀开式角膜瓣，再用准分子激光进行角膜切削。由于准分子激光以“消融”的方式进行角膜基质切削，所以手术过程中会有焦糊味。</p>
<p><iframe width="750" height="422" src="https://www.youtube.com/embed/fZ2Rh_nPRN8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></p>
</blockquote>
<p>手术过程很快，也就几分钟的样子吧。过程我非常紧张，几乎都听不到外界声音了，但牢记大夫所说，紧紧盯住上方“绿灯”不要转眼球。（其实，长年高度近视戴眼镜的朋友的眼睛都比较“发死”，只对着眼镜片中心看，镜片边缘没中间清楚。看旁边要靠转头。）</p>
<p>术后自行走出手术室（三层门），走到（二层门）大夫给贴上护眼罩，经过（一层门）走出手术室，回到“111室”大夫给讲解术后注意事项。就可以回家了。</p>
<blockquote><p>
  术后当晚不用再点眼药水，戴着护眼罩睡觉，避免睡觉时无意触碰眼睛。第二天上午复诊，再由大夫取下护眼罩进行处理。
</p></blockquote>
<p>刚手术完，<strong>眼前是雾蒙蒙的</strong>，但看东西是清楚的。就像冬天进入热屋子，透过满是哈气的镜片看东西，隔了层雾，但东西是清楚的。而摘掉眼镜是不清楚的，二者是不一样的。此时的清晰程度，还没有术前戴眼镜时候清楚。大夫说是正常的，恢复几天就会逐渐清楚了。</p>
<h4>第四次 术后第一天复诊 周四 上午 8:00</h4>
<p>先去“111室”，大夫给取下“护眼罩”，清理了眼睛后，给检查视力。视力：右眼 1.2+  左眼 1.2-  散光：右眼 25度  左眼 50度。</p>
<p>在“111室”给挂了陈大夫当天上午的号，去2层排队就诊。</p>
<p>进入陈大夫诊室，简单检查了下，给开了<strong>4种眼药水</strong>。先去主楼交费取药，再回陈大夫处，给讲解点眼药水的方法后，就可以回家了。待术后一周，再去复诊。</p>
<p>4种眼药水以及用法，如下：</p>
<ol>
<li>可乐必妥 0.5% 左氧氟沙星滴眼液 （1支） 4次/天。 <strong>注意</strong>：陈大夫特意强调，<strong>用新开的药！</strong>不要用术前三天滴剩下的。</li>
<li>氟美童 氟米龙滴眼液 0.1% （2支） 每周 4/3/2/1次/天 递减。</li>
<li>博士伦 唯地息 卡波姆滴眼液 （1支）其实是眼药膏，睡前1次。</li>
<li>思然 聚乙二醇滴眼液 （2支） 缓解干眼症状，眼干、酸累随时可点。</li>
</ol>
<p><img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/FS-LASIK/%E6%9C%AF%E5%90%8E4%E7%A7%8D%E7%9C%BC%E8%8D%AF%E6%B0%B4.jpg" alt="术后4种眼药水" /></p>
<h4>第五次 术后一周复诊 周二 下午 13:00</h4>
<p>先去“111室”，加挂陈大夫专家号；检查视力：右眼1.2 左眼1.2 ，散光：右眼 25度 左眼 50度， 眼压：右眼 13.4 左眼 13.6</p>
<p>咨询陈大夫2个问题：1. 看近不清楚，无法对焦？ 2. 暗部视力下降？</p>
<p>陈大夫说：由于我是高度近视，这两个问题都属于“正常”现象，一般1-3个月后逐渐恢复。也没有再开眼药，继续点上次开的眼药。待术后一个月，再预约复诊。</p>
<p>咨询“111室”大夫（<em>应该</em>就是手术时，点“麻药”的大夫。之所以说“应该是”，是因为手术点“麻药”时，已经摘掉了眼镜，还没做手术，根本什么都看不清。但是“瞎子”，一般耳朵都还是可以的... 大夫说话声音还挺好听的~）：术前“洗眼睛”的是什么药水？可以开药自己洗吗？</p>
<p>大夫说：就是水（盐水），是不是“洗眼”很舒服？</p>
<p>“对对对！”</p>
<p>但是你刚手术完，不要“洗眼”。“洗眼”是消炎用的。比如：飞毛、小虫进入眼睛，很痒可以来医院“洗眼”。不建议在家自己“洗眼”，家里没有无菌环境，避免感染。</p>
<p>“哦哦。谢谢大夫。”</p>
<p>咨询完大夫，今天的复诊就结束了。可以家走了。</p>
<p>对了，术后眼睛畏光，建议戴墨镜。戴上我新买的小墨镜，骑上心爱的自行车，它永远不会堵车，骑上心爱的自行车，我马上就到家了~</p>
<h4>第六次 术后一月复诊（一） 周二 下午13:30</h4>
<p>术后一月复诊（一）由于到晚了检查没做完，周四上午还得去一趟...</p>
<p>先去“111室”，加挂陈大夫专家号；检查视力：右眼1.2 左眼1.2</p>
<p>去看陈大夫：开了2个检查：</p>
<ol>
<li>检查1：特殊检查室（112室）检查<code>角膜地形图</code></li>
</ol>
<ul>
<li>角膜地形图（Sirius）</li>
<li>角膜地形图（Topolyzer）</li>
</ul>
<p>前面排了26个人，直到 17:20 才检查完。大夫都下班了...</p>
<ol start="2">
<li>检查2：（228房间）检查<code>显然验光组合</code> （还没有检查）</li>
</ol>
<p>周四上午早去：先去“111室”取病历，再去（228房间）做「检查2」，再去找陈大夫...</p>
<h4>第七次 术后一月复诊（二） 周四 上午 7:00</h4>
<p>有了上次晚到的教训，这次上午7点就到了，排到第3号。</p>
<ol>
<li>先去“111室”，取了病历。
</li>
<li>
<p>去“228房间”做“显然验光组合”检查。视力：右/左眼均为 1.2</p>
</li>
<li>
<p>去看陈大夫，说恢复得还可以。又给加挂专家号，开了2盒眼药水（施图伦 七叶洋地黄双苷滴眼液）</p>
<p><img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/FS-LASIK/%E6%96%BD%E5%9B%BE%E4%BC%A6%20%E4%B8%83%E5%8F%B6%E6%B4%8B%E5%9C%B0%E9%BB%84%E5%8F%8C%E8%8B%B7%E6%BB%B4%E7%9C%BC%E6%B6%B2.jpg" alt="施图伦 七叶洋地黄双苷滴眼液" /></p>
</li>
</ol>
<p>待术后三个月，再预约复诊。</p>
<h4>未完待续...</h4>
]]></content:encoded>
					
					<wfw:commentRss>https://dbtan.com/2021/06/my-fs-lasik.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Oracle 19c RAC + ADG 手动 failover 角色转换步骤</title>
		<link>https://dbtan.com/2020/03/oracle-19c-rac-adg-step-by-step-manual-data-guard-failover.html</link>
					<comments>https://dbtan.com/2020/03/oracle-19c-rac-adg-step-by-step-manual-data-guard-failover.html#respond</comments>
		
		<dc:creator><![CDATA[dbtan]]></dc:creator>
		<pubDate>Tue, 17 Mar 2020 15:40:03 +0000</pubDate>
				<category><![CDATA[Oracle]]></category>
		<category><![CDATA[Oracle 19c]]></category>
		<category><![CDATA[Oracle Data Guard]]></category>
		<category><![CDATA[Oracle MAA]]></category>
		<category><![CDATA[Oracle RAC]]></category>
		<category><![CDATA[failover]]></category>
		<category><![CDATA[tq1]]></category>
		<category><![CDATA[tqdb]]></category>
		<category><![CDATA[tqdb21]]></category>
		<category><![CDATA[tqdb22]]></category>
		<guid isPermaLink="false">https://www.dbtan.com/?p=417</guid>

					<description><![CDATA[Oracle 19c RAC + ADG 手动 failover 角色转换步骤 Revision V4.0 N [&#8230;]]]></description>
										<content:encoded><![CDATA[<h3>Oracle 19c RAC + ADG 手动 <code>failover</code> 角色转换步骤</h3>
<p><strong>Revision V4.0</strong></p>
<table>
<thead>
<tr>
<th align="left">No.</th>
<th>Date</th>
<th>Author/Modifier</th>
<th>Comments</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">1.0</td>
<td>2020-03-06</td>
<td>谈权</td>
<td>初稿：<a class="wp-editor-md-post-content-link" href="https://www.dbtan.com/2020/03/oracle-maa-oracle-19c-rac-adg.html">搭建 Oracle MAA: Oracle 19c RAC + ADG</a></td>
</tr>
<tr>
<td align="left">2.0</td>
<td>2020-03-10</td>
<td>谈权</td>
<td>增加：<a class="wp-editor-md-post-content-link" href="https://www.dbtan.com/2020/03/oracle-19c-rac-adg-step-by-step-manual-data-guard-switchover.html">16. 手动 <code>switchover</code> 角色转换步骤</a></td>
</tr>
<tr>
<td align="left">3.0</td>
<td>2020-03-13</td>
<td>谈权</td>
<td>增加：17. 手动 <code>failover</code> 角色转换步骤</td>
</tr>
<tr>
<td align="left">4.0</td>
<td>2020-03-14</td>
<td>谈权</td>
<td>完善：「17.3」和「17.4」</td>
</tr>
</tbody>
</table>

<p>接前两篇文章（<a class="wp-editor-md-post-content-link" href="https://www.dbtan.com/2020/03/oracle-maa-oracle-19c-rac-adg.html">搭建 Oracle MAA: Oracle 19c RAC + ADG</a> 和 <a class="wp-editor-md-post-content-link" href="https://www.dbtan.com/2020/03/oracle-19c-rac-adg-step-by-step-manual-data-guard-switchover.html">Oracle 19c RAC + ADG 手动 switchover 角色转换步骤</a>），本文继续完成「17. 手动 <code>failover</code> 角色转换步骤」。</p>
<h2>17. 手动 <code>failover</code> 角色转换步骤</h2>
<h3>17.1 Data Guard Side: Standby (single-instance) 进行 <code>failover</code> 切换</h3>
<blockquote><p>如果 data guard 主数据库的情况很糟糕，或者不能用于生产，那么我们可以激活备用数据库作为主生产数据库。</p>
<p><strong>failover 将破坏 dataguard 模式，需要重新配置 dataguard。</strong></p>
<p><img loading="lazy" decoding="async" class="alignnone size-medium" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/Oracle 19c Data Guard Failover Structure-2-ok.png" alt="Oracle 19c Data Guard Failover Structure-2" width="498" height="601" /></p>
<p><strong>Data guard Configuration details:</strong></p>
<table>
<thead>
<tr>
<th>Environment Details</th>
<th>Primary (RAC)</th>
<th>Standby (single-instance)</th>
</tr>
</thead>
<tbody>
<tr>
<td>OS</td>
<td>CentOS Linux release 7.7.1908 (Core)</td>
<td>CentOS Linux release 7.7.1908 (Core)</td>
</tr>
<tr>
<td>DB Version</td>
<td>Version 19.6.0.0.0</td>
<td>Version 19.6.0.0.0</td>
</tr>
<tr>
<td>DATABASE_ROLE</td>
<td>PRIMARY</td>
<td>PHYSICAL STANDBY</td>
</tr>
<tr>
<td>DB_UNIQUE_NAME</td>
<td>tqdb</td>
<td>tqdb_adg</td>
</tr>
</tbody>
</table>
<p><strong>Failover Configuration details:</strong></p>
<table>
<thead>
<tr>
<th>Environment Details</th>
<th>Primary (RAC)</th>
<th>Standby (single-instance)</th>
</tr>
</thead>
<tbody>
<tr>
<td>OS</td>
<td>CentOS Linux release 7.7.1908 (Core)</td>
<td>CentOS Linux release 7.7.1908 (Core)</td>
</tr>
<tr>
<td>DB Version</td>
<td>Version 19.6.0.0.0</td>
<td>Version 19.6.0.0.0</td>
</tr>
<tr>
<td>DATABASE_ROLE</td>
<td>PRIMARY</td>
<td>PRIMARY</td>
</tr>
<tr>
<td>DB_UNIQUE_NAME</td>
<td>tqdb</td>
<td>tqdb_adg</td>
</tr>
</tbody>
</table>
<pre><code class="language-sql line-numbers">-- Primary Side
-- RAC 节点1 查看 `DATABASE_ROLE` 和 `OPEN_MODE`
21:49:58 sys@TQDB(tqdb21)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
21:49:58   2  from gv$database db, gv$instance inst
21:49:58   3  where db.INST_ID = inst.INST_ID
21:49:58   4  ;

INST_ID       DBID INSTANCE_NAME  HOST_NAME  OPEN_MODE    PROTECTION_MODE      DATABASE_ROLE  DB_UNIQUE_NAME
-------- ---------- -------------- ---------- ------------ -------------------- -------------- --------------
    1 3966209240 tqdb1          tqdb21     READ WRITE   MAXIMUM PERFORMANCE  PRIMARY        tqdb
    2 3966209240 tqdb2          tqdb22     READ WRITE   MAXIMUM PERFORMANCE  PRIMARY        tqdb

21:49:58 sys@TQDB(tqdb21)&gt; 

-- Data Guard Side:
-- 备库
-- 1. 查看备库端的 `DATABASE_ROLE` 和 `OPEN_MODE`
21:54:04 sys@TQDB(tq1)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
21:54:04   2  from gv$database db, gv$instance inst
21:54:04   3  where db.INST_ID = inst.INST_ID
21:54:04   4  ;

INST_ID       DBID INSTANCE_NAME  HOST_NAME  OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    DB_UNIQUE_NAME
------- ---------- -------------- ---------- -------------------- -------------------- ---------------- ---------------
   1 3966209240 tqdb_adg       tq1        READ ONLY WITH APPLY MAXIMUM PERFORMANCE  PHYSICAL STANDBY tqdb_adg

21:54:04 sys@TQDB(tq1)&gt; 

-- 2. Cancel the MRP process
[oracle@tq1: ~]$ ps -ef | grep mrp
oracle   11719 13271  0 22:07 pts/3    00:00:00 grep --color mrp
oracle   25724     1  0 Mar11 ?        00:04:08 ora_mrp0_tqdb_adg
[oracle@tq1: ~]$ 
[oracle@tq1: ~]$ 
[oracle@tq1: ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 13 22:09:18 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0

22:08:16 sys@TQDB(tq1)&gt; 
22:08:17 sys@TQDB(tq1)&gt; alter database recover managed standby database cancel;

Database altered.

22:08:48 sys@TQDB(tq1)&gt; 

-- 3. 接下来的命令，将把 standby 切换为 primary。
23:22:55 sys@TQDB(tq1)&gt; alter database recover managed standby database finish;

Database altered.

23:23:40 sys@TQDB(tq1)&gt; 
23:25:15 sys@TQDB(tq1)&gt; col name for a10;
23:25:27 sys@TQDB(tq1)&gt; set lines 200;
23:25:30 sys@TQDB(tq1)&gt; select name, open_mode, database_role from v$database;

NAME       OPEN_MODE            DATABASE_ROLE
---------- -------------------- ----------------
TQDB       READ ONLY            PHYSICAL STANDBY

23:25:32 sys@TQDB(tq1)&gt; 
23:26:30 sys@TQDB(tq1)&gt; 
23:26:30 sys@TQDB(tq1)&gt; alter database activate standby database;

Database altered.

-- Managed recovery process has been stopped between primary and standby database and standby becomes primary database.
-- MRP(Managed recovery process)在主数据库和备用数据库之间停止，备用数据库成为主数据库。

23:27:16 sys@TQDB(tq1)&gt; select name, open_mode, database_role from v$database;

NAME       OPEN_MODE            DATABASE_ROLE
---------- -------------------- ----------------
TQDB       MOUNTED              PRIMARY

-- 4. 重启数据库实例，此时「原备库」已经成为 `primary`。 
23:27:41 sys@TQDB(tq1)&gt; 
23:29:27 sys@TQDB(tq1)&gt; 
23:29:27 sys@TQDB(tq1)&gt; conn / as sysdba
Connected.
23:29:30 idle(tq1)&gt; 
23:29:31 idle(tq1)&gt; 
23:29:31 idle(tq1)&gt; shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
23:30:14 idle(tq1)&gt; 
23:30:42 idle(tq1)&gt; 
23:30:42 idle(tq1)&gt; startup 
ORACLE instance started.

Total System Global Area 1191181696 bytes
Fixed Size                  8895872 bytes
Variable Size             318767104 bytes
Database Buffers          855638016 bytes
Redo Buffers                7880704 bytes
Database mounted.
Database opened.
23:30:57 idle(tq1)&gt; conn / as sysdba
Connected.
23:31:11 sys@TQDB(tq1)&gt; 
23:31:12 sys@TQDB(tq1)&gt; set lines 200
23:31:22 sys@TQDB(tq1)&gt; col name for a10;
23:31:30 sys@TQDB(tq1)&gt; select name, open_mode, database_role from v$database;

NAME       OPEN_MODE            DATABASE_ROLE
---------- -------------------- ----------------
TQDB       READ WRITE           PRIMARY

23:31:35 sys@TQDB(tq1)&gt; 
23:45:04 sys@TQDB(tq1)&gt; col HOST_NAME for a10;
23:45:04 sys@TQDB(tq1)&gt; --
23:45:04 sys@TQDB(tq1)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
23:45:04   2  from gv$database db, gv$instance inst
23:45:04   3  where db.INST_ID = inst.INST_ID
23:45:04   4  ;

INST_ID       DBID INSTANCE_NAME    HOST_NAME  OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    DB_UNIQUE_NAME
---------- ---------- ---------------- ---------- -------------------- -------------------- ---------------- ------------------------------
      1 3966209240 tqdb_adg         tq1        READ WRITE           MAXIMUM PERFORMANCE  PRIMARY          tqdb_adg

23:45:05 sys@TQDB(tq1)&gt; 

-- 此时，「原备库」成为的 `primary`，已经与「原主库RAC」没有 data guard 关系了。
-- 「原备库」与「原主库RAC」是两套独立的 `primary`，之间没有 data guard 关系了。
-- 「原主库RAC」
23:44:49 sys@TQDB(tqdb21)&gt; --
23:44:49 sys@TQDB(tqdb21)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
23:44:49   2  from gv$database db, gv$instance inst
23:44:49   3  where db.INST_ID = inst.INST_ID
23:44:49   4  ;

INST_ID       DBID INSTANCE_NAME    HOST_NAME  OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    DB_UNIQUE_NAME
---------- ---------- ---------------- ---------- -------------------- -------------------- ---------------- ------------------------------
      1 3966209240 tqdb1            tqdb21     READ WRITE           MAXIMUM PERFORMANCE  PRIMARY          tqdb
      2 3966209240 tqdb2            tqdb22     READ WRITE           MAXIMUM PERFORMANCE  PRIMARY          tqdb

23:44:49 sys@TQDB(tqdb21)&gt; 


</code></pre>
</blockquote>
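<p>把上面 17.1 的完整记录浓缩一下，备库端（tq1）执行 <code>failover</code> 的核心命令序列大致如下（示意，环境名沿用本文的 tqdb_adg，实际操作以上面的完整记录为准）：</p>
<pre><code class="language-sql line-numbers">-- Data Guard Side: 在备库 tq1 上执行
-- 1. 停止 MRP（日志应用）
alter database recover managed standby database cancel;
-- 2. 应用完剩余可用的 redo
alter database recover managed standby database finish;
-- 3. 激活备库为 primary
alter database activate standby database;
-- 4. 重启实例，使角色切换完全生效
shutdown immediate;
startup;
-- 5. 确认角色：OPEN_MODE 应为 READ WRITE，DATABASE_ROLE 应为 PRIMARY
select name, open_mode, database_role from v$database;
</code></pre>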
<h3>17.2 「原主库」old Primary (RAC) 恢复为「新主库」的备库的三种方法</h3>
<blockquote><p>在failover之后，如果原主库故障解决，可以重新上线，我们可以看到，在startup以后，它的角色仍然是Primary，很显然，一个dataguard配置中，是不可能有两个主库的。这时，我们可以将这个原主库转换为新主库的备库。</p>
<p>主要有三种方法：</p>
<p>一、按照先前的方法，利用新主库的备份，将这个原主库重新配置为备库。</p>
<pre><code class="language-sql line-numbers">生产环境下，一般建议使用「方法一」：将“原主库”重新搭建为“新主库”的`PHYSICAL STANDBY`，再进行 `switchover`（切换回“原主库” 为`PRIMARY`，原备库为“PHYSICAL STANDBY”）。

即：重新搭建回 Active Data Guard 架构。
「现主库 tq1」 --&gt;&gt; 「现备库 RAC」
其实，也就是单实例到RAC的Data Guard架构的搭建。（步骤详见： 「17.3 重新搭建回 Active Data Guard 架构」）
</code></pre>
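<p>「方法一」重建完成后，最终还要做一次 <code>switchover</code> 把角色换回来。按 12c 及以后版本的 SQL*Plus 语法，命令序列大致如下（示意；tqdb 为 RAC 端的 DB_UNIQUE_NAME，详细步骤见前文「16. 手动 switchover 角色转换步骤」）：</p>
<pre><code class="language-sql line-numbers">-- 在「现主库 tq1」上：先校验可切换性，再执行切换
alter database switchover to tqdb verify;
alter database switchover to tqdb;

-- 在「新主库 RAC」上：打开数据库
alter database open;

-- 「新备库 tq1」以 standby 角色重启后，启动实时日志应用
alter database recover managed standby database using current logfile disconnect from session;
</code></pre>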
<p>二、利用flashback。</p>
<pre><code class="language-sql line-numbers">-- Flashing Back a Failed Primary Database into a Physical Standby Database
1. 查询原备库转换成主库时的SCN   --&gt;&gt; tq1 上操作

      SQL&gt; select to_char(standby_became_primary_scn) from v$database;

      TO_CHAR(STANDBY_BECAME_PRIMARY_SCN)
      ----------------------------------------
      901719

2. Flash back原主库  --&gt;&gt; 即：RAC节点

      SQL&gt; shutdown immediate

      SQL&gt; startup mount

      SQL&gt; flashback database to scn 901719;  

      --&gt;&gt; 注意，前提是 `flashback_on` 的特性必须开启，`alter database flashback on;`
      --&gt;&gt; 在我的事例中，没有开启 `flashback_on` 的特性，也就无法使用`flashback`快速恢复dataguard了。

3. 将原主库转换为备库  --&gt;&gt; RAC节点 上操作

       SQL&gt; alter database convert to physical standby;

       SQL&gt; shutdown immediate

       SQL&gt; startup

4. 现备库上启用Redo Apply  --&gt;&gt; RAC节点 上操作

       SQL&gt; alter database recover managed standby database using current logfile disconnect from session;

基本OK！
</code></pre>
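<p>「方法二」的前提是原主库在 failover 发生之前就已开启数据库闪回（本文环境未开启，因此无法使用）。检查与开启的命令大致如下（示意；开启前需已配置快速恢复区 <code>db_recovery_file_dest</code> 及其大小）：</p>
<pre><code class="language-sql line-numbers">-- 检查闪回是否已开启（YES/NO）
select flashback_on from v$database;

-- 在 mount 状态下开启闪回
alter database flashback on;

-- 查看闪回可回退到的最早 SCN 和时间
select oldest_flashback_scn, oldest_flashback_time
from v$flashback_database_log;
</code></pre>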
<p>三、利用rman备份。</p>
<pre><code class="language-sql line-numbers">-- Converting a Failed Primary into a Standby Database Using RMAN Backups

1. 查询原备库转换成主库时的SCN  --&gt;&gt; tq1 上操作

      SQL&gt; select to_char(standby_became_primary_scn) from v$database;

2. 恢复原主库      --&gt;&gt; 即：RAC节点

       RMAN &gt; run
               { set until scn &lt;standby_became_primary_scn+1&gt;;  
                  restore database;            
                  recover database;
                }

3. 将原主库转换为备库  --&gt;&gt; 即：RAC节点

       SQL&gt; alter database convert to physical standby;

       SQL&gt; shutdown immediate

       SQL&gt; startup mount

       SQL&gt; alter database open read only;

4. 现备库上启用Redo Apply       --&gt;&gt; 即：RAC节点

       SQL&gt; alter database recover managed standby database using current logfile disconnect from session;

基本OK！


</code></pre>
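<p>「方法三」中 <code>set until scn</code> 的取值，可以套用「方法二」示例里查到的 <code>standby_became_primary_scn</code>（901719）来具体化（示意）：</p>
<pre><code class="language-sql line-numbers">-- 假设在 tq1 上查到 standby_became_primary_scn = 901719（见方法二的示例输出）
-- 则在 RAC 节点上用 RMAN 恢复到该 SCN + 1：
run {
  set until scn 901720;
  restore database;
  recover database;
}
</code></pre>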
</blockquote>
<h3>17.3 重新搭建回 Active Data Guard 架构</h3>
<blockquote>
<p><img loading="lazy" decoding="async" class="alignnone size-medium" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/Oracle 19c Data Guard Renew Structure-1-ok.png" alt="Oracle 19c Data Guard Renew Structure-1" width="508" height="981" /></p>
<p>操作记录：</p>
<pre><code class="language-bash line-numbers">一、按照先前的方法，利用新主库的备份，将这个原主库重新配置为备库。
生产环境下，一般建议使用「方法一」：将“原主库”重新搭建为“新主库”的`PHYSICAL STANDBY`，再进行 `switchover`（切换回“原主库” 为`PRIMARY`，原备库为“PHYSICAL STANDBY”）。



-- 「新主库 tq1」:  pfile文件不变
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ vim inittqdb_adg.ora
*.audit_file_dest='/u01/app/oracle/admin/tqdb/adump'
*.db_unique_name='tqdb_adg'
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(tqdb_adg,tqdb)'
*.log_archive_dest_1='LOCATION=+DATA/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=tqdb_adg'
*.log_archive_dest_2='SERVICE=tqdb ASYNC LGWR VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=tqdb'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.arc'
*.standby_file_management='AUTO'
*.fal_server='tqdb'
*.fal_client='tqdb_adg'
*.control_files='+DATA'
*.db_create_file_dest='+DATA'
*.db_name='tqdb'
*.pga_aggregate_target=379M
*.processes=300
*.sga_target=1136M
*.db_block_size=8192
*.compatible="19.0.0"
*.audit_trail="DB"
*.open_cursors=300
*._optimizer_use_auto_indexes="OFF"
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ 


---- 「新备库 RAC」 pfile=/tmp/init_tqdb21_new_ADG.sql
tqdb1.__data_transfer_cache_size=0
tqdb2.__data_transfer_cache_size=0
tqdb1.__db_cache_size=385875968
tqdb2.__db_cache_size=415236096
tqdb1.__inmemory_ext_roarea=0
tqdb2.__inmemory_ext_roarea=0
tqdb1.__inmemory_ext_rwarea=0
tqdb2.__inmemory_ext_rwarea=0
tqdb1.__java_pool_size=0
tqdb2.__java_pool_size=0
tqdb1.__large_pool_size=4194304
tqdb2.__large_pool_size=4194304
tqdb1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
tqdb2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
tqdb1.__pga_aggregate_target=276824064
tqdb2.__pga_aggregate_target=276824064
tqdb1.__sga_target=822083584
tqdb2.__sga_target=822083584
tqdb1.__shared_io_pool_size=33554432
tqdb2.__shared_io_pool_size=33554432
tqdb1.__shared_pool_size=385875968
tqdb2.__shared_pool_size=356515840
tqdb1.__streams_pool_size=0
tqdb2.__streams_pool_size=0
tqdb1.__unified_pga_pool_size=0
tqdb2.__unified_pga_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/tqdb/adump'
*.db_unique_name='tqdb'
*.log_archive_config='DG_CONFIG=(tqdb,tqdb_adg)'
*.log_archive_dest_1='LOCATION=+DATA/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=tqdb'
*.log_archive_dest_2='SERVICE=tqdb_adg ASYNC LGWR VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=tqdb_adg'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.arc'
*.standby_file_management='AUTO'
*.fal_client='tqdb'
*.fal_server='tqdb_adg'
*.control_files='+DATA'
*.db_create_file_dest='+DATA'
*.db_name='tqdb'
*.pga_aggregate_target=262m
*.processes=300
*.sga_target=783m
*.db_block_size=8192
*.compatible="19.0.0"
*.audit_trail="DB"
*.open_cursors=300
*._optimizer_use_auto_indexes="OFF"
*.cluster_database=TRUE
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=tqdbXDB)'
family:dw_helper.instance_mode='read-only'
tqdb2.instance_number=2
tqdb1.instance_number=1
*.local_listener='-oraagent-dummy-'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.remote_login_passwordfile='exclusive'
tqdb2.thread=2
tqdb1.thread=1
*.undo_tablespace='UNDOTBS1'
tqdb2.undo_tablespace='UNDOTBS2'
tqdb1.undo_tablespace='UNDOTBS1'

[oracle@tqdb21: /tmp]$ scp init_tqdb21_new_ADG.sql oracle@tqdb22:/tmp/init_tqdb22_new_ADG.sql  
init_tqdb21_new_ADG.sql                                                                                                                                              100% 2145     2.6MB/s   00:00    
[oracle@tqdb21: /tmp]$ 



-- 「新备库 RAC」 节点1
[oracle@tqdb21: /tmp]$ echo $ORACLE_SID
tqdb1
[oracle@tqdb21: /tmp]$ echo $DB_UNIQUE_NAME
tqdb
[oracle@tqdb21: /tmp]$ 

-- 「现备库 RAC」: 使用上面的`pfile`启动备库到 `nomount` 状态
-- 「现备库 RAC」 节点1： 先只启动一个实例 tqdb1
[oracle@tqdb21: /tmp]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 14 04:22:02 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.


04:22:04 idle&gt; startup nomount pfile='/tmp/init_tqdb21_new_ADG.sql'
ORACLE instance started.

Total System Global Area  822080768 bytes
Fixed Size                  8901888 bytes
Variable Size             390070272 bytes
Database Buffers          419430400 bytes
Redo Buffers                3678208 bytes
04:22:30 idle&gt; 
04:23:14 idle&gt; 

-- 「现备库 RAC」 节点2： 停止数据库实例
[oracle@tqdb22: /tmp]$ echo $ORACLE_SID
tqdb2
[oracle@tqdb22: /tmp]$ echo $DB_UNIQUE_NAME
tqdb
[oracle@tqdb22: /tmp]$ 
[oracle@tqdb22: /tmp]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 14 04:24:42 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.


04:24:48 idle&gt; 
04:24:51 idle&gt; startup nomount pfile='/tmp/init_tqdb22_new_ADG.sql';
ORACLE instance started.

Total System Global Area  822080768 bytes
Fixed Size                  8901888 bytes
Variable Size             360710144 bytes
Database Buffers          448790528 bytes
Redo Buffers                3678208 bytes
04:25:38 idle&gt; 
04:25:52 idle&gt; 
04:25:52 idle&gt; shutdown immediate;





创建密码文件：SYS 密码与主数据库的密码匹配。
将单实例主库的密码文件orapw&lt;$ORACLE_SID&gt;拷贝至备库所有节点，并改名为`orapwtqdb1`和`orapwtqdb2`


-- 「新主库 tq1」
oracle$ orapwd file=/u01/app/oracle/product/19c/dbhome/dbs/orapwtqdb password=Oracle123 entries=10 format=12 

[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ cp orapwtqdb_adg orapwtqdb_adg.bak
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ mv orapwtqdb_adg.bak orapwtqdb_adg.bak_20200314
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ 
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ 
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ orapwd file=/u01/app/oracle/product/19c/dbhome/dbs/orapwtqdb_adg password=Oracle123 entries=10 format=12 

OPW-00005: File with same name exists - please delete or rename
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ rm orapwtqdb_adg
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ orapwd file=/u01/app/oracle/product/19c/dbhome/dbs/orapwtqdb_adg password=Oracle123 entries=10 format=12 
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ ll -th
total 175M
-rw-r----- 1 oracle oinstall 2.0K Mar 14 04:40 orapwtqdb_adg
-rw-r----- 1 oracle oinstall 2.0K Mar 14 04:39 orapwtqdb_adg.bak_20200314
-rw-r----- 1 oracle asmadmin 9.5K Mar 14 02:59 spfiletqdb_adg.ora
-rw-rw---- 1 oracle asmadmin 1.6K Mar 14 02:59 hc_tqdb_adg.dat
-rw-r----- 1 oracle asmadmin  44M Mar 13 23:40 c-3966209240-20200313-00
-rw-r----- 1 oracle asmadmin  44M Mar 13 23:40 snapcf_tqdb_adg.f
-rw-r----- 1 oracle asmadmin  44M Mar 11 01:20 c-3966209240-20200311-01
-rw-r----- 1 oracle asmadmin  44M Mar 11 00:16 c-3966209240-20200311-00
-rw-r----- 1 oracle asmadmin   24 Mar  7 08:29 lkTQDB_ADG
-rw-r--r-- 1 oracle oinstall  784 Mar  7 07:53 inittqdb_adg.ora
-rw-rw---- 1 oracle asmadmin 1.6K Mar  7 04:15 hc_tq1.dat
-rw-r--r-- 1 oracle asmadmin  941 Feb  6 17:38 inittq1.ora
-rw-r----- 1 oracle oinstall 2.0K Jan 17 21:43 orapwtq1
-rw-r----- 1 oracle asmadmin   24 Jan 17 21:27 lkTQ1
-rw-r--r-- 1 oracle oinstall 3.1K May 14  2015 init.ora
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ strings orapwtqdb_adg
][Z
ORACLE Remote Password file
r|qv
$3wl`B 
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ 
```

```Copy the primary's password file orapw&lt;$ORACLE_SID&gt; to every standby node as `orapwtqdb1` and `orapwtqdb2`
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ scp orapwtqdb_adg oracle@tqdb21:/u01/app/oracle/product/19c/dbhome/dbs/orapwtqdb1
oracle@tqdb21's password: 
orapwtqdb_adg                                                                                                                                                        100% 2048   555.7KB/s   00:00    
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ scp orapwtqdb_adg oracle@tqdb22:/u01/app/oracle/product/19c/dbhome/dbs/orapwtqdb2
The authenticity of host 'tqdb22 (192.168.6.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22,192.168.6.22' (ECDSA) to the list of known hosts.
oracle@tqdb22's password: 
orapwtqdb_adg                                                                                                                                                        100% 2048   397.4KB/s   00:00    
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ 
```




-- Current primary tq1
-- Before this step, make sure the primary's backup jobs have been stopped, or that the
-- RMAN ARCHIVELOG DELETION POLICY has been set to applied on standby:
-- `CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;`


```Configure the archivelog deletion policy
[oracle@tq1: ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Sat Mar 14 07:01:39 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TQDB (DBID=3966209240)

RMAN&gt; show all;

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name TQDB_ADG are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/19c/dbhome/dbs/snapcf_tqdb_adg.f'; # default

RMAN&gt; CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
new RMAN configuration parameters are successfully stored

RMAN&gt; show all;

RMAN configuration parameters for database with db_unique_name TQDB_ADG are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/19c/dbhome/dbs/snapcf_tqdb_adg.f'; # default

RMAN&gt; quit


Recovery Manager complete.
[oracle@tq1: ~]$ 
```
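With the deletion policy set to APPLIED ON ALL STANDBY, RMAN refuses to delete archived logs the standby has not yet applied. A minimal hedged sketch of how to check, once the standby is applying redo (the `dest_id = 2` filter assumes the standby destination is `log_archive_dest_2`):

```sql
-- On the primary: archived logs and whether the standby destination has applied them.
-- (dest_id = 2 is an assumption: the standby as log_archive_dest_2.)
SELECT thread#, sequence#, applied
  FROM v$archived_log
 WHERE dest_id = 2
 ORDER BY thread#, sequence#;
```

In RMAN, `DELETE ARCHIVELOG ALL;` will then skip, with a warning, any log the standby still needs.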




-- 1. Current standby RAC, node 1: add a static listener entry for the standby
```
[grid@tqdb21: /u01/app/19c/grid/network/admin]$ cat listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))                # line added by Agent
ASMNET1LSNR_ASM=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=ASMNET1LSNR_ASM))))              # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM=ON               # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM=SUBNET         # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1=OFF             # line added by Agent - Disabled by Agent because REMOTE_REGISTRATION_ADDRESS is set
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET                # line added by Agent


# Add a static listener entry for the standby
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME = /u01/app/oracle/product/19c/dbhome)
(SID_NAME = tqdb)
)
)

[grid@tqdb21: /u01/app/19c/grid/network/admin]$ 
```


-- 1. Current standby RAC, node 2: add a static listener entry for the standby
```
[grid@tqdb22: /u01/app/19c/grid/network/admin]$ cat listener.ora
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))                # line added by Agent
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent
ASMNET1LSNR_ASM=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=ASMNET1LSNR_ASM))))              # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM=ON               # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM=SUBNET         # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET                # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON                # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1=OFF             # line added by Agent - Disabled by Agent because REMOTE_REGISTRATION_ADDRESS is set



# Add a static listener entry for the standby
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME = /u01/app/oracle/product/19c/dbhome)
(SID_NAME = tqdb)
)
)


[grid@tqdb22: /u01/app/19c/grid/network/admin]$ 
```


-- 2. Current standby RAC, nodes 1 and 2: restart the listener on each node
# srvctl stop listener 
# srvctl start listener 
# crsctl stat res -t

grid$ lsnrctl status
grid$ lsnrctl service
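After the restart, the statically registered SID should appear in the local listener with status UNKNOWN (the listener does not probe statically registered instances; dynamically registered services show READY). Roughly what to look for, as a sketch:

```
[grid@tqdb21: ~]$ lsnrctl status
...
Service "tqdb" has 1 instance(s).
  Instance "tqdb", status UNKNOWN, has 1 handler(s) for this service...
```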



-- Current primary tq1 and current standby RAC: add `tnsnames` aliases on both sides
```Current primary tq1
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ cat tnsnames.ora 
TQ1 =
(DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
 (CONNECT_DATA =
   (SERVER = DEDICATED)
   (SERVICE_NAME = tq1)
 )
)

# On tq1, add two aliases: `tqdb` and `tqdb_adg`
tqdb =
(DESCRIPTION =
 #(ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb21-vip)(PORT = 1521))
 ## Because only instance `tqdb1` of the new standby RAC was started with `startup nomount;`,
 ## `tqdb22-vip` is commented out for now, so that `LOAD_BALANCE` cannot route the RMAN
 ## `auxiliary` connection to instance 2 (`tqdb2`). If RMAN connects to `tqdb2`, the state is
 ## `connected to auxiliary database (not started)` and `duplicate from active database`
 ## cannot create the `standby database`.
 ## Uncomment `tqdb22-vip` after the `standby database` has been restored.
 #(ADDRESS = (PROTOCOL = TCP)(HOST = tqdb22-vip)(PORT = 1521))
 (LOAD_BALANCE = yes)
 (FAILOVER = yes)
 (CONNECT_DATA =
   (SERVER = DEDICATED)
   (SERVICE_NAME = tqdb)
   (UR=A)
   (FAILOVER_MODE =
     (TYPE = SELECT)
     (METHOD = BASIC)
     (RETRIES = 180)
     (DELAY = 5)
   )
 )
)



tqdb_adg =
(DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
 (CONNECT_DATA =
   (SERVER = DEDICATED)
   (SERVICE_NAME = tqdb_adg) 
 )
)

[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
```
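Before running DUPLICATE, it is worth confirming that the `tqdb` alias really lands on the `nomount` instance tqdb1. A minimal sketch, using the alias defined above:

```sql
-- sqlplus sys/***@tqdb as sysdba
SELECT instance_name, status FROM v$instance;
-- With only node 1 started nomount, this should return tqdb1 / STARTED.
```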

-- Current standby RAC, node 1: tnsnames aliases
```Current standby RAC, node 1
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/network/admin]$ cat tnsnames.ora 
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/19c/dbhome/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

#TQDB =
#  (DESCRIPTION =
#    (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521))
#    (CONNECT_DATA =
#      (SERVER = DEDICATED)
#      (SERVICE_NAME = tqdb)
#    )
#  )


# On the new standby, add two aliases: `tqdb` and `tqdb_adg`
tqdb =
(DESCRIPTION =
 #(ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb21-vip)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb22-vip)(PORT = 1521))
 (LOAD_BALANCE = yes)
 (FAILOVER = yes)
 (CONNECT_DATA =
   (SERVER = DEDICATED)
   (SERVICE_NAME = tqdb)
   (UR=A)
   (FAILOVER_MODE =
     (TYPE = SELECT)
     (METHOD = BASIC)
     (RETRIES = 180)
     (DELAY = 5)
   )
 )
)



# Only the alias `tqdb_adg` needs to be added
tqdb_adg =
(DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
 (CONNECT_DATA =
   (SERVER = DEDICATED)
   (SERVICE_NAME = tqdb_adg) 
   (UR=A)
 )
)


[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
```

-- Current standby RAC, node 2: tnsnames aliases
```Current standby RAC, node 2
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/network/admin]$ cat tnsnames.ora 
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/19c/dbhome/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.



#TQDB =
#  (DESCRIPTION =
#    (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521))
#    (CONNECT_DATA =
#      (SERVER = DEDICATED)
#      (SERVICE_NAME = tqdb)
#    )
#  )


# On the new standby, add two aliases: `tqdb` and `tqdb_adg`
tqdb =
(DESCRIPTION =
 #(ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb21-vip)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb22-vip)(PORT = 1521))
 (LOAD_BALANCE = yes)
 (FAILOVER = yes)
 (CONNECT_DATA =
   (SERVER = DEDICATED)
   (SERVICE_NAME = tqdb)
   (UR=A)
   (FAILOVER_MODE =
     (TYPE = SELECT)
     (METHOD = BASIC)
     (RETRIES = 180)
     (DELAY = 5)
   )
 )
)




# Only the alias `tqdb_adg` needs to be added
tqdb_adg =
(DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
 (CONNECT_DATA =
   (SERVER = DEDICATED)
   (SERVICE_NAME = tqdb_adg) 
   (UR=A)
 )
)


[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
```



-- Current primary tq1: verify the logins as user oracle
oracle$ sqlplus sys/Oracle123@tqdb as sysdba
oracle$ sqlplus sys/Oracle123@tqdb21:1521/tqdb as sysdba
oracle$ sqlplus sys/Oracle123@tqdb22:1521/tqdb as sysdba 
oracle$ sqlplus sys/Oracle123@tqdb_adg as sysdba
oracle$ sqlplus sys/Oracle123@tq1:1521/tqdb_adg as sysdba 

```
-- Current primary tq1: verify the logins as user oracle
[oracle@tq1: ~]$ sqlplus sys/Oracle123@tqdb as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 14 11:41:17 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0

11:41:17 idle(tqdb21)&gt; quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
[oracle@tq1: ~]$ 
[oracle@tq1: ~]$ 
[oracle@tq1: ~]$ sqlplus sys/Oracle123@tqdb_adg as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 14 11:42:34 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0

11:42:34 sys@TQDB(tq1)&gt; quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
[oracle@tq1: ~]$ 
[oracle@tq1: ~]$ sqlplus sys/Oracle123@tq1:1521/tqdb_adg as sysdba 

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 14 11:42:44 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0

11:42:44 sys@TQDB(tq1)&gt; quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
[oracle@tq1: ~]$ 

```

-- 3. Current primary tq1: prepare the primary's connection to the auxiliary instance
oracle$ rman target / auxiliary sys/Oracle123@tqdb
-- or
oracle$ rman target sys/Oracle123@tqdb_adg auxiliary sys/Oracle123@tqdb


```Current primary tq1: test the connection to the auxiliary instance
[oracle@tq1: ~]$ rman target / auxiliary sys/Oracle123@tqdb

Recovery Manager: Release 19.0.0.0.0 - Production on Sat Mar 14 11:43:54 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TQDB (DBID=3966209240)
connected to auxiliary database: TQDB (not mounted)

RMAN&gt; 

RMAN&gt; quit


Recovery Manager complete.
[oracle@tq1: ~]$ 
[oracle@tq1: ~]$ 
[oracle@tq1: ~]$ 
[oracle@tq1: ~]$ rman target sys/Oracle123@tqdb_adg auxiliary sys/Oracle123@tqdb

Recovery Manager: Release 19.0.0.0.0 - Production on Sat Mar 14 11:44:55 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TQDB (DBID=3966209240)
connected to auxiliary database: TQDB (not mounted)

RMAN&gt; quit


Recovery Manager complete.
[oracle@tq1: ~]$ 
```


-- Current primary tq1: create the standby database with `DUPLICATE`
```
-- RMAN script to run
run
{ 
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate AUXILIARY channel c4 type disk;
allocate AUXILIARY channel c5 type disk;
allocate AUXILIARY channel c6 type disk;
DUPLICATE TARGET DATABASE
FOR STANDBY
FROM ACTIVE DATABASE
DORECOVER
NOFILENAMECHECK;
release channel c1;
release channel c2;
release channel c3;
release channel c4;
release channel c5;
release channel c6;
}
```


-- Current primary tq1: create the standby database with `DUPLICATE`
```
[oracle@tq1: ~]$ rman target sys/Oracle123@tqdb_adg auxiliary sys/Oracle123@tqdb

Recovery Manager: Release 19.0.0.0.0 - Production on Sat Mar 14 08:47:32 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TQDB (DBID=3966209240)
connected to auxiliary database: TQDB (not mounted)

RMAN&gt; 

RMAN&gt; 

RMAN&gt; 

RMAN&gt; 

RMAN&gt; run
2&gt; { 
3&gt; allocate channel c1 type disk;
4&gt; allocate channel c2 type disk;
5&gt; allocate channel c3 type disk;
6&gt; allocate AUXILIARY channel c4 type disk;
7&gt; allocate AUXILIARY channel c5 type disk;
8&gt; allocate AUXILIARY channel c6 type disk;
9&gt; DUPLICATE TARGET DATABASE
10&gt; FOR STANDBY
11&gt; FROM ACTIVE DATABASE
12&gt; DORECOVER
13&gt; NOFILENAMECHECK;
14&gt; release channel c1;
15&gt; release channel c2;
16&gt; release channel c3;
17&gt; release channel c4;
18&gt; release channel c5;
19&gt; release channel c6;
20&gt; }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=64 device type=DISK

allocated channel: c2
channel c2: SID=89 device type=DISK

allocated channel: c3
channel c3: SID=94 device type=DISK

allocated channel: c4
channel c4: SID=423 instance=tqdb1 device type=DISK

allocated channel: c5
channel c5: SID=429 instance=tqdb1 device type=DISK

allocated channel: c6
channel c6: SID=441 instance=tqdb1 device type=DISK

Starting Duplicate Db at 2020-03-14 08:47:52
current log archived

contents of Memory Script:
{
backup as copy reuse
passwordfile auxiliary format  '+DATA/TQDB/PASSWORD/pwdtqdb.257.1032337993'   ;
}
executing Memory Script

Starting backup at 2020-03-14 08:47:53
Finished backup at 2020-03-14 08:47:54
duplicating Online logs to Oracle Managed File (OMF) location
duplicating Datafiles to Oracle Managed File (OMF) location

contents of Memory Script:
{
sql clone "alter system set  control_files = 
''+DATA/TQDB/CONTROLFILE/current.414.1035017275'' comment=
''Set by RMAN'' scope=spfile";
restore clone from service  'tqdb_adg' standby controlfile;
}
executing Memory Script

sql statement: alter system set  control_files =   ''+DATA/TQDB/CONTROLFILE/current.414.1035017275'' comment= ''Set by RMAN'' scope=spfile

Starting restore at 2020-03-14 08:47:54

channel c4: starting datafile backup set restore
channel c4: using network backup set from service tqdb_adg
channel c4: restoring control file
channel c4: restore complete, elapsed time: 00:00:03
output file name=+DATA/TQDB/CONTROLFILE/current.416.1035017275
Finished restore at 2020-03-14 08:47:58

contents of Memory Script:
{
sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database

contents of Memory Script:
{
set newname for clone tempfile  1 to new;
switch clone tempfile all;
set newname for clone datafile  1 to new;
set newname for clone datafile  2 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  4 to new;
set newname for clone datafile  5 to new;
set newname for clone datafile  6 to new;
restore
from  nonsparse   from service 
'tqdb_adg'   clone database
;
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to +DATA in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 2020-03-14 08:48:04

channel c4: starting datafile backup set restore
channel c4: using network backup set from service tqdb_adg
channel c4: specifying datafile(s) to restore from backup set
channel c4: restoring datafile 00001 to +DATA
channel c5: starting datafile backup set restore
channel c5: using network backup set from service tqdb_adg
channel c5: specifying datafile(s) to restore from backup set
channel c5: restoring datafile 00002 to +DATA
channel c6: starting datafile backup set restore
channel c6: using network backup set from service tqdb_adg
channel c6: specifying datafile(s) to restore from backup set
channel c6: restoring datafile 00003 to +DATA
channel c6: restore complete, elapsed time: 00:00:03
channel c6: starting datafile backup set restore
channel c6: using network backup set from service tqdb_adg
channel c6: specifying datafile(s) to restore from backup set
channel c6: restoring datafile 00004 to +DATA
channel c6: restore complete, elapsed time: 00:00:02
channel c6: starting datafile backup set restore
channel c6: using network backup set from service tqdb_adg
channel c6: specifying datafile(s) to restore from backup set
channel c6: restoring datafile 00005 to +DATA
channel c6: restore complete, elapsed time: 00:00:01
channel c6: starting datafile backup set restore
channel c6: using network backup set from service tqdb_adg
channel c6: specifying datafile(s) to restore from backup set
channel c6: restoring datafile 00006 to +DATA
channel c6: restore complete, elapsed time: 00:00:01
channel c4: restore complete, elapsed time: 00:00:33
channel c5: restore complete, elapsed time: 00:00:54
Finished restore at 2020-03-14 08:48:58

sql statement: alter system archive log current
current log archived

contents of Memory Script:
{
restore clone force from service  'tqdb_adg' 
        archivelog from scn  5829146;
switch clone datafile all;
}
executing Memory Script

Starting restore at 2020-03-14 08:48:58

channel c4: starting archived log restore to default destination
channel c4: using network backup set from service tqdb_adg
channel c4: restoring archived log
archived log thread=1 sequence=5
channel c5: starting archived log restore to default destination
channel c5: using network backup set from service tqdb_adg
channel c5: restoring archived log
archived log thread=1 sequence=6
channel c4: restore complete, elapsed time: 00:00:01
channel c5: restore complete, elapsed time: 00:00:01
Finished restore at 2020-03-14 08:49:00

datafile 1 switched to datafile copy
input datafile copy RECID=14 STAMP=1035017340 file name=+DATA/TQDB/DATAFILE/system.417.1035017285
datafile 2 switched to datafile copy
input datafile copy RECID=15 STAMP=1035017340 file name=+DATA/TQDB/DATAFILE/sysaux.418.1035017285
datafile 3 switched to datafile copy
input datafile copy RECID=16 STAMP=1035017340 file name=+DATA/TQDB/DATAFILE/undotbs1.419.1035017285
datafile 4 switched to datafile copy
input datafile copy RECID=17 STAMP=1035017340 file name=+DATA/TQDB/DATAFILE/undotbs2.420.1035017289
datafile 5 switched to datafile copy
input datafile copy RECID=18 STAMP=1035017340 file name=+DATA/TQDB/DATAFILE/users.421.1035017291
datafile 6 switched to datafile copy
input datafile copy RECID=19 STAMP=1035017340 file name=+DATA/TQDB/DATAFILE/tq.422.1035017293

contents of Memory Script:
{
set until scn  5829367;
recover
standby
clone database
 delete archivelog
;
}
executing Memory Script

executing command: SET until clause

Starting recover at 2020-03-14 08:49:01

starting media recovery

archived log for thread 1 with sequence 5 is already on disk as file +DATA/archivelog/1_5_1034983633.arc
archived log for thread 1 with sequence 6 is already on disk as file +DATA/archivelog/1_6_1034983633.arc
archived log file name=+DATA/archivelog/1_5_1034983633.arc thread=1 sequence=5
archived log file name=+DATA/archivelog/1_6_1034983633.arc thread=1 sequence=6
media recovery complete, elapsed time: 00:00:00
Finished recover at 2020-03-14 08:49:02

contents of Memory Script:
{
delete clone force archivelog all;
}
executing Memory Script

deleted archived log
archived log file name=+DATA/archivelog/1_5_1034983633.arc RECID=1 STAMP=1035017339
Deleted 1 objects

deleted archived log
archived log file name=+DATA/archivelog/1_6_1034983633.arc RECID=2 STAMP=1035017339
Deleted 1 objects

Finished Duplicate Db at 2020-03-14 08:49:06

released channel: c1

released channel: c2

released channel: c3

released channel: c4

released channel: c5

released channel: c6

RMAN&gt; 

RMAN&gt; 

RMAN&gt; quit


Recovery Manager complete.
[oracle@tq1: ~]$ 
```
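When DUPLICATE finishes, the clone is left mounted as a physical standby. A quick sanity check on the new standby (a sketch; run before opening the database):

```sql
SELECT db_unique_name, database_role, open_mode FROM v$database;
-- Expected at this point: tqdb / PHYSICAL STANDBY / MOUNTED
```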



-- Current standby RAC, node 1: open the database and start MRP (managed recovery)
```
08:52:14 idle(tqdb21)&gt; conn / as sysdba
Connected.
08:52:17 idle(tqdb21)&gt; alter database open;

Database altered.

08:52:27 idle(tqdb21)&gt; conn / as sysdba
Connected.
08:52:44 sys@TQDB(tqdb21)&gt; 
08:54:18 sys@TQDB(tqdb21)&gt; -- basic Data Guard statistics @standby
08:54:25 sys@TQDB(tqdb21)&gt; set linesize 200;   
08:54:25 sys@TQDB(tqdb21)&gt; col name for a25;   
08:54:25 sys@TQDB(tqdb21)&gt; column value format a20;    
08:54:25 sys@TQDB(tqdb21)&gt; select * from v$dataguard_stats;    

SOURCE_DBID SOURCE_DB_UNIQUE_NAME            NAME                      VALUE                UNIT                           TIME_COMPUTED                  DATUM_TIME                         CON_ID
----------- -------------------------------- ------------------------- -------------------- ------------------------------ ------------------------------ ------------------------------ ----------
       0                                  transport lag             +00 00:00:00         day(2) to second(0) interval   03/14/2020 08:54:25            03/14/2020 08:54:24                     0
       0                                  apply lag                                      day(2) to second(0) interval   03/14/2020 08:54:25                                                    0
       0                                  apply finish time                              day(2) to second(3) interval   03/14/2020 08:54:25                                                    0
       0                                  estimated startup time    20                   second                         03/14/2020 08:54:25                                                    0

08:54:25 sys@TQDB(tqdb21)&gt; alter database recover managed standby database disconnect from session;

Database altered.

08:55:24 sys@TQDB(tqdb21)&gt; -- basic Data Guard statistics @standby
08:55:48 sys@TQDB(tqdb21)&gt; set linesize 200;   
08:55:48 sys@TQDB(tqdb21)&gt; col name for a25;   
08:55:48 sys@TQDB(tqdb21)&gt; column value format a20;    
08:55:48 sys@TQDB(tqdb21)&gt; select * from v$dataguard_stats;    

SOURCE_DBID SOURCE_DB_UNIQUE_NAME            NAME                      VALUE                UNIT                           TIME_COMPUTED                  DATUM_TIME                         CON_ID
----------- -------------------------------- ------------------------- -------------------- ------------------------------ ------------------------------ ------------------------------ ----------
       0                                  transport lag             +00 00:00:00         day(2) to second(0) interval   03/14/2020 08:55:48            03/14/2020 08:55:46                     0
       0                                  apply lag                 +00 00:00:00         day(2) to second(0) interval   03/14/2020 08:55:48            03/14/2020 08:55:46                     0
       0                                  apply finish time                              day(2) to second(3) interval   03/14/2020 08:55:48                                                    0
       0                                  estimated startup time    20                   second                         03/14/2020 08:55:48                                                    0

08:55:48 sys@TQDB(tqdb21)&gt; 
08:55:48 sys@TQDB(tqdb21)&gt; select * from v$log;

 GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS          FIRST_CHANGE# FIRST_TIME          NEXT_CHANGE# NEXT_TIME               CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- --------------- ------------- ------------------- ------------ ------------------- ----------
      1          1          0  209715200        512          1 NO  CURRENT               5829146 2020-03-14 08:47:53   9.2954E+18                              0
      2          1          0  209715200        512          1 YES UNUSED                5826540 2020-03-14 08:29:01      5829146 2020-03-14 08:47:53          0
      3          2          0  209715200        512          1 YES UNUSED                5737108 2020-03-13 23:27:13      5737389 2020-03-13 23:30:57          0
      4          2          0  209715200        512          1 YES UNUSED                      0                                0                              0

08:56:59 sys@TQDB(tqdb21)&gt; select * from v$logfile;

 GROUP# STATUS          TYPE    MEMBER                                                       IS_     CON_ID
---------- --------------- ------- ------------------------------------------------------------ --- ----------
      1                 ONLINE  +DATA/TQDB/ONLINELOG/group_1.424.1035017343                  NO           0
      2                 ONLINE  +DATA/TQDB/ONLINELOG/group_2.423.1035017343                  NO           0
      3                 ONLINE  +DATA/TQDB/ONLINELOG/group_3.425.1035017345                  NO           0
      4                 ONLINE  +DATA/TQDB/ONLINELOG/group_4.426.1035017345                  NO           0
      5                 STANDBY +DATA/TQDB/ONLINELOG/group_5.427.1035017345                  NO           0
      6                 STANDBY +DATA/TQDB/ONLINELOG/group_6.428.1035017345                  NO           0
      7                 STANDBY +DATA/TQDB/ONLINELOG/group_7.429.1035017345                  NO           0
      8                 STANDBY +DATA/TQDB/ONLINELOG/group_8.430.1035017345                  NO           0
      9                 STANDBY +DATA/TQDB/ONLINELOG/group_9.431.1035017347                  NO           0
     10                 STANDBY +DATA/TQDB/ONLINELOG/group_10.432.1035017347                 NO           0

10 rows selected.

08:57:02 sys@TQDB(tqdb21)&gt; -- datafile locations
08:57:10 sys@TQDB(tqdb21)&gt; col file_name format a70; 
08:57:10 sys@TQDB(tqdb21)&gt; set linesize 200; 
08:57:10 sys@TQDB(tqdb21)&gt; select tablespace_name, file_id, bytes / 1024 / 1024 as "Size(MB)", file_name 
08:57:10   2    from dba_data_files 
08:57:10   3   order by file_id; 

TABLESPACE_NAME                   FILE_ID   Size(MB) FILE_NAME
------------------------------ ---------- ---------- ----------------------------------------------------------------------
SYSTEM                                  1        700 +DATA/TQDB/DATAFILE/system.417.1035017285
SYSAUX                                  2       1240 +DATA/TQDB/DATAFILE/sysaux.418.1035017285
UNDOTBS1                                3        250 +DATA/TQDB/DATAFILE/undotbs1.419.1035017285
UNDOTBS2                                4        200 +DATA/TQDB/DATAFILE/undotbs2.420.1035017289
USERS                                   5          5 +DATA/TQDB/DATAFILE/users.421.1035017291
TQ                                      6         20 +DATA/TQDB/DATAFILE/tq.422.1035017293

6 rows selected.

-- Verify that the data of the table created after the `failover` (COPY_DBA_OBJECTS_2) has been replicated.
08:57:10 sys@TQDB(tqdb21)&gt; conn tq/tq
Connected.

08:57:23 tq@TQDB(tqdb21)&gt; set lines 200
08:57:28 tq@TQDB(tqdb21)&gt; select * from tab;

TNAME                                         TABTYPE        CLUSTERID
--------------------------------------------- ------------- ----------
COPY_DBA_OBJECTS_2                            TABLE
COPY_DBA_OBJECTS                              TABLE

08:57:30 tq@TQDB(tqdb21)&gt; select count(*) from COPY_DBA_OBJECTS;

COUNT(*)
----------
  47158

08:57:42 tq@TQDB(tqdb21)&gt; select count(*) from COPY_DBA_OBJECTS_2;

COUNT(*)
----------
  47158

09:01:30 sys@TQDB(tqdb21)&gt; set lines 200
09:01:35 sys@TQDB(tqdb21)&gt; select * from v$controlfile;

STATUS   NAME                                               IS_ BLOCK_SIZE FILE_SIZE_BLKS     CON_ID
-------- -------------------------------------------------- --- ---------- -------------- ----------
      +DATA/TQDB/CONTROLFILE/current.416.1035017275      NO       16384           2792          0

09:01:36 sys@TQDB(tqdb21)&gt; 
09:04:40 sys@TQDB(tqdb21)&gt; create spfile='+DATA/TQDB/spfiletqdb_standby_20200314.ora' from memory;

File created.

09:06:35 sys@TQDB(tqdb21)&gt; shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
09:09:50 sys@TQDB(tqdb21)&gt; quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
[oracle@tqdb21: /tmp]$ 
```


```
10:56:27 sys@TQDB(tqdb21)&gt; show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DATA/TQDB/spfiletqdb_standby_
                                              20200314_1..ora
10:56:50 sys@TQDB(tqdb21)&gt; 
```

-- pfile for node 1 of the new standby RAC
```/tmp/init_tqdb21_new_ADG.sql
[oracle@tqdb21: /tmp]$ cat /tmp/init_tqdb21_new_ADG.sql
*.audit_file_dest='/u01/app/oracle/admin/tqdb/adump'
*.db_unique_name='tqdb'
*.log_archive_config='DG_CONFIG=(tqdb,tqdb_adg)'
*.log_archive_dest_1='LOCATION=+DATA/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=tqdb'
*.log_archive_dest_2='SERVICE=tqdb_adg ASYNC LGWR VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=tqdb_adg'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.arc'
*.standby_file_management='AUTO'
*.fal_client='tqdb'
*.fal_server='tqdb_adg'
*.control_files='+DATA/TQDB/CONTROLFILE/current.416.1035017275' # Restore Controlfile
*.db_create_file_dest='+DATA'
*.db_name='tqdb'
*.pga_aggregate_target=262m
*.processes=300
*.sga_target=783m
*.db_block_size=8192
*.compatible="19.0.0"
*.audit_trail="DB"
*.open_cursors=300
*._optimizer_use_auto_indexes="OFF"
*.cluster_database=TRUE
*.diagnostic_dest='/u01/app/oracle'
tqdb2.instance_number=2
tqdb1.instance_number=1
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.remote_login_passwordfile='exclusive'
tqdb2.thread=2
tqdb1.thread=1
*.undo_tablespace='UNDOTBS1'
tqdb2.undo_tablespace='UNDOTBS2'
tqdb1.undo_tablespace='UNDOTBS1'

tqdb1.instance_name=tqdb1
tqdb2.instance_name=tqdb2

tqdb1.instance_number=1
tqdb2.instance_number=2


tqdb1.local_listener='(address=(protocol=TCP)(HOST=192.168.6.23)(PORT=1521))'
tqdb1.remote_listener='(address=(protocol=TCP)(HOST=192.168.6.20)(PORT=1521))'

tqdb2.local_listener='(address=(protocol=TCP)(HOST=192.168.6.24)(PORT=1521))'
tqdb2.remote_listener='(address=(protocol=TCP)(HOST=192.168.6.20)(PORT=1521))'

[oracle@tqdb21: /tmp]$ 
```

-- pfile for node 2 of the new standby RAC
```
[oracle@tqdb22: /tmp]$ cat init_tqdb22_new_ADG.sql
*.audit_file_dest='/u01/app/oracle/admin/tqdb/adump'
*.db_unique_name='tqdb'
*.log_archive_config='DG_CONFIG=(tqdb,tqdb_adg)'
*.log_archive_dest_1='LOCATION=+DATA/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=tqdb'
*.log_archive_dest_2='SERVICE=tqdb_adg ASYNC LGWR VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=tqdb_adg'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.arc'
*.standby_file_management='AUTO'
*.fal_client='tqdb'
*.fal_server='tqdb_adg'
*.control_files='+DATA/TQDB/CONTROLFILE/current.416.1035017275' # Restore Controlfile
*.db_create_file_dest='+DATA'
*.db_name='tqdb'
*.pga_aggregate_target=262m
*.processes=300
*.sga_target=783m
*.db_block_size=8192
*.compatible="19.0.0"
*.audit_trail="DB"
*.open_cursors=300
*._optimizer_use_auto_indexes="OFF"
*.cluster_database=TRUE
*.diagnostic_dest='/u01/app/oracle'
tqdb2.instance_number=2
tqdb1.instance_number=1
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.remote_login_passwordfile='exclusive'
tqdb2.thread=2
tqdb1.thread=1
*.undo_tablespace='UNDOTBS1'
tqdb2.undo_tablespace='UNDOTBS2'
tqdb1.undo_tablespace='UNDOTBS1'

tqdb1.instance_name=tqdb1
tqdb2.instance_name=tqdb2

tqdb1.instance_number=1
tqdb2.instance_number=2


tqdb1.local_listener='(address=(protocol=TCP)(HOST=192.168.6.23)(PORT=1521))'
tqdb1.remote_listener='(address=(protocol=TCP)(HOST=192.168.6.20)(PORT=1521))'

tqdb2.local_listener='(address=(protocol=TCP)(HOST=192.168.6.24)(PORT=1521))'
tqdb2.remote_listener='(address=(protocol=TCP)(HOST=192.168.6.20)(PORT=1521))'


[oracle@tqdb22: /tmp]$ 
```

-- tnsnames aliases on the current primary (tq1); restore the node-2 entry `(ADDRESS = (PROTOCOL = TCP)(HOST = tqdb22-vip)(PORT = 1521))`
```
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ cat tnsnames.ora 
TQ1 =
(DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
 (CONNECT_DATA =
   (SERVER = DEDICATED)
   (SERVICE_NAME = tq1)
 )
)

# Two aliases added on tq1: `tqdb` and `tqdb_adg`
tqdb =
(DESCRIPTION =
 #(ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb21-vip)(PORT = 1521))
 (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb22-vip)(PORT = 1521))
 (LOAD_BALANCE = yes)
 (FAILOVER = yes)
 (CONNECT_DATA =
   (SERVER = DEDICATED)
   (SERVICE_NAME = tqdb)
   (UR=A)
   (FAILOVER_MODE =
     (TYPE = SELECT)
     (METHOD = BASIC)
     (RETRIES = 180)
     (DELAY = 5)
   )
 )
)



tqdb_adg =
(DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
 (CONNECT_DATA =
   (SERVER = DEDICATED)
   (SERVICE_NAME = tqdb_adg) 
 )
)

[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
```

At this point the Active Data Guard configuration has been rebuilt:
current primary tq1 --&gt;&gt; current standby RAC
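
-- (Sketch, not executed in the transcript above.) A quick sanity check for the
-- rebuilt configuration: on the standby side, transport and apply lag should
-- be near zero. v$dataguard_stats is a standard view:
--   select name, value, unit from v$dataguard_stats
--   where name in ('transport lag', 'apply lag');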



</code></pre>
</blockquote>
<h3>17.4 <code>switchover</code> the configuration from 17.3 to restore the <code>data guard</code> roles to primary RAC --&gt;&gt; standby tq1</h3>
<blockquote><p>Section 17.3 rebuilt the Active Data Guard configuration:<br />
current primary tq1 --&gt;&gt; current standby RAC</p>
<p>Now <code>switchover</code> back to the original <code>data guard</code> relationship:</p>
<p>primary RAC --&gt;&gt; standby tq1</p>
<p><img loading="lazy" decoding="async" class="alignnone size-medium" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/Oracle 19c Data Guard Renew Switchover Structure-2-ok.png" alt="Oracle 19c Data Guard Renew Switchover Structure-2" width="533" height="627" /></p>
<h3><strong>Switchover Steps</strong></h3>
<p><strong>Primary Side</strong></p>
<pre><code class="language-sql line-numbers">SQL&gt; alter system archive log current;

SQL&gt; alter database commit to switchover to standby with session shutdown;

SQL&gt; shutdown immediate;

SQL&gt; startup mount;
</code></pre>
<p><strong>Data Guard Side</strong></p>
<pre><code class="language-sql line-numbers">SQL&gt; alter database recover managed standby database cancel;

SQL&gt; alter database commit to switchover to primary with session shutdown;

SQL&gt; shutdown immediate;

SQL&gt; startup;
</code></pre>
<p><strong>Primary Side</strong></p>
<pre><code class="language-sql line-numbers">SQL&gt; alter database recover managed standby database disconnect;

-- If you created standby redo logs, you can enable real-time apply as follows:

SQL&gt; alter database open read only;

SQL&gt; alter database recover managed standby database using current logfile disconnect;
</code></pre>
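<p>Before running the steps above, a quick readiness check on both sides can save a failed switchover. A minimal sketch using the standard <code>v$database</code> view:</p>
<pre><code class="language-sql line-numbers">-- On the current primary: expect TO STANDBY (or SESSIONS ACTIVE).
select database_role, switchover_status from v$database;

-- On the current standby: expect TO PRIMARY (or NOT ALLOWED
-- until the primary has committed to switchover).
select database_role, switchover_status from v$database;
</code></pre>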
<p>Session transcript:</p>
<pre><code class="language-sql line-numbers">-- 0. Current state of the current primary (tq1)
12:12:57 sys@TQDB(tq1)&gt; -- 
12:12:57 sys@TQDB(tq1)&gt; set linesize 200;
12:12:57 sys@TQDB(tq1)&gt; col HOST_NAME for a10;
12:12:57 sys@TQDB(tq1)&gt; --
12:12:58 sys@TQDB(tq1)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
12:12:58   2  from gv$database db, gv$instance inst
12:12:58   3  where db.INST_ID = inst.INST_ID
12:12:58   4  ;

INST_ID       DBID INSTANCE_NAME    HOST_NAME  OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    DB_UNIQUE_NAME
---------- ---------- ---------------- ---------- -------------------- -------------------- ---------------- ------------------------------
      1 3966209240 tqdb_adg         tq1        READ WRITE           MAXIMUM PERFORMANCE  PRIMARY          tqdb_adg

12:12:58 sys@TQDB(tq1)&gt; 


-- 0. Current state of the current standby (RAC)
12:13:33 sys@TQDB(tqdb21)&gt; -- 
12:13:34 sys@TQDB(tqdb21)&gt; set linesize 200;
12:13:34 sys@TQDB(tqdb21)&gt; col HOST_NAME for a10;
12:13:34 sys@TQDB(tqdb21)&gt; --
12:13:34 sys@TQDB(tqdb21)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
12:13:34   2  from gv$database db, gv$instance inst
12:13:34   3  where db.INST_ID = inst.INST_ID
12:13:34   4  ;

INST_ID       DBID INSTANCE_NAME    HOST_NAME  OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    DB_UNIQUE_NAME
---------- ---------- ---------------- ---------- -------------------- -------------------- ---------------- ------------------------------
      1 3966209240 tqdb1            tqdb21     READ ONLY WITH APPLY MAXIMUM PERFORMANCE  PHYSICAL STANDBY tqdb
      2 3966209240 tqdb2            tqdb22     READ ONLY WITH APPLY MAXIMUM PERFORMANCE  PHYSICAL STANDBY tqdb

12:13:34 sys@TQDB(tqdb21)&gt; 


-- 1. Current primary (tq1): archive the current log, then switch over to standby
12:15:47 sys@TQDB(tq1)&gt; alter system archive log current;

System altered.

12:16:00 sys@TQDB(tq1)&gt; 
12:18:57 sys@TQDB(tq1)&gt; select * from v$log;

 GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS          FIRST_CHANGE# FIRST_TIME          NEXT_CHANGE# NEXT_TIME               CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- --------------- ------------- ------------------- ------------ ------------------- ----------
      1          1         11  209715200        512          1 NO  CURRENT               5863546 2020-03-14 12:18:57   9.2954E+18                              0
      2          1         10  209715200        512          1 YES ACTIVE                5863516 2020-03-14 12:18:43      5863546 2020-03-14 12:18:57          0
      3          2          1  209715200        512          1 YES INACTIVE              5737108 2020-03-13 23:27:13      5737389 2020-03-13 23:30:57          0
      4          2          0  209715200        512          1 YES UNUSED                      0                                0                              0

12:18:59 sys@TQDB(tq1)&gt; 
12:19:23 sys@TQDB(tq1)&gt; 
12:19:32 sys@TQDB(tq1)&gt; 
12:19:32 sys@TQDB(tq1)&gt; 
12:19:32 sys@TQDB(tq1)&gt; alter database commit to switchover to standby with session shutdown;
ERROR:
ORA-01034: ORACLE not available
Process ID: 4758
Session ID: 89 Serial number: 54496



Database altered.

12:20:06 sys@TQDB(tq1)&gt; conn / as sysdba
Connected to an idle instance.
12:20:50 idle(tq1)&gt; 
12:21:19 idle(tq1)&gt; shutdown immediate;
ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Additional information: 4376
Additional information: -1183956957
Process ID: 0
Session ID: 0 Serial number: 0


12:21:27 idle(tq1)&gt; startup mount;
ORACLE instance started.

Total System Global Area 1191181696 bytes
Fixed Size                  8895872 bytes
Variable Size             318767104 bytes
Database Buffers          855638016 bytes
Redo Buffers                7880704 bytes
Database mounted.
12:21:52 idle(tq1)&gt; 
12:23:41 idle(tq1)&gt; set linesize 200;
12:23:41 idle(tq1)&gt; col HOST_NAME for a10;
12:23:41 idle(tq1)&gt; --
12:23:41 idle(tq1)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
12:23:41   2  from gv$database db, gv$instance inst
12:23:41   3  where db.INST_ID = inst.INST_ID
12:23:41   4  ;

INST_ID       DBID INSTANCE_NAME    HOST_NAME  OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    DB_UNIQUE_NAME
---------- ---------- ---------------- ---------- -------------------- -------------------- ---------------- ------------------------------
      1 3966209240 tqdb_adg         tq1        MOUNTED              MAXIMUM PERFORMANCE  PHYSICAL STANDBY tqdb_adg

12:23:42 idle(tq1)&gt; 



-- 2. Current standby (RAC)
-- Current standby (RAC), node 1
12:24:23 sys@TQDB(tqdb21)&gt; -- 
12:24:23 sys@TQDB(tqdb21)&gt; set linesize 200;
12:24:23 sys@TQDB(tqdb21)&gt; col HOST_NAME for a10;
12:24:23 sys@TQDB(tqdb21)&gt; --
12:24:23 sys@TQDB(tqdb21)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
12:24:23   2  from gv$database db, gv$instance inst
12:24:23   3  where db.INST_ID = inst.INST_ID
12:24:23   4  ;

INST_ID       DBID INSTANCE_NAME    HOST_NAME  OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    DB_UNIQUE_NAME
---------- ---------- ---------------- ---------- -------------------- -------------------- ---------------- ------------------------------
      1 3966209240 tqdb1            tqdb21     READ ONLY WITH APPLY MAXIMUM PERFORMANCE  PHYSICAL STANDBY tqdb
      2 3966209240 tqdb2            tqdb22     READ ONLY WITH APPLY MAXIMUM PERFORMANCE  PHYSICAL STANDBY tqdb

12:24:25 sys@TQDB(tqdb21)&gt; 
12:24:56 sys@TQDB(tqdb21)&gt; alter database recover managed standby database cancel;

Database altered.

12:25:18 sys@TQDB(tqdb21)&gt; 
12:25:18 sys@TQDB(tqdb21)&gt; -- 
12:26:24 sys@TQDB(tqdb21)&gt; set linesize 200;
12:26:24 sys@TQDB(tqdb21)&gt; col HOST_NAME for a10;
12:26:24 sys@TQDB(tqdb21)&gt; --
12:26:24 sys@TQDB(tqdb21)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
12:26:24   2  from gv$database db, gv$instance inst
12:26:24   3  where db.INST_ID = inst.INST_ID
12:26:24   4  ;

INST_ID       DBID INSTANCE_NAME    HOST_NAME  OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    DB_UNIQUE_NAME
---------- ---------- ---------------- ---------- -------------------- -------------------- ---------------- ------------------------------
      1 3966209240 tqdb1            tqdb21     READ ONLY            MAXIMUM PERFORMANCE  PHYSICAL STANDBY tqdb
      2 3966209240 tqdb2            tqdb22     READ ONLY            MAXIMUM PERFORMANCE  PHYSICAL STANDBY tqdb

12:26:24 sys@TQDB(tqdb21)&gt; 

-- Current standby (RAC), node 2
12:27:16 sys@TQDB(tqdb22)&gt; -- 
12:27:18 sys@TQDB(tqdb22)&gt; set linesize 200;
12:27:18 sys@TQDB(tqdb22)&gt; col HOST_NAME for a10;
12:27:18 sys@TQDB(tqdb22)&gt; --
12:27:18 sys@TQDB(tqdb22)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
12:27:18   2  from gv$database db, gv$instance inst
12:27:18   3  where db.INST_ID = inst.INST_ID
12:27:18   4  ;

INST_ID       DBID INSTANCE_NAME    HOST_NAME  OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    DB_UNIQUE_NAME
---------- ---------- ---------------- ---------- -------------------- -------------------- ---------------- ------------------------------
      2 3966209240 tqdb2            tqdb22     READ ONLY            MAXIMUM PERFORMANCE  PHYSICAL STANDBY tqdb
      1 3966209240 tqdb1            tqdb21     READ ONLY            MAXIMUM PERFORMANCE  PHYSICAL STANDBY tqdb

12:27:18 sys@TQDB(tqdb22)&gt; 

-- Current standby (RAC), node 1
12:28:04 sys@TQDB(tqdb21)&gt; alter database commit to switchover to primary with session shutdown;

Database altered.

12:28:44 sys@TQDB(tqdb21)&gt; 
12:29:28 sys@TQDB(tqdb21)&gt; -- 
12:29:29 sys@TQDB(tqdb21)&gt; set linesize 200;
12:29:29 sys@TQDB(tqdb21)&gt; col HOST_NAME for a10;
12:29:29 sys@TQDB(tqdb21)&gt; --
12:29:29 sys@TQDB(tqdb21)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
12:29:29   2  from gv$database db, gv$instance inst
12:29:29   3  where db.INST_ID = inst.INST_ID
12:29:29   4  ;

INST_ID       DBID INSTANCE_NAME    HOST_NAME  OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    DB_UNIQUE_NAME
---------- ---------- ---------------- ---------- -------------------- -------------------- ---------------- ------------------------------
      1 3966209240 tqdb1            tqdb21     MOUNTED              MAXIMUM PERFORMANCE  PRIMARY          tqdb
      2 3966209240 tqdb2            tqdb22     MOUNTED              MAXIMUM PERFORMANCE  PRIMARY          tqdb

12:29:29 sys@TQDB(tqdb21)&gt;
12:29:29 sys@TQDB(tqdb21)&gt; conn / as sysdba
Connected.
12:30:26 idle(tqdb21)&gt; 

-- Current standby (RAC), node 2
12:30:17 sys@TQDB(tqdb22)&gt; conn / as sysdba
Connected.
12:30:50 idle(tqdb22)&gt; 

-- Current standby (RAC), node 1
12:31:16 idle(tqdb21)&gt; shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
12:31:43 idle(tqdb21)&gt; 


-- Current standby (RAC), node 2
12:30:50 idle(tqdb22)&gt; shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
12:33:37 idle(tqdb22)&gt; 


-- Current standby (RAC), node 1
12:34:21 idle(tqdb21)&gt; startup
ORACLE instance started.

Total System Global Area  822080768 bytes
Fixed Size                  8901888 bytes
Variable Size             289406976 bytes
Database Buffers          520093696 bytes
Redo Buffers                3678208 bytes
Database mounted.
Database opened.
12:35:00 idle(tqdb21)&gt; conn / as sysdba
Connected.
12:35:11 sys@TQDB(tqdb21)&gt; 
12:36:46 sys@TQDB(tqdb21)&gt; -- 
12:36:46 sys@TQDB(tqdb21)&gt; set linesize 200;
12:36:46 sys@TQDB(tqdb21)&gt; col HOST_NAME for a10;
12:36:46 sys@TQDB(tqdb21)&gt; --
12:36:46 sys@TQDB(tqdb21)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
12:36:47   2  from gv$database db, gv$instance inst
12:36:47   3  where db.INST_ID = inst.INST_ID
12:36:47   4  ;

INST_ID       DBID INSTANCE_NAME    HOST_NAME  OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    DB_UNIQUE_NAME
---------- ---------- ---------------- ---------- -------------------- -------------------- ---------------- ------------------------------
      1 3966209240 tqdb1            tqdb21     READ WRITE           MAXIMUM PERFORMANCE  PRIMARY          tqdb
      2 3966209240 tqdb2            tqdb22     READ WRITE           MAXIMUM PERFORMANCE  PRIMARY          tqdb

12:36:47 sys@TQDB(tqdb21)&gt; 


-- Current standby (RAC), node 2
12:34:43 idle(tqdb22)&gt; startup
ORACLE instance started.

Total System Global Area  822080768 bytes
Fixed Size                  8901888 bytes
Variable Size             293601280 bytes
Database Buffers          515899392 bytes
Redo Buffers                3678208 bytes
Database mounted.
Database opened.
12:35:13 idle(tqdb22)&gt; conn / as sysdba
Connected.
12:35:35 sys@TQDB(tqdb22)&gt; 
12:37:13 sys@TQDB(tqdb22)&gt; -- 
12:37:14 sys@TQDB(tqdb22)&gt; set linesize 200;
12:37:14 sys@TQDB(tqdb22)&gt; col HOST_NAME for a10;
12:37:14 sys@TQDB(tqdb22)&gt; --
12:37:14 sys@TQDB(tqdb22)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
12:37:14   2  from gv$database db, gv$instance inst
12:37:14   3  where db.INST_ID = inst.INST_ID
12:37:14   4  ;

INST_ID       DBID INSTANCE_NAME    HOST_NAME  OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    DB_UNIQUE_NAME
---------- ---------- ---------------- ---------- -------------------- -------------------- ---------------- ------------------------------
      2 3966209240 tqdb2            tqdb22     READ WRITE           MAXIMUM PERFORMANCE  PRIMARY          tqdb
      1 3966209240 tqdb1            tqdb21     READ WRITE           MAXIMUM PERFORMANCE  PRIMARY          tqdb

12:37:14 sys@TQDB(tqdb22)&gt; 

[grid@tqdb21: ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   Started,STABLE
      2        ONLINE  ONLINE       tqdb22                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.qosmserver
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb.db
      1        ONLINE  ONLINE       tqdb21                   Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             home,STABLE
      2        ONLINE  ONLINE       tqdb22                   Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             home,STABLE
ora.tqdb21.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
      1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[grid@tqdb21: ~]$ 

-- 3. tq1 (now the standby): open it and start managed recovery
12:39:15 idle(tq1)&gt; alter database open;

Database altered.

12:39:24 idle(tq1)&gt; 
12:39:39 idle(tq1)&gt; alter database recover managed standby database disconnect from session;

Database altered.

12:40:05 idle(tq1)&gt; conn / as sysdba
Connected.
12:40:12 sys@TQDB(tq1)&gt; 
12:40:13 sys@TQDB(tq1)&gt; -- 
12:40:19 sys@TQDB(tq1)&gt; set linesize 200;
12:40:19 sys@TQDB(tq1)&gt; col HOST_NAME for a10;
12:40:19 sys@TQDB(tq1)&gt; --
12:40:19 sys@TQDB(tq1)&gt; select db.INST_ID, db.DBID, inst.INSTANCE_NAME, inst.HOST_NAME, db.OPEN_MODE, db.PROTECTION_MODE, db.DATABASE_ROLE, db.DB_UNIQUE_NAME 
12:40:19   2  from gv$database db, gv$instance inst
12:40:19   3  where db.INST_ID = inst.INST_ID
12:40:19   4  ;

INST_ID       DBID INSTANCE_NAME    HOST_NAME  OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    DB_UNIQUE_NAME
---------- ---------- ---------------- ---------- -------------------- -------------------- ---------------- ------------------------------
      1 3966209240 tqdb_adg         tq1        READ ONLY WITH APPLY MAXIMUM PERFORMANCE  PHYSICAL STANDBY tqdb_adg

12:40:19 sys@TQDB(tq1)&gt; 



</code></pre>
</blockquote>
<p>Section 17.3 rebuilt the Active Data Guard configuration:<br />
current primary tq1 --&gt;&gt; current standby RAC</p>
<p>With this <code>switchover</code> we have now returned to the original <code>data guard</code> relationship:</p>
<p>primary RAC --&gt;&gt; standby tq1</p>
<blockquote><p>Summary: in the following series of posts we went from <a class="wp-editor-md-post-content-link" href="https://www.dbtan.com/2020/03/oracle-19c-rac-installation-and-upgrade-ru.html">installing Oracle 19c RAC and applying the RU</a>, to <a class="wp-editor-md-post-content-link" href="https://www.dbtan.com/2020/03/oracle-maa-oracle-19c-rac-adg.html">building Oracle MAA: Oracle 19c RAC + ADG</a>, then to the <a class="wp-editor-md-post-content-link" href="https://www.dbtan.com/2020/03/oracle-19c-rac-adg-step-by-step-manual-data-guard-switchover.html">Oracle 19c RAC + ADG step-by-step manual switchover</a> and the <a class="wp-editor-md-post-content-link" href="https://www.dbtan.com/2020/03/oracle-19c-rac-adg-step-by-step-manual-data-guard-failover.html">Oracle 19c RAC + ADG step-by-step manual failover</a>, getting our first hands-on experience with Oracle 19c. Let the study of 19c start here. Keep going!~</p></blockquote>
<p>-- The End --</p>
]]></content:encoded>
					
					<wfw:commentRss>https://dbtan.com/2020/03/oracle-19c-rac-adg-step-by-step-manual-data-guard-failover.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Oracle 19c RAC + ADG manual switchover role transition steps</title>
		<link>https://dbtan.com/2020/03/oracle-19c-rac-adg-step-by-step-manual-data-guard-switchover.html</link>
					<comments>https://dbtan.com/2020/03/oracle-19c-rac-adg-step-by-step-manual-data-guard-switchover.html#respond</comments>
		
		<dc:creator><![CDATA[dbtan]]></dc:creator>
		<pubDate>Tue, 17 Mar 2020 14:59:18 +0000</pubDate>
				<category><![CDATA[Oracle]]></category>
		<category><![CDATA[Oracle 19c]]></category>
		<category><![CDATA[Oracle Data Guard]]></category>
		<category><![CDATA[Oracle MAA]]></category>
		<category><![CDATA[Oracle RAC]]></category>
		<category><![CDATA[switchover]]></category>
		<category><![CDATA[tq1]]></category>
		<category><![CDATA[tqdb]]></category>
		<category><![CDATA[tqdb21]]></category>
		<category><![CDATA[tqdb22]]></category>
		<guid isPermaLink="false">https://www.dbtan.com/?p=415</guid>

					<description><![CDATA[Oracle 19c RAC + ADG manual switchover role transition steps Revision V2.0 [&#8230;]]]></description>
										<content:encoded><![CDATA[<h3>Oracle 19c RAC + ADG manual <code>switchover</code> role transition steps</h3>
<p><strong>Revision    V2.0</strong></p>
<table>
<thead>
<tr>
<th align="left">No.</th>
<th>Date</th>
<th>Author/Modifier</th>
<th>Comments</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">1.0</td>
<td>2020-03-06</td>
<td>谈权</td>
<td>Initial draft: building Oracle MAA: Oracle 19c RAC + Active Data Guard</td>
</tr>
<tr>
<td align="left">2.0</td>
<td>2020-03-10</td>
<td>谈权</td>
<td>Added: 16. Manual <code>switchover</code> role transition steps</td>
</tr>
</tbody>
</table>

<p>Continuing from the previous post (<a class="wp-editor-md-post-content-link" href="https://www.dbtan.com/2020/03/oracle-maa-oracle-19c-rac-adg.html">Building Oracle MAA: Oracle 19c RAC + ADG</a>), this article completes "16. Manual <code>switchover</code> role transition steps".</p>
<h2>16. Manual <code>switchover</code> role transition steps</h2>
<blockquote><p>
  The same steps apply whether you have a single standby or multiple standby databases.</p>
<p>  A switchover converts the primary database into a standby, and the standby into the new primary.</p>
<p>  <img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/Oracle%2019c%20Data%20Guard%20Switchover%20Structure-1-ok.png" alt="Oracle 19c Data Guard Switchover Structure-1" /></p>
<p>  Verify that the listeners on both sides are running before starting the switchover.</p>
<h3><strong>Switchover Steps</strong></h3>
<pre><code class="language-sql line-numbers">-- 1. Primary Side
-- This query reports the database's current switchover readiness.
SQL> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
TO STANDBY

SQL> alter system archive log current;

SQL> alter database commit to switchover to standby with session shutdown;

SQL> shutdown immediate;

SQL> startup mount;
</code></pre>
<blockquote>
<pre><code class="language-sql line-numbers">-- This query reports the database's current switchover readiness.
sys@TQDB(tqdb21)> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
TO STANDBY
</code></pre>
<p>    The <strong>switchover_status</strong> column of <strong>v$database</strong> can have the following values:</p>
<p>    <strong>Not Allowed:-</strong>Either this is a standby database and the primary database has not been switched first, or this is a primary database and there are no standby databases<br />
    <strong>Session Active:-</strong> Indicates that there are active SQL sessions attached to the primary or standby database that need to be disconnected before the switchover operation is permitted<br />
    <strong>Switchover Pending:-</strong> This is a standby database and the primary database switchover request has been received but not processed.<br />
    <strong>Switchover Latent:-</strong> The switchover was in pending mode, but did not complete and went back to the primary database<br />
    <strong>To Primary:-</strong> This is a standby database, with no active sessions, that is allowed to switch over to a primary database<br />
    <strong>To Standby:-</strong> This is a primary database, with no active sessions, that is allowed to switch over to a standby database<br />
    <strong>Recovery Needed:-</strong> This is a standby database that has not received the switchover request
  </p></blockquote>
<pre><code class="language-sql line-numbers">-- 2. Data Guard Side
-- This query reports the database's current switchover readiness.
SQL> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
TO PRIMARY

SQL> alter database recover managed standby database cancel;

SQL> alter database commit to switchover to primary with session shutdown;

SQL> shutdown immediate;

SQL> startup;
</code></pre>
<pre><code class="language-sql line-numbers">-- 3. Primary Side
SQL> alter database recover managed standby database disconnect;

-- If you created standby redo logs, you can enable real-time apply as follows:

SQL> alter database open read only;

SQL> alter database recover managed standby database using current logfile disconnect;
-- Start recovery on the standby and apply redo shipped from the primary (real-time apply is already the default, so USING CURRENT LOGFILE can be omitted):
SQL> ALTER DATABASE RECOVER managed standby database disconnect from session;
</code></pre>
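<p>After managed recovery is started, apply progress can be spot-checked on the standby; a hedged sketch using the classic <code>v$managed_standby</code> view:</p>
<pre><code class="language-sql line-numbers">-- MRP0 should report its apply state once managed recovery is running
select process, status, thread#, sequence#
  from v$managed_standby
 where process like 'MRP%' or process like 'RFS%';
</code></pre>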
<p>  Operation log: switchover between the RAC primary and the standby. After the switch, the RAC is the <code>standby</code> and the former standby is the <code>primary</code>.</p>
<blockquote><p>
    Note: after the <code>switchover</code>, the former standby is the <code>primary</code>. Because it is a single-instance database, <code>alter system archive log current;</code> switches only the log group with <code>THREAD#</code> <code>1</code> (that is, the log group corresponding to node 1 of the RAC primary).
  </p>
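<p>One way to verify this behavior is to compare the highest archived sequence per thread before and after the log switch; a small illustrative query, not part of the original session:</p>
<pre><code class="language-sql line-numbers">-- After a log switch on the single-instance primary,
-- only the THREAD# 1 sequence should advance.
select thread#, max(sequence#) as last_archived
  from v$archived_log
 group by thread#
 order by thread#;
</code></pre>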
<pre><code class="language-sql line-numbers">-- 1. Primary Side
-- This query reports the current switchover readiness of the database.
23:19:24 sys@TQDB(tqdb21)> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
TO STANDBY

23:19:25 sys@TQDB(tqdb21)> 

-- Archive the current redo log on the primary before the switchover.
23:27:45 sys@TQDB(tqdb21)> alter system archive log current;

System altered.

23:27:52 sys@TQDB(tqdb21)> 

-- RAC primary, node 1
23:42:46 sys@TQDB(tqdb21)> alter database commit to switchover to standby with session shutdown;
ERROR:
ORA-01034: ORACLE not available
Process ID: 32074
Session ID: 452 Serial number: 21981



Database altered.

23:43:25 sys@TQDB(tqdb21)> conn / as sysdba
Connected to an idle instance.
23:44:36 idle(tqdb21)> 

-- RAC primary, node 2
23:45:09 sys@TQDB(tqdb22)> 
23:45:09 sys@TQDB(tqdb22)> conn / as sysdba
ERROR:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 390 Serial number: 18811


Connected to an idle instance.
23:45:13 idle(tqdb22)> 

[root@tqdb21: ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   Started,STABLE
      2        ONLINE  ONLINE       tqdb22                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.qosmserver
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb.db
      1        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
      2        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
ora.tqdb21.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
      1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[root@tqdb21: ~]# 

-- At this point, both database instances of the RAC primary have been shut down.

-- RAC primary, node 1
23:48:51 idle(tqdb21)> startup mount;
ORACLE instance started.

Total System Global Area  822080768 bytes
Fixed Size                  8901888 bytes
Variable Size             390070272 bytes
Database Buffers          419430400 bytes
Redo Buffers                3678208 bytes
Database mounted.
23:49:06 idle(tqdb21)> 
23:50:06 idle(tqdb21)> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
RECOVERY NEEDED

23:50:08 idle(tqdb21)> 

-- RAC primary, node 2
23:54:45 idle(tqdb22)> startup mount;
ORACLE instance started.

Total System Global Area  822080768 bytes
Fixed Size                  8901888 bytes
Variable Size             360710144 bytes
Database Buffers          448790528 bytes
Redo Buffers                3678208 bytes
Database mounted.
23:56:01 idle(tqdb22)> conn / as sysdba
Connected.
23:56:22 idle(tqdb22)> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
RECOVERY NEEDED

23:56:40 idle(tqdb22)> 
</code></pre>
<pre><code class="language-sql line-numbers">-- 2. Data Guard Side
23:59:18 sys@TQDB(tq1)>  select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
TO PRIMARY

23:59:19 sys@TQDB(tq1)> 


00:00:10 sys@TQDB(tq1)> alter database recover managed standby database cancel;

Database altered.

00:00:44 sys@TQDB(tq1)>  

00:01:08 sys@TQDB(tq1)> 
00:01:32 sys@TQDB(tq1)> alter database commit to switchover to primary with session shutdown;

Database altered.

00:02:19 sys@TQDB(tq1)> 
00:02:43 sys@TQDB(tq1)> shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
00:03:11 sys@TQDB(tq1)> 
00:05:50 sys@TQDB(tq1)> startup 
ORACLE instance started.

Total System Global Area 1191181696 bytes
Fixed Size                  8895872 bytes
Variable Size             318767104 bytes
Database Buffers          855638016 bytes
Redo Buffers                7880704 bytes
Database mounted.
Database opened.
00:06:08 sys@TQDB(tq1)> 

00:06:46 sys@TQDB(tq1)> -- Check the Oracle ADG protection mode
00:07:42 sys@TQDB(tq1)> select DATABASE_ROLE, open_mode, PROTECTION_MODE,PROTECTION_LEVEL from v$database;

DATABASE_ROLE    OPEN_MODE            PROTECTION_MODE      PROTECTION_LEVEL
---------------- -------------------- -------------------- --------------------
PRIMARY          READ WRITE           MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE

00:07:43 sys@TQDB(tq1)> 
</code></pre>
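<p>From the newly promoted primary, redo transport to the standby destination can be confirmed via <code>v$archive_dest</code>; an illustrative check, not part of the original session:</p>
<pre><code class="language-sql line-numbers">-- The standby destination should show STATUS = VALID with no ERROR text
select dest_id, status, error
  from v$archive_dest
 where status != 'INACTIVE';
</code></pre>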
<pre><code class="language-sql line-numbers">-- 3. Primary Side
-- RAC primary, node 1
00:10:41 idle(tqdb21)> alter database open;

Database altered.

00:10:51 idle(tqdb21)> -- Basic Data Guard statistics @standby    
00:11:43 idle(tqdb21)> set linesize 200;   
00:11:43 idle(tqdb21)> col name for a25;   
00:11:43 idle(tqdb21)> column value format a20;    
00:11:43 idle(tqdb21)> select * from v$dataguard_stats;    

SOURCE_DBID SOURCE_DB_UNIQUE_NAME            NAME                      VALUE                UNIT                           TIME_COMPUTED                  DATUM_TIME                         CON_ID
----------- -------------------------------- ------------------------- -------------------- ------------------------------ ------------------------------ ------------------------------ ----------
          0                                  transport lag             +00 00:00:00         day(2) to second(0) interval   03/11/2020 00:11:44            03/11/2020 00:11:43                     0
          0                                  apply lag                                      day(2) to second(0) interval   03/11/2020 00:11:44                                                    0
          0                                  apply finish time                              day(2) to second(3) interval   03/11/2020 00:11:44                                                    0
          0                                  estimated startup time    30                   second                         03/11/2020 00:11:44                                                    0

00:11:44 idle(tqdb21)>

-- RAC primary, node 2
23:56:40 idle(tqdb22)> alter database open;

Database altered.

00:15:29 idle(tqdb22)> conn / as sysdba
Connected.
00:15:49 sys@TQDB(tqdb22)>

-- RAC primary, node 1
00:10:51 idle(tqdb21)> conn / as sysdba
00:10:51 sys@TQDB(tqdb21)> -- Basic Data Guard statistics @standby    
00:11:43 sys@TQDB(tqdb21)> set linesize 200;   
00:11:43 sys@TQDB(tqdb21)> col name for a25;   
00:11:43 sys@TQDB(tqdb21)> column value format a20;    
00:11:43 sys@TQDB(tqdb21)> select * from v$dataguard_stats;    

SOURCE_DBID SOURCE_DB_UNIQUE_NAME            NAME                      VALUE                UNIT                           TIME_COMPUTED                  DATUM_TIME                         CON_ID
----------- -------------------------------- ------------------------- -------------------- ------------------------------ ------------------------------ ------------------------------ ----------
          0                                  transport lag             +00 00:00:00         day(2) to second(0) interval   03/11/2020 00:11:44            03/11/2020 00:11:43                     0
          0                                  apply lag                                      day(2) to second(0) interval   03/11/2020 00:11:44                                                    0
          0                                  apply finish time                              day(2) to second(3) interval   03/11/2020 00:11:44                                                    0
          0                                  estimated startup time    30                   second                         03/11/2020 00:11:44                                                    0

00:11:44 sys@TQDB(tqdb21)> alter database recover managed standby database disconnect from session;

Database altered.

00:16:28 sys@TQDB(tqdb21)> select * from v$dataguard_stats;    

SOURCE_DBID SOURCE_DB_UNIQUE_NAME            NAME                      VALUE                UNIT                           TIME_COMPUTED                  DATUM_TIME                         CON_ID
----------- -------------------------------- ------------------------- -------------------- ------------------------------ ------------------------------ ------------------------------ ----------
          0                                  transport lag             +00 00:00:00         day(2) to second(0) interval   03/11/2020 00:16:43            03/11/2020 00:16:42                     0
          0                                  apply lag                 +00 00:00:00         day(2) to second(0) interval   03/11/2020 00:16:43            03/11/2020 00:16:42                     0
          0                                  apply finish time                              day(2) to second(3) interval   03/11/2020 00:16:43                                                    0
          0                                  estimated startup time    30                   second                         03/11/2020 00:16:43                                                    0

00:16:43 sys@TQDB(tqdb21)> 

00:27:43 sys@TQDB(tqdb21)> -- Check the Oracle ADG protection mode
00:28:00 sys@TQDB(tqdb21)> select DATABASE_ROLE, open_mode, PROTECTION_MODE,PROTECTION_LEVEL from v$database;

DATABASE_ROLE    OPEN_MODE            PROTECTION_MODE      PROTECTION_LEVEL
---------------- -------------------- -------------------- --------------------
PHYSICAL STANDBY READ ONLY WITH APPLY MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE

00:28:00 sys@TQDB(tqdb21)> 

[root@tqdb21: ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   Started,STABLE
      2        ONLINE  ONLINE       tqdb22                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.qosmserver
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb.db
      1        ONLINE  ONLINE       tqdb21                   Open,Readonly,HOME=/
                                                             u01/app/oracle/produ
                                                             ct/19c/dbhome,STABLE
      2        ONLINE  ONLINE       tqdb22                   Open,Readonly,HOME=/
                                                             u01/app/oracle/produ
                                                             ct/19c/dbhome,STABLE
ora.tqdb21.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
      1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[root@tqdb21: ~]# 

-- RAC primary, node 2
00:15:29 idle(tqdb22)> conn / as sysdba
Connected.
00:15:49 sys@TQDB(tqdb22)> -- Basic Data Guard statistics @standby    
00:16:53 sys@TQDB(tqdb22)> set linesize 200;   
00:16:53 sys@TQDB(tqdb22)> col name for a25;   
00:16:53 sys@TQDB(tqdb22)> column value format a20;    
00:16:53 sys@TQDB(tqdb22)> select * from v$dataguard_stats;    

SOURCE_DBID SOURCE_DB_UNIQUE_NAME            NAME                      VALUE                UNIT                           TIME_COMPUTED                  DATUM_TIME                         CON_ID
----------- -------------------------------- ------------------------- -------------------- ------------------------------ ------------------------------ ------------------------------ ----------
          0                                  transport lag             +00 00:00:00         day(2) to second(0) interval   03/11/2020 00:16:53            03/11/2020 00:16:53                     0
          0                                  apply lag                 +00 00:00:00         day(2) to second(0) interval   03/11/2020 00:16:53            03/11/2020 00:16:53                     0
          0                                  apply finish time                              day(2) to second(3) interval   03/11/2020 00:16:53                                                    0
          0                                  estimated startup time    20                   second                         03/11/2020 00:16:53                                                    0

00:16:53 sys@TQDB(tqdb22)> 
00:28:51 sys@TQDB(tqdb22)> -- Check the Oracle ADG protection mode
00:28:51 sys@TQDB(tqdb22)> select DATABASE_ROLE, open_mode, PROTECTION_MODE,PROTECTION_LEVEL from v$database;

DATABASE_ROLE    OPEN_MODE            PROTECTION_MODE      PROTECTION_LEVEL
---------------- -------------------- -------------------- --------------------
PHYSICAL STANDBY READ ONLY WITH APPLY MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE

00:28:52 sys@TQDB(tqdb22)> 
</code></pre>
</blockquote>
<p>At this point the RAC is the <code>standby</code> and the former standby is the <code>primary</code>. Next, we <code>switchover</code> back so that the RAC is the <code>primary</code> and the standby is the <code>standby</code> again.</p>
<blockquote><p>
  Operation log: at this point the RAC is the <code>standby</code> and the former standby is the <code>primary</code>. We now <code>switchover</code> back so that the RAC becomes the <code>primary</code> and the standby becomes the <code>standby</code> again.</p>
<pre><code class="language-sql line-numbers">-- 1. Primary Side
00:44:34 sys@TQDB(tq1)> alter system archive log current;

System altered.

00:44:37 sys@TQDB(tq1)> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS          FIRST_CHANGE# FIRST_TIME          NEXT_CHANGE# NEXT_TIME               CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- --------------- ------------- ------------------- ------------ ------------------- ----------
         1          1         68  209715200        512          1 NO  CURRENT               4758229 2020-03-11 00:44:37   9.2954E+18                              0
         2          1         67  209715200        512          1 YES ACTIVE                4758149 2020-03-11 00:43:59      4758229 2020-03-11 00:44:37          0
         3          2         57  209715200        512          1 YES INACTIVE              4751656 2020-03-11 00:02:19      4751938 2020-03-11 00:06:07          0
         4          2          0  209715200        512          1 YES UNUSED                      0                                0                              0

00:44:40 sys@TQDB(tq1)> 

-- This query reports the current switchover readiness of the database.
00:47:22 sys@TQDB(tq1)> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
TO STANDBY

00:47:23 sys@TQDB(tq1)> 

00:47:52 sys@TQDB(tq1)> alter database commit to switchover to standby with session shutdown;
ERROR:
ORA-01034: ORACLE not available
Process ID: 6298
Session ID: 1 Serial number: 62412



Database altered.

00:48:31 sys@TQDB(tq1)> conn / as sysdba
Connected to an idle instance.
00:49:23 idle(tq1)> 
00:49:26 idle(tq1)> shutdown immediate;
ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Additional information: 4376
Additional information: -1183956957
Process ID: 0
Session ID: 0 Serial number: 0


00:49:41 idle(tq1)> conn / as sysdba
Connected to an idle instance.
00:49:45 idle(tq1)> startup mount;
ORACLE instance started.

Total System Global Area 1191181696 bytes
Fixed Size                  8895872 bytes
Variable Size             318767104 bytes
Database Buffers          855638016 bytes
Redo Buffers                7880704 bytes
Database mounted.
00:50:05 idle(tq1)> 

</code></pre>
<pre><code class="language-sql line-numbers">-- 2. Data Guard Side
00:54:07 sys@TQDB(tqdb21)> -- This query reports the current switchover readiness of the database.
00:54:08 sys@TQDB(tqdb21)> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
TO PRIMARY

00:54:13 sys@TQDB(tqdb21)> 


-- RAC primary, node 1
00:57:20 sys@TQDB(tqdb21)> alter database recover managed standby database cancel;

Database altered.

00:57:50 sys@TQDB(tqdb21)> 
00:58:47 sys@TQDB(tqdb21)> alter database commit to switchover to primary with session shutdown;

Database altered.

00:59:35 sys@TQDB(tqdb21)> shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
01:01:34 sys@TQDB(tqdb21)> conn / as sysdba
Connected to an idle instance.
01:01:45 idle(tqdb21)> 

-- RAC primary, node 2
01:02:23 sys@TQDB(tqdb22)> conn / as sysdba
Connected.
01:02:29 idle(tqdb22)> shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
01:03:17 idle(tqdb22)> 

-- RAC primary, node 1
[root@tqdb21: ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   Started,STABLE
      2        ONLINE  ONLINE       tqdb22                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.qosmserver
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb.db
      1        OFFLINE OFFLINE                               STABLE
      2        OFFLINE OFFLINE                               STABLE
ora.tqdb21.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
      1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[root@tqdb21: ~]# 

01:01:45 idle(tqdb21)> startup 
ORACLE instance started.

Total System Global Area  822080768 bytes
Fixed Size                  8901888 bytes
Variable Size             390070272 bytes
Database Buffers          419430400 bytes
Redo Buffers                3678208 bytes
Database mounted.
Database opened.
01:04:15 idle(tqdb21)> conn / as sysdba
Connected.
01:04:24 sys@TQDB(tqdb21)> 
01:04:24 sys@TQDB(tqdb21)> -- Check the Oracle ADG protection mode
01:08:05 sys@TQDB(tqdb21)> select DATABASE_ROLE, open_mode, PROTECTION_MODE,PROTECTION_LEVEL from v$database;

DATABASE_ROLE    OPEN_MODE            PROTECTION_MODE      PROTECTION_LEVEL
---------------- -------------------- -------------------- --------------------
PRIMARY          READ WRITE           MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE

01:08:06 sys@TQDB(tqdb21)> 

-- RAC primary, node 2
01:05:20 sys@TQDB(tqdb22)> -- Check the Oracle ADG protection mode
01:08:10 sys@TQDB(tqdb22)> select DATABASE_ROLE, open_mode, PROTECTION_MODE,PROTECTION_LEVEL from v$database;

DATABASE_ROLE    OPEN_MODE            PROTECTION_MODE      PROTECTION_LEVEL
---------------- -------------------- -------------------- --------------------
PRIMARY          READ WRITE           MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE

01:08:11 sys@TQDB(tqdb22)> 

[root@tqdb22: ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   Started,STABLE
      2        ONLINE  ONLINE       tqdb22                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.qosmserver
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb.db
      1        ONLINE  ONLINE       tqdb21                   Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             home,STABLE
      2        ONLINE  ONLINE       tqdb22                   Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             home,STABLE
ora.tqdb21.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
      1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[root@tqdb22: ~]# 

</code></pre>
<pre><code class="language-sql line-numbers">-- 3. Primary Side
01:10:13 idle(tq1)> alter database open;

Database altered.

01:10:20 idle(tq1)> conn / as sysdba
Connected.
01:10:26 sys@TQDB(tq1)> alter database recover managed standby database disconnect from session;

Database altered.

01:11:43 sys@TQDB(tq1)> -- Basic Data Guard statistics @standby    
01:12:01 sys@TQDB(tq1)> set linesize 200;   
01:12:02 sys@TQDB(tq1)> col name for a25;   
01:12:02 sys@TQDB(tq1)> column value format a20;    
01:12:02 sys@TQDB(tq1)> select * from v$dataguard_stats;    

SOURCE_DBID SOURCE_DB_UNIQUE_NAME            NAME                      VALUE                UNIT                           TIME_COMPUTED                  DATUM_TIME                         CON_ID
----------- -------------------------------- ------------------------- -------------------- ------------------------------ ------------------------------ ------------------------------ ----------
 3966209240 tqdb                             transport lag             +00 00:00:00         day(2) to second(0) interval   03/11/2020 01:12:02            03/11/2020 01:12:01                     0
 3966209240 tqdb                             apply lag                 +00 00:00:00         day(2) to second(0) interval   03/11/2020 01:12:02            03/11/2020 01:12:01                     0
 3966209240 tqdb                             apply finish time                              day(2) to second(3) interval   03/11/2020 01:12:02                                                    0
          0                                  estimated startup time    26                   second                         03/11/2020 01:12:02                                                    0

01:12:02 sys@TQDB(tq1)>
01:16:08 sys@TQDB(tq1)> -- Check the Oracle ADG protection mode
01:16:08 sys@TQDB(tq1)> select DATABASE_ROLE, open_mode, PROTECTION_MODE,PROTECTION_LEVEL from v$database;

DATABASE_ROLE    OPEN_MODE            PROTECTION_MODE      PROTECTION_LEVEL
---------------- -------------------- -------------------- --------------------
PHYSICAL STANDBY READ ONLY WITH APPLY MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE

01:16:09 sys@TQDB(tq1)> 
</code></pre>
</blockquote>
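<p>The <code>transport lag</code> and <code>apply lag</code> values above come back from <code>v$dataguard_stats</code> as day-to-second interval strings such as <code>+00 00:00:00</code>. For monitoring scripts it can be handy to convert them to seconds; a minimal illustrative sketch in Python (the helper name is hypothetical, not part of Oracle):</p>
<pre><code class="language-python line-numbers">import re

def lag_to_seconds(interval):
    """Parse an Oracle day-to-second interval string, e.g. '+00 00:01:30'
    as returned by v$dataguard_stats, into a signed number of seconds."""
    m = re.match(r'([+-])(\d+) (\d+):(\d+):(\d+)', interval.strip())
    if not m:
        raise ValueError('unrecognized interval: %r' % interval)
    sign = -1 if m.group(1) == '-' else 1
    days, hours, minutes, seconds = (int(g) for g in m.groups()[1:])
    return sign * (((days * 24 + hours) * 60 + minutes) * 60 + seconds)

print(lag_to_seconds('+00 00:01:30'))  # 90
</code></pre>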
<p>This article walked through the complete manual <code>switchover</code> role-transition procedure for Oracle 19c RAC + ADG. I hope it helps.</p>
<p>In the next post, we will cover the manual <code>failover</code> role-transition procedure for Oracle 19c RAC + ADG.</p>
<p>-- The End --</p>
]]></content:encoded>
					
					<wfw:commentRss>https://dbtan.com/2020/03/oracle-19c-rac-adg-step-by-step-manual-data-guard-switchover.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Building Oracle MAA: Oracle 19c RAC + ADG</title>
		<link>https://dbtan.com/2020/03/oracle-maa-oracle-19c-rac-adg.html</link>
					<comments>https://dbtan.com/2020/03/oracle-maa-oracle-19c-rac-adg.html#respond</comments>
		
		<dc:creator><![CDATA[dbtan]]></dc:creator>
		<pubDate>Tue, 17 Mar 2020 10:32:05 +0000</pubDate>
				<category><![CDATA[Oracle]]></category>
		<category><![CDATA[Oracle 19c]]></category>
		<category><![CDATA[Oracle Data Guard]]></category>
		<category><![CDATA[Oracle MAA]]></category>
		<category><![CDATA[Oracle RAC]]></category>
		<category><![CDATA[Oracle 19c RAC + ADG]]></category>
		<category><![CDATA[tq1]]></category>
		<category><![CDATA[tqdb]]></category>
		<category><![CDATA[tqdb21]]></category>
		<category><![CDATA[tqdb22]]></category>
		<guid isPermaLink="false">https://www.dbtan.com/?p=412</guid>

					<description><![CDATA[Oracle MAA: Oracle 19c RAC + ADG Revision V1.0 No. Date [&#8230;]]]></description>
										<content:encoded><![CDATA[<h3>Oracle MAA: Oracle 19c RAC + ADG</h3>
<p><strong>Revision    V1.0</strong></p>
<table>
<thead>
<tr>
<th align="left">No.</th>
<th>Date</th>
<th>Author/Modifier</th>
<th>Comments</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">1.0</td>
<td>2020-03-06</td>
<td>谈权</td>
<td>Initial draft: Building Oracle MAA: Oracle 19c RAC + Active Data Guard</td>
</tr>
</tbody>
</table>

<h2>0. Primary RAC and Standby Environments</h2>
<p><img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/Oracle%2019c%20Data%20Guard%20Structure.png" alt="Oracle 19c Data Guard Structure" /></p>
<p>Primary RAC and standby environments:</p>
<table>
<thead>
<tr>
<th align="left"></th>
<th align="left">Primary</th>
<th align="left">Standby</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">OS Version</td>
<td align="left">CentOS Linux release 7.7.1908 (Core)</td>
<td align="left">CentOS Linux release 7.7.1908 (Core)</td>
</tr>
<tr>
<td align="left">DB Version</td>
<td align="left">Version 19.6.0.0.0</td>
<td align="left">Version 19.6.0.0.0</td>
</tr>
<tr>
<td align="left">HOST IP</td>
<td align="left"># Public (enp0s8)<br />192.168.6.21 tqdb21<br />192.168.6.22 tqdb22<br /><br /># Private (enp0s9)<br />172.16.8.21 tqdb21-priv<br />172.16.8.22 tqdb22-priv<br /><br /># Virtual (enp0s8)<br />192.168.6.23 tqdb21-vip<br />192.168.6.24 tqdb22-vip</td>
<td align="left"># ADG<br />192.168.6.10    tq1</td>
</tr>
<tr>
<td align="left">SCAN IP</td>
<td align="left"># SCAN<br />192.168.6.20 tqdb-cluster       tqdb-cluster-scan</td>
<td align="left"></td>
</tr>
<tr>
<td align="left">DB_NAME</td>
<td align="left">tqdb</td>
<td align="left">tqdb</td>
</tr>
<tr>
<td align="left">DB_UNIQUE_NAME</td>
<td align="left">tqdb</td>
<td align="left">tqdb_adg</td>
</tr>
<tr>
<td align="left">Instance_Name</td>
<td align="left">tqdb1<br />tqdb2</td>
<td align="left">tqdb_adg</td>
</tr>
<tr>
<td align="left">ArchiveFile</td>
<td align="left">+DATA/archivelog</td>
<td align="left">+DATA/archivelog</td>
</tr>
<tr>
<td align="left">DB Storage</td>
<td align="left">ASM</td>
<td align="left">ASM</td>
</tr>
<tr>
<td align="left">ASM for DB files</td>
<td align="left">+DATA/TQDB/DATAFILE</td>
<td align="left">+DATA/TQDB/DATAFILE</td>
</tr>
<tr>
<td align="left">ASM for LOG files</td>
<td align="left">+DATA/TQDB/ONLINELOG</td>
<td align="left">+DATA/TQDB/ONLINELOG</td>
</tr>
<tr>
<td align="left">ASM for TEMP files</td>
<td align="left">+DATA/TQDB/TEMPFILE</td>
<td align="left">+DATA/TQDB/TEMPFILE</td>
</tr>
<tr>
<td align="left">ORACLE_HOME</td>
<td align="left">/u01/app/oracle/product/19c/dbhome</td>
<td align="left">/u01/app/oracle/product/19c/dbhome</td>
</tr>
<tr>
<td align="left">grid user ORACLE_BASE</td>
<td align="left">/u01/app/grid</td>
<td align="left">/u01/app/grid</td>
</tr>
<tr>
<td align="left">grid user ORACLE_HOME</td>
<td align="left">/u01/app/19c/grid</td>
<td align="left">/u01/app/19c/grid</td>
</tr>
<tr>
<td align="left">oracle user ORACLE_BASE</td>
<td align="left">/u01/app/oracle</td>
<td align="left">/u01/app/oracle</td>
</tr>
<tr>
<td align="left">oracle user ORACLE_HOME</td>
<td align="left">/u01/app/oracle/product/19c/dbhome</td>
<td align="left">/u01/app/oracle/product/19c/dbhome</td>
</tr>
</tbody>
</table>
<h2>0.1 Stop the Original Oracle Restart Single Instance on the <code>tq1</code> Server</h2>
<blockquote>
<pre><code class="language-sql line-numbers">-- 1. Disable automatic startup of the original database instance `tq1`
[root@tq1: ~]# srvctl disable database -db tq1
[root@tq1: ~]# 
-- Stop the database instance with `sqlplus > shutdown immediate;`
[oracle@tq1: ~]$ sqlplus / as sysdba
sys@TQ1(tq1)> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.


-- 2. Back up instance `tq1`'s environment variables
[oracle@tq1: ~]$ cp ~/.bash_profile ~/.bash_profile.bak_tq1

-- 3. Update the oracle user's `.bash_profile` environment variables
-- DB_UNIQUE_NAME=tqdb_adg
-- ORACLE_SID=tqdb_adg
[oracle@tq1: ~]$ vim ~/.bash_profile
-- Change the following entries to `tqdb_adg`:
export ORACLE_SID=tqdb_adg
export DB_UNIQUE_NAME=tqdb_adg

-- 4. Apply the environment variables
[oracle@tq1: ~]$ . ~/.bash_profile
[oracle@tq1: ~]$ 
[oracle@tq1: ~]$ echo $ORACLE_SID
tqdb_adg
[oracle@tq1: ~]$ echo $DB_UNIQUE_NAME
tqdb_adg
[oracle@tq1: ~]$ 



</code></pre>
<p>  Operation log:</p>
<pre><code class="language-sql line-numbers">-- 1. Disable automatic startup of the original database instance `tq1`
[root@tq1: ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       tq1                      STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       tq1                      STABLE
ora.asm
               ONLINE  ONLINE       tq1                      Started,STABLE
ora.ons
               OFFLINE OFFLINE      tq1                      STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       tq1                      STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       tq1                      STABLE
ora.tq1.db
      1        ONLINE  ONLINE       tq1                      Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             home,STABLE
--------------------------------------------------------------------------------
[root@tq1: ~]# 
[root@tq1: ~]# srvctl disable database -db tq1
[root@tq1: ~]# 
[root@tq1: ~]# srvctl status database -db tq1
Database is running.
[root@tq1: ~]# 
[root@tq1: ~]# 
[root@tq1: ~]# reboot

Last login: Sat Mar  7 00:19:24 2020 from 192.168.6.9
[root@tq1: ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       tq1                      STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       tq1                      STABLE
ora.asm
               ONLINE  ONLINE       tq1                      Started,STABLE
ora.ons
               OFFLINE OFFLINE      tq1                      STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       tq1                      STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       tq1                      STABLE
ora.tq1.db
      1        ONLINE  OFFLINE                               STABLE
--------------------------------------------------------------------------------
[root@tq1: ~]# 

-- After `srvctl disable database -db tq1`, the database can no longer be started or
-- stopped with `srvctl`; use `sqlplus > startup/shutdown` for the instance instead.
-- Even after starting the instance with `sqlplus > startup`, `crsctl stat res -t`
-- still shows the `State` of `ora.tq1.db` as `OFFLINE`.
[root@tq1: ~]# srvctl start database -db tq1
PRCR-1079 : Failed to start resource ora.tq1.db
CRS-2501: Resource 'ora.tq1.db' is disabled
[root@tq1: ~]# srvctl stop database -db tq1   
PRCC-1016 : tq1 was already stopped
[root@tq1: ~]# 

-- 2. Back up instance `tq1`'s environment variables
[oracle@tq1: ~]$ cp ~/.bash_profile ~/.bash_profile.bak_tq1

-- 3. Update the oracle user's `.bash_profile` environment variables
-- DB_UNIQUE_NAME=tqdb_adg
-- ORACLE_SID=tqdb_adg
[oracle@tq1: ~]$ vim ~/.bash_profile
-- Change the following entries to `tqdb_adg`:
export ORACLE_SID=tqdb_adg
export DB_UNIQUE_NAME=tqdb_adg


-- 4. Apply the environment variables
[oracle@tq1: ~]$ . ~/.bash_profile
[oracle@tq1: ~]$ 
[oracle@tq1: ~]$ echo $ORACLE_SID
tqdb_adg
[oracle@tq1: ~]$ echo $DB_UNIQUE_NAME
tqdb_adg
[oracle@tq1: ~]$ 
</code></pre>
</blockquote>
<h2>1. Primary RAC: Enable Archive Logging</h2>
<blockquote><p>
  -- 1. Stop the database instances on both nodes and bring both to the mount state</p>
<p>  -- Node 1:</p>
<pre><code class="language-sql line-numbers">oracle$ sqlplus / as sysdba
SQL> shutdown immediate;
SQL> startup mount;
</code></pre>
<p>  -- Node 2:</p>
<pre><code class="language-sql line-numbers">oracle$ sqlplus / as sysdba
SQL> shutdown immediate;
SQL> startup mount;
</code></pre>
<p>  -- 2. Set the archive log destination (chosen on shared storage):<br />
  -- Running this on one node (node 1) is sufficient.</p>
<pre><code class="language-sql line-numbers">SQL> alter system set log_archive_dest_1='location=+DATA/ARCHIVELOG' scope=both;
</code></pre>
<p>  -- 3. Once both instances are in <code>mount</code> state, enable archiving:<br />
  -- Running this on one node (node 1) is sufficient.</p>
<pre><code class="language-sql line-numbers">SQL> alter database archivelog;
</code></pre>
<p>  -- 4. Check the archiving status:</p>
<pre><code class="language-sql line-numbers">SQL> archive log list;
</code></pre>
<p>  -- 5. Open the database (on both nodes):</p>
<pre><code class="language-sql line-numbers">SQL> alter database open;
</code></pre>
<p>  Notes on archive logging</p>
<ul>
<li>As a rule, do not mirror the redo logs: the storage layer already provides mirroring, a second copy of the redo puts extra pressure on storage, and critical systems deploy a Data Guard standby anyway.</li>
<li>For A+, A, and B tier systems, back up archive logs hourly to limit data loss; systems with an RPO close to zero rely on disaster-recovery technology instead.</li>
<li>Size the archive destination initially as a proportion of database size: 50 GB for databases under 100 GB, 20% for 100 GB to 1 TB, 15% above 1 TB, growing in 50 GB increments.</li>
<li>As data grows, make sure the archive destination can hold two days of archive logs (taking the past month's peak as the reference); enlarge it if the initial allocation falls short.</li>
<li>Archiving is critical: if backups fail long enough for the archive destination to fill up, the database stops working outright, so monitor archive log usage closely.</li>
<li>Configure automatic deletion in NBU: for Data Guard databases keep three hours of archive logs; for others, archive logs can be deleted right away.</li>
<li>Give each database its own archive destination.</li>
</ul>
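<p>  The sizing rule above is simple arithmetic; a small illustrative sketch in Python (thresholds exactly as listed, rounded up to the next 50 GB increment):</p>
<pre><code class="language-python line-numbers">import math

def initial_archive_dest_gb(db_size_gb):
    """Initial archive destination size: 50 GB for databases under 100 GB,
    20% of database size from 100 GB to 1 TB, 15% above 1 TB,
    rounded up in 50 GB increments."""
    if db_size_gb < 100:
        raw = 50.0
    elif db_size_gb <= 1024:
        raw = db_size_gb * 0.20
    else:
        raw = db_size_gb * 0.15
    return int(math.ceil(raw / 50.0) * 50)

print(initial_archive_dest_gb(500))   # 100
print(initial_archive_dest_gb(2048))  # 350
</code></pre>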
<p>When one online redo log group fills up, the LGWR process switches to writing the next group; this is called a log switch. A checkpoint is triggered at the same time, and some information is written to the control file. Besides these automatic log switches and checkpoints, a DBA can also force a log switch or a checkpoint at any time for administrative or maintenance purposes.</p>
<p>  The command to force a redo log file switch is:</p>
<p>  <code>alter system switch logfile</code>, which forces a checkpoint but does not necessarily archive the current redo log file (with automatic archiving enabled, the previous redo log is archived; with it disabled, the current redo log is not archived).</p>
<p>  <code>alter system checkpoint</code></p>
<p>  <code>alter system archive log current</code> archives the current redo log file whether or not automatic archiving is enabled.<br />
  The key difference:<br />
  <code>ALTER SYSTEM SWITCH LOGFILE</code> performs a log switch on a single-instance database or on the current instance of a RAC,<br />
  while <code>ALTER SYSTEM ARCHIVE LOG CURRENT</code> performs a log switch on every instance of the database.</p>
<p>  Why do so many scripts run <code>alter system archive log current</code> after a hot backup? Is it required?</p>
<p>  Typical RMAN scripts do this because RMAN can back up archive logs.</p>
<p>  <code>alter system archive log current</code> ensures all archive logs can then be backed up, guaranteeing complete and consistent data.
</p></blockquote>
<h2>2. Primary RAC: Enable <code>force logging</code></h2>
<blockquote>
<pre><code class="language-sql line-numbers">-- 1. Stop the database instances on both nodes
# srvctl stop database -db tqdb
# crsctl stat res -t

-- 2. Bring both nodes to the mount state,
-- then enable force logging from one node only
-- Node 1
SQL> startup mount;
-- Node 2
SQL> startup mount;
-- Node 1
SQL> alter database force logging;

-- 3. Verify on node 1 and node 2 that `force logging` is enabled
SQL> select DBID, INST_ID, NAME, OPEN_MODE, DATABASE_ROLE, FORCE_LOGGING, FLASHBACK_ON from gv$database;

      DBID    INST_ID NAME       OPEN_MODE            DATABASE_ROLE    FORCE_LOGGING   FLASHBACK_ON
---------- ---------- ---------- -------------------- ---------------- --------------- ------------------
3966209240          1 TQDB       MOUNTED              PRIMARY          YES             NO
3966209240          2 TQDB       MOUNTED              PRIMARY          YES             NO

-- 4. Open the database on both nodes
-- Node 1
SQL> alter database open;
SQL> select force_logging from v$database;
-- Node 2
SQL> alter database open;
SQL> select force_logging from v$database;
</code></pre>
<p>  Operation log:</p>
<pre><code class="language-sql line-numbers">-- 1. Stop the database instances on both nodes
-- Node 1
[root@tqdb21: ~]# srvctl stop database -db tqdb
[root@tqdb21: ~]# 
[root@tqdb21: ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   Started,STABLE
      2        ONLINE  ONLINE       tqdb22                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.qosmserver
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb.db
      1        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
      2        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
ora.tqdb21.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
      1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[root@tqdb21: ~]# 

-- 2. Bring both nodes to the mount state,
-- then enable force logging from one node only
-- Node 1 to mount state
[oracle@tqdb21: ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 7 01:38:23 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

01:38:26 idle> 
01:39:25 idle> startup mount;
ORACLE instance started.

Total System Global Area  822080768 bytes
Fixed Size                  8901888 bytes
Variable Size             390070272 bytes
Database Buffers          419430400 bytes
Redo Buffers                3678208 bytes
Database mounted.
01:39:48 idle> 
01:39:56 idle> conn / as sysdba
Connected.
01:40:00 idle(tqdb21)> 

-- Node 2 to mount state
[oracle@tqdb22: ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 7 01:38:36 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.



Connected to an idle instance.

01:38:37 idle> 01:38:37 idle> 
01:38:37 idle> 
01:38:37 idle> 
01:40:14 idle> startup mount;
ORACLE instance started.

Total System Global Area  822080768 bytes
Fixed Size                  8901888 bytes
Variable Size             356515840 bytes
Database Buffers          452984832 bytes
Redo Buffers                3678208 bytes
Database mounted.
01:40:39 idle> 
01:42:29 idle> 
01:42:29 idle> conn / as sysdba
Connected.
01:44:57 idle(tqdb22)> 


-- Node 1: enable force logging on the database
-- Running this from one node is sufficient
01:45:47 idle(tqdb21)> 
01:45:47 idle(tqdb21)> alter database force logging;

Database altered.

01:46:03 idle(tqdb21)> 

-- 3. Verify on node 1 and node 2 that `force logging` is enabled
-- Node 1
01:57:40 idle(tqdb21)> select DBID, INST_ID, NAME, OPEN_MODE, DATABASE_ROLE, FORCE_LOGGING, FLASHBACK_ON from gv$database;

      DBID    INST_ID NAME       OPEN_MODE            DATABASE_ROLE    FORCE_LOGGING   FLASHBACK_ON
---------- ---------- ---------- -------------------- ---------------- --------------- ------------------
3966209240          1 TQDB       MOUNTED              PRIMARY          YES             NO
3966209240          2 TQDB       MOUNTED              PRIMARY          YES             NO

01:57:41 idle(tqdb21)> 

-- Node 2
01:59:19 idle(tqdb22)> select DBID, INST_ID, NAME, OPEN_MODE, DATABASE_ROLE, FORCE_LOGGING, FLASHBACK_ON from gv$database;

      DBID    INST_ID NAME       OPEN_MODE            DATABASE_ROLE    FORCE_LOGGING   FLASHBACK_ON
---------- ---------- ---------- -------------------- ---------------- --------------- ------------------
3966209240          1 TQDB       MOUNTED              PRIMARY          YES             NO
3966209240          2 TQDB       MOUNTED              PRIMARY          YES             NO

01:59:21 idle(tqdb22)> 


-- 4. Open the database on both nodes
-- Node 1
02:05:02 idle(tqdb21)> alter database open;

Database altered.

02:05:09 idle(tqdb21)> conn / as sysdba
Connected.
02:05:28 sys@TQDB(tqdb21)> 
02:05:36 sys@TQDB(tqdb21)> col name for a10;
02:05:49 sys@TQDB(tqdb21)> COL FORCE_LOGGING FOR A15;
02:05:55 sys@TQDB(tqdb21)> set lines 200
02:06:02 sys@TQDB(tqdb21)> select DBID, INST_ID, NAME, OPEN_MODE, DATABASE_ROLE, FORCE_LOGGING, FLASHBACK_ON from gv$database;

      DBID    INST_ID NAME       OPEN_MODE            DATABASE_ROLE    FORCE_LOGGING   FLASHBACK_ON
---------- ---------- ---------- -------------------- ---------------- --------------- ------------------
3966209240          2 TQDB       MOUNTED              PRIMARY          YES             NO
3966209240          1 TQDB       READ WRITE           PRIMARY          YES             NO

02:06:04 sys@TQDB(tqdb21)> 

-- Node 2
02:09:21 idle(tqdb22)> alter database open;

Database altered.

02:09:33 idle(tqdb22)> conn / as sysdba
Connected.
02:09:39 sys@TQDB(tqdb22)> col name for a10;
02:09:45 sys@TQDB(tqdb22)> COL FORCE_LOGGING FOR A15;
02:09:51 sys@TQDB(tqdb22)> set lines 200
02:09:58 sys@TQDB(tqdb22)> select DBID, INST_ID, NAME, OPEN_MODE, DATABASE_ROLE, FORCE_LOGGING, FLASHBACK_ON from gv$database;

      DBID    INST_ID NAME       OPEN_MODE            DATABASE_ROLE    FORCE_LOGGING   FLASHBACK_ON
---------- ---------- ---------- -------------------- ---------------- --------------- ------------------
3966209240          2 TQDB       READ WRITE           PRIMARY          YES             NO
3966209240          1 TQDB       READ WRITE           PRIMARY          YES             NO

02:10:05 sys@TQDB(tqdb22)> 

</code></pre>
</blockquote>
<h2>3. Primary RAC: Adjust Primary Parameters for the Data Guard Environment</h2>
<blockquote><p>
  Perform the following only after <code>force logging</code> has been enabled on the primary RAC</p>
<pre><code class="language-sql line-numbers">-- 1. Run on the primary RAC: adjust primary parameters for the Data Guard environment
alter system set db_unique_name='tqdb' scope=spfile sid='*';

alter system set log_archive_config='DG_CONFIG=(tqdb,tqdb_adg)' scope=both sid='*';
alter system set log_archive_dest_1='LOCATION=+DATA/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=tqdb' scope=spfile sid='*';
alter system set log_archive_format='%t_%s_%r.arc' scope=spfile sid='*';
alter system set log_archive_dest_2='SERVICE=tqdb_adg ASYNC LGWR VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=tqdb_adg' scope=spfile sid='*';
alter system set log_archive_dest_state_1='enable' scope=both sid='*';
alter system set log_archive_dest_state_2='enable' scope=both sid='*';

alter system set fal_server='tqdb_adg' scope=spfile sid='*';
alter system set fal_client='tqdb' scope=spfile sid='*';

alter system set standby_file_management=AUTO scope=both sid='*';


-- Only when file locations differ between primary and standby (e.g. the primary uses ASM
-- and the standby uses a local filesystem) do you need conversion rules for `DB files`,
-- `TEMP files`, and `LOG files`:
alter system set DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/orcl/datafile','+DATA/orcl/datafile','/u01/app/oracle/oradata/orcl/tempfile','+DATA/orcl/tempfile' scope=spfile sid='*';
alter system set LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/orcl/onlinelog','+DATA/orcl/onlinelog' scope=spfile sid='*';

-- 2. Restart the database
-- Stop the database instances on both nodes
# srvctl stop database -db tqdb
# crsctl stat res -t
-- Start the database instances on both nodes
# srvctl start database -db tqdb
</code></pre>
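<p>  The <code>*_FILE_NAME_CONVERT</code> parameters are ordered from/to prefix pairs: the first source prefix that matches the start of a file name is replaced by its partner, otherwise the name is left unchanged. A rough sketch of that matching rule in Python (an illustration only, not Oracle's implementation):</p>
<pre><code class="language-python line-numbers">def convert_name(path, convert_pairs):
    """Apply *_FILE_NAME_CONVERT-style substitution: convert_pairs is a
    list of (source_prefix, target_prefix) tuples; the first prefix that
    matches the start of `path` is substituted."""
    for src, dst in convert_pairs:
        if path.startswith(src):
            return dst + path[len(src):]
    return path

pairs = [('/u01/app/oracle/oradata/orcl/datafile', '+DATA/orcl/datafile'),
         ('/u01/app/oracle/oradata/orcl/tempfile', '+DATA/orcl/tempfile')]
print(convert_name('/u01/app/oracle/oradata/orcl/datafile/users01.dbf', pairs))
# prints +DATA/orcl/datafile/users01.dbf
</code></pre>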
<p>  Operation log:</p>
<pre><code class="language-sql line-numbers">-- 1. Run on the primary RAC: adjust primary parameters for the Data Guard environment
-- Node 1
02:35:46 sys@TQDB(tqdb21)> 
02:35:47 sys@TQDB(tqdb21)> alter system set db_unique_name='tqdb' scope=spfile sid='*';

System altered.

02:35:51 sys@TQDB(tqdb21)> alter system set log_archive_config='DG_CONFIG=(tqdb,tqdb_adg)' scope=both sid='*';

System altered.

02:36:07 sys@TQDB(tqdb21)> alter system set log_archive_dest_1='LOCATION=+DATA/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=tqdb' scope=spfile sid='*';

System altered.

02:36:25 sys@TQDB(tqdb21)> alter system set log_archive_format='%t_%s_%r.arc' scope=spfile sid='*';

System altered.

02:36:40 sys@TQDB(tqdb21)> alter system set log_archive_dest_2='SERVICE=tqdb_adg ASYNC LGWR VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=tqdb_adg' scope=spfile sid='*';

System altered.

02:36:54 sys@TQDB(tqdb21)> alter system set log_archive_dest_state_1='enable' scope=both sid='*';

System altered.

02:37:03 sys@TQDB(tqdb21)> alter system set log_archive_dest_state_2='enable' scope=both sid='*';

System altered.

02:37:09 sys@TQDB(tqdb21)> alter system set fal_server='tqdb_adg' scope=spfile sid='*';

System altered.

02:37:20 sys@TQDB(tqdb21)> alter system set fal_client='tqdb' scope=spfile sid='*';

System altered.

02:37:28 sys@TQDB(tqdb21)> alter system set standby_file_management=AUTO scope=both sid='*';

System altered.

02:37:37 sys@TQDB(tqdb21)> 
02:37:39 sys@TQDB(tqdb21)> 


-- 2. Restart the database
-- Stop the database instances on both nodes
-- Node 1
[root@tqdb21: ~]# srvctl stop database -db tqdb
[root@tqdb21: ~]# 
[root@tqdb21: ~]# # crsctl stat res -t
[root@tqdb21: ~]# crsctl stat res -t  
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   Started,STABLE
      2        ONLINE  ONLINE       tqdb22                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.qosmserver
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb.db
      1        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
      2        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
ora.tqdb21.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
      1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[root@tqdb21: ~]# 

-- Start the database instances on both nodes
-- Node 1
[root@tqdb21: ~]# srvctl start database -db tqdb
[root@tqdb21: ~]# 
[root@tqdb21: ~]# crsctl stat res -t            
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   Started,STABLE
      2        ONLINE  ONLINE       tqdb22                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.qosmserver
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb.db
      1        ONLINE  ONLINE       tqdb21                   Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             home,STABLE
      2        ONLINE  ONLINE       tqdb22                   Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             home,STABLE
ora.tqdb21.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
      1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[root@tqdb21: ~]# 


-- Verify the modified parameters: they have taken effect on both nodes
-- Node 1
02:48:41 sys@TQDB(tqdb21)> show parameter log_archive_config

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_config                   string      DG_CONFIG=(tqdb,tqdb_adg)
02:51:21 sys@TQDB(tqdb21)> 
02:51:34 sys@TQDB(tqdb21)> 
02:51:34 sys@TQDB(tqdb21)> show parameter fal_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_client                           string      tqdb
fal_server                           string      tqdb_adg
02:51:35 sys@TQDB(tqdb21)> 
02:51:53 sys@TQDB(tqdb21)> 
02:51:53 sys@TQDB(tqdb21)> show parameter standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO
02:51:54 sys@TQDB(tqdb21)>  
02:52:12 sys@TQDB(tqdb21)> show parameter log_archive_dest_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_1                   string      LOCATION=+DATA/archivelog VALI
                                                 D_FOR=(ALL_LOGFILES,ALL_ROLES)
                                                  DB_UNIQUE_NAME=tqdb

02:52:57 sys@TQDB(tqdb21)> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      SERVICE=tqdb_adg ASYNC LGWR VA
                                                 LID_FOR=(ONLINE_LOGFILES,PRIMA
                                                 RY_ROLE) DB_UNIQUE_NAME=tqdb_a
                                                 dg

02:53:03 sys@TQDB(tqdb21)> show parameter log_archive_dest_state_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_1             string      enable

02:53:23 sys@TQDB(tqdb21)> show parameter log_archive_dest_state_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_2             string      enable

02:53:26 sys@TQDB(tqdb21)> 
02:54:27 sys@TQDB(tqdb21)> show parameter log_archive_format

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_format                   string      %t_%s_%r.arc
02:54:31 sys@TQDB(tqdb21)> 


-- Node 2
02:50:18 sys@TQDB(tqdb22)> show parameter log_archive_config

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_config                   string      DG_CONFIG=(tqdb,tqdb_adg)
02:50:32 sys@TQDB(tqdb22)> show parameter fal_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_client                           string      tqdb
fal_server                           string      tqdb_adg
02:50:49 sys@TQDB(tqdb22)> show parameter standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO
02:51:07 sys@TQDB(tqdb22)> show parameter log_archive_dest_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_1                   string      LOCATION=+DATA/archivelog VALI
                                                 D_FOR=(ALL_LOGFILES,ALL_ROLES)
                                                  DB_UNIQUE_NAME=tqdb

02:53:46 sys@TQDB(tqdb22)> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      SERVICE=tqdb_adg ASYNC LGWR VA
                                                 LID_FOR=(ONLINE_LOGFILES,PRIMA
                                                 RY_ROLE) DB_UNIQUE_NAME=tqdb_a
                                                 dg

02:53:49 sys@TQDB(tqdb22)> show parameter log_archive_dest_state_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_1             string      enable

02:54:00 sys@TQDB(tqdb22)> show parameter log_archive_dest_state_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_2             string      enable

02:54:03 sys@TQDB(tqdb22)> show parameter log_archive_format

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_format                   string      %t_%s_%r.arc
02:54:23 sys@TQDB(tqdb22)> 
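
-- (Sketch; not part of the original run) With both destinations enabled, the
-- transport status can also be confirmed from V$ARCHIVE_DEST_STATUS on either node:
-- select dest_id, status, error from v$archive_dest_status where dest_id in (1, 2);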

-- Check the archive directory with `grid$ asmcmd -p`
-- Node 1
-- Switch the redo logs and check the log sequence number (SEQUENCE#)
02:58:02 sys@TQDB(tqdb21)> alter system archive log current;

System altered.

02:58:47 sys@TQDB(tqdb21)>
02:59:08 sys@TQDB(tqdb21)> set lines 200
02:59:11 sys@TQDB(tqdb21)> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS          FIRST_CHANGE# FIRST_TIME          NEXT_CHANGE# NEXT_TIME               CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- --------------- ------------- ------------------- ------------ ------------------- ----------
         1          1         43  209715200        512          1 YES ACTIVE                3467655 2020-03-07 02:46:24      3475193 2020-03-07 02:58:46          0
         2          1         44  209715200        512          1 NO  CURRENT               3475193 2020-03-07 02:58:46   9.2954E+18                              0
         3          2         39  209715200        512          1 NO  CURRENT               3475198 2020-03-07 02:58:47   9.2954E+18                              0
         4          2         38  209715200        512          1 YES ACTIVE                3467666 2020-03-07 02:46:24      3475198 2020-03-07 02:58:47          0

02:59:12 sys@TQDB(tqdb21)> alter system checkpoint;

System altered.

02:59:25 sys@TQDB(tqdb21)> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS          FIRST_CHANGE# FIRST_TIME          NEXT_CHANGE# NEXT_TIME               CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- --------------- ------------- ------------------- ------------ ------------------- ----------
         1          1         43  209715200        512          1 YES INACTIVE              3467655 2020-03-07 02:46:24      3475193 2020-03-07 02:58:46          0
         2          1         44  209715200        512          1 NO  CURRENT               3475193 2020-03-07 02:58:46   9.2954E+18                              0
         3          2         39  209715200        512          1 NO  CURRENT               3475198 2020-03-07 02:58:47   9.2954E+18                              0
         4          2         38  209715200        512          1 YES INACTIVE              3467666 2020-03-07 02:46:24      3475198 2020-03-07 02:58:47          0

02:59:28 sys@TQDB(tqdb21)> 
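
-- (Sketch; not part of the original run) V$ARCHIVED_LOG can cross-check the
-- archives just generated, by thread and sequence number:
-- select thread#, sequence#, name from v$archived_log where sequence# >= 43 order by thread#, sequence#;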

-- `grid$ asmcmd -p`
ASMCMD [+DATA/archivelog] > ls -l *.arc
Type        Redund  Striped  Time             Sys  Name
ARCHIVELOG  UNPROT  COARSE   MAR 07 02:00:00  N    1_42_1032338008.arc => +DATA/TQDB/ARCHIVELOG/2020_03_07/thread_1_seq_42.338.1034390785
ARCHIVELOG  UNPROT  COARSE   MAR 07 02:00:00  N    1_43_1032338008.arc => +DATA/TQDB/ARCHIVELOG/2020_03_07/thread_1_seq_43.340.1034391527
ARCHIVELOG  UNPROT  COARSE   MAR 07 02:00:00  N    2_37_1032338008.arc => +DATA/TQDB/ARCHIVELOG/2020_03_07/thread_2_seq_37.339.1034390785
ARCHIVELOG  UNPROT  COARSE   MAR 07 02:00:00  N    2_38_1032338008.arc => +DATA/TQDB/ARCHIVELOG/2020_03_07/thread_2_seq_38.341.1034391527
ASMCMD [+DATA/archivelog] > 
</code></pre>
</blockquote>
<h2>4. Primary (RAC): add <code>standby logfile</code> groups on the primary</h2>
<blockquote><p>
  -- 1. Check the redo log information (running on one node is enough)</p>
<pre><code class="language-sql line-numbers">SQL> select * from v$log;
</code></pre>
<p>  -- 2. Add <code>standby logfile</code> groups on the primary:</p>
<p>  <strong>The primary has 2 online redo log groups per thread, and each thread needs at least one more standby logfile group than that (groups per thread + 1, i.e. 3 per thread here).</strong></p>
<pre><code class="language-sql line-numbers">alter database add standby logfile thread 1 group 5 '+data' size 200m;
alter database add standby logfile thread 1 group 6 '+data' size 200m;
alter database add standby logfile thread 1 group 7 '+data' size 200m;
alter database add standby logfile thread 2 group 8 '+data' size 200m;
alter database add standby logfile thread 2 group 9 '+data' size 200m;
alter database add standby logfile thread 2 group 10 '+data' size 200m;
</code></pre>
<p>  Execution record:</p>
<pre><code class="language-sql line-numbers">-- 1. Check the redo log information (running on one node is enough)
-- Node 1
03:11:18 sys@TQDB(tqdb21)> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS          FIRST_CHANGE# FIRST_TIME          NEXT_CHANGE# NEXT_TIME               CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- --------------- ------------- ------------------- ------------ ------------------- ----------
         1          1         43  209715200        512          1 YES INACTIVE              3467655 2020-03-07 02:46:24      3475193 2020-03-07 02:58:46          0
         2          1         44  209715200        512          1 NO  CURRENT               3475193 2020-03-07 02:58:46   9.2954E+18                              0
         3          2         39  209715200        512          1 NO  CURRENT               3475198 2020-03-07 02:58:47   9.2954E+18                              0
         4          2         38  209715200        512          1 YES INACTIVE              3467666 2020-03-07 02:46:24      3475198 2020-03-07 02:58:47          0

03:11:25 sys@TQDB(tqdb21)> 

-- 2. Add `standby logfile` groups on the primary:
-- Node 1
03:13:05 sys@TQDB(tqdb21)> alter database add standby logfile thread 1 group 5 '+data' size 200m;

Database altered.

03:13:13 sys@TQDB(tqdb21)> alter database add standby logfile thread 1 group 6 '+data' size 200m;

Database altered.

03:13:20 sys@TQDB(tqdb21)> alter database add standby logfile thread 1 group 7 '+data' size 200m;

Database altered.

03:13:34 sys@TQDB(tqdb21)> alter database add standby logfile thread 2 group 8 '+data' size 200m;

Database altered.

03:13:43 sys@TQDB(tqdb21)> alter database add standby logfile thread 2 group 9 '+data' size 200m;

Database altered.

03:13:51 sys@TQDB(tqdb21)> alter database add standby logfile thread 2 group 10 '+data' size 200m;

Database altered.

03:13:58 sys@TQDB(tqdb21)> 

-- 
03:18:32 sys@TQDB(tqdb21)> select * from v$logfile;

    GROUP# STATUS          TYPE    MEMBER                                                       IS_     CON_ID
---------- --------------- ------- ------------------------------------------------------------ --- ----------
         1                 ONLINE  +DATA/TQDB/ONLINELOG/group_1.259.1032338013                  NO           0
         2                 ONLINE  +DATA/TQDB/ONLINELOG/group_2.260.1032338013                  NO           0
         3                 ONLINE  +DATA/TQDB/ONLINELOG/group_3.267.1032339499                  NO           0
         4                 ONLINE  +DATA/TQDB/ONLINELOG/group_4.268.1032339499                  NO           0
         5                 STANDBY +DATA/TQDB/ONLINELOG/group_5.342.1034392393                  NO           0
         6                 STANDBY +DATA/TQDB/ONLINELOG/group_6.343.1034392399                  NO           0
         7                 STANDBY +DATA/TQDB/ONLINELOG/group_7.344.1034392415                  NO           0
         8                 STANDBY +DATA/TQDB/ONLINELOG/group_8.345.1034392423                  NO           0
         9                 STANDBY +DATA/TQDB/ONLINELOG/group_9.346.1034392431                  NO           0
        10                 STANDBY +DATA/TQDB/ONLINELOG/group_10.347.1034392437                 NO           0

10 rows selected.

03:18:37 sys@TQDB(tqdb21)> 
03:21:18 sys@TQDB(tqdb21)> select * from v$standby_log;

    GROUP# DBID          THREAD#  SEQUENCE#      BYTES  BLOCKSIZE       USED ARC STATUS     FIRST_CHANGE# FIRST_TIME NEXT_CHANGE# NEXT_TIME           LAST_CHANGE# LAST_TIME               CON_ID
---------- ---------- ---------- ---------- ---------- ---------- ---------- --- ---------- ------------- ---------- ------------ ------------------- ------------ ------------------- ----------
         5 UNASSIGNED          1          0  209715200        512          0 YES UNASSIGNED                                                                                                     0
         6 UNASSIGNED          1          0  209715200        512          0 YES UNASSIGNED                                                                                                     0
         7 UNASSIGNED          1          0  209715200        512          0 YES UNASSIGNED                                                                                                     0
         8 UNASSIGNED          2          0  209715200        512          0 YES UNASSIGNED                                                                                                     0
         9 UNASSIGNED          2          0  209715200        512          0 YES UNASSIGNED                                                                                                     0
        10 UNASSIGNED          2          0  209715200        512          0 YES UNASSIGNED                                                                                                     0

6 rows selected.

03:21:19 sys@TQDB(tqdb21)>

-- `grid$ asmcmd -p`
ASMCMD [+DATA/archivelog] > ls -l +DATA/TQDB/ONLINELOG/
Type       Redund  Striped  Time             Sys  Name
ONLINELOG  UNPROT  COARSE   MAR 07 02:00:00  Y    group_1.259.1032338013
ONLINELOG  UNPROT  COARSE   MAR 07 03:00:00  Y    group_10.347.1034392437
ONLINELOG  UNPROT  COARSE   MAR 07 03:00:00  Y    group_2.260.1032338013
ONLINELOG  UNPROT  COARSE   MAR 07 03:00:00  Y    group_3.267.1032339499
ONLINELOG  UNPROT  COARSE   MAR 07 02:00:00  Y    group_4.268.1032339499
ONLINELOG  UNPROT  COARSE   MAR 07 03:00:00  Y    group_5.342.1034392393
ONLINELOG  UNPROT  COARSE   MAR 07 03:00:00  Y    group_6.343.1034392399
ONLINELOG  UNPROT  COARSE   MAR 07 03:00:00  Y    group_7.344.1034392415
ONLINELOG  UNPROT  COARSE   MAR 07 03:00:00  Y    group_8.345.1034392423
ONLINELOG  UNPROT  COARSE   MAR 07 03:00:00  Y    group_9.346.1034392431
ASMCMD [+DATA/archivelog] > 
</code></pre>
</blockquote>
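<p>  As a quick sanity check (a sketch, not part of the original run), the "one extra group per thread" rule can be derived directly from <code>v$log</code>:</p>
<pre><code class="language-sql line-numbers">-- Recommended standby logfile groups = online redo log groups per thread + 1
select thread#,
       count(*)     as online_log_groups,
       count(*) + 1 as recommended_standby_groups
  from v$log
 group by thread#
 order by thread#;
</code></pre>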
<h2>5. Primary (RAC) and standby: add <code>tnsnames</code> aliases on both sides</h2>
<blockquote><p>
  -- 1. Primary (RAC) tnsnames.ora:</p>
<p>  -- Apply on both nodes</p>
<pre><code class="language-bash line-numbers">-- Both nodes already ship with the `TQDB` alias
TQDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = tqdb)
    )
  )


-- Only the `tqdb_adg` alias needs to be added
tqdb_adg =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = tqdb_adg) 
    )
  )

  
</code></pre>
<p>  -- 2. Standby tnsnames.ora:</p>
<pre><code class="language-sql line-numbers">-- On the standby (tq1), add two aliases: `tqdb` and `tqdb_adg`
tqdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (FAILOVER = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = tqdb)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )



tqdb_adg =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = tqdb_adg) 
    )
  )

</code></pre>
<p>  Operation record:</p>
<pre><code class="language-sql line-numbers">-- 1. Primary (RAC) tnsnames.ora:
-- Apply on both nodes
-- Node 1
[oracle@tqdb21: ~]$ cd $ORACLE_HOME/network/admin
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/network/admin]$ ll
total 8
drwxr-xr-x 2 oracle oinstall   64 Apr 17  2019 samples
-rw-r--r-- 1 oracle oinstall 1536 Feb 14  2018 shrept.lst
-rw-r----- 1 oracle oinstall  331 Feb 14 08:57 tnsnames.ora
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/network/admin]$ vim tnsnames.ora 
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/19c/dbhome/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

TQDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = tqdb)
    )
  )


# Only the `tqdb_adg` alias needs to be added
tqdb_adg =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = tqdb_adg)
    )
  )


[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/network/admin]$ tnsping tqdb

TNS Ping Utility for Linux: Version 19.0.0.0.0 - Production on 07-MAR-2020 03:35:57

Copyright (c) 1997, 2019, Oracle.  All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = tqdb)))
OK (0 msec)
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/network/admin]$ tnsping tqdb_adg

TNS Ping Utility for Linux: Version 19.0.0.0.0 - Production on 07-MAR-2020 03:37:49

Copyright (c) 1997, 2019, Oracle.  All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = tqdb_adg)))
OK (20 msec)
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/network/admin]$ 


-- Node 2
[oracle@tqdb22: ~]$ cd $ORACLE_HOME/network/admin
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/network/admin]$ ll
total 8
drwxr-xr-x 2 oracle oinstall   64 Feb 13 18:52 samples
-rw-r--r-- 1 oracle oinstall 1536 Feb 13 18:52 shrept.lst
-rw-r----- 1 oracle oinstall  331 Feb 14 08:57 tnsnames.ora
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/network/admin]$ vim tnsnames.ora 
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/19c/dbhome/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

TQDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = tqdb)
    )
  )

# Only the `tqdb_adg` alias needs to be added
tqdb_adg =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = tqdb_adg)
    )
  )

[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/network/admin]$ tnsping tqdb

TNS Ping Utility for Linux: Version 19.0.0.0.0 - Production on 07-MAR-2020 03:40:08

Copyright (c) 1997, 2019, Oracle.  All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = tqdb)))
OK (0 msec)
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/network/admin]$ tnsping tqdb_adg

TNS Ping Utility for Linux: Version 19.0.0.0.0 - Production on 07-MAR-2020 03:40:16

Copyright (c) 1997, 2019, Oracle.  All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = tqdb_adg)))
OK (10 msec)
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/network/admin]$ 


-- 2. Standby tnsnames.ora:
[oracle@tq1: ~]$ cd $ORACLE_HOME/network/admin
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ ll
total 12
drwxr-xr-x 2 oracle oinstall 4096 Apr 17  2019 samples
-rw-r--r-- 1 oracle oinstall 1536 Feb 14  2018 shrept.lst
-rw-r--r-- 1 oracle oinstall  166 Feb  7 01:06 tnsnames.ora
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ cat tnsnames.ora 
TQ1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = tq1)
    )
  )

[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ vim tnsnames.ora 
TQ1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = tq1)
    )
  )

# On the standby (tq1), add two aliases: `tqdb` and `tqdb_adg`
tqdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (FAILOVER = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = tqdb)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )



tqdb_adg =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = tqdb_adg)
    )
  )


[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ tnsping tqdb

TNS Ping Utility for Linux: Version 19.0.0.0.0 - Production on 07-MAR-2020 03:46:01

Copyright (c) 1997, 2019, Oracle.  All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-cluster-scan)(PORT = 1521)) (LOAD_BALANCE = yes) (FAILOVER = yes) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = tqdb) (FAILOVER_MODE = (TYPE = SELECT) (METHOD = BASIC) (RETRIES = 180) (DELAY = 5))))
OK (10 msec)
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ tnsping tqdb_adg

TNS Ping Utility for Linux: Version 19.0.0.0.0 - Production on 07-MAR-2020 03:46:05

Copyright (c) 1997, 2019, Oracle.  All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = tqdb_adg)))
OK (0 msec)
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/network/admin]$ 
</code></pre>
</blockquote>
<h2>6. Primary (RAC) -> standby: copy the primary's password file to the standby</h2>
<blockquote><p>
  -- 1. On the primary (RAC), use <code>asmcmd</code> to copy the password file out of ASM into the OS filesystem directory <code>/tmp</code></p>
<p>  -- Running on node 1 is enough</p>
<pre><code class="language-sql line-numbers">ASMCMD [+DATA/TQDB/PASSWORD] > pwd
+DATA/TQDB/PASSWORD

ASMCMD [+DATA/TQDB/PASSWORD] > ls -l
Type      Redund  Striped  Time             Sys  Name
PASSWORD  UNPROT  COARSE   FEB 14 08:00:00  Y    pwdtqdb.256.1032336929
PASSWORD  UNPROT  COARSE   FEB 14 08:00:00  Y    pwdtqdb.257.1032337993

ASMCMD [+DATA/TQDB/PASSWORD] > cp pwdtqdb.257.1032337993 /tmp
copying +DATA/TQDB/PASSWORD/pwdtqdb.257.1032337993 -> /tmp/pwdtqdb.257.1032337993
</code></pre>
<p>  -- 2. Copy the password file from <code>/tmp</code> to the standby's (tqdb_adg) <code>$ORACLE_HOME/dbs</code> directory</p>
<pre><code class="language-sql line-numbers">[grid@tqdb21: /tmp]$ ll pwdtqdb.257.1032337993 
-rw-r----- 1 grid oinstall 2048 Mar  7 04:37 pwdtqdb.257.1032337993
[grid@tqdb21: /tmp]$ scp pwdtqdb.257.1032337993 oracle@tq1:/u01/app/oracle/product/19c/dbhome/dbs/
The authenticity of host 'tq1 (192.168.6.10)' can't be established.
ECDSA key fingerprint is SHA256:zSacI7xtzLJVQgn+yoHHru1SMS2F9y5w1jpSPkNIuSI.
ECDSA key fingerprint is MD5:f1:89:3e:c0:bd:2b:ea:8f:7e:9d:b1:cc:bf:05:dd:94.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tq1,192.168.6.10' (ECDSA) to the list of known hosts.
oracle@tq1's password: 
pwdtqdb.257.1032337993                                                                                                                                               100% 2048   342.6KB/s   00:00    
[grid@tqdb21: /tmp]$ 
</code></pre>
<p>  -- 3. Rename the parameter file and password file to follow the standby instance's naming convention</p>
<pre><code class="language-sql line-numbers">[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ ll -th
total 24K
-rw-r----- 1 oracle oinstall 2.0K Mar  7 04:48 pwdtqdb.257.1032337993
-rw-rw---- 1 oracle asmadmin 1.6K Mar  7 04:15 hc_tq1.dat
-rw-r--r-- 1 oracle asmadmin  941 Feb  6 17:38 inittq1.ora
-rw-r----- 1 oracle oinstall 2.0K Jan 17 21:43 orapwtq1
-rw-r----- 1 oracle asmadmin   24 Jan 17 21:27 lkTQ1
-rw-r--r-- 1 oracle oinstall 3.1K May 14  2015 init.ora
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ mv pwdtqdb.257.1032337993 orapwtqdb_adg
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ ll -th
total 24K
-rw-r----- 1 oracle oinstall 2.0K Mar  7 04:48 orapwtqdb_adg
-rw-rw---- 1 oracle asmadmin 1.6K Mar  7 04:15 hc_tq1.dat
-rw-r--r-- 1 oracle asmadmin  941 Feb  6 17:38 inittq1.ora
-rw-r----- 1 oracle oinstall 2.0K Jan 17 21:43 orapwtq1
-rw-r----- 1 oracle asmadmin   24 Jan 17 21:27 lkTQ1
-rw-r--r-- 1 oracle oinstall 3.1K May 14  2015 init.ora
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ 
</code></pre>
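<p>  Alternatively (a sketch; it assumes the <code>pwcopy</code> command is available in this ASMCMD release), steps 1 and 3 can be collapsed into a single copy that writes the target name directly:</p>
<pre><code class="language-sql line-numbers">-- Copy the password file out of ASM straight to the standby-style name,
-- avoiding the separate cp + mv:
ASMCMD> pwcopy +DATA/TQDB/PASSWORD/pwdtqdb.257.1032337993 /tmp/orapwtqdb_adg
</code></pre>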
<p>  Operation record:</p>
<pre><code class="language-sql line-numbers">-- 1. On the primary (RAC), use `asmcmd` to copy the password file out of ASM into the OS filesystem directory `/tmp`
-- Running on node 1 is enough
[grid@tqdb21: ~]$ asmcmd -p
ASMCMD [+] > ls -l
State    Type    Rebal  Name
MOUNTED  EXTERN  N      DATA/
MOUNTED  NORMAL  N      OCR/
ASMCMD [+] > cd DATA
ASMCMD [+DATA] > ls -l
Type  Redund  Striped  Time  Sys  Name
                             N    TQDB/
                             N    archivelog/
ASMCMD [+DATA] > cd TQDB
ASMCMD [+DATA/TQDB] > ls -l
Type  Redund  Striped  Time  Sys  Name
                             Y    ARCHIVELOG/
                             Y    CONTROLFILE/
                             Y    DATAFILE/
                             Y    ONLINELOG/
                             Y    PARAMETERFILE/
                             Y    PASSWORD/
                             Y    TEMPFILE/
ASMCMD [+DATA/TQDB] > cd PASSWORD
ASMCMD [+DATA/TQDB/PASSWORD] > ls -l
Type      Redund  Striped  Time             Sys  Name
PASSWORD  UNPROT  COARSE   FEB 14 08:00:00  Y    pwdtqdb.256.1032336929
PASSWORD  UNPROT  COARSE   FEB 14 08:00:00  Y    pwdtqdb.257.1032337993
ASMCMD [+DATA/TQDB/PASSWORD] > pwd
+DATA/TQDB/PASSWORD
ASMCMD [+DATA/TQDB/PASSWORD] > cp pwdtqdb.257.1032337993 /tmp
copying +DATA/TQDB/PASSWORD/pwdtqdb.257.1032337993 -> /tmp/pwdtqdb.257.1032337993
ASMCMD [+DATA/TQDB/PASSWORD] > quit
[grid@tqdb21: ~]$ 

-- 2. Copy the password file from `/tmp` to the standby (tqdb_adg) `$ORACLE_HOME/dbs` directory
[grid@tqdb21: /tmp]$ ll pwdtqdb.257.1032337993 
-rw-r----- 1 grid oinstall 2048 Mar  7 04:37 pwdtqdb.257.1032337993
[grid@tqdb21: /tmp]$ scp pwdtqdb.257.1032337993 oracle@tq1:/u01/app/oracle/product/19c/dbhome/dbs/
The authenticity of host 'tq1 (192.168.6.10)' can't be established.
ECDSA key fingerprint is SHA256:zSacI7xtzLJVQgn+yoHHru1SMS2F9y5w1jpSPkNIuSI.
ECDSA key fingerprint is MD5:f1:89:3e:c0:bd:2b:ea:8f:7e:9d:b1:cc:bf:05:dd:94.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tq1,192.168.6.10' (ECDSA) to the list of known hosts.
oracle@tq1's password: 
pwdtqdb.257.1032337993                                                                                                                                               100% 2048   342.6KB/s   00:00    
[grid@tqdb21: /tmp]$ 

-- 3. Rename the parameter file and the password file to follow the standby instance naming convention
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ ll -th
total 24K
-rw-r----- 1 oracle oinstall 2.0K Mar  7 04:48 pwdtqdb.257.1032337993
-rw-rw---- 1 oracle asmadmin 1.6K Mar  7 04:15 hc_tq1.dat
-rw-r--r-- 1 oracle asmadmin  941 Feb  6 17:38 inittq1.ora
-rw-r----- 1 oracle oinstall 2.0K Jan 17 21:43 orapwtq1
-rw-r----- 1 oracle asmadmin   24 Jan 17 21:27 lkTQ1
-rw-r--r-- 1 oracle oinstall 3.1K May 14  2015 init.ora
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ mv pwdtqdb.257.1032337993 orapwtqdb_adg
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ ll -th
total 24K
-rw-r----- 1 oracle oinstall 2.0K Mar  7 04:48 orapwtqdb_adg
-rw-rw---- 1 oracle asmadmin 1.6K Mar  7 04:15 hc_tq1.dat
-rw-r--r-- 1 oracle asmadmin  941 Feb  6 17:38 inittq1.ora
-rw-r----- 1 oracle oinstall 2.0K Jan 17 21:43 orapwtq1
-rw-r----- 1 oracle asmadmin   24 Jan 17 21:27 lkTQ1
-rw-r--r-- 1 oracle oinstall 3.1K May 14  2015 init.ora
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ 


</code></pre>
</blockquote>
<h2>7. [Standby]: Add a static listener entry for the standby</h2>
<blockquote><p>
  -- 1. [Standby]: Add a static listener entry</p>
<pre><code class="language-sql line-numbers">-- 1. [Standby]: Add a static listener entry
[grid@tq1: /u01/app/19c/grid/network/admin]$ vim listener.ora 
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

# Static registration for the standby instance
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/19c/dbhome)
      (SID_NAME = tqdb_adg)
    )
  )
[grid@tq1: /u01/app/19c/grid/network/admin]$ 
</code></pre>
<p>  -- 2. Restart the listener on the standby</p>
<pre><code class="language-sql line-numbers">-- 2. Restart the listener on the standby
# srvctl stop listener 
# srvctl start listener 
# crsctl stat res -t

grid$ lsnrctl status
grid$ lsnrctl service
</code></pre>
<p>  Operation log:</p>
<pre><code class="language-sql line-numbers">-- 1. [Standby]: Add a static listener entry
[grid@tq1: /u01/app/19c/grid/network/admin]$ vim listener.ora 
#Backup file is  /u01/app/grid/crsdata/tq1/output/listener.ora.bak.tq1.grid line added by Agent
# listener.ora Network Configuration File: /u01/app/19c/grid/network/admin/listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=ON            # line added by Agent


# Static registration for the standby instance
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/19c/dbhome)
      (SID_NAME = tqdb_adg)
    )
  )

[grid@tq1: /u01/app/19c/grid/network/admin]$ 


-- 2. Restart the listener on the standby
[grid@tq1: /u01/app/19c/grid/network/admin]$ crsctl stat res -t    
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       tq1                      STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       tq1                      STABLE
ora.asm
               ONLINE  ONLINE       tq1                      Started,STABLE
ora.ons
               OFFLINE OFFLINE      tq1                      STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       tq1                      STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       tq1                      STABLE
ora.tq1.db
      1        ONLINE  OFFLINE                               STABLE
--------------------------------------------------------------------------------
[grid@tq1: /u01/app/19c/grid/network/admin]$ 
[grid@tq1: /u01/app/19c/grid/network/admin]$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 07-MAR-2020 07:09:03

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=tq1)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                07-MAR-2020 07:08:38
Uptime                    0 days 0 hr. 0 min. 24 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19c/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/tq1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=tq1)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "tqdb_adg" has 1 instance(s).
  Instance "tqdb_adg", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[grid@tq1: /u01/app/19c/grid/network/admin]$ lsnrctl service

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 07-MAR-2020 07:09:09

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=tq1)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "+ASM_DATA" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "tqdb_adg" has 1 instance(s).
  Instance "tqdb_adg", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0
         LOCAL SERVER
The command completed successfully
[grid@tq1: /u01/app/19c/grid/network/admin]$ 
</code></pre>
</blockquote>
<h2>8. [Standby]: Create the <code>adump</code> directory and the archive directory <code>+DATA/archivelog</code></h2>
<blockquote><p>
  The audit dump (<code>adump</code>) directory must be created manually on the standby; otherwise, DUPLICATE fails with an error.</p>
<pre><code class="language-bash line-numbers">-- 1. Check the primary's `adump` directory
06:53:48 sys@TQDB(tqdb21)> show parameter audit_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------------------------------
audit_file_dest                      string      /u01/app/oracle/admin/tqdb/adump
06:53:49 sys@TQDB(tqdb21)> 

-- Run on the standby: create the `adump` directory
oracle$ mkdir -p /u01/app/oracle/admin/tqdb/adump/

-- 2. Create the archive directory `+DATA/archivelog`
grid$ asmcmd -p
ASMCMD [+DATA] > mkdir archivelog
ASMCMD [+DATA/archivelog] > ls -l
ASMCMD [+DATA/archivelog] > 
</code></pre>
<blockquote><p>
    Note: because the standby also uses ASM, the datafile and archive directories both live in the <code>+DATA</code> disk group, so no corresponding OS directories are needed.</p>
<p>    With OS filesystem storage instead, the datafile and archive directories would also have to be created on the filesystem. For example:</p>
<pre><code class="language-sql line-numbers">mkdir -p /u01/app/oracle/admin/std/adump/

mkdir -p /u01/arch

mkdir -p /u01/app/oracle/oradata/std
mkdir -p /u01/app/oracle/oradata/std/datafile/
mkdir -p /u01/app/oracle/oradata/std/tempfile/
mkdir -p /u01/app/oracle/oradata/std/onlinelog/
</code></pre>
</blockquote>
<p>  Operation log:</p>
<pre><code class="language-bash line-numbers">-- 1. Run on the standby: create the `adump` directory
[oracle@tq1: /u01/app/oracle/admin]$ ll
total 4
drwxr-x--- 6 oracle oinstall 4096 Jan 17 21:27 tq1
[oracle@tq1: /u01/app/oracle/admin]$ 
[oracle@tq1: /u01/app/oracle/admin]$ 
[oracle@tq1: /u01/app/oracle/admin]$ mkdir -p /u01/app/oracle/admin/tqdb/adump/
[oracle@tq1: /u01/app/oracle/admin]$ ll
total 8
drwxr-x--- 6 oracle oinstall 4096 Jan 17 21:27 tq1
drwxr-xr-x 3 oracle oinstall 4096 Mar  5 04:54 tqdb
[oracle@tq1: /u01/app/oracle/admin]$ cd tqdb/
[oracle@tq1: /u01/app/oracle/admin/tqdb]$ ll
total 4
drwxr-xr-x 2 oracle oinstall 4096 Mar  5 04:54 adump
[oracle@tq1: /u01/app/oracle/admin/tqdb]$ 

-- 2. Create the archive directory `+DATA/archivelog`
[grid@tq1: ~]$ asmcmd -p
ASMCMD [+DATA] > mkdir archivelog
</code></pre>
</blockquote>
<h2>9. [Standby]: Edit the standby instance pfile</h2>
<blockquote>
<pre><code class="language-sql line-numbers">-- [Standby]: Edit the standby instance pfile
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ vim inittqdb_adg.ora
*.audit_file_dest='/u01/app/oracle/admin/tqdb/adump'
*.db_unique_name='tqdb_adg'
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(tqdb_adg,tqdb)'
*.log_archive_dest_1='LOCATION=+DATA/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=tqdb_adg'
*.log_archive_dest_2='SERVICE=tqdb ASYNC LGWR VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=tqdb'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.arc'
*.standby_file_management='AUTO'
*.fal_server='tqdb'
*.fal_client='tqdb_adg'
*.control_files='+DATA'
*.db_create_file_dest='+DATA'
*.db_name='tqdb'
*.pga_aggregate_target=379M
*.processes=300
*.sga_target=1136M
*.db_block_size=8192
*.compatible="19.0.0"
*.audit_trail="DB"
*.open_cursors=300
*._optimizer_use_auto_indexes="OFF"
[oracle@tq1: /u01/app/oracle/product/19c/dbhome/dbs]$ 
</code></pre>
</blockquote>
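<p>For redo to actually ship, the primary's Data Guard parameters must mirror these standby settings. They are presumably configured in an earlier step of this walkthrough; the sketch below shows the assumed primary-side counterparts, inferred from the standby pfile above rather than taken from the transcript:</p>
<pre><code class="language-sql line-numbers">-- On the primary (assumed sketch; values inferred from the standby pfile)
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(tqdb,tqdb_adg)' SCOPE=both SID='*';
ALTER SYSTEM SET log_archive_dest_2='SERVICE=tqdb_adg ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=tqdb_adg' SCOPE=both SID='*';
ALTER SYSTEM SET log_archive_dest_state_2='enable' SCOPE=both SID='*';
ALTER SYSTEM SET fal_server='tqdb_adg' SCOPE=both SID='*';
ALTER SYSTEM SET standby_file_management='AUTO' SCOPE=both SID='*';
</code></pre>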
<h2>10. [Standby]: Start the standby to <code>nomount</code> with the <code>pfile</code> above</h2>
<blockquote>
<pre><code class="language-sql line-numbers">-- [Standby]: Start the standby to `nomount` with the `pfile` above
[oracle@tq1: ~]$ echo $ORACLE_SID
tqdb_adg
[oracle@tq1: ~]$ echo $DB_UNIQUE_NAME
tqdb_adg
[oracle@tq1: ~]$ 
[oracle@tq1: ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 7 08:15:20 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.


08:15:23 idle> startup nomount pfile='/u01/app/oracle/product/19c/dbhome/dbs/inittqdb_adg.ora';
ORACLE instance started.

Total System Global Area 1191181696 bytes
Fixed Size                  8895872 bytes
Variable Size             318767104 bytes
Database Buffers          855638016 bytes
Redo Buffers                7880704 bytes
08:16:00 idle> 
08:17:17 idle> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
[oracle@tq1: ~]$ 
</code></pre>
</blockquote>
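<p>Before handing the instance to RMAN, it is worth confirming it really is in <code>NOMOUNT</code>; a standard <code>v$instance</code> check (not part of the original transcript) looks like this:</p>
<pre><code class="language-sql line-numbers">-- On the standby, connected / as sysdba
SELECT instance_name, status FROM v$instance;
-- STATUS should be STARTED, which corresponds to the NOMOUNT state
</code></pre>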
<h2>11. [Primary RAC]: Prepare the primary's connection to the auxiliary instance</h2>
<blockquote><p>
  -- 1. [Primary RAC]: Check the RMAN configuration</p>
<pre><code class="language-sql line-numbers">oracle$ rman target /
RMAN> show all;
</code></pre>
<p>  -- 2. Verify logins as oracle from the standby</p>
<pre><code class="language-sql line-numbers">oracle$ sqlplus sys/Oracle123@tqdb as sysdba
oracle$ sqlplus sys/Oracle123@tqdb21:1521/tqdb as sysdba
oracle$ sqlplus sys/Oracle123@tqdb22:1521/tqdb as sysdba 
oracle$ sqlplus sys/Oracle123@tqdb_adg as sysdba
oracle$ sqlplus sys/Oracle123@tq1:1521/tqdb_adg as sysdba 
</code></pre>
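<p>The net service names <code>tqdb</code> and <code>tqdb_adg</code> used above are assumed to resolve through <code>tnsnames.ora</code> on both hosts. The entries are not shown in the transcript; a sketch might look like this (the SCAN hostname is a placeholder; only <code>tq1</code>, the service names, and the port come from this environment):</p>
<pre><code class="language-sql line-numbers"># Assumed tnsnames.ora entries (SCAN hostname is a placeholder)
TQDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tqdb-scan)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = tqdb))
  )

TQDB_ADG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tq1)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = tqdb_adg))
  )
</code></pre>
<p>Because <code>tqdb_adg</code> is registered statically with the listener, this entry works even while the standby instance is still in <code>NOMOUNT</code>.</p>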
<p>  -- 3. [Primary RAC]: Connect from the primary to the auxiliary instance</p>
<pre><code class="language-sql line-numbers">-- Node 1
[oracle@tqdb21: ~]$ rman target / auxiliary sys/Oracle123@tqdb_adg  
-- or
[oracle@tqdb21: ~]$ rman target sys/Oracle123@tqdb auxiliary sys/Oracle123@tqdb_adg
</code></pre>
<p>  Operation log:</p>
<pre><code class="language-sql line-numbers">-- 1. [Primary RAC]: Check the RMAN configuration
[oracle@tqdb21: ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Sat Mar 7 07:59:29 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TQDB (DBID=3966209240)

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name TQDB are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/19c/dbhome/dbs/snapcf_tqdb1.f'; # default

RMAN> 

-- 2. Verify logins as oracle from the standby
[oracle@tq1: ~]$ sqlplus sys/Oracle123@tqdb as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 7 07:57:16 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

07:57:16 sys@TQDB(tqdb22)> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[oracle@tq1: ~]$ sqlplus sys/Oracle123@tqdb as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 7 07:57:23 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0

07:57:23 sys@TQDB(tqdb21)> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
[oracle@tq1: ~]$ 

-- 
[oracle@tq1: ~]$ sqlplus sys/Oracle123@tqdb21:1521/tqdb as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 7 08:04:38 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0

08:04:38 sys@TQDB(tqdb21)> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
[oracle@tq1: ~]$ sqlplus sys/Oracle123@tqdb22:1521/tqdb as sysdba 

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 7 08:05:03 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

08:05:03 sys@TQDB(tqdb22)> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[oracle@tq1: ~]$ 

-- 
[oracle@tq1: ~]$ sqlplus sys/Oracle123@tqdb_adg as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 7 08:06:44 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.


08:06:47 idle> 
08:06:49 idle> quit
Disconnected
[oracle@tq1: ~]$ sqlplus sys/Oracle123@tq1:1521/tqdb_adg as sysdba       

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 7 08:07:14 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.


08:07:20 idle> quit
Disconnected
[oracle@tq1: ~]$ 

-- 3. [Primary RAC]: Connect from the primary to the auxiliary instance
[oracle@tqdb21: ~]$ rman target sys/Oracle123@tqdb auxiliary sys/Oracle123@tqdb_adg

Recovery Manager: Release 19.0.0.0.0 - Production on Sat Mar 7 08:21:14 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TQDB (DBID=3966209240)
connected to auxiliary database: TQDB (not mounted)

RMAN> quit


Recovery Manager complete.
-- or
[oracle@tqdb21: ~]$ rman target / auxiliary sys/Oracle123@tqdb_adg                  

Recovery Manager: Release 19.0.0.0.0 - Production on Sat Mar 7 08:22:45 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TQDB (DBID=3966209240)
connected to auxiliary database: TQDB (not mounted)

RMAN> 
</code></pre>
</blockquote>
<h2>12. [Primary RAC]: Create the standby with <code>DUPLICATE</code></h2>
<blockquote>
<pre><code class="language-sql line-numbers">-- Run the RMAN script
run
{ 
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate AUXILIARY channel c4 type disk;
allocate AUXILIARY channel c5 type disk;
allocate AUXILIARY channel c6 type disk;
DUPLICATE TARGET DATABASE
FOR STANDBY
FROM ACTIVE DATABASE
DORECOVER
NOFILENAMECHECK;
release channel c1;
release channel c2;
release channel c3;
release channel c4;
release channel c5;
release channel c6;
}

</code></pre>
<blockquote><p>
    A brief explanation of each clause in the command:</p>
<p>    · FOR STANDBY: tells DUPLICATE that the copy will serve as a standby database, so the DBID is not changed.</p>
<p>    · FROM ACTIVE DATABASE: DUPLICATE creates the standby directly from the live source datafiles, with no intermediate backup step.</p>
<p>    · DORECOVER: DUPLICATE includes a recovery step that brings the standby up to the current point in time.</p>
<p>    · SPFILE: would allow the spfile copied from the source to be adjusted during the duplicate; it is not used in this run, where RMAN instead issues <code>create spfile from memory</code> (visible in the log below).</p>
<p>    · NOFILENAMECHECK: skips checking whether target file locations clash with the source's.
  </p></blockquote>
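<p>Once <code>DUPLICATE ... DORECOVER</code> finishes, the standby is left mounted. The usual next step, which falls outside this excerpt, is to verify the role and start redo apply; a hedged sketch:</p>
<pre><code class="language-sql line-numbers">-- On the standby, after the duplicate completes
SELECT database_role, open_mode FROM v$database;  -- expect PHYSICAL STANDBY / MOUNTED

-- For Active Data Guard: open read only, then start managed recovery
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
</code></pre>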
<p>  Operation log:</p>
<pre><code class="language-sql line-numbers">-- Run the RMAN script
[oracle@tqdb21: ~]$ rman target / auxiliary sys/Oracle123@tqdb_adg

Recovery Manager: Release 19.0.0.0.0 - Production on Sat Mar 7 08:42:25 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TQDB (DBID=3966209240)
connected to auxiliary database: TQDB (not mounted)

RMAN> 

RMAN> 

RMAN> run
2> { 
3> allocate channel c1 type disk;
4> allocate channel c2 type disk;
5> allocate channel c3 type disk;
6> allocate AUXILIARY channel c4 type disk;
7> allocate AUXILIARY channel c5 type disk;
8> allocate AUXILIARY channel c6 type disk;
9> DUPLICATE TARGET DATABASE
10> FOR STANDBY
11> FROM ACTIVE DATABASE
12> DORECOVER
13> NOFILENAMECHECK;
14> release channel c1;
15> release channel c2;
16> release channel c3;
17> release channel c4;
18> release channel c5;
19> release channel c6;
20> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=32 instance=tqdb1 device type=DISK

allocated channel: c2
channel c2: SID=29 instance=tqdb1 device type=DISK

allocated channel: c3
channel c3: SID=34 instance=tqdb1 device type=DISK

allocated channel: c4
channel c4: SID=45 device type=DISK

allocated channel: c5
channel c5: SID=46 device type=DISK

allocated channel: c6
channel c6: SID=47 device type=DISK

Starting Duplicate Db at 2020-03-07 08:42:40
current log archived

contents of Memory Script:
{
   backup as copy reuse
   passwordfile auxiliary format  '/u01/app/oracle/product/19c/dbhome/dbs/orapwtqdb_adg'   ;
}
executing Memory Script

Starting backup at 2020-03-07 08:42:42
Finished backup at 2020-03-07 08:42:43
duplicating Online logs to Oracle Managed File (OMF) location
duplicating Datafiles to Oracle Managed File (OMF) location

contents of Memory Script:
{
   backup as copy current controlfile for standby auxiliary format  '+DATA/TQDB_ADG/CONTROLFILE/current.275.1034412163';
   sql clone "create spfile from memory";
   shutdown clone immediate;
   startup clone nomount;
   sql clone "alter system set  control_files = 
  ''+DATA/TQDB_ADG/CONTROLFILE/current.275.1034412163'' comment=
 ''Set by RMAN'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

Starting backup at 2020-03-07 08:42:44
channel c1: starting datafile copy
copying standby control file
output file name=+DATA/TQDB_ADG/CONTROLFILE/current.275.1034412163 tag=TAG20200307T084244
channel c1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 2020-03-07 08:42:48

sql statement: create spfile from memory

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1191181696 bytes

Fixed Size                     8895872 bytes
Variable Size                318767104 bytes
Database Buffers             855638016 bytes
Redo Buffers                   7880704 bytes
allocated channel: c4
channel c4: SID=40 device type=DISK
allocated channel: c5
channel c5: SID=41 device type=DISK
allocated channel: c6
channel c6: SID=46 device type=DISK

sql statement: alter system set  control_files =   ''+DATA/TQDB_ADG/CONTROLFILE/current.275.1034412163'' comment= ''Set by RMAN'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1191181696 bytes

Fixed Size                     8895872 bytes
Variable Size                318767104 bytes
Database Buffers             855638016 bytes
Redo Buffers                   7880704 bytes
allocated channel: c4
channel c4: SID=40 device type=DISK
allocated channel: c5
channel c5: SID=41 device type=DISK
allocated channel: c6
channel c6: SID=43 device type=DISK

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database
Using previous duplicated file +DATA/TQDB_ADG/DATAFILE/system.270.1034411401 for datafile 1 with checkpoint SCN of 3556906
Using previous duplicated file +DATA/TQDB_ADG/DATAFILE/sysaux.269.1034411401 for datafile 2 with checkpoint SCN of 3556891
Using previous duplicated file +DATA/TQDB_ADG/DATAFILE/undotbs1.271.1034411401 for datafile 3 with checkpoint SCN of 3556925
Using previous duplicated file +DATA/TQDB_ADG/DATAFILE/undotbs2.272.1034411437 for datafile 4 with checkpoint SCN of 3557114
Using previous duplicated file +DATA/TQDB_ADG/DATAFILE/users.274.1034411447 for datafile 5 with checkpoint SCN of 3557139
Using previous duplicated file +DATA/TQDB_ADG/DATAFILE/tq.273.1034411445 for datafile 6 with checkpoint SCN of 3557132

contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   switch clone tempfile all;
   set newname for datafile  1 to 
 "+DATA/TQDB_ADG/DATAFILE/system.270.1034411401";
   set newname for datafile  2 to 
 "+DATA/TQDB_ADG/DATAFILE/sysaux.269.1034411401";
   set newname for datafile  3 to 
 "+DATA/TQDB_ADG/DATAFILE/undotbs1.271.1034411401";
   set newname for datafile  4 to 
 "+DATA/TQDB_ADG/DATAFILE/undotbs2.272.1034411437";
   set newname for datafile  5 to 
 "+DATA/TQDB_ADG/DATAFILE/users.274.1034411447";
   set newname for datafile  6 to 
 "+DATA/TQDB_ADG/DATAFILE/tq.273.1034411445";
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to +DATA in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

sql statement: alter system archive log current
current log archived
RMAN-05158: WARNING: auxiliary (archivelog) file name +DATA/archivelog/1_45_1032338008.arc conflicts with a file used by the target database
RMAN-05158: WARNING: auxiliary (archivelog) file name +DATA/archivelog/2_40_1032338008.arc conflicts with a file used by the target database
RMAN-05158: WARNING: auxiliary (archivelog) file name +DATA/archivelog/1_46_1032338008.arc conflicts with a file used by the target database
RMAN-05158: WARNING: auxiliary (archivelog) file name +DATA/archivelog/2_41_1032338008.arc conflicts with a file used by the target database
RMAN-05158: WARNING: auxiliary (archivelog) file name +DATA/archivelog/1_47_1032338008.arc conflicts with a file used by the target database
RMAN-05158: WARNING: auxiliary (archivelog) file name +DATA/archivelog/2_42_1032338008.arc conflicts with a file used by the target database
RMAN-05158: WARNING: auxiliary (archivelog) file name +DATA/archivelog/2_43_1032338008.arc conflicts with a file used by the target database
RMAN-05158: WARNING: auxiliary (archivelog) file name +DATA/archivelog/1_48_1032338008.arc conflicts with a file used by the target database
RMAN-05158: WARNING: auxiliary (archivelog) file name +DATA/archivelog/1_49_1032338008.arc conflicts with a file used by the target database
RMAN-05158: WARNING: auxiliary (archivelog) file name +DATA/archivelog/2_44_1032338008.arc conflicts with a file used by the target database

contents of Memory Script:
{
   backup as copy reuse
   archivelog like  "+DATA/archivelog/1_45_1032338008.arc" auxiliary format 
 "+DATA/archivelog/1_45_1032338008.arc"   archivelog like 
 "+DATA/archivelog/2_40_1032338008.arc" auxiliary format 
 "+DATA/archivelog/2_40_1032338008.arc"   archivelog like 
 "+DATA/archivelog/1_46_1032338008.arc" auxiliary format 
 "+DATA/archivelog/1_46_1032338008.arc"   archivelog like 
 "+DATA/archivelog/2_41_1032338008.arc" auxiliary format 
 "+DATA/archivelog/2_41_1032338008.arc"   archivelog like 
 "+DATA/archivelog/1_47_1032338008.arc" auxiliary format 
 "+DATA/archivelog/1_47_1032338008.arc"   archivelog like 
 "+DATA/archivelog/2_42_1032338008.arc" auxiliary format 
 "+DATA/archivelog/2_42_1032338008.arc"   archivelog like 
 "+DATA/archivelog/2_43_1032338008.arc" auxiliary format 
 "+DATA/archivelog/2_43_1032338008.arc"   archivelog like 
 "+DATA/archivelog/1_48_1032338008.arc" auxiliary format 
 "+DATA/archivelog/1_48_1032338008.arc"   archivelog like 
 "+DATA/archivelog/1_49_1032338008.arc" auxiliary format 
 "+DATA/archivelog/1_49_1032338008.arc"   archivelog like 
 "+DATA/archivelog/2_44_1032338008.arc" auxiliary format 
 "+DATA/archivelog/2_44_1032338008.arc"   ;
   catalog clone archivelog  "+DATA/archivelog/1_45_1032338008.arc";
   catalog clone archivelog  "+DATA/archivelog/2_40_1032338008.arc";
   catalog clone archivelog  "+DATA/archivelog/1_46_1032338008.arc";
   catalog clone archivelog  "+DATA/archivelog/2_41_1032338008.arc";
   catalog clone archivelog  "+DATA/archivelog/1_47_1032338008.arc";
   catalog clone archivelog  "+DATA/archivelog/2_42_1032338008.arc";
   catalog clone archivelog  "+DATA/archivelog/2_43_1032338008.arc";
   catalog clone archivelog  "+DATA/archivelog/1_48_1032338008.arc";
   catalog clone archivelog  "+DATA/archivelog/1_49_1032338008.arc";
   catalog clone archivelog  "+DATA/archivelog/2_44_1032338008.arc";
   catalog clone datafilecopy  "+DATA/TQDB_ADG/DATAFILE/system.270.1034411401", 
 "+DATA/TQDB_ADG/DATAFILE/sysaux.269.1034411401", 
 "+DATA/TQDB_ADG/DATAFILE/undotbs1.271.1034411401", 
 "+DATA/TQDB_ADG/DATAFILE/undotbs2.272.1034411437", 
 "+DATA/TQDB_ADG/DATAFILE/users.274.1034411447", 
 "+DATA/TQDB_ADG/DATAFILE/tq.273.1034411445";
   switch clone datafile  1 to datafilecopy 
 "+DATA/TQDB_ADG/DATAFILE/system.270.1034411401";
   switch clone datafile  2 to datafilecopy 
 "+DATA/TQDB_ADG/DATAFILE/sysaux.269.1034411401";
   switch clone datafile  3 to datafilecopy 
 "+DATA/TQDB_ADG/DATAFILE/undotbs1.271.1034411401";
   switch clone datafile  4 to datafilecopy 
 "+DATA/TQDB_ADG/DATAFILE/undotbs2.272.1034411437";
   switch clone datafile  5 to datafilecopy 
 "+DATA/TQDB_ADG/DATAFILE/users.274.1034411447";
   switch clone datafile  6 to datafilecopy 
 "+DATA/TQDB_ADG/DATAFILE/tq.273.1034411445";
}
executing Memory Script

Starting backup at 2020-03-07 08:44:03
channel c1: starting archived log copy
input archived log thread=1 sequence=47 RECID=78 STAMP=1034412160
channel c2: starting archived log copy
input archived log thread=2 sequence=42 RECID=79 STAMP=1034412161
channel c3: starting archived log copy
input archived log thread=1 sequence=45 RECID=74 STAMP=1034411473
output file name=+DATA/archivelog/1_47_1032338008.arc RECID=0 STAMP=0
channel c1: archived log copy complete, elapsed time: 00:00:01
channel c1: starting archived log copy
input archived log thread=2 sequence=40 RECID=75 STAMP=1034411474
output file name=+DATA/archivelog/2_42_1032338008.arc RECID=0 STAMP=0
channel c2: archived log copy complete, elapsed time: 00:00:01
channel c2: starting archived log copy
input archived log thread=2 sequence=43 RECID=80 STAMP=1034412236
output file name=+DATA/archivelog/1_45_1032338008.arc RECID=0 STAMP=0
channel c3: archived log copy complete, elapsed time: 00:00:01
channel c3: starting archived log copy
input archived log thread=1 sequence=48 RECID=81 STAMP=1034412238
output file name=+DATA/archivelog/2_40_1032338008.arc RECID=0 STAMP=0
channel c1: archived log copy complete, elapsed time: 00:00:02
channel c1: starting archived log copy
input archived log thread=1 sequence=46 RECID=76 STAMP=1034411479
output file name=+DATA/archivelog/2_43_1032338008.arc RECID=0 STAMP=0
channel c2: archived log copy complete, elapsed time: 00:00:02
channel c2: starting archived log copy
input archived log thread=2 sequence=41 RECID=77 STAMP=1034411480
output file name=+DATA/archivelog/1_48_1032338008.arc RECID=0 STAMP=0
channel c3: archived log copy complete, elapsed time: 00:00:02
channel c3: starting archived log copy
input archived log thread=2 sequence=44 RECID=83 STAMP=1034412242
output file name=+DATA/archivelog/1_46_1032338008.arc RECID=0 STAMP=0
channel c1: archived log copy complete, elapsed time: 00:00:01
channel c1: starting archived log copy
input archived log thread=1 sequence=49 RECID=82 STAMP=1034412241
output file name=+DATA/archivelog/2_41_1032338008.arc RECID=0 STAMP=0
channel c2: archived log copy complete, elapsed time: 00:00:01
output file name=+DATA/archivelog/2_44_1032338008.arc RECID=0 STAMP=0
channel c3: archived log copy complete, elapsed time: 00:00:01
output file name=+DATA/archivelog/1_49_1032338008.arc RECID=0 STAMP=0
channel c1: archived log copy complete, elapsed time: 00:00:01
Finished backup at 2020-03-07 08:44:08

cataloged archived log
archived log file name=+DATA/archivelog/1_45_1032338008.arc RECID=1 STAMP=1034412248

cataloged archived log
archived log file name=+DATA/archivelog/2_40_1032338008.arc RECID=2 STAMP=1034412249

cataloged archived log
archived log file name=+DATA/archivelog/1_46_1032338008.arc RECID=3 STAMP=1034412249

cataloged archived log
archived log file name=+DATA/archivelog/2_41_1032338008.arc RECID=4 STAMP=1034412249

cataloged archived log
archived log file name=+DATA/archivelog/1_47_1032338008.arc RECID=5 STAMP=1034412249

cataloged archived log
archived log file name=+DATA/archivelog/2_42_1032338008.arc RECID=6 STAMP=1034412249

cataloged archived log
archived log file name=+DATA/archivelog/2_43_1032338008.arc RECID=7 STAMP=1034412249

cataloged archived log
archived log file name=+DATA/archivelog/1_48_1032338008.arc RECID=8 STAMP=1034412250

cataloged archived log
archived log file name=+DATA/archivelog/1_49_1032338008.arc RECID=9 STAMP=1034412250

cataloged archived log
archived log file name=+DATA/archivelog/2_44_1032338008.arc RECID=10 STAMP=1034412250

cataloged datafile copy
datafile copy file name=+DATA/TQDB_ADG/DATAFILE/system.270.1034411401 RECID=1 STAMP=1034412250
cataloged datafile copy
datafile copy file name=+DATA/TQDB_ADG/DATAFILE/sysaux.269.1034411401 RECID=3 STAMP=1034412250
cataloged datafile copy
datafile copy file name=+DATA/TQDB_ADG/DATAFILE/undotbs1.271.1034411401 RECID=2 STAMP=1034412250
cataloged datafile copy
datafile copy file name=+DATA/TQDB_ADG/DATAFILE/undotbs2.272.1034411437 RECID=4 STAMP=1034412250
cataloged datafile copy
datafile copy file name=+DATA/TQDB_ADG/DATAFILE/users.274.1034411447 RECID=5 STAMP=1034412250
cataloged datafile copy
datafile copy file name=+DATA/TQDB_ADG/DATAFILE/tq.273.1034411445 RECID=6 STAMP=1034412250

datafile 1 switched to datafile copy
input datafile copy RECID=1 STAMP=1034412250 file name=+DATA/TQDB_ADG/DATAFILE/system.270.1034411401

datafile 2 switched to datafile copy
input datafile copy RECID=3 STAMP=1034412250 file name=+DATA/TQDB_ADG/DATAFILE/sysaux.269.1034411401

datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=1034412250 file name=+DATA/TQDB_ADG/DATAFILE/undotbs1.271.1034411401

datafile 4 switched to datafile copy
input datafile copy RECID=4 STAMP=1034412250 file name=+DATA/TQDB_ADG/DATAFILE/undotbs2.272.1034411437

datafile 5 switched to datafile copy
input datafile copy RECID=5 STAMP=1034412250 file name=+DATA/TQDB_ADG/DATAFILE/users.274.1034411447

datafile 6 switched to datafile copy
input datafile copy RECID=6 STAMP=1034412250 file name=+DATA/TQDB_ADG/DATAFILE/tq.273.1034411445

contents of Memory Script:
{
   set until scn  3559659;
   recover
   standby
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 2020-03-07 08:44:11

starting media recovery

archived log for thread 1 with sequence 45 is already on disk as file +DATA/archivelog/1_45_1032338008.arc
archived log for thread 1 with sequence 46 is already on disk as file +DATA/archivelog/1_46_1032338008.arc
archived log for thread 1 with sequence 47 is already on disk as file +DATA/archivelog/1_47_1032338008.arc
archived log for thread 1 with sequence 48 is already on disk as file +DATA/archivelog/1_48_1032338008.arc
archived log for thread 1 with sequence 49 is already on disk as file +DATA/archivelog/1_49_1032338008.arc
archived log for thread 2 with sequence 40 is already on disk as file +DATA/archivelog/2_40_1032338008.arc
archived log for thread 2 with sequence 41 is already on disk as file +DATA/archivelog/2_41_1032338008.arc
archived log for thread 2 with sequence 42 is already on disk as file +DATA/archivelog/2_42_1032338008.arc
archived log for thread 2 with sequence 43 is already on disk as file +DATA/archivelog/2_43_1032338008.arc
archived log for thread 2 with sequence 44 is already on disk as file +DATA/archivelog/2_44_1032338008.arc
archived log file name=+DATA/archivelog/1_45_1032338008.arc thread=1 sequence=45
archived log file name=+DATA/archivelog/2_40_1032338008.arc thread=2 sequence=40
archived log file name=+DATA/archivelog/1_46_1032338008.arc thread=1 sequence=46
archived log file name=+DATA/archivelog/2_41_1032338008.arc thread=2 sequence=41
archived log file name=+DATA/archivelog/1_47_1032338008.arc thread=1 sequence=47
archived log file name=+DATA/archivelog/2_42_1032338008.arc thread=2 sequence=42
archived log file name=+DATA/archivelog/1_48_1032338008.arc thread=1 sequence=48
archived log file name=+DATA/archivelog/2_43_1032338008.arc thread=2 sequence=43
archived log file name=+DATA/archivelog/2_44_1032338008.arc thread=2 sequence=44
archived log file name=+DATA/archivelog/1_49_1032338008.arc thread=1 sequence=49
media recovery complete, elapsed time: 00:00:01
Finished recover at 2020-03-07 08:44:13

contents of Memory Script:
{
   delete clone force archivelog all;
}
executing Memory Script

deleted archived log
archived log file name=+DATA/archivelog/1_45_1032338008.arc RECID=1 STAMP=1034412248
deleted archived log
archived log file name=+DATA/archivelog/1_46_1032338008.arc RECID=3 STAMP=1034412249
deleted archived log
archived log file name=+DATA/archivelog/1_47_1032338008.arc RECID=5 STAMP=1034412249
deleted archived log
archived log file name=+DATA/archivelog/1_48_1032338008.arc RECID=8 STAMP=1034412250
deleted archived log
archived log file name=+DATA/archivelog/1_49_1032338008.arc RECID=9 STAMP=1034412250
deleted archived log
archived log file name=+DATA/archivelog/2_40_1032338008.arc RECID=2 STAMP=1034412249
deleted archived log
archived log file name=+DATA/archivelog/2_41_1032338008.arc RECID=4 STAMP=1034412249
deleted archived log
archived log file name=+DATA/archivelog/2_42_1032338008.arc RECID=6 STAMP=1034412249
Deleted 3 objects

deleted archived log
archived log file name=+DATA/archivelog/2_43_1032338008.arc RECID=7 STAMP=1034412249
Deleted 3 objects

deleted archived log
archived log file name=+DATA/archivelog/2_44_1032338008.arc RECID=10 STAMP=1034412250
Deleted 4 objects

Finished Duplicate Db at 2020-03-07 08:44:19

released channel: c1

released channel: c2

released channel: c3

released channel: c4

released channel: c5

released channel: c6

RMAN> 

RMAN> quit


Recovery Manager complete.
[oracle@tqdb21: ~]$ 


</code></pre>
</blockquote>
<h2>13. Standby: Verify That MRP Is Running on the Standby</h2>
<blockquote><p>
  Session transcript:</p>
<pre><code class="language-sql line-numbers">-- Standby: verify that MRP is running on the standby
[oracle@tq1: ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 7 08:53:10 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0

08:53:10 idle(tq1)> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

08:53:24 idle(tq1)> 
08:53:32 idle(tq1)> alter database open;

Database altered.

08:53:38 idle(tq1)> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY

08:53:51 idle(tq1)> conn / as sysdba
Connected.
08:53:55 sys@TQDB(tq1)> 
08:54:10 sys@TQDB(tq1)> alter database recover managed standby database disconnect;

Database altered.

08:54:48 sys@TQDB(tq1)> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY WITH APPLY

08:55:07 sys@TQDB(tq1)> 


</code></pre>
</blockquote>
<h2>14. Testing ADG</h2>
<blockquote>
<pre><code class="language-sql line-numbers">-- On the primary, create a table `tq.copy_dba_objects`
09:15:49 sys@TQDB(tqdb21)> create table tq.copy_dba_objects as select * from dba_objects;

Table created.

09:16:59 sys@TQDB(tqdb21)> 

09:17:16 sys@TQDB(tqdb21)> conn tq/tq
Connected.
09:17:23 tq@TQDB(tqdb21)> select count(*) from copy_dba_objects;

  COUNT(*)
----------
     23579

09:17:25 tq@TQDB(tqdb21)> 

-- On the standby, query the table to verify the data
[oracle@tq1: ~]$ sqlplus tq/tq

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 7 09:22:47 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Sat Mar 07 2020 09:17:51 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0


09:22:50 tq@TQDB> select count(*) from copy_dba_objects;

  COUNT(*)
----------
     23579

09:22:54 tq@TQDB> 


</code></pre>
</blockquote>
<h2>15. Common ADG Commands</h2>
<blockquote>
<pre><code class="language-sql line-numbers">-- Stop MRP (managed recovery)
alter database recover managed standby database cancel;

-- Start the recovery process on the standby to apply redo shipped from the primary
-- (real-time apply is already the default, so USING CURRENT LOGFILE can be omitted);
ALTER DATABASE RECOVER managed standby database disconnect from session;

-- Equivalent form with the clause spelled out:
alter database recover managed standby database using current logfile disconnect from session;

-- Change the protection mode to Maximum Availability
alter database set standby database to maximize availability;

-- Check the Oracle ADG protection mode
select DATABASE_ROLE, open_mode, PROTECTION_MODE, PROTECTION_LEVEL from v$database;

-- Query v$dataguard_process to check redo transport from the primary and redo apply on the
-- standby (v$dataguard_process appeared in 12.2 and replaces v$managed_standby);
select role, thread#, sequence#, action from v$dataguard_process;

-- Query v$archived_log to watch the logs shipped from the primary accumulate
select NAME, DEST_ID, THREAD#, SEQUENCE#, ARCHIVED, APPLIED, DELETED, STATUS, COMPRESSED from v$archived_log;
-- Same view, ordered by thread and sequence:
select THREAD#, SEQUENCE#, NAME, ARCHIVED, APPLIED, DELETED, STATUS from v$archived_log order by 1, 2;

</code></pre>
</blockquote>
<p>With that, we have finished building an Oracle MAA stack: Oracle 19c RAC + Active Data Guard.</p>
<p>Next, we will perform <code>switchover</code> and <code>failover</code> role transitions (in both directions).</p>
<p>-- The End --</p>
]]></content:encoded>
					
					<wfw:commentRss>https://dbtan.com/2020/03/oracle-maa-oracle-19c-rac-adg.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Oracle 19c RAC Installation and RU Upgrade</title>
		<link>https://dbtan.com/2020/03/oracle-19c-rac-installation-and-upgrade-ru.html</link>
					<comments>https://dbtan.com/2020/03/oracle-19c-rac-installation-and-upgrade-ru.html#respond</comments>
		
		<dc:creator><![CDATA[dbtan]]></dc:creator>
		<pubDate>Tue, 17 Mar 2020 09:04:39 +0000</pubDate>
				<category><![CDATA[Oracle]]></category>
		<category><![CDATA[Oracle 19c]]></category>
		<category><![CDATA[Oracle RAC]]></category>
		<category><![CDATA[Oracle 19c RAC]]></category>
		<category><![CDATA[RAC]]></category>
		<category><![CDATA[RELEASE UPDATE]]></category>
		<category><![CDATA[tqdb]]></category>
		<category><![CDATA[tqdb21]]></category>
		<category><![CDATA[tqdb22]]></category>
		<category><![CDATA[upgrade RU]]></category>
		<guid isPermaLink="false">https://www.dbtan.com/?p=408</guid>

					<description><![CDATA[Oracle 19c RAC Installation and RU Upgrade Revision V1.0 No. Date Author/M [&#8230;]]]></description>
										<content:encoded><![CDATA[<h3>Oracle 19c RAC Installation and RU Upgrade</h3>
<p><strong>Revision    V1.0</strong></p>
<table>
<thead>
<tr>
<th align="left">No.</th>
<th>Date</th>
<th>Author/Modifier</th>
<th>Comments</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">1.0</td>
<td>2020-02-05</td>
<td>谈权</td>
<td>Initial draft</td>
</tr>
</tbody>
</table>

<hr />
<h2>1. System Planning</h2>
<h3>1.1 Network Planning</h3>
<ul>
<li>Hostnames may contain lowercase letters, digits, and hyphens (<code>-</code>), and must start with a lowercase letter.</li>
<li>For a two-node RAC, configure 3 SCAN IPs if a DNS server is available; otherwise configure 1 SCAN IP.</li>
<li>The private network should use a dedicated switch rather than a direct cable between nodes.</li>
<li>When multiple RAC clusters share one private-network switch, separate them into different VLANs or use different subnets.</li>
</ul>
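<p>The hostname rule above is easy to get wrong by hand; the sketch below checks candidate names against it (the <code>valid_hostname</code> helper is our own illustration, not part of any Oracle tooling):</p>

```shell
#!/bin/sh
# Hostnames: lowercase letters, digits and hyphens only,
# and the first character must be a lowercase letter.
valid_hostname() {
    printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9-]*$'
}

for h in tqdb21 tqdb22 TQDB21 1node db_01; do
    if valid_hostname "$h"; then echo "$h: ok"; else echo "$h: invalid"; fi
done
```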
<p><img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/virtual-rac.jpg" alt="virtual-rac" /></p>
<h3>1.2 Storage Planning</h3>
<p>Storage is managed with Oracle ASM; devices are bound at the OS layer via udev. If you use AFD instead, refer to the following documents:</p>
<ul>
<li>1. ASMFD (ASM Filter Driver) Support on OS Platforms (Certification Matrix).
<p>(Doc ID 2034681.1)</p>
</li>
<li>2. How to configure and Create a Disk group using ASMFD
<p>(Doc ID 2053045.1)</p>
</li>
</ul>
<h3>1.3 Operating System Standards</h3>
<p>Operating system: CentOS Linux 7.7</p>
<p>Disk partitioning (with 4 GB of RAM):</p>
<table>
<thead>
<tr>
<th>Partition</th>
<th>Size</th>
</tr>
</thead>
<tbody>
<tr>
<td>SWAP</td>
<td>8G</td>
</tr>
<tr>
<td>/</td>
<td>100G</td>
</tr>
</tbody>
</table>
<h3>1.4 Database Media</h3>
<table>
<thead>
<tr>
<th>Media</th>
<th>File name</th>
</tr>
</thead>
<tbody>
<tr>
<td>Oracle grid</td>
<td>LINUX.X64_193000_grid_home.zip</td>
</tr>
<tr>
<td>Oracle database</td>
<td>LINUX.X64_193000_db_home.zip</td>
</tr>
<tr>
<td>Patch 30501910: GI RELEASE UPDATE 19.6.0.0.0<br />Note: the GI RU includes the DB RU, so this patch is also used when upgrading the DB in a RAC environment.</td>
<td>p30501910_190000_Linux-x86-64.zip</td>
</tr>
<tr>
<td>Patch 30557433: DATABASE RELEASE UPDATE 19.6.0.0.0<br />Note: use this patch when applying the DB RU to a single-instance database.</td>
<td>p30557433_190000_Linux-x86-64.zip</td>
</tr>
<tr>
<td>OPatch</td>
<td>p6880880_190000_Linux-x86-64.zip</td>
</tr>
</tbody>
</table>
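<p>Before starting, it is worth confirming that the media above are all staged on the node. A minimal sketch, assuming a RAC install (so the DB RU is covered by the GI RU patch); the staging directory and helper name are examples only:</p>

```shell
#!/bin/sh
# Report any installation media from the table that is missing from a staging dir.
check_media() {
    dir=$1; missing=0
    for f in LINUX.X64_193000_grid_home.zip \
             LINUX.X64_193000_db_home.zip \
             p30501910_190000_Linux-x86-64.zip \
             p6880880_190000_Linux-x86-64.zip; do
        if [ ! -f "$dir/$f" ]; then echo "missing: $f"; missing=1; fi
    done
    return $missing
}

mkdir -p /tmp/stage
check_media /tmp/stage || echo "stage the files above before installing"
```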
<h3>1.5 Summary: Overall Plan for the Two-Node RAC</h3>
<p>rac1/rac2 host plan:</p>
<table>
<thead>
<tr>
<th></th>
<th>rac1</th>
<th>rac2</th>
</tr>
</thead>
<tbody>
<tr>
<td>OS</td>
<td>CentOS 7.7</td>
<td>CentOS 7.7</td>
</tr>
<tr>
<td>Hostname</td>
<td>tqdb21</td>
<td>tqdb22</td>
</tr>
<tr>
<td>IP address (Public) (enp0s8)</td>
<td>192.168.6.21</td>
<td>192.168.6.22</td>
</tr>
<tr>
<td>IP address (Private) (enp0s9)</td>
<td>172.16.8.21</td>
<td>172.16.8.22</td>
</tr>
<tr>
<td>IP address (Virtual) (enp0s8)</td>
<td>192.168.6.23</td>
<td>192.168.6.24</td>
</tr>
<tr>
<td>IP address (SCAN) (enp0s8)</td>
<td>192.168.6.20</td>
<td>192.168.6.20</td>
</tr>
<tr>
<td>GRID user environment variables</td>
<td>export ORACLE_SID=+ASM1</td>
<td>export ORACLE_SID=+ASM2</td>
</tr>
<tr>
<td></td>
<td>export ORACLE_BASE=/u01/app/grid</td>
<td>export ORACLE_BASE=/u01/app/grid</td>
</tr>
<tr>
<td></td>
<td>export ORACLE_HOME=/u01/app/19c/grid</td>
<td>export ORACLE_HOME=/u01/app/19c/grid</td>
</tr>
<tr>
<td></td>
<td>export TNS_ADMIN=$ORACLE_HOME/network/admin</td>
<td>export TNS_ADMIN=$ORACLE_HOME/network/admin</td>
</tr>
<tr>
<td>ORACLE user environment variables</td>
<td>export ORACLE_SID=tqdb1</td>
<td>export ORACLE_SID=tqdb2</td>
</tr>
<tr>
<td></td>
<td>export DB_UNIQUE_NAME=tqdb</td>
<td>export DB_UNIQUE_NAME=tqdb</td>
</tr>
<tr>
<td></td>
<td>export ORACLE_UNQNAME=tqdb</td>
<td>export ORACLE_UNQNAME=tqdb</td>
</tr>
<tr>
<td></td>
<td>export ORACLE_BASE=/u01/app/oracle</td>
<td>export ORACLE_BASE=/u01/app/oracle</td>
</tr>
<tr>
<td></td>
<td>export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome</td>
<td>export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome</td>
</tr>
<tr>
<td></td>
<td>export TNS_ADMIN=$ORACLE_HOME/network/admin</td>
<td>export TNS_ADMIN=$ORACLE_HOME/network/admin</td>
</tr>
<tr>
<td>GRID Version</td>
<td>19.6.0.0.0</td>
<td>19.6.0.0.0</td>
</tr>
<tr>
<td>DB Version</td>
<td>19.6.0.0.0</td>
<td>19.6.0.0.0</td>
</tr>
<tr>
<td>Shared storage: OCR & voting disks</td>
<td>2G * 3</td>
<td>2G * 3</td>
</tr>
<tr>
<td>Shared storage: ASM DATA disk group</td>
<td>50G * 2</td>
<td>50G * 2</td>
</tr>
</tbody>
</table>
<h2>2. Environment Configuration</h2>
<h3>2.1 Network Configuration</h3>
<p>rac1/rac2 host plan:</p>
<table>
<thead>
<tr>
<th></th>
<th>rac1</th>
<th>rac2</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hostname</td>
<td>tqdb21</td>
<td>tqdb22</td>
</tr>
<tr>
<td>IP address (Public) (enp0s8)</td>
<td>192.168.6.21</td>
<td>192.168.6.22</td>
</tr>
<tr>
<td>IP address (Private) (enp0s9)</td>
<td>172.16.8.21</td>
<td>172.16.8.22</td>
</tr>
<tr>
<td>IP address (Virtual) (enp0s8)</td>
<td>192.168.6.23</td>
<td>192.168.6.24</td>
</tr>
<tr>
<td>IP address (SCAN) (enp0s8)</td>
<td>192.168.6.20</td>
<td>192.168.6.20</td>
</tr>
</tbody>
</table>
<p>Example of modifying the HOSTS file:</p>
<blockquote>
<pre><code class="language-bash line-numbers"># vim /etc/hosts
127.0.0.1 localhost
# Public (enp0s8)
192.168.6.21 tqdb21
192.168.6.22 tqdb22
# Private (enp0s9)
172.16.8.21 tqdb21-priv
172.16.8.22 tqdb22-priv
# Virtual (enp0s8)
192.168.6.23 tqdb21-vip
192.168.6.24 tqdb22-vip
# SCAN
192.168.6.20 tqdb-cluster tqdb-cluster-scan
</code></pre>
</blockquote>
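<p>Each public hostname must have matching <code>-priv</code> and <code>-vip</code> entries, plus the SCAN name, on both nodes. A quick consistency check (the <code>check_hosts</code> helper is our own sketch, not an Oracle utility):</p>

```shell
#!/bin/sh
# Verify that a hosts file contains public, -priv and -vip entries for
# every node, plus the SCAN name, per the address plan above.
check_hosts() {
    file=$1
    for node in tqdb21 tqdb22; do
        for name in "$node" "$node-priv" "$node-vip"; do
            grep -qw "$name" "$file" || { echo "missing: $name"; return 1; }
        done
    done
    grep -qw "tqdb-cluster-scan" "$file" || { echo "missing: SCAN name"; return 1; }
    echo "hosts entries complete"
}

check_hosts /etc/hosts || true   # on the RAC nodes this should print "hosts entries complete"
```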
<h3>2.2 Set the Default Boot Target to multi-user.target</h3>
<blockquote>
<pre><code class="language-bash line-numbers"># Check the current default boot target:
systemctl get-default

# Set the default boot target to multi-user.target:
systemctl set-default multi-user.target
</code></pre>
</blockquote>
<h3>2.3 Disable NUMA at the OS Level</h3>
<blockquote>
<pre><code class="language-bash line-numbers"># 1. Edit /etc/default/grub and append numa=off to the GRUB_CMDLINE_LINUX= line.
# 2. Regenerate the GRUB configuration:
grub2-mkconfig -o /etc/grub2.cfg
# 3. Reboot the OS:
reboot
# 4. Confirm after the reboot:
dmesg | grep -i numa
# Confirm again:
cat /proc/cmdline
</code></pre>
</blockquote>
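<p>The post-reboot confirmation can be scripted; a minimal sketch in which the <code>numa_disabled</code> helper takes the command line as an argument (on the server you would pass <code>"$(cat /proc/cmdline)"</code>; the sample string below is an assumption for illustration):</p>

```shell
#!/bin/sh
# Check whether numa=off made it onto the kernel command line.
numa_disabled() {
    case " $1 " in
        *" numa=off "*) return 0 ;;
        *) return 1 ;;
    esac
}

cmdline="BOOT_IMAGE=/vmlinuz root=/dev/mapper/cl-root ipv6.disable=1 rhgb quiet numa=off"
if numa_disabled "$cmdline"; then
    echo "NUMA disabled"
else
    echo "NUMA still enabled"
fi
```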
<h3>2.4 Disable the Firewall</h3>
<blockquote><p>
  iptables was used in earlier releases but no longer applies on CentOS 7: disabling the firewall with <code>chkconfig iptables off</code> fails with <code>error reading information on service iptables: No such file or directory</code>.</p>
<p>  Use instead:</p>
<pre><code class="language-bash line-numbers"># systemctl stop firewalld.service
# systemctl disable firewalld.service
# systemctl status firewalld.service
</code></pre>
</blockquote>
<h3>2.5 Disable IPv6 on CentOS 7</h3>
<blockquote>
<pre><code class="language-bash line-numbers"># Edit /etc/default/grub and add `ipv6.disable=1`:
[root@tqdb21: ~]#  vim /etc/default/grub   
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="ipv6.disable=1 spectre_v2=retpoline rhgb quiet numa=off"
GRUB_DISABLE_RECOVERY="true"
# grub2-mkconfig -o /boot/grub2/grub.cfg

# reboot

# lsmod | grep ipv6
</code></pre>
</blockquote>
<h3>2.6 Disable SELinux</h3>
<blockquote>
<pre><code class="language-bash line-numbers"># vim /etc/selinux/config
SELINUX=disabled
</code></pre>
</blockquote>
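<p>The same change can be made non-interactively and idempotently with <code>sed</code>; a sketch working on a scratch copy of the config (the temp path is an example, on a real node you would edit <code>/etc/selinux/config</code>):</p>

```shell
#!/bin/sh
# Flip SELINUX=<anything> to SELINUX=disabled; rerunning is harmless.
cfg=/tmp/selinux-config
printf 'SELINUXTYPE=targeted\nSELINUX=enforcing\n' > "$cfg"

sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # prints: SELINUX=disabled
```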
<h3>2.7 Disable Unneeded Services</h3>
<blockquote>
<pre><code class="language-bash line-numbers"># systemctl disable firewalld
# systemctl disable avahi-daemon
# systemctl disable bluetooth
# systemctl disable cpuspeed
# systemctl disable cups
# systemctl disable firstboot
# systemctl disable ip6tables
# systemctl disable iptables
# systemctl disable pcmcia
</code></pre>
</blockquote>
<h3>2.8 Disable THP</h3>
<p>Disable the <code>Transparent HugePages</code> feature (RHEL7/OL7):</p>
<blockquote>
<pre><code class="language-bash line-numbers"># vim /etc/rc.d/rc.local

# Add the following:
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
</code></pre>
</blockquote>
<p>Make it executable, run it, and verify:</p>
<blockquote>
<pre><code class="language-bash line-numbers"># chmod +x /etc/rc.d/rc.local
# source /etc/rc.d/rc.local
# cat /sys/kernel/mm/transparent_hugepage/enabled 
always madvise [never]  <<--- THP Disabled
</code></pre>
<pre><code class="language-bash line-numbers"># cat /sys/kernel/mm/transparent_hugepage/defrag
always defer defer+madvise madvise [never]
</code></pre>
</blockquote>
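<p>As a quick sanity check, the bracketed token in those sysfs files is the active policy. A minimal parsing sketch (the sample status line is taken from the output above; the variable names are illustrative):</p>
<pre><code class="language-bash line-numbers"># Sample THP status line, as shown above; the active value is the bracketed token
status="always madvise [never]"
# Extract the bracketed word and strip the brackets
active=$(echo "$status" | grep -o '\[[a-z]*\]' | tr -d '[]')
echo "active THP policy: $active"
</code></pre>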
<h3>2.9 NOZEROCONF</h3>
<p>For 12c RAC configurations, the MOS note <em>CSSD Fails to Join the Cluster After Private Network Recovered if avahi Daemon is up and Running</em> (Doc ID 1501093.1) recommends:</p>
<blockquote>
<pre><code class="language-bash line-numbers"># echo "NOZEROCONF=yes" >> /etc/sysconfig/network
</code></pre>
</blockquote>
<h3>2.10 Package Installation</h3>
<p>Configure the EPEL YUM repository:</p>
<blockquote>
<pre><code class="language-bash line-numbers">RHEL六版用户
# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
# rpm -Uvh epel-release-latest-6.noarch.rpm

RHEL七版用户
# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# rpm -Uvh epel-release-latest-7.noarch.rpm
</code></pre>
</blockquote>
<p>Install the required system packages:</p>
<blockquote>
<pre><code class="language-bash line-numbers"># rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
compat-libstdc++-33 \
compat-libcap1 \
elfutils-libelf \
elfutils-libelf-devel \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
glibc-headers \
ksh \
libaio \
libaio-devel \
libgcc \
libstdc++ \
libXext \
libXtst \
kde-l10n-Chinese.noarch \
libstdc++-devel \
make \
xclock \
sysstat \
man \
nfs-utils \
lsof \
expect \
unzip \
redhat-lsb \
openssh-clients \
smartmontools \
unixODBC \
perl \
telnet \
vsftpd \
ntsysv \
lsscsi \
libX11 \
libxcb \
libXau \
libXi \
strace \
sg3_utils \
kexec-tools \
net-tools \
unixODBC-devel |grep "not installed" |awk '{print $2}' |xargs yum install -y
</code></pre>
</blockquote>
<h3>2.11 Installing the cvuqdisk Package</h3>
<p>The package is located inside the unpacked grid installation media.</p>
<p>(After unpacking the grid archive in step <em>3.1 GRID Installation</em>, install the <code>cvuqdisk-1.0.10-1.rpm</code> package.)</p>
<blockquote>
<pre><code class="language-bash line-numbers"># cd $ORACLE_HOME/cv/rpm
# rpm -ivh cvuqdisk*.rpm
</code></pre>
</blockquote>
<h3>2.12 Time Service</h3>
<p>Use the Chrony service.</p>
<blockquote>
<pre><code class="language-bash line-numbers"># vim /etc/chrony.conf
</code></pre>
<blockquote><p>
    server &lt;NTP_SERVER_ADDR&gt; iburst
  </p></blockquote>
<p>  Restart and enable the time-sync service:</p>
<pre><code class="language-bash line-numbers"># systemctl restart chronyd.service
# systemctl enable chronyd.service
</code></pre>
<p>  View the sync sources:</p>
<pre><code class="language-bash line-numbers"># chronyc sources -v
</code></pre>
<p>  View the sync statistics:</p>
<pre><code class="language-bash line-numbers"># chronyc sourcestats
</code></pre>
</blockquote>
<h3>2.13 Creating Users</h3>
<blockquote><p>
  Create the groups:</p>
<pre><code class="language-bash line-numbers"># groupadd -g 600 oinstall 
# groupadd -g 601 dba 
# groupadd -g 602 oper 
# groupadd -g 603 asmadmin 
# groupadd -g 604 asmoper 
# groupadd -g 605 asmdba  
# groupadd -g 606 backupdba
# groupadd -g 607 dgdba
# groupadd -g 608 kmdba
# groupadd -g 609 racdba
</code></pre>
<p>  Create the oracle and grid users:</p>
<pre><code class="language-bash line-numbers"># useradd -u 600 -g oinstall -G asmadmin,dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle 
# useradd -u 601 -g oinstall -G oper,asmadmin,asmdba,asmoper,dba,racdba grid 
</code></pre>
<p>  Set the passwords for the oracle and grid users:</p>
<pre><code class="language-bash line-numbers">-- oracle password
# passwd oracle

-- grid password
# passwd grid
</code></pre>
</blockquote>
<h3>2.14 Kernel Parameter Tuning</h3>
<blockquote><p>
  Adjust the kernel parameters (tune to the actual environment):</p>
<pre><code class="language-bash line-numbers"># vim /etc/sysctl.conf

# oracle-database-preinstall-19c 
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500

# SGA 1(G)*1024/2 +20 =HugePages_Total
vm.nr_hugepages = 532
vm.swappiness=5
</code></pre>
<blockquote><p>
    Parameter notes:</p>
<p>    kernel.shmmax: maximum size of a single shared memory segment, in bytes; generally at least sga_max_target. Suggested: physical memory * 0.7 * 1024 * 1024 * 1024.</p>
<p>    kernel.shmall: total shared memory in pages (one page is 4 KB); generally kernel.shmmax / 4. Suggested: physical memory * 0.7 * 1024 * 1024 KB / 4 KB.</p>
<p>    kernel.shmmni: maximum number of shared memory segments. Suggested: 4096.</p>
<p>    kernel.sem: semaphores needed for inter-process communication.</p>
<p>    HugePages sizing: vm.nr_hugepages = sga_max_size / Hugepagesize = 12GB/2048KB = 6144 (can be set slightly bigger than this figure)
  </p></blockquote>
<p>  Apply the settings:</p>
<pre><code class="language-bash line-numbers"># sysctl -p
</code></pre>
</blockquote>
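<p>The <code>vm.nr_hugepages = 532</code> value above follows the rule of thumb in the sysctl comment (SGA in GB * 1024 MB / 2 MB per page, plus ~20 spare pages). A small sketch of that arithmetic, assuming the default 2 MB Hugepagesize on x86_64; the variable names are illustrative:</p>
<pre><code class="language-bash line-numbers"># Rule of thumb from the comment above: SGA 1(G)*1024/2 + 20 = HugePages_Total
sga_gb=1            # SGA size in GB (assumption: 1 GB, matching vm.nr_hugepages=532)
hugepagesize_mb=2   # default Hugepagesize on x86_64 is 2048 kB
nr_hugepages=$(( sga_gb * 1024 / hugepagesize_mb + 20 ))
echo "vm.nr_hugepages = ${nr_hugepages}"
</code></pre>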
<h3>2.15 LIMITS Configuration</h3>
<blockquote>
<pre><code class="language-bash line-numbers"># vim /etc/security/limits.conf

# oracle-database-preinstall-19c
oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    16384
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768
oracle   hard   memlock    134217728
oracle   soft   memlock    134217728

grid   soft   nofile    1024
grid   hard   nofile    65536
grid   soft   nproc    16384
grid   hard   nproc    16384
grid   soft   stack    10240
grid   hard   stack    32768
grid   hard   memlock    134217728
grid   soft   memlock    134217728
</code></pre>
<blockquote><p>
    Note: memlock is enabled in HugePages environments; the unit is KB.
  </p></blockquote>
<p>  <strong>Worth noting</strong>: on Linux 7, the limits configuration is no longer only <code>/etc/security/limits.conf</code>; files under the <code>/etc/security/limits.d</code> directory apply as well.</p>
<pre><code class="language-bash line-numbers">[root@tqdb21: /etc/security/limits.d]# cat oracle-database-19c.conf 
​```
oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    16384
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768
oracle   hard   memlock    134217728
oracle   soft   memlock    134217728

grid   soft   nofile    1024
grid   hard   nofile    65536
grid   soft   nproc    16384
grid   hard   nproc    16384
grid   soft   stack    10240
grid   hard   stack    32768
grid   hard   memlock    134217728
grid   soft   memlock    134217728
[root@tqdb21: /etc/security/limits.d]# 
</code></pre>
</blockquote>
<h3>2.16 Directory Creation</h3>
<blockquote>
<pre><code class="language-bash line-numbers"># mkdir -p /u01/app/grid
# mkdir -p /u01/app/oraInventory
# mkdir -p /u01/app/19c/grid
# mkdir -p /u01/app/oracle
# mkdir -p /u01/app/oracle/product/19c/dbhome
# chown -R grid:oinstall /u01
# chown -R grid:oinstall /u01/app/oraInventory
# chown -R oracle:oinstall /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle/product
# chown -R oracle:oinstall /u01/app/oracle/product/19c/dbhome
# chmod -R 775 /u01
</code></pre>
</blockquote>
<h3>2.17 Configuring profile</h3>
<blockquote>
<pre><code class="language-bash line-numbers"># vim /etc/profile

# Append the following:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
 if [ $SHELL = "/bin/ksh" ]; then 
     ulimit -p 16384
     ulimit -n 65536
   else
         ulimit -u 16384 -n 65536
 fi
 umask 022 
fi
</code></pre>
</blockquote>
<h3>2.18 GRID User Environment Variables</h3>
<blockquote><p>
  Node 1 (rac1):</p>
<pre><code class="language-bash line-numbers">-- Node 1 (rac1): GRID user environment variables
# su - grid
$ vim .bash_profile 

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19c/grid
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export NLS_LANG=american_america.AL32UTF8
export LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export LD_LIBRARY_PATH=$ORACLE_HOME/jdk/jre/lib:$ORACLE_HOME/network/lib:$ORACLE_HOME/rdbms/lib:$LD_LIBRARY_PATH 
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export CLASS_PATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib 
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:.
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$HOME/bin:$PATH
export PATH=/usr/bin/xdpyinfo:$PATH

umask 022
if [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
     ulimit -p 16384
     ulimit -n 65536 
 else
     ulimit -u 16384 -n 65536 
 fi
fi

##
export NLS_LANG="american_america.AL32UTF8"
export LANG="en_US.UTF-8"
export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
export NLS_TIMESTAMP_FORMAT="YYYY-MM-DD HH24:MI:SS.FF9"

alias impdp='rlwrap impdp'
alias sqlplus='rlwrap sqlplus'
alias asmcmd='rlwrap asmcmd'
</code></pre>
<p>  Node 2 (rac2):</p>
<pre><code class="language-bash line-numbers">-- Node 2 (rac2): GRID user environment variables
# su - grid
$ vim .bash_profile 

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19c/grid
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export NLS_LANG=american_america.AL32UTF8
export LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export LD_LIBRARY_PATH=$ORACLE_HOME/jdk/jre/lib:$ORACLE_HOME/network/lib:$ORACLE_HOME/rdbms/lib:$LD_LIBRARY_PATH 
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export CLASS_PATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib 
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:.
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$HOME/bin:$PATH
export PATH=/usr/bin/xdpyinfo:$PATH

umask 022
if [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
  ulimit -p 16384
     ulimit -n 65536 
 else
     ulimit -u 16384 -n 65536 
 fi
fi

##
export NLS_LANG="american_america.AL32UTF8"
export LANG="en_US.UTF-8"
export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
export NLS_TIMESTAMP_FORMAT="YYYY-MM-DD HH24:MI:SS.FF9"

alias impdp='rlwrap impdp'
alias sqlplus='rlwrap sqlplus'
alias asmcmd='rlwrap asmcmd'
</code></pre>
</blockquote>
<h3>2.19 ORACLE User Environment Variables</h3>
<blockquote><p>
  Node 1 (rac1):</p>
<pre><code class="language-bash line-numbers">-- Node 1 (rac1): ORACLE user environment variables
# su - oracle
$ vim .bash_profile
export LANG=en_US
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=tqdb1
export DB_UNIQUE_NAME=tqdb
export ORACLE_UNQNAME=tqdb
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export NLS_LANG=american_america.AL32UTF8
export LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export LD_LIBRARY_PATH=$ORACLE_HOME/jdk/jre/lib:$ORACLE_HOME/network/lib:$ORACLE_HOME/rdbms/lib:$LD_LIBRARY_PATH 
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export CLASS_PATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:.
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$HOME/bin:$PATH
export PATH=/usr/bin/xdpyinfo:$PATH

umask 022
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then 
     ulimit -p 16384
     ulimit -n 65536 
 else
     ulimit -u 16384 -n 65536 
 fi
fi

##
export NLS_LANG="american_america.AL32UTF8"
export LANG="en_US.UTF-8"
export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
export NLS_TIMESTAMP_FORMAT="YYYY-MM-DD HH24:MI:SS.FF9"

alias impdp='rlwrap impdp'
alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'
</code></pre>
<p>  Node 2 (rac2):</p>
<pre><code class="language-bash line-numbers">-- Node 2 (rac2): ORACLE user environment variables
# su - oracle
$ vim .bash_profile
export LANG=en_US
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=tqdb2
export DB_UNIQUE_NAME=tqdb
export ORACLE_UNQNAME=tqdb
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export NLS_LANG=american_america.AL32UTF8
export LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export LD_LIBRARY_PATH=$ORACLE_HOME/jdk/jre/lib:$ORACLE_HOME/network/lib:$ORACLE_HOME/rdbms/lib:$LD_LIBRARY_PATH 
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export CLASS_PATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:.
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$HOME/bin:$PATH
export PATH=/usr/bin/xdpyinfo:$PATH

umask 022
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then 
  ulimit -p 16384
     ulimit -n 65536 
 else
     ulimit -u 16384 -n 65536 
 fi
fi

##
export NLS_LANG="american_america.AL32UTF8"
export LANG="en_US.UTF-8"
export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
export NLS_TIMESTAMP_FORMAT="YYYY-MM-DD HH24:MI:SS.FF9"

alias impdp='rlwrap impdp'
alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'
</code></pre>
</blockquote>
<h3>2.20 Adding the crsctl Command for the ROOT User</h3>
<blockquote>
<pre><code class="language-bash line-numbers"># vim /etc/profile

# Add the `crsctl` command to the root user's PATH
export PATH=/u01/app/19c/grid/bin:$PATH
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19c/grid

# X11 / GUI related
export PATH=/usr/bin/xdpyinfo:$PATH

# history: show timestamps in command history
export HISTSIZE=4096
export HISTTIMEFORMAT="%F %T `whoami` "
</code></pre>
</blockquote>
<h3>2.21 Manually Configuring SSH Equivalence (alternatively, click <code>SSH connectivity</code> during the graphical GRID/DB installation)</h3>
<h4>2.21.1 SSH Equivalence for the oracle User</h4>
<blockquote><p>
  Commands (establish SSH equivalence for the oracle user; run on both nodes as oracle):</p>
<pre><code class="language-bash line-numbers">-- 1. On node 1 (tqdb21):
root# su - oracle
oracle$ mkdir ~/.ssh
oracle$ chmod 700 ~/.ssh/
oracle$ ssh-keygen -t rsa
oracle$ ssh-keygen -t dsa

-- 2. On node 2 (tqdb22):
root# su - oracle
oracle$ mkdir ~/.ssh
oracle$ chmod 700 ~/.ssh/
oracle$ ssh-keygen -t rsa
oracle$ ssh-keygen -t dsa

-- 3. On node 1 (tqdb21):
oracle$ cd ~/.ssh
oracle$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
oracle$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle$ ssh oracle@tqdb22 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
oracle$ ssh oracle@tqdb22 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys 
oracle$ scp /home/oracle/.ssh/authorized_keys oracle@tqdb22:~/.ssh/authorized_keys

-- 4. On node 2 (tqdb22): (inspect the `authorized_keys` file copied over (scp) from tqdb21)
oracle$ cd ~/.ssh
oracle$ ll
oracle$ cat authorized_keys

-- Test the connections on each node; verify that running the commands again no longer prompts for a password.
-- 5. On node 1 (tqdb21): 
-- First run: you need to type `yes`
oracle$ ssh tqdb21 date
oracle$ ssh tqdb22 date
oracle$ ssh tqdb21-priv date
oracle$ ssh tqdb22-priv date
-- Second run: no more `yes` prompt; the result returns directly, confirming SSH equivalence is configured.
oracle$ ssh tqdb21 date
oracle$ ssh tqdb22 date
oracle$ ssh tqdb21-priv date
oracle$ ssh tqdb22-priv date
oracle$ date; ssh tqdb22 date
oracle$ date; ssh tqdb22-priv date

-- 6. On node 2 (tqdb22):
-- First run: you need to type `yes`
oracle$ ssh tqdb21 date
oracle$ ssh tqdb22 date
oracle$ ssh tqdb21-priv date
oracle$ ssh tqdb22-priv date
-- Second run: no more `yes` prompt; the result returns directly, confirming SSH equivalence is configured.
oracle$ ssh tqdb21 date
oracle$ ssh tqdb22 date
oracle$ ssh tqdb21-priv date
oracle$ ssh tqdb22-priv date
oracle$ date; ssh tqdb22 date
oracle$ date; ssh tqdb22-priv date
</code></pre>
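<p>  Once the keys are distributed, a small loop (a sketch; the hostnames are this guide's tqdb21/tqdb22 addresses) can confirm each connection is truly password-free, since <code>BatchMode=yes</code> makes ssh fail instead of prompting:</p>
<pre><code class="language-bash line-numbers"># Report whether passwordless SSH works to each cluster address
check_host() {
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" date >/dev/null 2>&1 \
        && echo "$1: OK" || echo "$1: FAILED"
}
for h in tqdb21 tqdb22 tqdb21-priv tqdb22-priv; do
    check_host "$h"
done
</code></pre>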
<p>  Execution log:</p>
<pre><code class="language-bash line-numbers"># Establish SSH equivalence (run as oracle on both nodes)
# On node 1 (tqdb21):
[root@tqdb21: ~]# su - oracle
Last login: Tue Feb 11 23:23:44 CST 2020 on pts/0
[oracle@tqdb21: ~]$ la
bash: la: command not found...
[oracle@tqdb21: ~]$ l.
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .cache  .config  .kshrc  .mozilla  .viminfo
[oracle@tqdb21: ~]$ mkdir ~/.ssh
[oracle@tqdb21: ~]$ chmod 700 ~/.ssh/
[oracle@tqdb21: ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:sKgc0lXiMqxIoB1c++ANDWQbG7um0Tbz+nM8PE1LpBc oracle@tqdb21
The key's randomart image is:
+---[RSA 2048]----+
|...oO .          |
|oo.+ %           |
|..= X o          |
|oo * B o  E      |
|+ + X + So .     |
| o B +  . +      |
|  +   .o = .     |
|     .. * o      |
|    ...o o       |
+----[SHA256]-----+
[oracle@tqdb21: ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:5TXoyCtm2RREUNtssej/nesLCv3DCgYr572hXJ48vzQ oracle@tqdb21
The key's randomart image is:
+---[DSA 1024]----+
|      .++ .      |
|       . = +     |
|        + B o    |
|       o B . .   |
|      . S o      |
|       * +       |
|    . B O E..    |
|     B O.* =oo . |
|      + *+=ooo*o |
+----[SHA256]-----+
[oracle@tqdb21: ~]$ 
[oracle@tqdb21: ~]$ cd .ssh/
[oracle@tqdb21: ~/.ssh]$ ls
id_dsa  id_dsa.pub  id_rsa  id_rsa.pub
[oracle@tqdb21: ~/.ssh]$ ll
total 16
-rw-------. 1 oracle oinstall  668 Feb 11 23:31 id_dsa
-rw-r--r--. 1 oracle oinstall  603 Feb 11 23:31 id_dsa.pub
-rw-------. 1 oracle oinstall 1679 Feb 11 23:30 id_rsa
-rw-r--r--. 1 oracle oinstall  395 Feb 11 23:30 id_rsa.pub
[oracle@tqdb21: ~/.ssh]$ 

# On node 2 (tqdb22):
[root@tqdb22: ~]# su - oracle
Last login: Tue Feb 11 23:25:32 CST 2020 on pts/0
[oracle@tqdb22: ~]$ l.
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .cache  .config  .kshrc  .mozilla  .viminfo
[oracle@tqdb22: ~]$ mkdir ~/.ssh
[oracle@tqdb22: ~]$ chmod 700 ~/.ssh/
[oracle@tqdb22: ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:TwfmNqXPaYMjwC4dvIy4Qf+R5m4l57iee8qlOvxmeJA oracle@tqdb22
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|          o .    |
|     o   o +     |
|  .  .= S * .    |
| . oE=.=o+ * .   |
|  o.+oO*o + *    |
|   o+**=o. o .   |
|  . .@&=         |
+----[SHA256]-----+
[oracle@tqdb22: ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:JS6Z/5ztai4ZGn6jeprpJ7LO0GJMwzAPDduW92t7tY4 oracle@tqdb22
The key's randomart image is:
+---[DSA 1024]----+
|.                |
| = .             |
|= = .   . .      |
|o= . . + o       |
| +.   = S        |
|o..   .+. .      |
|oo.  .oo.+ .     |
|.+. .+=.*+oo     |
| .++**o+E*Boo    |
+----[SHA256]-----+
[oracle@tqdb22: ~]$ ll .ssh/
total 16
-rw-------. 1 oracle oinstall  668 Feb 11 23:33 id_dsa
-rw-r--r--. 1 oracle oinstall  603 Feb 11 23:33 id_dsa.pub
-rw-------. 1 oracle oinstall 1679 Feb 11 23:33 id_rsa
-rw-r--r--. 1 oracle oinstall  395 Feb 11 23:33 id_rsa.pub
[oracle@tqdb22: ~]$ 

# On node 1 (tqdb21):
[oracle@tqdb21: ~/.ssh]$ cat id_dsa.pub 
ssh-dss AAAAB3NzaC1kc3MAAACBAKUOvUgNh2W91m9nrftiov4cRsP8sdiz2Tnd4+6t0WCBgu+hcppe/RD2zv/Dn3Q3tmaGE7vkCzdMpvCuFr0dOX2bQZtu+e98itdn0s6iM1Wrbri1n6a9yNLbvNVXbW+WRpHMImePDS35C5zzQJFc0DXmxeZ0UQxsqR3ZE9NpFJ9/AAAAFQC1MRowodOePZVcMSunpKDL+SndowAAAIAMBGObmCEZZnCFfQ0NtT/YBNgdyBohULgUa+jUCWPJLXis1wNJjadoWVEW7+KKHPUdx7NfS4kmDKYQL4xkXLUBzRvQVYncskpWtxnZvNiw0g6iVrLc5+DCr2AOqz1rpaGQmsfunFOXAQ0OHgSf6bUzxdHcTK8sEL0dtBi1yNM+AgAAAIAN/3QY7mk2D6/dmpo9Mq75Mv+vDM4ln/9pApqJSgE/UEKre1v6VI73xIawV3eaetAdgbGDDhyEJYb8k0LI6b+Ptox0mtKFi92OmIIiDh07b/CmDucy8K7XM/NRjS4z5C4kuuhNODNK7XLZGUxYi0Pa78zVHaCaWTRskNBUqFBNAQ== oracle@tqdb21
[oracle@tqdb21: ~/.ssh]$ cat id_rsa.pub    
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJzgjh6y8oKYHf7ebuWmxjESIfoquJ9r+xdr3SCzS76pIjDIBq0+Awh2GafvlwDFd/FfmTDxcz6q1blp73NEDQq6RZA8nudSvER/qY7wrUW41RnrHzt2X7WrmAZ8KBWqvAjYD925jxfODwVROXzj7kzwWoR1jzsZvJiARDaWNKuiQSnMlkaluE3BaSNnacvlNGVkjNi6rgybTGDcalojiYvBuIIgOP7t5N4vxYbT1oACuGjs+vmoKeFnPJmbvZeWStTOsJMkVqz04WMquoXgULHtTBJocRf4mCLF8wAMU0me6K1ywxx4FZKP57Bqq1N70EF+t+XtXlIf3R4zq5AJsH oracle@tqdb21
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ cd
[oracle@tqdb21: ~]$ 
[oracle@tqdb21: ~]$ 
[oracle@tqdb21: ~]$ cd -
/home/oracle/.ssh
[oracle@tqdb21: ~/.ssh]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@tqdb21: ~/.ssh]$ ll
total 20
-rw-r--r--. 1 oracle oinstall  395 Feb 11 23:38 authorized_keys
-rw-------. 1 oracle oinstall  668 Feb 11 23:31 id_dsa
-rw-r--r--. 1 oracle oinstall  603 Feb 11 23:31 id_dsa.pub
-rw-------. 1 oracle oinstall 1679 Feb 11 23:30 id_rsa
-rw-r--r--. 1 oracle oinstall  395 Feb 11 23:30 id_rsa.pub
[oracle@tqdb21: ~/.ssh]$ less authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJzgjh6y8oKYHf7ebuWmxjESIfoquJ9r+xdr3SCzS76pIjDIBq0+Awh2GafvlwDFd/FfmTDxcz6q1blp73NEDQq6RZA8nudSvER/qY7wrUW41RnrHzt2X7WrmAZ8KBWqvAjYD925jxfODwVROXzj7kzwWoR1jzsZvJiARDaWNKuiQSnMlkaluE3BaSNnacvlNGVkjNi6rgybTGDcalojiYvBuIIgOP7t5N4vxYbT1oACuGjs+vmoKeFnPJmbvZeWStTOsJMkVqz04WMquoXgULHtTBJocRf4mCLF8wAMU0me6K1ywxx4FZKP57Bqq1N70EF+t+XtXlIf3R4zq5AJsH oracle@tqdb21
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJzgjh6y8oKYHf7ebuWmxjESIfoquJ9r+xdr3SCzS76pIjDIBq0+Awh2GafvlwDFd/FfmTDxcz6q1blp73NEDQq6RZA8nudSvER/qY7wrUW41RnrHzt2X7WrmAZ8KBWqvAjYD925jxfODwVROXzj7kzwWoR1jzsZvJiARDaWNKuiQSnMlkaluE3BaSNnacvlNGVkjNi6rgybTGDcalojiYvBuIIgOP7t5N4vxYbT1oACuGjs+vmoKeFnPJmbvZeWStTOsJMkVqz04WMquoXgULHtTBJocRf4mCLF8wAMU0me6K1ywxx4FZKP57Bqq1N70EF+t+XtXlIf3R4zq5AJsH oracle@tqdb21
ssh-dss AAAAB3NzaC1kc3MAAACBAKUOvUgNh2W91m9nrftiov4cRsP8sdiz2Tnd4+6t0WCBgu+hcppe/RD2zv/Dn3Q3tmaGE7vkCzdMpvCuFr0dOX2bQZtu+e98itdn0s6iM1Wrbri1n6a9yNLbvNVXbW+WRpHMImePDS35C5zzQJFc0DXmxeZ0UQxsqR3ZE9NpFJ9/AAAAFQC1MRowodOePZVcMSunpKDL+SndowAAAIAMBGObmCEZZnCFfQ0NtT/YBNgdyBohULgUa+jUCWPJLXis1wNJjadoWVEW7+KKHPUdx7NfS4kmDKYQL4xkXLUBzRvQVYncskpWtxnZvNiw0g6iVrLc5+DCr2AOqz1rpaGQmsfunFOXAQ0OHgSf6bUzxdHcTK8sEL0dtBi1yNM+AgAAAIAN/3QY7mk2D6/dmpo9Mq75Mv+vDM4ln/9pApqJSgE/UEKre1v6VI73xIawV3eaetAdgbGDDhyEJYb8k0LI6b+Ptox0mtKFi92OmIIiDh07b/CmDucy8K7XM/NRjS4z5C4kuuhNODNK7XLZGUxYi0Pa78zVHaCaWTRskNBUqFBNAQ== oracle@tqdb21
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ ssh oracle@tqdb22 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
The authenticity of host 'tqdb22 (192.168.6.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22,192.168.6.22' (ECDSA) to the list of known hosts.
oracle@tqdb22's password: 
Permission denied, please try again.
oracle@tqdb22's password: 
[oracle@tqdb21: ~/.ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJzgjh6y8oKYHf7ebuWmxjESIfoquJ9r+xdr3SCzS76pIjDIBq0+Awh2GafvlwDFd/FfmTDxcz6q1blp73NEDQq6RZA8nudSvER/qY7wrUW41RnrHzt2X7WrmAZ8KBWqvAjYD925jxfODwVROXzj7kzwWoR1jzsZvJiARDaWNKuiQSnMlkaluE3BaSNnacvlNGVkjNi6rgybTGDcalojiYvBuIIgOP7t5N4vxYbT1oACuGjs+vmoKeFnPJmbvZeWStTOsJMkVqz04WMquoXgULHtTBJocRf4mCLF8wAMU0me6K1ywxx4FZKP57Bqq1N70EF+t+XtXlIf3R4zq5AJsH oracle@tqdb21
ssh-dss AAAAB3NzaC1kc3MAAACBAKUOvUgNh2W91m9nrftiov4cRsP8sdiz2Tnd4+6t0WCBgu+hcppe/RD2zv/Dn3Q3tmaGE7vkCzdMpvCuFr0dOX2bQZtu+e98itdn0s6iM1Wrbri1n6a9yNLbvNVXbW+WRpHMImePDS35C5zzQJFc0DXmxeZ0UQxsqR3ZE9NpFJ9/AAAAFQC1MRowodOePZVcMSunpKDL+SndowAAAIAMBGObmCEZZnCFfQ0NtT/YBNgdyBohULgUa+jUCWPJLXis1wNJjadoWVEW7+KKHPUdx7NfS4kmDKYQL4xkXLUBzRvQVYncskpWtxnZvNiw0g6iVrLc5+DCr2AOqz1rpaGQmsfunFOXAQ0OHgSf6bUzxdHcTK8sEL0dtBi1yNM+AgAAAIAN/3QY7mk2D6/dmpo9Mq75Mv+vDM4ln/9pApqJSgE/UEKre1v6VI73xIawV3eaetAdgbGDDhyEJYb8k0LI6b+Ptox0mtKFi92OmIIiDh07b/CmDucy8K7XM/NRjS4z5C4kuuhNODNK7XLZGUxYi0Pa78zVHaCaWTRskNBUqFBNAQ== oracle@tqdb21
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNLZOEguCk/87HUUtYnayz8klrehAk7bgK87F6zjdp6roaAXQDFiKKz5se2JmAoTKccZ2WvmYZvyRfhpyNJWV7ZdgsPwrk4iW/SpLDrH/m/5TaD3406ghp+rpziMdwpiHXeA6td00ZLA+ZL3HcIzG975K1PVurdZFBMj0uNPL3dJNwTKcdzEiXULgCLNSzbSvgmD8WZEarb9UfqS4uzq0jGct52uOELxHHwvlhAqCUDMma0wOcTLd/4eqCQUcqDCIjGpgiN7c2clLSJqLPGmiGx8S6rvg02AHxvxPm+2D3MvNpOwOkuJPbB9SQoyPyGroilslu+if7awbJYd6INpXP oracle@tqdb22
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ ssh oracle@tqdb22 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys 
oracle@tqdb22's password: 
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJzgjh6y8oKYHf7ebuWmxjESIfoquJ9r+xdr3SCzS76pIjDIBq0+Awh2GafvlwDFd/FfmTDxcz6q1blp73NEDQq6RZA8nudSvER/qY7wrUW41RnrHzt2X7WrmAZ8KBWqvAjYD925jxfODwVROXzj7kzwWoR1jzsZvJiARDaWNKuiQSnMlkaluE3BaSNnacvlNGVkjNi6rgybTGDcalojiYvBuIIgOP7t5N4vxYbT1oACuGjs+vmoKeFnPJmbvZeWStTOsJMkVqz04WMquoXgULHtTBJocRf4mCLF8wAMU0me6K1ywxx4FZKP57Bqq1N70EF+t+XtXlIf3R4zq5AJsH oracle@tqdb21
ssh-dss AAAAB3NzaC1kc3MAAACBAKUOvUgNh2W91m9nrftiov4cRsP8sdiz2Tnd4+6t0WCBgu+hcppe/RD2zv/Dn3Q3tmaGE7vkCzdMpvCuFr0dOX2bQZtu+e98itdn0s6iM1Wrbri1n6a9yNLbvNVXbW+WRpHMImePDS35C5zzQJFc0DXmxeZ0UQxsqR3ZE9NpFJ9/AAAAFQC1MRowodOePZVcMSunpKDL+SndowAAAIAMBGObmCEZZnCFfQ0NtT/YBNgdyBohULgUa+jUCWPJLXis1wNJjadoWVEW7+KKHPUdx7NfS4kmDKYQL4xkXLUBzRvQVYncskpWtxnZvNiw0g6iVrLc5+DCr2AOqz1rpaGQmsfunFOXAQ0OHgSf6bUzxdHcTK8sEL0dtBi1yNM+AgAAAIAN/3QY7mk2D6/dmpo9Mq75Mv+vDM4ln/9pApqJSgE/UEKre1v6VI73xIawV3eaetAdgbGDDhyEJYb8k0LI6b+Ptox0mtKFi92OmIIiDh07b/CmDucy8K7XM/NRjS4z5C4kuuhNODNK7XLZGUxYi0Pa78zVHaCaWTRskNBUqFBNAQ== oracle@tqdb21
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNLZOEguCk/87HUUtYnayz8klrehAk7bgK87F6zjdp6roaAXQDFiKKz5se2JmAoTKccZ2WvmYZvyRfhpyNJWV7ZdgsPwrk4iW/SpLDrH/m/5TaD3406ghp+rpziMdwpiHXeA6td00ZLA+ZL3HcIzG975K1PVurdZFBMj0uNPL3dJNwTKcdzEiXULgCLNSzbSvgmD8WZEarb9UfqS4uzq0jGct52uOELxHHwvlhAqCUDMma0wOcTLd/4eqCQUcqDCIjGpgiN7c2clLSJqLPGmiGx8S6rvg02AHxvxPm+2D3MvNpOwOkuJPbB9SQoyPyGroilslu+if7awbJYd6INpXP oracle@tqdb22
ssh-dss AAAAB3NzaC1kc3MAAACBAJ5Uuh7XD2mJhJUeIP3r46augiN0wc3Qksou9DA+v2QXaoUzBFrdeIdQ7wSYuLkp/1rZm6imS8PlFBd8uVPudybmh+jNwVtk3d18eYgJ8lunY115/7yhsDvS7yt+cYSIVFqoiGQUWPBfXM/oGUnT+RzPqMdrEz0K7mrWpMJffFh5AAAAFQCd/xbvKCr5cYNWwqF/WUQ0mQ0U6QAAAIBwBDpTCszu1WFeYzX1o2WVSEtnnaIX+BkeHELXa90Co1F2EPTNqoA1KDoCalw0dPKyyQYeG4SDXQ7AhSSAuvIc+xherUciFDjtNYW+uVGNot+++1zMVwKaj5T0EWmoNsw60ALKeLbWniBKKahwwbRKsUL7A49D0iaqRDX6d2X2IwAAAIBztUTFji8KD/0j2N9D4pa+opeKjz571i88Iy/R9JpN8XRz1XBxP/dkfPIOTXebaY7vFSeHb0HSP2Fd70yFhqIm14Kn0A2Uf7XnSjTRvDTub51XLKI2cJKi16EwgcMOnFFJBD+A9HfYlXtVGBl+uag07sEenLW4F2FWK57TkrDRcQ== oracle@tqdb22
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ scp /home/oracle/.ssh/authorized_keys oracle@tqdb22:~/.ssh/authorized_keys
oracle@tqdb22's password: 
authorized_keys                                                                                                                                                      100% 1996     2.5MB/s   00:00    
[oracle@tqdb21: ~/.ssh]$ 

# On node 2 (tqdb22): (inspect `authorized_keys`)
[oracle@tqdb22: ~/.ssh]$ ll
total 20
-rw-r--r--. 1 oracle oinstall 1996 Feb 11 23:46 authorized_keys
-rw-------. 1 oracle oinstall  668 Feb 11 23:33 id_dsa
-rw-r--r--. 1 oracle oinstall  603 Feb 11 23:33 id_dsa.pub
-rw-------. 1 oracle oinstall 1679 Feb 11 23:33 id_rsa
-rw-r--r--. 1 oracle oinstall  395 Feb 11 23:33 id_rsa.pub
[oracle@tqdb22: ~/.ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJzgjh6y8oKYHf7ebuWmxjESIfoquJ9r+xdr3SCzS76pIjDIBq0+Awh2GafvlwDFd/FfmTDxcz6q1blp73NEDQq6RZA8nudSvER/qY7wrUW41RnrHzt2X7WrmAZ8KBWqvAjYD925jxfODwVROXzj7kzwWoR1jzsZvJiARDaWNKuiQSnMlkaluE3BaSNnacvlNGVkjNi6rgybTGDcalojiYvBuIIgOP7t5N4vxYbT1oACuGjs+vmoKeFnPJmbvZeWStTOsJMkVqz04WMquoXgULHtTBJocRf4mCLF8wAMU0me6K1ywxx4FZKP57Bqq1N70EF+t+XtXlIf3R4zq5AJsH oracle@tqdb21
ssh-dss AAAAB3NzaC1kc3MAAACBAKUOvUgNh2W91m9nrftiov4cRsP8sdiz2Tnd4+6t0WCBgu+hcppe/RD2zv/Dn3Q3tmaGE7vkCzdMpvCuFr0dOX2bQZtu+e98itdn0s6iM1Wrbri1n6a9yNLbvNVXbW+WRpHMImePDS35C5zzQJFc0DXmxeZ0UQxsqR3ZE9NpFJ9/AAAAFQC1MRowodOePZVcMSunpKDL+SndowAAAIAMBGObmCEZZnCFfQ0NtT/YBNgdyBohULgUa+jUCWPJLXis1wNJjadoWVEW7+KKHPUdx7NfS4kmDKYQL4xkXLUBzRvQVYncskpWtxnZvNiw0g6iVrLc5+DCr2AOqz1rpaGQmsfunFOXAQ0OHgSf6bUzxdHcTK8sEL0dtBi1yNM+AgAAAIAN/3QY7mk2D6/dmpo9Mq75Mv+vDM4ln/9pApqJSgE/UEKre1v6VI73xIawV3eaetAdgbGDDhyEJYb8k0LI6b+Ptox0mtKFi92OmIIiDh07b/CmDucy8K7XM/NRjS4z5C4kuuhNODNK7XLZGUxYi0Pa78zVHaCaWTRskNBUqFBNAQ== oracle@tqdb21
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNLZOEguCk/87HUUtYnayz8klrehAk7bgK87F6zjdp6roaAXQDFiKKz5se2JmAoTKccZ2WvmYZvyRfhpyNJWV7ZdgsPwrk4iW/SpLDrH/m/5TaD3406ghp+rpziMdwpiHXeA6td00ZLA+ZL3HcIzG975K1PVurdZFBMj0uNPL3dJNwTKcdzEiXULgCLNSzbSvgmD8WZEarb9UfqS4uzq0jGct52uOELxHHwvlhAqCUDMma0wOcTLd/4eqCQUcqDCIjGpgiN7c2clLSJqLPGmiGx8S6rvg02AHxvxPm+2D3MvNpOwOkuJPbB9SQoyPyGroilslu+if7awbJYd6INpXP oracle@tqdb22
ssh-dss AAAAB3NzaC1kc3MAAACBAJ5Uuh7XD2mJhJUeIP3r46augiN0wc3Qksou9DA+v2QXaoUzBFrdeIdQ7wSYuLkp/1rZm6imS8PlFBd8uVPudybmh+jNwVtk3d18eYgJ8lunY115/7yhsDvS7yt+cYSIVFqoiGQUWPBfXM/oGUnT+RzPqMdrEz0K7mrWpMJffFh5AAAAFQCd/xbvKCr5cYNWwqF/WUQ0mQ0U6QAAAIBwBDpTCszu1WFeYzX1o2WVSEtnnaIX+BkeHELXa90Co1F2EPTNqoA1KDoCalw0dPKyyQYeG4SDXQ7AhSSAuvIc+xherUciFDjtNYW+uVGNot+++1zMVwKaj5T0EWmoNsw60ALKeLbWniBKKahwwbRKsUL7A49D0iaqRDX6d2X2IwAAAIBztUTFji8KD/0j2N9D4pa+opeKjz571i88Iy/R9JpN8XRz1XBxP/dkfPIOTXebaY7vFSeHb0HSP2Fd70yFhqIm14Kn0A2Uf7XnSjTRvDTub51XLKI2cJKi16EwgcMOnFFJBD+A9HfYlXtVGBl+uag07sEenLW4F2FWK57TkrDRcQ== oracle@tqdb22
[oracle@tqdb22: ~/.ssh]$ 

# Test the connections on each node; verify that re-running the commands no longer prompts for a password.
# On node 1 (tqdb21):
[oracle@tqdb21: ~]$ ssh tqdb21 date
The authenticity of host 'tqdb21 (192.168.6.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21,192.168.6.21' (ECDSA) to the list of known hosts.
Tue Feb 11 23:48:12 CST 2020
[oracle@tqdb21: ~]$ ssh tqdb22 date
Tue Feb 11 23:48:22 CST 2020
[oracle@tqdb21: ~]$ ssh tqdb21-priv date
The authenticity of host 'tqdb21-priv (172.16.8.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21-priv,172.16.8.21' (ECDSA) to the list of known hosts.
Tue Feb 11 23:48:42 CST 2020
[oracle@tqdb21: ~]$ ssh tqdb22-priv date 
The authenticity of host 'tqdb22-priv (172.16.8.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22-priv,172.16.8.22' (ECDSA) to the list of known hosts.
Tue Feb 11 23:49:06 CST 2020
[oracle@tqdb21: ~]$ 
[oracle@tqdb21: ~]$ ssh tqdb21 date
Tue Feb 11 23:49:35 CST 2020
[oracle@tqdb21: ~]$ ssh tqdb22 date
Tue Feb 11 23:49:39 CST 2020
[oracle@tqdb21: ~]$ ssh tqdb21-priv date
Tue Feb 11 23:49:48 CST 2020
[oracle@tqdb21: ~]$ ssh tqdb22-priv date 
Tue Feb 11 23:49:52 CST 2020
[oracle@tqdb21: ~]$ date; ssh tqdb22 date
Tue Feb 11 23:50:15 CST 2020
Tue Feb 11 23:50:15 CST 2020
[oracle@tqdb21: ~]$ date; ssh tqdb22-priv date
Tue Feb 11 23:50:40 CST 2020
Tue Feb 11 23:50:40 CST 2020
[oracle@tqdb21: ~]$ 

# On node 2 (tqdb22):
[oracle@tqdb22: ~]$ ssh tqdb21 date
The authenticity of host 'tqdb21 (192.168.6.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21,192.168.6.21' (ECDSA) to the list of known hosts.
Tue Feb 11 23:53:12 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb22 date
The authenticity of host 'tqdb22 (192.168.6.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22,192.168.6.22' (ECDSA) to the list of known hosts.
Tue Feb 11 23:53:25 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb21-priv date
The authenticity of host 'tqdb21-priv (172.16.8.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21-priv,172.16.8.21' (ECDSA) to the list of known hosts.
Tue Feb 11 23:53:35 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb22-priv date 
The authenticity of host 'tqdb22-priv (172.16.8.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22-priv,172.16.8.22' (ECDSA) to the list of known hosts.
Tue Feb 11 23:53:42 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb21 date
Tue Feb 11 23:53:59 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb22 date
Tue Feb 11 23:54:03 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb21-priv date
Tue Feb 11 23:54:08 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb22-priv date 
Tue Feb 11 23:54:12 CST 2020
[oracle@tqdb22: ~]$ date; ssh tqdb21 date
Tue Feb 11 23:54:29 CST 2020
Tue Feb 11 23:54:29 CST 2020
[oracle@tqdb22: ~]$ date; ssh tqdb21-priv date 
Tue Feb 11 23:54:41 CST 2020
Tue Feb 11 23:54:41 CST 2020
[oracle@tqdb22: ~]$ 
​```


</code></pre>
</blockquote>
<h4>2.21.2 SSH User Equivalency for the grid User</h4>
<blockquote><p>
  Commands: set up SSH user equivalency for the grid user (run on both nodes as the grid user)</p>
<pre><code class="language-bash line-numbers">-- 1. Run on node 1 (tqdb21):
root# su - grid
grid$ mkdir ~/.ssh
grid$ chmod 700 ~/.ssh/
grid$ ssh-keygen -t rsa
grid$ ssh-keygen -t dsa

-- 2. Run on node 2 (tqdb22):
root# su - grid
grid$ mkdir ~/.ssh
grid$ chmod 700 ~/.ssh/
grid$ ssh-keygen -t rsa
grid$ ssh-keygen -t dsa

-- 3. Run on node 1 (tqdb21):
grid$ cd ~/.ssh
grid$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
grid$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
grid$ ssh grid@tqdb22 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
grid$ ssh grid@tqdb22 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys 
grid$ scp /home/grid/.ssh/authorized_keys grid@tqdb22:~/.ssh/authorized_keys

-- 4. Run on node 2 (tqdb22): (inspect the `authorized_keys` file copied over from tqdb21 via scp)
grid$ cd ~/.ssh
grid$ ll
grid$ cat authorized_keys

Test the connections on each node. Verify that you are no longer prompted for a password when you run the following commands again.
-- 5. Run on node 1 (tqdb21):
-- First run: you must answer `yes` to the host-key prompt
grid$ ssh tqdb21 date
grid$ ssh tqdb22 date
grid$ ssh tqdb21-priv date
grid$ ssh tqdb22-priv date
-- Second run: no more `yes` prompt; the result comes back immediately, confirming SSH equivalency is configured.
grid$ ssh tqdb21 date
grid$ ssh tqdb22 date
grid$ ssh tqdb21-priv date
grid$ ssh tqdb22-priv date
grid$ date; ssh tqdb22 date
grid$ date; ssh tqdb22-priv date

-- 6. Run on node 2 (tqdb22):
-- First run: you must answer `yes` to the host-key prompt
grid$ ssh tqdb21 date
grid$ ssh tqdb22 date
grid$ ssh tqdb21-priv date
grid$ ssh tqdb22-priv date
-- Second run: no more `yes` prompt; the result comes back immediately, confirming SSH equivalency is configured.
grid$ ssh tqdb21 date
grid$ ssh tqdb22 date
grid$ ssh tqdb21-priv date
grid$ ssh tqdb22-priv date
grid$ date; ssh tqdb22 date
grid$ date; ssh tqdb22-priv date
</code></pre>
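Once both passes above succeed, the whole matrix of host/interface pairs can be re-checked non-interactively. This is a minimal sketch (not part of the original procedure) that prints one check command per public/private name; the hostnames tqdb21/tqdb22 and the `-priv` aliases are the ones used in this install:

```shell
# Sketch: enumerate every host/interface pair that must work without a
# password prompt, and print the corresponding non-interactive check command.
build_checks() {
  local h s
  for h in tqdb21 tqdb22; do
    for s in "" "-priv"; do
      # BatchMode=yes disables password prompts, so a broken equivalence
      # fails fast with a non-zero exit instead of hanging on a prompt.
      echo "ssh -o BatchMode=yes ${h}${s} date"
    done
  done
}
build_checks
```

Running the printed commands on each node (for example via `build_checks | sh`) confirms all four targets answer without prompting.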
<p>  Execution log:</p>
<pre><code class="language-bash line-numbers">Set up SSH equivalency (run on both nodes as the grid user)
-- 1. Run on node 1 (tqdb21):
​```
[root@tqdb21: ~]# su - grid
Last login: Tue Feb 11 23:19:09 CST 2020 on pts/0
[grid@tqdb21: ~]$ l.
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .cache  .config  .kshrc  .mozilla  .viminfo
[grid@tqdb21: ~]$ mkdir ~/.ssh
[grid@tqdb21: ~]$ chmod 700 ~/.ssh/
[grid@tqdb21: ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:+eIXXZgW7lXkgp8YS/AWr4NEqsk5a7brYSc4q8aODWw grid@tqdb21
The key's randomart image is:
+---[RSA 2048]----+
|          o .  ..|
|         o o.+ ..|
|        . ..*+o..|
|     . + o +=*oo |
|      * S .+=oo  |
|.    . o .. o.   |
|.E  o B o ..     |
|.+o  * * ..      |
|.o+...+...       |
+----[SHA256]-----+
[grid@tqdb21: ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:HIEiASc3W7qUMbqeHzLY22EyTtIRRg8ktp0zJ0ISax4 grid@tqdb21
The key's randomart image is:
+---[DSA 1024]----+
|**@..  ..        |
|+O+@o .  .       |
|oEBB.o  .        |
|o+oo=  . .       |
|..o     S        |
|oo..             |
|o=*.o            |
| ++*..           |
|  o..            |
+----[SHA256]-----+
[grid@tqdb21: ~]$ ll .ssh/
total 16
-rw-------. 1 grid oinstall  668 Feb 12 00:37 id_dsa
-rw-r--r--. 1 grid oinstall  601 Feb 12 00:37 id_dsa.pub
-rw-------. 1 grid oinstall 1679 Feb 12 00:36 id_rsa
-rw-r--r--. 1 grid oinstall  393 Feb 12 00:36 id_rsa.pub
[grid@tqdb21: ~]$ 
​```

-- 2. Run on node 2 (tqdb22):
​```
[root@tqdb22: ~]# su - grid  
Last login: Wed Feb 12 00:38:22 CST 2020 on pts/0
[grid@tqdb22: ~]$ l.
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .cache  .config  .kshrc  .mozilla  .viminfo
[grid@tqdb22: ~]$ mkdir ~/.ssh
[grid@tqdb22: ~]$ chmod 700 ~/.ssh/
[grid@tqdb22: ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:cUEInunmtLCI/lu1MEHLYMXHw3Jr02INyvrgKocA2NU grid@tqdb22
The key's randomart image is:
+---[RSA 2048]----+
|   oo++. oo      |
|  . =+EX.  .     |
|.. ..+O B .      |
|o .  +.* =       |
|.   oo*.S        |
|.. + *+..        |
|o.o +.o.         |
|+ ....           |
|.+oo.            |
+----[SHA256]-----+
[grid@tqdb22: ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:Pss+L5zH1Gbyr+c8u3i79/R3UK0SJDzb6jzO6aPPplw grid@tqdb22
The key's randomart image is:
+---[DSA 1024]----+
|         .       |
|          + .    |
|           *    .|
|          . o   o|
|        S  o . o |
|       .  + = o  |
|       .o*E= . ..|
|       o=*O..o+.=|
|       .B%Ooo*OB*|
+----[SHA256]-----+
[grid@tqdb22: ~]$ ll .ssh/
total 16
-rw-------. 1 grid oinstall  668 Feb 12 00:39 id_dsa
-rw-r--r--. 1 grid oinstall  601 Feb 12 00:39 id_dsa.pub
-rw-------. 1 grid oinstall 1675 Feb 12 00:39 id_rsa
-rw-r--r--. 1 grid oinstall  393 Feb 12 00:39 id_rsa.pub
[grid@tqdb22: ~]$ 
​```

-- 3. Run on node 1 (tqdb21):
​```
[grid@tqdb21: ~]$ cd ~/.ssh
[grid@tqdb21: ~/.ssh]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[grid@tqdb21: ~/.ssh]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@tqdb21: ~/.ssh]$ ssh grid@tqdb22 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'tqdb22 (192.168.6.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22,192.168.6.22' (ECDSA) to the list of known hosts.
grid@tqdb22's password: 
[grid@tqdb21: ~/.ssh]$ ssh grid@tqdb22 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
grid@tqdb22's password: 
[grid@tqdb21: ~/.ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyn//M6EKUXPcLk9lcennl3mN4CxFHupvFa6vR2eoGn40SNA1Yqtzhm+EI5I7O5JLT7zCUHu3aV8kIoXZdnivrw93VB8INeUIi9ZN98KDakq+nrORC3c9fZLnwCsNti+4Qy6vl2gy+fH0yR/vyhv8AVmIUe86jl8ql6TX5Xo2aS0YidD7okLumnKbCCK8HF58sPvr5j5fyMrq/w7xNhuq8jdrjOurVjcutu7u/xfSXCHvYnqbQDJbRj03bBOlhk63HYIgZYsB04/aMvngaq0lZ2XrXjBqfOq8OYBKLATgPyhoFDoD4IDymvrAm3Jsc3ZEy2wV3oOhiV895Kkvp4DtH grid@tqdb21
ssh-dss AAAAB3NzaC1kc3MAAACBAMmx0zcjtMOo8cIDWQamFqTKYN/ac0dHmRzpd2XeKTOUe7l2TeRiGilMIeyH+5i9CQ2bZzCPszd+KyJpf2BpKotsRKKlM+P09sDtDyteoTvCMyIGIvT24yQfriSFP3R5yKo0XJv0NEI6VX00wnsG5wzyaEad0FaSYdi/HlYar3MBAAAAFQCfQMu2F1kOm86SraJ2dL2m8ZZ8lwAAAIB9Tu2oCjL8wr2mDhBfaM7vg0bl6XEyYVm0b71uEzkklLKingyP38Jr3BIQ4+DSUv3+anfpJFGz1FouKI6Sow8oJCstJAx5CYM1AiHHVn5wTwBNW475kqWHTYaqwldYGjj2GwnB81QhVz+i4k4RLoMDi3BTOwoHC+hIwX6CrwumEgAAAIAWWDUYYg0b1ppVijCqU7VdXeS9FIdnlaA8puhVccSF5mzHcg4x0cQ1mWEnQjpFv1+/NsGdUoPidHWV2YQY6CRKP0xDN4nae/Cw1tKHOhnICGvrMVG8nZmDTnBUGDEGn22v4Mn5YGMAo+AclxcfYbpRnITCi4lqIHqgfm/YYWL1rA== grid@tqdb21
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyTO5PjRN3LAYgvXN6vVgTuRT6/ZMIi1ZwKiKYAzfDECwfVMZpgO27/67JhsS4QYkf0qpAHrRrNsTLiMJJ2n0xAQ4y1PcGeKLttsUH3w5+bkwJtIyLN2uVdne6Knc/WOwB4RjNO/iNSxIQhwSiuxdp2vZqfHI0R7hAu4x7gSwMUhGTyu5kOlzpb5G0Y8bdr+U41Cc4hgSFeRkTs2fBe4aMZTz3tmX0Qie7xEZm8RtAoq3D0lEjNGq+YziBlrDLhwTldaiR3itsQdqhXVweEq0b6jotou4eFPYwya+DayQikFGXVczwaWEFNRljNzMZ3MWQSbghLFJ3ZGwwrgSBk6yx grid@tqdb22
ssh-dss AAAAB3NzaC1kc3MAAACBALXYa30uXp5U1GHO64E/5hXPBxnfs6yq3Snzz4omSSPbrTkXodWhXsUx5ywCEDC9j35KdjP6yFpzQgL5f+5/8PLsxuBoC/m/r3sMrjsTYXA/302WpalZuAXFUU8EkRdSYlpMhLB3PfWgwn9XYfAJJ3G1bDXmmAFECmlMhruGGI2DAAAAFQC4BY4Fjoh8CHdzZHEZwbC+iYYpgQAAAIBF5XDGw0oBoSAOpNysk+rI4AZSQmTwVhuNVnJIFkARdbW1rHLLALFak+BdOSgwg0JkPqPAA3l/cthT8TdzxDKE5H4WhQVpM8noYYo8V5MuE78vtHObRVwZ8APOr9NAbQ8QvdgG5huhnMx1M6esWFJ8GORtZ1r/pcyfHf1oDubOrwAAAIAjU9QOuuwNyKQaJZM2v+8l6T1Qv8psAtve1nHGOk0repiBvG5B6ucmB7e3Ae6EMj5Gw/M8jhocs+uspB1FcKNhHyT/SW7lMoAfFKtT+PzZmaWTKsNZSGQ/HVCWwUr8o3uIgcnW0SpEDrthfsApEM+d5Mpr7Hxuz2vyccBU9g0WJA== grid@tqdb22
[grid@tqdb21: ~/.ssh]$ scp /home/grid/.ssh/authorized_keys grid@tqdb22:~/.ssh/authorized_keys
grid@tqdb22's password: 
authorized_keys                                                                                                                                                      100% 1988     2.3MB/s   00:00    
[grid@tqdb21: ~/.ssh]$ 
​```

-- 4. Run on node 2 (tqdb22): (inspect the `authorized_keys` file copied over from tqdb21 via scp)
​```
[grid@tqdb22: ~]$ cd ~/.ssh/
[grid@tqdb22: ~/.ssh]$ ll
total 20
-rw-r--r--. 1 grid oinstall 1988 Feb 12 00:43 authorized_keys
-rw-------. 1 grid oinstall  668 Feb 12 00:39 id_dsa
-rw-r--r--. 1 grid oinstall  601 Feb 12 00:39 id_dsa.pub
-rw-------. 1 grid oinstall 1675 Feb 12 00:39 id_rsa
-rw-r--r--. 1 grid oinstall  393 Feb 12 00:39 id_rsa.pub
[grid@tqdb22: ~/.ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyn//M6EKUXPcLk9lcennl3mN4CxFHupvFa6vR2eoGn40SNA1Yqtzhm+EI5I7O5JLT7zCUHu3aV8kIoXZdnivrw93VB8INeUIi9ZN98KDakq+nrORC3c9fZLnwCsNti+4Qy6vl2gy+fH0yR/vyhv8AVmIUe86jl8ql6TX5Xo2aS0YidD7okLumnKbCCK8HF58sPvr5j5fyMrq/w7xNhuq8jdrjOurVjcutu7u/xfSXCHvYnqbQDJbRj03bBOlhk63HYIgZYsB04/aMvngaq0lZ2XrXjBqfOq8OYBKLATgPyhoFDoD4IDymvrAm3Jsc3ZEy2wV3oOhiV895Kkvp4DtH grid@tqdb21
ssh-dss AAAAB3NzaC1kc3MAAACBAMmx0zcjtMOo8cIDWQamFqTKYN/ac0dHmRzpd2XeKTOUe7l2TeRiGilMIeyH+5i9CQ2bZzCPszd+KyJpf2BpKotsRKKlM+P09sDtDyteoTvCMyIGIvT24yQfriSFP3R5yKo0XJv0NEI6VX00wnsG5wzyaEad0FaSYdi/HlYar3MBAAAAFQCfQMu2F1kOm86SraJ2dL2m8ZZ8lwAAAIB9Tu2oCjL8wr2mDhBfaM7vg0bl6XEyYVm0b71uEzkklLKingyP38Jr3BIQ4+DSUv3+anfpJFGz1FouKI6Sow8oJCstJAx5CYM1AiHHVn5wTwBNW475kqWHTYaqwldYGjj2GwnB81QhVz+i4k4RLoMDi3BTOwoHC+hIwX6CrwumEgAAAIAWWDUYYg0b1ppVijCqU7VdXeS9FIdnlaA8puhVccSF5mzHcg4x0cQ1mWEnQjpFv1+/NsGdUoPidHWV2YQY6CRKP0xDN4nae/Cw1tKHOhnICGvrMVG8nZmDTnBUGDEGn22v4Mn5YGMAo+AclxcfYbpRnITCi4lqIHqgfm/YYWL1rA== grid@tqdb21
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyTO5PjRN3LAYgvXN6vVgTuRT6/ZMIi1ZwKiKYAzfDECwfVMZpgO27/67JhsS4QYkf0qpAHrRrNsTLiMJJ2n0xAQ4y1PcGeKLttsUH3w5+bkwJtIyLN2uVdne6Knc/WOwB4RjNO/iNSxIQhwSiuxdp2vZqfHI0R7hAu4x7gSwMUhGTyu5kOlzpb5G0Y8bdr+U41Cc4hgSFeRkTs2fBe4aMZTz3tmX0Qie7xEZm8RtAoq3D0lEjNGq+YziBlrDLhwTldaiR3itsQdqhXVweEq0b6jotou4eFPYwya+DayQikFGXVczwaWEFNRljNzMZ3MWQSbghLFJ3ZGwwrgSBk6yx grid@tqdb22
ssh-dss AAAAB3NzaC1kc3MAAACBALXYa30uXp5U1GHO64E/5hXPBxnfs6yq3Snzz4omSSPbrTkXodWhXsUx5ywCEDC9j35KdjP6yFpzQgL5f+5/8PLsxuBoC/m/r3sMrjsTYXA/302WpalZuAXFUU8EkRdSYlpMhLB3PfWgwn9XYfAJJ3G1bDXmmAFECmlMhruGGI2DAAAAFQC4BY4Fjoh8CHdzZHEZwbC+iYYpgQAAAIBF5XDGw0oBoSAOpNysk+rI4AZSQmTwVhuNVnJIFkARdbW1rHLLALFak+BdOSgwg0JkPqPAA3l/cthT8TdzxDKE5H4WhQVpM8noYYo8V5MuE78vtHObRVwZ8APOr9NAbQ8QvdgG5huhnMx1M6esWFJ8GORtZ1r/pcyfHf1oDubOrwAAAIAjU9QOuuwNyKQaJZM2v+8l6T1Qv8psAtve1nHGOk0repiBvG5B6ucmB7e3Ae6EMj5Gw/M8jhocs+uspB1FcKNhHyT/SW7lMoAfFKtT+PzZmaWTKsNZSGQ/HVCWwUr8o3uIgcnW0SpEDrthfsApEM+d5Mpr7Hxuz2vyccBU9g0WJA== grid@tqdb22
[grid@tqdb22: ~/.ssh]$ 
​```

Test the connections on each node. Verify that you are no longer prompted for a password when you run the following commands again.
-- 5. Run on node 1 (tqdb21):
-- First run: you must answer `yes` to the host-key prompt
​```
[grid@tqdb21: ~]$ ssh tqdb21 date
The authenticity of host 'tqdb21 (192.168.6.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21,192.168.6.21' (ECDSA) to the list of known hosts.
Wed Feb 12 00:45:51 CST 2020
[grid@tqdb21: ~]$ ssh tqdb22 date
Wed Feb 12 00:45:59 CST 2020
[grid@tqdb21: ~]$ ssh tqdb21-priv date
The authenticity of host 'tqdb21-priv (172.16.8.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21-priv,172.16.8.21' (ECDSA) to the list of known hosts.
Wed Feb 12 00:46:14 CST 2020
[grid@tqdb21: ~]$ ssh tqdb22-priv date
The authenticity of host 'tqdb22-priv (172.16.8.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22-priv,172.16.8.22' (ECDSA) to the list of known hosts.
Wed Feb 12 00:46:25 CST 2020
[grid@tqdb21: ~]$ 
​```
-- Second run: no more `yes` prompt; the result comes back immediately, confirming SSH equivalency is configured.
​```
[grid@tqdb21: ~]$ ssh tqdb21 date
Wed Feb 12 00:47:08 CST 2020
[grid@tqdb21: ~]$ ssh tqdb22 date
Wed Feb 12 00:47:13 CST 2020
[grid@tqdb21: ~]$ ssh tqdb21-priv date
Wed Feb 12 00:47:18 CST 2020
[grid@tqdb21: ~]$ ssh tqdb22-priv date 
Wed Feb 12 00:47:24 CST 2020
[grid@tqdb21: ~]$ date; ssh tqdb22 date
Wed Feb 12 00:47:36 CST 2020
Wed Feb 12 00:47:36 CST 2020
[grid@tqdb21: ~]$ date; ssh tqdb22-priv date
Wed Feb 12 00:47:52 CST 2020
Wed Feb 12 00:47:52 CST 2020
[grid@tqdb21: ~]$ 
​```

-- 6. Run on node 2 (tqdb22):
-- First run: you must answer `yes` to the host-key prompt
​```
[grid@tqdb22: ~]$ ssh tqdb21 date
The authenticity of host 'tqdb21 (192.168.6.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21,192.168.6.21' (ECDSA) to the list of known hosts.
Wed Feb 12 00:49:04 CST 2020
[grid@tqdb22: ~]$ ssh tqdb22 date
The authenticity of host 'tqdb22 (192.168.6.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22,192.168.6.22' (ECDSA) to the list of known hosts.
Wed Feb 12 00:49:14 CST 2020
[grid@tqdb22: ~]$ ssh tqdb21-priv date
The authenticity of host 'tqdb21-priv (172.16.8.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21-priv,172.16.8.21' (ECDSA) to the list of known hosts.
Wed Feb 12 00:49:24 CST 2020
[grid@tqdb22: ~]$ ssh tqdb22-priv date
The authenticity of host 'tqdb22-priv (172.16.8.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22-priv,172.16.8.22' (ECDSA) to the list of known hosts.
Wed Feb 12 00:49:34 CST 2020
[grid@tqdb22: ~]$ 
​```
-- Second run: no more `yes` prompt; the result comes back immediately, confirming SSH equivalency is configured.
​```
[grid@tqdb22: ~]$ ssh tqdb21 date
Wed Feb 12 00:49:58 CST 2020
[grid@tqdb22: ~]$ ssh tqdb22 date
Wed Feb 12 00:50:02 CST 2020
[grid@tqdb22: ~]$ ssh tqdb21-priv date
Wed Feb 12 00:50:08 CST 2020
[grid@tqdb22: ~]$ ssh tqdb22-priv date 
Wed Feb 12 00:50:13 CST 2020
[grid@tqdb22: ~]$ date; ssh tqdb22 date
Wed Feb 12 00:50:27 CST 2020
Wed Feb 12 00:50:27 CST 2020
[grid@tqdb22: ~]$ date; ssh tqdb22-priv date
Wed Feb 12 00:50:39 CST 2020
Wed Feb 12 00:50:39 CST 2020
[grid@tqdb22: ~]$ 
​```
</code></pre>
</blockquote>
<h3>2.22 Configure udev</h3>
<blockquote><p>
  Note: <code>/dev/sda</code> is the local system disk; <code>/dev/sdb</code> <code>/dev/sdc</code> <code>/dev/sdd</code> <code>/dev/sde</code> <code>/dev/sdf</code> are the shared disks.<br />
  Of these:<br />
  <code>/dev/sdb</code> <code>/dev/sdc</code> <code>/dev/sdd</code> are 2 GB each, used for the OCR & voting disks<br />
  <code>/dev/sde</code> <code>/dev/sdf</code> are 50 GB each, used for the DATA disk group
</p></blockquote>
<h4>2.22.1 Configure multipath</h4>
<blockquote>
<pre><code class="language-bash line-numbers"># Get the `scsi id` of the local system disk as seen by the OS
[root@tqdb21: /dev]# /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sda
1ATA_VBOX_HARDDISK_VB83906d1c-ce109a80
[root@tqdb21: /dev]# 


# Edit the config file: blacklist the `scsi id` of the local system disk `/dev/sda` obtained above,
# and configure multipath aliases for sdb through sdf
root# vim /etc/multipath.conf
​```
## In a VirtualBox VM, `--replace-whitespace` is required in the scsi_id call here: the raw id contains whitespace, which this option turns into `_`
# Comment out the getuid_callout parameter
# In VirtualBox, the wwid is the `uuid` listed under `paths list` in the `multipath -v3` output (e.g. `VBOX_HARDDISK_VB043f2aa4-f6c46e2f`),
# while `/lib/udev/scsi_id` prints `1ATA_VBOX_HARDDISK_VB043f2aa4-f6c46e2f` (an extra `1ATA_` prefix), hence the error below:
# ```
#[root@tq1: /etc/multipath]# multipath -ll
#Jan 15 17:20:22 | /etc/multipath.conf line 3, invalid keyword: getuid_callout
# ```
#

# 
defaults {
  #getuid_callout          "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
  #getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
  user_friendly_names     no
}

# Blacklist the local disk (/dev/sda)
blacklist {
  wwid VBOX_HARDDISK_VB83906d1c-ce109a80
  devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
  devnode "^hd[a-z]"
  devnode "^cciss.*"
}



# Multipath aliases
multipaths {
  multipath {
          wwid                    VBOX_HARDDISK_VB0ca65770-cab71c4b
          alias                   ocr1
  }
  multipath {
          wwid                    VBOX_HARDDISK_VB724aeb83-ea9f4e0d
          alias                   ocr2
  }
  multipath {
          wwid                    VBOX_HARDDISK_VBe8c6318c-edea7981
          alias                   ocr3
  }
  multipath {
          wwid                    VBOX_HARDDISK_VBa1560564-d49dac72
          alias                   data01
  }
  multipath {
          wwid                    VBOX_HARDDISK_VB27f01f95-b61e143b
          alias                   data02
  }
}
​```

-- Enable multipathd.service at boot
# systemctl enable multipathd.service
# systemctl restart multipathd.service
# systemctl status multipathd.service
</code></pre>
</blockquote>
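The `1ATA_` prefix mismatch noted above (the `scsi_id` output versus the bare wwid that `multipath.conf` expects) can be bridged with a tiny helper. This is a sketch of my own, assuming the VirtualBox `1ATA_` transport prefix described in the comments:

```shell
# Sketch: convert /lib/udev/scsi_id output into the bare wwid used in the
# multipaths{} stanzas, by stripping the VirtualBox "1ATA_" prefix.
to_wwid() {
  local id="$1"
  printf '%s\n' "${id#1ATA_}"   # no-op if the prefix is absent
}
to_wwid "1ATA_VBOX_HARDDISK_VB83906d1c-ce109a80"   # prints VBOX_HARDDISK_VB83906d1c-ce109a80
```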
<h4>2.22.2 Configure the udev rules (<code>99-oracle-asmdevices.rules</code>)</h4>
<blockquote>
<pre><code class="language-bash line-numbers">​```
-- The `99-oracle-asmdevices.rules` file has identical contents on both nodes.
[root@tqdb21: /etc/udev/rules.d]# vim 99-oracle-asmdevices.rules 
​```
# /dev/sdb multipath ==> `/dev/mapper/ocr1 -> ../dm-0`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b", SYMLINK+="asm-ocr1", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdc multipath ==> `/dev/mapper/ocr2 -> ../dm-1`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB724aeb83-ea9f4e0d", SYMLINK+="asm-ocr2", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdd multipath ==> `/dev/mapper/ocr3 -> ../dm-2`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBe8c6318c-edea7981", SYMLINK+="asm-ocr3", OWNER="grid",GROUP="asmadmin", MODE="0660"

# /dev/sde multipath ==> `/dev/mapper/data01 -> ../dm-3`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBa1560564-d49dac72", SYMLINK+="asm-data01", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdf multipath ==> `/dev/mapper/data02 -> ../dm-4`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB27f01f95-b61e143b", SYMLINK+="asm-data02", OWNER="grid",GROUP="asmadmin", MODE="0660"
​```
[root@tqdb21: /etc/udev/rules.d]# 
​```

-- Reload the udev rules and re-trigger device events
# udevadm control --reload-rules 
# udevadm trigger 
# systemctl status systemd-udevd.service
# systemctl enable systemd-udevd.service

-- Check whether the `99-oracle-asmdevices.rules` rules took effect (both the symlink names and the permissions)
# ll /dev/asm-*
# ll /dev/dm-*

-- Inspect storage information for the block devices
(echo -e "\n输出结果: \n1. 查看'/dev'和'/dev/mapper'目录: (注意查看权限)" && ls -l /dev/asm* /dev/mapper/data* /dev/mapper/ocr* /dev/dm* /dev/sd[b-f]) && 
(echo -e "\n2. 查看块(block)设备: "&& lsblk -f) && 
(echo -e "\n3. 查看多路径配置" && cat /etc/multipath.conf | grep -A3 "multipath {") && 
(echo -e "\n4. '/dev/sdb'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb) && 
(echo -e "\n   '/dev/sdc'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdc) && 
(echo -e "\n   '/dev/sdd'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdd) && 
(echo -e "\n   '/dev/sde'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sde) && 
(echo -e "\n   '/dev/sdf'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdf)
</code></pre>
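Since the five rule lines above differ only in the scsi id and the alias, they can also be generated instead of hand-edited. A sketch (the generator function is my own; the rule text mirrors the hand-written rules, including the same OWNER/GROUP/MODE values):

```shell
# Sketch: emit one 99-oracle-asmdevices.rules line per "alias scsi_id" pair,
# using the same PROGRAM/RESULT/SYMLINK/OWNER/GROUP/MODE layout as above.
# Note the format string is single-quoted so $name stays literal for udev.
make_rule() {
  local alias="$1" id="$2"
  printf 'KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", SYMLINK+="asm-%s", OWNER="grid",GROUP="asmadmin", MODE="0660"\n' "$id" "$alias"
}
make_rule ocr1   1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b
make_rule data01 1ATA_VBOX_HARDDISK_VBa1560564-d49dac72
```

Redirecting the output of a few `make_rule` calls into `/etc/udev/rules.d/99-oracle-asmdevices.rules` reproduces the file shown above.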
<p>  tqdb21 execution log:</p>
<pre><code class="language-bash line-numbers">[root@tqdb21: /etc/udev/rules.d]# cat 99-oracle-asmdevices.rules 
# /dev/sdb multipath ==> `/dev/mapper/ocr1 -> ../dm-0`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b", SYMLINK+="asm-ocr1", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdc multipath ==> `/dev/mapper/ocr2 -> ../dm-1`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB724aeb83-ea9f4e0d", SYMLINK+="asm-ocr2", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdd multipath ==> `/dev/mapper/ocr3 -> ../dm-2`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBe8c6318c-edea7981", SYMLINK+="asm-ocr3", OWNER="grid",GROUP="asmadmin", MODE="0660"

# /dev/sde multipath ==> `/dev/mapper/data01 -> ../dm-3`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBa1560564-d49dac72", SYMLINK+="asm-data01", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdf multipath ==> `/dev/mapper/data02 -> ../dm-4`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB27f01f95-b61e143b", SYMLINK+="asm-data02", OWNER="grid",GROUP="asmadmin", MODE="0660"

[root@tqdb21: /etc/udev/rules.d]# 
[root@tqdb21: /etc/udev/rules.d]# udevadm control --reload-rules 
[root@tqdb21: /etc/udev/rules.d]# udevadm trigger 
[root@tqdb21: /etc/udev/rules.d]# systemctl status systemd-udevd.service
● systemd-udevd.service - udev Kernel Device Manager
Loaded: loaded (/usr/lib/systemd/system/systemd-udevd.service; static; vendor preset: disabled)
Active: active (running) since Tue 2020-02-11 22:06:25 CST; 4h 34min ago
  Docs: man:systemd-udevd.service(8)
        man:udev(7)
Main PID: 484 (systemd-udevd)
Status: "Processing with 10 children at max"
 Tasks: 1
CGroup: /system.slice/systemd-udevd.service
        └─484 /usr/lib/systemd/systemd-udevd

Feb 11 22:06:25 tqdb21 systemd[1]: Starting udev Kernel Device Manager...
Feb 11 22:06:25 tqdb21 systemd-udevd[484]: starting version 219
Feb 11 22:06:25 tqdb21 systemd[1]: Started udev Kernel Device Manager.
Feb 11 22:06:36 tqdb21 kvm[1374]: 1 guest now active
Feb 11 22:06:36 tqdb21 kvm[1375]: 0 guests now active
Feb 11 22:06:36 tqdb21 kvm[1377]: 1 guest now active
Feb 11 22:06:36 tqdb21 kvm[1379]: 0 guests now active
Feb 11 22:06:36 tqdb21 kvm[1381]: 1 guest now active
Feb 11 22:06:36 tqdb21 kvm[1386]: 0 guests now active
[root@tqdb21: /etc/udev/rules.d]# systemctl enable systemd-udevd.service
[root@tqdb21: /etc/udev/rules.d]# 
[root@tqdb21: /etc/udev/rules.d]# ll /dev/asm-*
lrwxrwxrwx. 1 root root 4 Feb 12 02:40 /dev/asm-data01 -> dm-3
lrwxrwxrwx. 1 root root 4 Feb 12 02:40 /dev/asm-data02 -> dm-4
lrwxrwxrwx. 1 root root 4 Feb 12 02:40 /dev/asm-ocr1 -> dm-0
lrwxrwxrwx. 1 root root 4 Feb 12 02:40 /dev/asm-ocr2 -> dm-1
lrwxrwxrwx. 1 root root 4 Feb 12 02:40 /dev/asm-ocr3 -> dm-2
[root@tqdb21: /etc/udev/rules.d]# 
[root@tqdb21: /etc/udev/rules.d]# ll /dev/dm-*
brw-rw----. 1 grid asmadmin 253, 0 Feb 12 02:40 /dev/dm-0
brw-rw----. 1 grid asmadmin 253, 1 Feb 12 02:40 /dev/dm-1
brw-rw----. 1 grid asmadmin 253, 2 Feb 12 02:40 /dev/dm-2
brw-rw----. 1 grid asmadmin 253, 3 Feb 12 02:40 /dev/dm-3
brw-rw----. 1 grid asmadmin 253, 4 Feb 12 02:40 /dev/dm-4
[root@tqdb21: /etc/udev/rules.d]# 
[root@tqdb21: ~]# (echo -e "\n输出结果: \n1. 查看'/dev'和'/dev/mapper'目录: (注意查看权限)" && ls -l /dev/asm* /dev/mapper/data* /dev/mapper/ocr* /dev/dm* /dev/sd[b-f]) && 
> (echo -e "\n2. 查看块(block)设备: "&& lsblk -f) && 
> (echo -e "\n3. 查看多路径配置" && cat /etc/multipath.conf | grep -A3 "multipath {") && 
> (echo -e "\n4. '/dev/sdb'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb) && 
> (echo -e "\n   '/dev/sdc'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdc) && 
> (echo -e "\n   '/dev/sdd'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdd) && 
> (echo -e "\n   '/dev/sde'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sde) && 
> (echo -e "\n   '/dev/sdf'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdf)

输出结果: 
1. 查看'/dev'和'/dev/mapper'目录: (注意查看权限)
lrwxrwxrwx. 1 root root           4 Feb 12 03:24 /dev/asm-data01 -> dm-3
lrwxrwxrwx. 1 root root           4 Feb 12 03:24 /dev/asm-data02 -> dm-4
lrwxrwxrwx. 1 root root           4 Feb 12 03:24 /dev/asm-ocr1 -> dm-0
lrwxrwxrwx. 1 root root           4 Feb 12 03:24 /dev/asm-ocr2 -> dm-1
lrwxrwxrwx. 1 root root           4 Feb 12 03:24 /dev/asm-ocr3 -> dm-2
brw-rw----. 1 grid asmadmin 253,  0 Feb 12 03:24 /dev/dm-0
brw-rw----. 1 grid asmadmin 253,  1 Feb 12 03:24 /dev/dm-1
brw-rw----. 1 grid asmadmin 253,  2 Feb 12 03:24 /dev/dm-2
brw-rw----. 1 grid asmadmin 253,  3 Feb 12 03:24 /dev/dm-3
brw-rw----. 1 grid asmadmin 253,  4 Feb 12 03:24 /dev/dm-4
lrwxrwxrwx. 1 root root           7 Feb 12 03:24 /dev/mapper/data01 -> ../dm-3
lrwxrwxrwx. 1 root root           7 Feb 12 03:24 /dev/mapper/data02 -> ../dm-4
lrwxrwxrwx. 1 root root           7 Feb 12 03:24 /dev/mapper/ocr1 -> ../dm-0
lrwxrwxrwx. 1 root root           7 Feb 12 03:24 /dev/mapper/ocr2 -> ../dm-1
lrwxrwxrwx. 1 root root           7 Feb 12 03:24 /dev/mapper/ocr3 -> ../dm-2
brw-rw----. 1 root disk       8, 16 Feb 12 02:40 /dev/sdb
brw-rw----. 1 root disk       8, 32 Feb 12 02:40 /dev/sdc
brw-rw----. 1 root disk       8, 48 Feb 12 02:40 /dev/sdd
brw-rw----. 1 root disk       8, 64 Feb 12 02:40 /dev/sde
brw-rw----. 1 root disk       8, 80 Feb 12 02:40 /dev/sdf

2. 查看块(block)设备: 
NAME     FSTYPE       LABEL UUID                                 MOUNTPOINT
sda                                                              
├─sda1   swap               323f3142-ccef-4b0d-a799-04007c4aa0a6 [SWAP]
└─sda2   xfs                2579915f-aead-4a30-977c-8e39f5f4d491 /
sdb      mpath_member                                            
└─ocr1                                                           
sdc      mpath_member                                            
└─ocr2                                                           
sdd      mpath_member                                            
└─ocr3                                                           
sde      mpath_member                                            
└─data01                                                         
sdf      mpath_member                                            
└─data02                                                         
sr0                                                              

3. 查看多路径配置
        multipath {
                wwid                    VBOX_HARDDISK_VB0ca65770-cab71c4b
                alias                   ocr1
        }
        multipath {
                wwid                    VBOX_HARDDISK_VB724aeb83-ea9f4e0d
                alias                   ocr2
        }
        multipath {
                wwid                    VBOX_HARDDISK_VBe8c6318c-edea7981
                alias                   ocr3
        }
        multipath {
                wwid                    VBOX_HARDDISK_VBa1560564-d49dac72
                alias                   data01
        }
        multipath {
                wwid                    VBOX_HARDDISK_VB27f01f95-b61e143b
                alias                   data02
        }

4. '/dev/sdb'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b

   '/dev/sdc'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VB724aeb83-ea9f4e0d

   '/dev/sdd'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VBe8c6318c-edea7981

   '/dev/sde'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VBa1560564-d49dac72

   '/dev/sdf'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VB27f01f95-b61e143b
[root@tqdb21: ~]# 
</code></pre>
<p>  tqdb22 execution log:</p>
<pre><code class="language-bash line-numbers">[root@tqdb22: /etc/udev/rules.d]# cat 99-oracle-asmdevices.rules 

# /dev/sdb multipath ==> `/dev/mapper/ocr1 -> ../dm-0`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b", SYMLINK+="asm-ocr1", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdc multipath ==> `/dev/mapper/ocr2 -> ../dm-1`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB724aeb83-ea9f4e0d", SYMLINK+="asm-ocr2", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdd multipath ==> `/dev/mapper/ocr3 -> ../dm-2`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBe8c6318c-edea7981", SYMLINK+="asm-ocr3", OWNER="grid",GROUP="asmadmin", MODE="0660"

# /dev/sde multipath ==> `/dev/mapper/data01 -> ../dm-3`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBa1560564-d49dac72", SYMLINK+="asm-data01", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdf multipath ==> `/dev/mapper/data02 -> ../dm-4`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB27f01f95-b61e143b", SYMLINK+="asm-data02", OWNER="grid",GROUP="asmadmin", MODE="0660"

[root@tqdb22: /etc/udev/rules.d]# 
[root@tqdb22: /etc/udev/rules.d]# udevadm control --reload-rules 
[root@tqdb22: /etc/udev/rules.d]# udevadm trigger 
[root@tqdb22: /etc/udev/rules.d]# systemctl status systemd-udevd.service
● systemd-udevd.service - udev Kernel Device Manager
   Loaded: loaded (/usr/lib/systemd/system/systemd-udevd.service; static; vendor preset: disabled)
   Active: active (running) since Tue 2020-02-11 22:06:28 CST; 4h 44min ago
     Docs: man:systemd-udevd.service(8)
           man:udev(7)
 Main PID: 486 (systemd-udevd)
   Status: "Processing with 10 children at max"
    Tasks: 1
   CGroup: /system.slice/systemd-udevd.service
           └─486 /usr/lib/systemd/systemd-udevd

Feb 11 22:06:28 tqdb22 systemd[1]: Starting udev Kernel Device Manager...
Feb 11 22:06:28 tqdb22 systemd-udevd[486]: starting version 219
Feb 11 22:06:28 tqdb22 systemd[1]: Started udev Kernel Device Manager.
Feb 11 22:06:39 tqdb22 kvm[1426]: 1 guest now active
Feb 11 22:06:39 tqdb22 kvm[1432]: 0 guests now active
Feb 11 22:06:39 tqdb22 kvm[1436]: 1 guest now active
Feb 11 22:06:39 tqdb22 kvm[1441]: 0 guests now active
Feb 11 22:06:39 tqdb22 kvm[1444]: 1 guest now active
Feb 11 22:06:39 tqdb22 kvm[1448]: 0 guests now active
[root@tqdb22: /etc/udev/rules.d]# systemctl enable systemd-udevd.service
[root@tqdb22: /etc/udev/rules.d]# 
[root@tqdb22: /etc/udev/rules.d]# ll /dev/asm-*
lrwxrwxrwx. 1 root root 4 Feb 12 02:50 /dev/asm-data01 -> dm-3
lrwxrwxrwx. 1 root root 4 Feb 12 02:50 /dev/asm-data02 -> dm-4
lrwxrwxrwx. 1 root root 4 Feb 12 02:50 /dev/asm-ocr1 -> dm-0
lrwxrwxrwx. 1 root root 4 Feb 12 02:50 /dev/asm-ocr2 -> dm-1
lrwxrwxrwx. 1 root root 4 Feb 12 02:50 /dev/asm-ocr3 -> dm-2
[root@tqdb22: /etc/udev/rules.d]# ll /dev/dm-*
brw-rw----. 1 grid asmadmin 253, 0 Feb 12 02:50 /dev/dm-0
brw-rw----. 1 grid asmadmin 253, 1 Feb 12 02:50 /dev/dm-1
brw-rw----. 1 grid asmadmin 253, 2 Feb 12 02:50 /dev/dm-2
brw-rw----. 1 grid asmadmin 253, 3 Feb 12 02:50 /dev/dm-3
brw-rw----. 1 grid asmadmin 253, 4 Feb 12 02:50 /dev/dm-4
[root@tqdb22: /etc/udev/rules.d]# 
[root@tqdb22: ~]# (echo -e "\n输出结果: \n1. 查看'/dev'和'/dev/mapper'目录: (注意查看权限)" && ls -l /dev/asm* /dev/mapper/data* /dev/mapper/ocr* /dev/dm* /dev/sd[b-f]) && 
> (echo -e "\n2. 查看块(block)设备: "&& lsblk -f) && 
> (echo -e "\n3. 查看多路径配置" && cat /etc/multipath.conf | grep -A3 "multipath {") && 
> (echo -e "\n4. '/dev/sdb'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb) && 
> (echo -e "\n   '/dev/sdc'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdc) && 
> (echo -e "\n   '/dev/sdd'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdd) && 
> (echo -e "\n   '/dev/sde'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sde) && 
> (echo -e "\n   '/dev/sdf'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdf)

输出结果: 
1. 查看'/dev'和'/dev/mapper'目录: (注意查看权限)
lrwxrwxrwx. 1 root root           4 Feb 12 03:30 /dev/asm-data01 -> dm-3
lrwxrwxrwx. 1 root root           4 Feb 12 03:30 /dev/asm-data02 -> dm-4
lrwxrwxrwx. 1 root root           4 Feb 12 03:30 /dev/asm-ocr1 -> dm-0
lrwxrwxrwx. 1 root root           4 Feb 12 03:30 /dev/asm-ocr2 -> dm-1
lrwxrwxrwx. 1 root root           4 Feb 12 03:30 /dev/asm-ocr3 -> dm-2
brw-rw----. 1 grid asmadmin 253,  0 Feb 12 03:30 /dev/dm-0
brw-rw----. 1 grid asmadmin 253,  1 Feb 12 03:30 /dev/dm-1
brw-rw----. 1 grid asmadmin 253,  2 Feb 12 03:30 /dev/dm-2
brw-rw----. 1 grid asmadmin 253,  3 Feb 12 03:30 /dev/dm-3
brw-rw----. 1 grid asmadmin 253,  4 Feb 12 03:30 /dev/dm-4
lrwxrwxrwx. 1 root root           7 Feb 12 03:30 /dev/mapper/data01 -> ../dm-3
lrwxrwxrwx. 1 root root           7 Feb 12 03:30 /dev/mapper/data02 -> ../dm-4
lrwxrwxrwx. 1 root root           7 Feb 12 03:30 /dev/mapper/ocr1 -> ../dm-0
lrwxrwxrwx. 1 root root           7 Feb 12 03:30 /dev/mapper/ocr2 -> ../dm-1
lrwxrwxrwx. 1 root root           7 Feb 12 03:30 /dev/mapper/ocr3 -> ../dm-2
brw-rw----. 1 root disk       8, 16 Feb 12 02:50 /dev/sdb
brw-rw----. 1 root disk       8, 32 Feb 12 02:50 /dev/sdc
brw-rw----. 1 root disk       8, 48 Feb 12 02:50 /dev/sdd
brw-rw----. 1 root disk       8, 64 Feb 12 02:50 /dev/sde
brw-rw----. 1 root disk       8, 80 Feb 12 02:50 /dev/sdf

2. 查看块(block)设备: 
NAME     FSTYPE       LABEL UUID                                 MOUNTPOINT
sda                                                              
├─sda1   swap               3114372b-8427-47ed-b2a6-092d33efcf5a [SWAP]
└─sda2   xfs                4e2d3b8d-2afa-447c-ae56-1cc0e2d39fe2 /
sdb      mpath_member                                            
└─ocr1                                                           
sdc      mpath_member                                            
└─ocr2                                                           
sdd      mpath_member                                            
└─ocr3                                                           
sde      mpath_member                                            
└─data01                                                         
sdf      mpath_member                                            
└─data02                                                         
sr0                                                              

3. 查看多路径配置
        multipath {
                wwid                    VBOX_HARDDISK_VB0ca65770-cab71c4b
                alias                   ocr1
        }
        multipath {
                wwid                    VBOX_HARDDISK_VB724aeb83-ea9f4e0d
                alias                   ocr2
        }
        multipath {
                wwid                    VBOX_HARDDISK_VBe8c6318c-edea7981
                alias                   ocr3
        }
        multipath {
                wwid                    VBOX_HARDDISK_VBa1560564-d49dac72
                alias                   data01
        }
        multipath {
                wwid                    VBOX_HARDDISK_VB27f01f95-b61e143b
                alias                   data02
        }

4. '/dev/sdb'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b

   '/dev/sdc'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VB724aeb83-ea9f4e0d

   '/dev/sdd'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VBe8c6318c-edea7981

   '/dev/sde'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VBa1560564-d49dac72

   '/dev/sdf'的设备ID: (注意：'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VB27f01f95-b61e143b
[root@tqdb22: ~]# 
</code></pre>
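<p>  The five rules in <code>99-oracle-asmdevices.rules</code> differ only in the <code>RESULT</code> device ID and the symlink alias, so they can be generated instead of edited by hand. A sketch, where <code>make_rule</code> is a hypothetical helper rather than part of the original procedure:</p>
<pre><code class="language-bash line-numbers"># Hypothetical helper: print one udev rule line for a device ID and an alias
make_rule() {
  printf 'KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", SYMLINK+="asm-%s", OWNER="grid",GROUP="asmadmin", MODE="0660"\n' "$1" "$2"
}

make_rule "1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b" "ocr1"
make_rule "1ATA_VBOX_HARDDISK_VBa1560564-d49dac72" "data01"
</code></pre>
<p>  Redirecting the generated lines into the rules file and then re-running <code>udevadm control --reload-rules</code> and <code>udevadm trigger</code> reproduces the setup above.</p>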
</blockquote>
<h3>2.23 Reboot the OS</h3>
<blockquote><p>
  Once the steps above are complete, reboot the operating system.
</p></blockquote>
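<p>  After the reboot, it is worth confirming that the <code>asm-*</code> symlinks and the <code>grid:asmadmin</code> ownership came back. A small checker sketch (the function and its directory argument are illustrative; pass <code>/dev</code> on the cluster nodes):</p>
<pre><code class="language-bash line-numbers"># Succeeds only when all five expected ASM symlinks exist under the given directory
check_asm_links() {
  for l in asm-ocr1 asm-ocr2 asm-ocr3 asm-data01 asm-data02; do
    [ -L "$1/$l" ] || return 1
  done
}

if check_asm_links /dev; then
  echo "all ASM symlinks present"
else
  echo "ASM symlinks missing" >&2
fi
</code></pre>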
<h2>3. Software Installation and Configuration</h2>
<h3>3.1 GRID Installation</h3>
<blockquote><p>
  Unzip the media into <code>GRID</code>'s <code>$ORACLE_HOME</code>. The files must be extracted directly into <code>$ORACLE_HOME</code>; otherwise the installer will treat the current directory as <code>$ORACLE_HOME</code>.</p>
<pre><code class="language-bash line-numbers">[grid@tqdb21: /Software]$ ll
total 5922412
drwxr-xr-x 2 root root             47 Feb 12 18:53 DB RU 19.6.0.0.200114
drwxr-xr-x 2 root root             47 Feb 12 18:53 GI RU 19.6.0.0.200114
-rwx------ 1 root root     3059705302 Feb 12 18:13 LINUX.X64_193000_db_home.zip
-rwx------ 1 grid oinstall 2889184573 Feb 12 18:13 LINUX.X64_193000_grid_home.zip
-rwx------ 1 root root      115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[grid@tqdb21: /Software]$ echo $ORACLE_HOME
/u01/app/19c/grid
[grid@tqdb21: /Software]$ unzip LINUX.X64_193000_grid_home.zip -d $ORACLE_HOME
[grid@tqdb21: /Software]$ 
</code></pre>
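<p>  The warning above can be turned into a guard so the archive can never land in the wrong directory. A sketch (the <code>check_oracle_home</code> function is illustrative, not part of the original steps):</p>
<pre><code class="language-bash line-numbers"># Refuse to proceed when ORACLE_HOME is unset or empty
check_oracle_home() {
  if [ -z "${ORACLE_HOME:-}" ]; then
    echo "ORACLE_HOME is not set; refusing to unzip" >&2
    return 1
  fi
  echo "unzipping into $ORACLE_HOME"
}

# Usage on the node:
#   check_oracle_home && unzip LINUX.X64_193000_grid_home.zip -d "$ORACLE_HOME"
</code></pre>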
<p>  Install the <code>cvuqdisk-1.0.10-1.rpm</code> package. It is not shipped on the Linux installation media; it can be found in the unzipped GRID home, in the <code>rpm</code> directory under <code>cv</code>.</p>
<p>  (This is step "2.11 Install the cvuqdisk package".)</p>
<pre><code class="language-bash line-numbers">[root@tqdb21: ~]# cd $ORACLE_HOME/cv/rpm
[root@tqdb21: /u01/app/19c/grid/cv/rpm]# ll cvuqdisk-1.0.10-1.rpm 
-rw-r--r-- 1 grid oinstall 11412 Mar 13  2019 cvuqdisk-1.0.10-1.rpm
[root@tqdb21: /u01/app/19c/grid/cv/rpm]# yum install cvuqdisk-1.0.10-1.rpm 
Loaded plugins: fastestmirror, langpacks
Examining cvuqdisk-1.0.10-1.rpm: cvuqdisk-1.0.10-1.x86_64
Marking cvuqdisk-1.0.10-1.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package cvuqdisk.x86_64 0:1.0.10-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================================================================================================================================
Package                                       Arch                                        Version                                       Repository                                               Size
=======================================================================================================================================================================================================
Installing:
cvuqdisk                                      x86_64                                      1.0.10-1                                      /cvuqdisk-1.0.10-1                                       22 k

Transaction Summary
=======================================================================================================================================================================================================
Install  1 Package

Total size: 22 k
Installed size: 22 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Using default group oinstall to install package
Installing : cvuqdisk-1.0.10-1.x86_64                                                                                                                                                            1/1 
Verifying  : cvuqdisk-1.0.10-1.x86_64                                                                                                                                                            1/1 

Installed:
cvuqdisk.x86_64 0:1.0.10-1                                                                                                                                                                           

Complete!
[root@tqdb21: /u01/app/19c/grid/cv/rpm]# 
</code></pre>
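<p>  The line "Using default group oinstall" in the yum output comes from the package honouring the <code>CVUQDISK_GRP</code> environment variable, which defaults to <code>oinstall</code>; set it before installing if a different owning group is wanted. The lookup amounts to:</p>
<pre><code class="language-bash line-numbers"># The group cvuqdisk assigns: $CVUQDISK_GRP when set, otherwise oinstall
cvuqdisk_group() {
  echo "${CVUQDISK_GRP:-oinstall}"
}

cvuqdisk_group
</code></pre>
<p>  For example, <code>export CVUQDISK_GRP=oinstall</code> before <code>rpm -iv cvuqdisk-1.0.10-1.rpm</code> makes the choice explicit.</p>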
</blockquote>
<h4>3.1.1 Enable X11 forwarding in sshd and start the GUI</h4>
<blockquote><p>
  Enable X11 forwarding in sshd so the graphical installer can run.</p>
<pre><code class="language-bash line-numbers">Enable X11 forwarding in ssh, to allow the graphical installer
# vim /etc/ssh/sshd_config 
​```
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
​```

Restart the sshd service
# systemctl restart sshd.service  
# systemctl status sshd.service  
</code></pre>
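<p>  After editing the file, the setting can be checked directly (on a live system <code>sshd -T</code> also prints the effective configuration). A small sketch, written against a file path so it is easy to exercise:</p>
<pre><code class="language-bash line-numbers"># Succeeds only when X11Forwarding is enabled in the given sshd_config
x11_forwarding_enabled() {
  grep -Eq '^[[:space:]]*X11Forwarding[[:space:]]+yes' "$1"
}

if x11_forwarding_enabled /etc/ssh/sshd_config; then
  echo "X11 forwarding is on"
else
  echo "X11 forwarding is off or sshd_config not readable" >&2
fi
</code></pre>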
<p>  Operation log:</p>
<pre><code class="language-bash line-numbers">[root@tqdb21: ~]# vim /etc/ssh/sshd_config 
​```
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
​```
[root@tqdb21: ~]# systemctl restart sshd.service             
[root@tqdb21: ~]# systemctl status sshd.service  
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-02-12 23:18:33 CST; 2s ago
  Docs: man:sshd(8)
        man:sshd_config(5)
Main PID: 5748 (sshd)
 Tasks: 1
CGroup: /system.slice/sshd.service
        └─5748 /usr/sbin/sshd -D

Feb 12 23:18:33 tqdb21 systemd[1]: Starting OpenSSH server daemon...
Feb 12 23:18:33 tqdb21 sshd[5748]: Server listening on 0.0.0.0 port 22.
Feb 12 23:18:33 tqdb21 systemd[1]: Started OpenSSH server daemon.
[root@tqdb21: ~]# 
</code></pre>
<pre><code class="language-bash line-numbers">[root@tqdb21: ~]# xdpyinfo | head
name of display:    :0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    12004000
X.Org version: 1.20.4
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    7
[root@tqdb21: ~]# 
[root@tqdb21: ~]# su - grid
Last login: Wed Feb 12 23:24:52 CST 2020 on pts/1
[grid@tqdb21: ~]$ export DISPLAY=192.168.6.21:0
[grid@tqdb21: ~]$ echo $DISPLAY
192.168.6.21:0
[grid@tqdb21: ~]$ 
[grid@tqdb21: /u01/app/19c/grid]$ ./gridSetup.sh 
Launching Oracle Grid Infrastructure Setup Wizard...


</code></pre>
<p>  Notes on launching the GUI from macOS with XQuartz</p>
<pre><code class="language-bash line-numbers">--------------------------------------------------------------------------------
-- Enable X11 forwarding in sshd, start the GUI -- Begin -----------------------
--------------------------------------------------------------------------------
Enable X11 forwarding in ssh, to allow the graphical installer
vim /etc/ssh/sshd_config 
​```
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
​```

Restart the sshd service
systemctl restart sshd.service  
systemctl status sshd.service  
​```
[root@tq1: ~]# systemctl status sshd.service  
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-01-17 12:47:42 CST; 5s ago
  Docs: man:sshd(8)
        man:sshd_config(5)
Main PID: 18971 (sshd)
 Tasks: 1
CGroup: /system.slice/sshd.service
        └─18971 /usr/sbin/sshd -D

Jan 17 12:47:42 tq1 systemd[1]: Stopped OpenSSH server daemon.
Jan 17 12:47:42 tq1 systemd[1]: Starting OpenSSH server daemon...
Jan 17 12:47:42 tq1 sshd[18971]: Server listening on 0.0.0.0 port 22.
Jan 17 12:47:42 tq1 systemd[1]: Started OpenSSH server daemon.
[root@tq1: ~]# 
​```


​```
[root@tq1: ~]# xdpyinfo | head
name of display:    192.168.6.10:10.0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    11804000
X.Org version: 1.18.4
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    7
[root@tq1: ~]# su - grid
Last login: Fri Jan 17 11:44:18 CST 2020 on pts/4
[grid@tq1: ~]$ export DISPLAY=192.168.6.10:10.0
[grid@tq1: ~]$ echo $DISPLAY
192.168.6.10:10.0
[grid@tq1: ~]$ 
​```

Actual steps:
1. From XQuartz, log in to the server as root and run `xhost +`
​```
➜ /Users/tq > ssh -X root@192.168.6.10
root@192.168.6.10's password: 
Last login: Fri Jan 17 11:43:40 2020 from 192.168.6.6
[root@tq1: ~]# xhost +
access control disabled, clients can connect from any host
[root@tq1: ~]# 
​```

2. Then log in to the server as the grid/oracle user from XQuartz, and the GUI can be launched directly.
```
➜ /Users/tq > ssh -X grid@192.168.6.10
grid@192.168.6.10's password: 
Last login: Fri Jan 17 12:41:26 2020 from 192.168.6.6
[grid@tq1: ~]$ echo $DISPLAY
192.168.6.10:10.0
[grid@tq1: ~]$ xauth list
tq1:10  MIT-MAGIC-COOKIE-1  62299f9d0c67b1804c36fe7ea6783fda
[grid@tq1: ~]$ 
[grid@tq1: ~]$ xclock 
[grid@tq1: ~]$ xeyes 
[grid@tq1: ~]$
​```
--------------------------------------------------------------------------------
-- Enable X11 forwarding in sshd, start the GUI -- End -------------------------
--------------------------------------------------------------------------------
</code></pre>
</blockquote>
<h4>3.1.2 Log in as the grid user and run the graphical installer:</h4>
<pre><code class="language-bash line-numbers">[root@tqdb21: ~]# xhost +
access control disabled, clients can connect from any host
[root@tqdb21: ~]# 
[root@tqdb21: ~]# xdpyinfo | head
name of display:    :0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    12004000
X.Org version: 1.20.4
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    7
[root@tqdb21: ~]# su - grid
Last login: Wed Feb 12 23:24:52 CST 2020 on pts/1
[grid@tqdb21: ~]$ export DISPLAY=192.168.6.21:0
[grid@tqdb21: ~]$ echo $DISPLAY
192.168.6.21:0
[grid@tqdb21: ~]$ cd $ORACLE_HOME
[grid@tqdb21: /u01/app/19c/grid]$ ./gridSetup.sh
</code></pre>
<p>GRID installation screenshots:</p>
<ul>
<li>19c RAC GRID Installation 01<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2001.png" alt="19cRACGRID安装01" /></p>
</li>
<li>
<p>19c RAC GRID Installation 02<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2002.png" alt="19cRACGRID安装02" /></p>
</li>
<li>
<p>19c RAC GRID Installation 03<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2003.png" alt="19cRACGRID安装03" /></p>
</li>
<li>
<p>19c RAC GRID Installation 04 ==note the SCAN Name==<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2004%20注意%20SCAN%20Name.png" alt="19cRACGRID安装04注意SCANName" /></p>
</li>
<li>
<p>19c RAC GRID Installation 05<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2005.png" alt="19cRACGRID安装05" /></p>
</li>
<li>
<p>19c RAC GRID Installation 06<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2006.png" alt="19cRACGRID安装06" /></p>
</li>
<li>
<p>19c RAC GRID Installation 07<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2007.png" alt="19cRACGRID安装07" /></p>
</li>
<li>
<p>19c RAC GRID Installation 08<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2008.png" alt="19cRACGRID安装08" /></p>
</li>
<li>
<p>19c RAC GRID Installation 09<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2009.png" alt="19cRACGRID安装09" /></p>
</li>
<li>
<p>19c RAC GRID Installation 10 Select the Private Interfaces<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2010%20选定%20Private%20Interfaces.png" alt="19cRACGRID安装10选定PrivateInterfaces" /></p>
</li>
<li>
<p>19c RAC GRID Installation 11</p>
<blockquote><p>
  If <code>Oracle Flex ASM</code> is used, the private interface must be set to <code>ASM & Private</code><br />
  <img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2011.png" alt="19cRACGRID安装11" />
</p></blockquote>
</li>
<li>19c RAC GRID Installation 12<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2012.png" alt="19cRACGRID安装12" /></p>
</li>
<li>
<p>19c RAC GRID Installation 13 Private network set to ASM & Private<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2013%20私有网络选%20ASM%20&%20Private.png" alt="19cRACGRID安装13私有网络选ASM&Private" /></p>
</li>
<li>
<p>19c RAC GRID Installation 14<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2014.png" alt="19cRACGRID安装14" /></p>
</li>
<li>
<p>19c RAC GRID Installation 15</p>
<blockquote><p>
  Do not install the Grid Infrastructure Management Repository. If you do install it, give it a dedicated disk. Versions differ here: in 12c, even choosing "No" forced the install, and <code>mgmtdb</code> could not be placed on its own disk, so the OCR disk group could not be smaller than 40 GB; 18c allowed a separate disk; in 19c, choosing "No" really skips it.<br />
  <img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2015.png" alt="19cRACGRID安装15" />
</p></blockquote>
</li>
<li>19c RAC GRID Installation 16<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2016.png" alt="19cRACGRID安装16" /></p>
</li>
<li>
<p>19c RAC GRID Installation 17 Create the `OCR` disk group<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2017.png" alt="19cRACGRID安装17" /></p>
</li>
<li>
<p>19c RAC GRID Installation 18 ASM Password<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2018.png" alt="19cRACGRID安装18" /></p>
</li>
<li>
<p>19c RAC GRID Installation 19 ASM Password Yes<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2019.png" alt="19cRACGRID安装19" /></p>
</li>
<li>
<p>19c RAC GRID Installation 20<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2020.png" alt="19cRACGRID安装20" /></p>
</li>
<li>
<p>19c RAC GRID Installation 21<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2021.png" alt="19cRACGRID安装21" /></p>
</li>
<li>
<p>19c RAC GRID Installation 22<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2022.png" alt="19cRACGRID安装22" /></p>
</li>
<li>
<p>19c RAC GRID Installation 23<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2023.png" alt="19cRACGRID安装23" /></p>
</li>
<li>
<p>19c RAC GRID Installation 24<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2024.png" alt="19cRACGRID安装24" /></p>
</li>
<li>
<p>19c RAC GRID Installation 25<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2025.png" alt="19cRACGRID安装25" /></p>
</li>
<li>
<p>19c RAC GRID Installation 26<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2026.png" alt="19cRACGRID安装26" /></p>
</li>
<li>
<p>19c RAC GRID Installation 27<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2027.png" alt="19cRACGRID安装27" /></p>
</li>
<li>
<p>19c RAC GRID Installation 28<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2028.png" alt="19cRACGRID安装28" /></p>
</li>
<li>
<p>19c RAC GRID Installation 29 Yes<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2029%20Yes.png" alt="19cRACGRID安装29Yes" /></p>
</li>
<li>
<p>19c RAC GRID Installation 30<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2030.png" alt="19cRACGRID安装30" /></p>
</li>
<li>
<p>19c RAC GRID Installation 31<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2031.png" alt="19cRACGRID安装31" /></p>
</li>
<li>
<p>19c RAC GRID Installation 32<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2032.png" alt="19cRACGRID安装32" /></p>
</li>
<li>
<p>19c RAC GRID Installation 33<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2033.png" alt="19cRACGRID安装33" /></p>
</li>
<li>
<p>19c RAC GRID Installation 34 Run the 2 root scripts on each node in turn<br />
==<strong>click OK only after the scripts have finished</strong>==<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2034%20各个节点依次执行%202%20个%20root%20脚本.png" alt="19cRACGRID安装34各个节点依次执行2个root脚本" /></p>
</li>
</ul>
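<p>  The order the wizard enforces in step 34 is: <code>orainstRoot.sh</code> to completion on node 1, then on node 2, followed by <code>root.sh</code> on node 1 and then node 2, clicking OK only once all four runs have finished. A sketch that simply emits this plan (the helper is illustrative, not part of the installer):</p>
<pre><code class="language-bash line-numbers"># Print the (node, script) execution order required by the installer
root_script_plan() {
  for script in /u01/app/oraInventory/orainstRoot.sh /u01/app/19c/grid/root.sh; do
    for node in tqdb21 tqdb22; do
      echo "$node $script"
    done
  done
}

root_script_plan
</code></pre>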
<pre><code class="language-bash line-numbers">Run the scripts as the root user

First script:
Node 1:
​```
[root@tqdb21: ~]# /u01/app/oraInventory/orainstRoot.sh 
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@tqdb21: ~]# 
​```
Node 2:
​```
[root@tqdb22: ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@tqdb22: ~]# 
​```

Second script:
Node 1:
​```
[root@tqdb21: ~]# /u01/app/19c/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19c/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/tqdb21/crsconfig/rootcrs_tqdb21_2020-02-13_00-38-01AM.log
2020/02/13 00:38:09 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2020/02/13 00:38:10 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2020/02/13 00:38:10 CLSRSC-363: User ignored prerequisites during installation
2020/02/13 00:38:10 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2020/02/13 00:38:12 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2020/02/13 00:38:12 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2020/02/13 00:38:12 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2020/02/13 00:38:13 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2020/02/13 00:38:48 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2020/02/13 00:38:54 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2020/02/13 00:39:03 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2020/02/13 00:39:13 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2020/02/13 00:39:13 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2020/02/13 00:39:20 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2020/02/13 00:39:20 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2020/02/13 00:39:43 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2020/02/13 00:39:48 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2020/02/13 00:39:53 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2020/02/13 00:39:58 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.

ASM has been created and started successfully.

[DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-200213AM124031.log for details.

2020/02/13 00:41:59 CLSRSC-482: Running command: '/u01/app/19c/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-4256: Updating the profile
Successful addition of voting disk 12492c49df6f4f04bf57d5a668e6adaa.
Successful addition of voting disk 4da1547a561f4f61bf69b1af64e4a486.
Successful addition of voting disk 02061e2a235d4f45bf4c7b306e8c2c48.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   12492c49df6f4f04bf57d5a668e6adaa (/dev/asm-ocr1) [OCR]
 2. ONLINE   4da1547a561f4f61bf69b1af64e4a486 (/dev/asm-ocr2) [OCR]
 3. ONLINE   02061e2a235d4f45bf4c7b306e8c2c48 (/dev/asm-ocr3) [OCR]
Located 3 voting disk(s).
2020/02/13 00:44:23 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2020/02/13 00:45:58 CLSRSC-343: Successfully started Oracle Clusterware stack
2020/02/13 00:45:58 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2020/02/13 00:48:42 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/02/13 00:50:11 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@tqdb21: ~]# 
​```

Node 2:
​```
[root@tqdb22: ~]# /u01/app/19c/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19c/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/tqdb22/crsconfig/rootcrs_tqdb22_2020-02-13_00-52-25AM.log
2020/02/13 00:52:29 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2020/02/13 00:52:29 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2020/02/13 00:52:29 CLSRSC-363: User ignored prerequisites during installation
2020/02/13 00:52:29 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2020/02/13 00:52:31 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2020/02/13 00:52:31 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2020/02/13 00:52:31 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2020/02/13 00:52:31 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2020/02/13 00:52:32 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2020/02/13 00:52:32 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2020/02/13 00:52:42 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2020/02/13 00:52:42 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2020/02/13 00:52:47 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2020/02/13 00:52:48 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2020/02/13 00:53:06 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2020/02/13 00:53:12 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2020/02/13 00:53:13 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2020/02/13 00:53:15 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2020/02/13 00:53:16 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2020/02/13 00:53:25 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2020/02/13 00:54:12 CLSRSC-343: Successfully started Oracle Clusterware stack
2020/02/13 00:54:12 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2020/02/13 00:54:50 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/02/13 00:55:06 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@tqdb22: ~]# 
​```

At this point, cluster resources can be queried from both nodes.
``` Node 1
[grid@tqdb21: ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   Started,STABLE
      2        ONLINE  ONLINE       tqdb22                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.qosmserver
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb21.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
      1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[grid@tqdb21: ~]$ 
</code></pre>
<pre><code class="language-bash line-numbers"># Node 2
[grid@tqdb22: ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   Started,STABLE
      2        ONLINE  ONLINE       tqdb22                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.qosmserver
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb21.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
      1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[grid@tqdb22: ~]$ 
[grid@tqdb22: ~]$ asmcmd -p
ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512             512   4096  4194304      6144     5228             2048            1590              0             Y  OCR/
ASMCMD [+] > ls -l
State    Type    Rebal  Name
MOUNTED  NORMAL  N      OCR/
ASMCMD [+] > cd OCR
ASMCMD [+OCR] > ls -l
Type      Redund  Striped  Time             Sys  Name
                                            Y    ASM/
PASSWORD  HIGH    COARSE   FEB 13 00:00:00  N    orapwasm => +OCR/ASM/PASSWORD/pwdasm.256.1032223315
PASSWORD  HIGH    COARSE   FEB 13 00:00:00  N    orapwasm_backup => +OCR/ASM/PASSWORD/pwdasm.257.1032223745
                                            Y    tqdb-cluster/
ASMCMD [+OCR] > quit
[grid@tqdb22: ~]$ 
[grid@tqdb22: ~]$ sqlplus / as sysasm 

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Feb 13 01:00:43 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> -- ASM 目录属性                                                        
SQL> -- Oracle ASM Attributes Directory                                     
SQL> set pagesize 200                                                       
SQL> set linesize 200                                                       
SQL> col "group" for a30                                                    
SQL> col "attribute" for a50                                                
SQL> col "value" for a50                                                    
SQL> select g.name "group", a.name "attribute", a.value "value"             
  2  from v$asm_diskgroup g, v$asm_attribute a                              
  3  where g.group_number=a.group_number and a.name not like 'template%';   

group                          attribute                                          value
------------------------------ -------------------------------------------------- --------------------------------------------------
OCR                            idp.type                                           dynamic
OCR                            vam_migration_done                                 true
OCR                            disk_repair_time                                   12.0h
OCR                            phys_meta_replicated                               true
OCR                            failgroup_repair_time                              24.0h
OCR                            thin_provisioned                                   FALSE
OCR                            preferred_read.enabled                             FALSE
OCR                            ate_conversion_done                                true
OCR                            sector_size                                        512
OCR                            logical_sector_size                                512
OCR                            content.type                                       data
OCR                            content.check                                      FALSE
OCR                            au_size                                            4194304
OCR                            appliance._partnering_type                         GENERIC
OCR                            compatible.asm                                     19.0.0.0.0
OCR                            compatible.rdbms                                   10.1.0.0.0
OCR                            cell.smart_scan_capable                            FALSE
OCR                            cell.sparse_dg                                     allnonsparse
OCR                            access_control.enabled                             FALSE
OCR                            access_control.umask                               066
OCR                            content_hardcheck.enabled                          FALSE
OCR                            scrub_async_limit                                  1
OCR                            scrub_metadata.enabled                             TRUE
OCR                            idp.boundary                                       auto

24 rows selected.

SQL> -- ASM 磁盘组信息                                                      
SQL> set linesize 200;                                                      
SQL> col GROUP_NAME for a20;                                                
SQL> col STATE for a20;                                                     
SQL> SELECT                                                                 
  2      name                                     group_name                
  3    , sector_size                              sector_size               
  4    , block_size                               block_size                
  5    , allocation_unit_size                     allocation_unit_size      
  6    , state                                    state                     
  7    , type                                     type                      
  8    , total_mb                                 total_mb                  
  9    , (total_mb - free_mb)                     used_mb                   
 10    , free_mb                                  free_mb                   
 11    , ROUND((1- (free_mb / total_mb))*100, 2)  pct_used                  
 12  FROM                                                                   
 13      v$asm_diskgroup                                                    
 14  ORDER BY                                                               
 15      name                                                               
 16  ;                                                                      

GROUP_NAME           SECTOR_SIZE BLOCK_SIZE ALLOCATION_UNIT_SIZE STATE                TYPE                 TOTAL_MB    USED_MB    FREE_MB   PCT_USED
-------------------- ----------- ---------- -------------------- -------------------- ------------------ ---------- ---------- ---------- ----------
OCR                          512       4096              4194304 MOUNTED              NORMAL                   6144        916       5228      14.91

SQL> -- 查看 ASM 磁盘组剩余空间                 
SQL> set linesize 200;                          
SQL> col name for a30;                          
SQL> select group_number,                       
  2         name,                               
  3         state,                              
  4         type,                               
  5         total_mb,                           
  6         free_mb,                            
  7         total_mb - free_mb as used_mb       
  8    from v$asm_diskgroup;                    

GROUP_NUMBER NAME                           STATE                TYPE                 TOTAL_MB    FREE_MB    USED_MB
------------ ------------------------------ -------------------- ------------------ ---------- ---------- ----------
           1 OCR                            MOUNTED              NORMAL                   6144       5228        916

SQL> -- ASM 磁盘使用情况                                                         
SQL> set linesize 200;                                                           
SQL> set pagesize 200;                                                           
SQL> col disk_group_name for a30;                                                
SQL> col disk_file_path for a30;                                                 
SQL> col disk_file_name for a20;                                                 
SQL> col disk_file_fail_group for a20;                                           
SQL> SELECT                                                                      
  2      NVL(a.name, '[CANDIDATE]')                       disk_group_name        
  3    , b.path                                           disk_file_path         
  4    , b.name                                           disk_file_name         
  5    , b.failgroup                                      disk_file_fail_group   
  6    , b.total_mb                                       total_mb               
  7    , (b.total_mb - b.free_mb)                         used_mb                
  8    , ROUND((1- (b.free_mb / b.total_mb))*100, 2)      pct_used               
  9  FROM                                                                        
 10      v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number)    
 11  WHERE b.total_mb <> 0                                                       
 12  ORDER BY                                                                    
 13      a.name, b.name                                                          
 14  ;                                                                           

DISK_GROUP_NAME                DISK_FILE_PATH                 DISK_FILE_NAME       DISK_FILE_FAIL_GROUP   TOTAL_MB    USED_MB   PCT_USED
------------------------------ ------------------------------ -------------------- -------------------- ---------- ---------- ----------
OCR                            /dev/asm-ocr1                  OCR_0000             OCR_0000                   2048        300      14.65
OCR                            /dev/asm-ocr2                  OCR_0001             OCR_0001                   2048        312      15.23
OCR                            /dev/asm-ocr3                  OCR_0002             OCR_0002                   2048        304      14.84

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[grid@tqdb22: ~]$ 
</code></pre>
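<p>The PCT_USED column in the queries above is computed as round((1 - free_mb/total_mb) * 100, 2). A quick sanity check of that arithmetic against the OCR disk group figures reported above (a standalone sketch, not part of the original session):</p>

```shell
# Recompute pct_used from the OCR disk group numbers reported above:
# pct_used = round((1 - free_mb/total_mb) * 100, 2)
TOTAL_MB=6144
FREE_MB=5228
awk -v t="$TOTAL_MB" -v f="$FREE_MB" 'BEGIN { printf "%.2f\n", (1 - f / t) * 100 }'
```

<p>This reproduces the 14.91 reported for the OCR group (916 MB used of 6144 MB).</p>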
<ul>
<li>
<p>19c RAC GRID Install 35<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2035.png" alt="19cRACGRID安装35" /></p>
</li>
<li>
<p>19c RAC GRID Install 36: click OK, then Next. This error is caused by the SCAN check and can be safely ignored.<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2036%20点ok-next%20这个报错是由于scan引起的可以忽略.png" alt="19cRACGRID安装36点ok-next这个报错是由于scan引起的可以忽略" /></p>
</li>
<li>
<p>19c RAC GRID Install 37<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2037.png" alt="19cRACGRID安装37" /></p>
</li>
<li>
<p>19c RAC GRID Install 38: click Yes<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2038%20Yes.png" alt="19cRACGRID安装38Yes" /></p>
</li>
<li>
<p>19c RAC GRID Install 39: click Close. The cluster installation is complete.<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20GRID%20安装%2039%20Close%20集群安装完毕.png" alt="19cRACGRID安装39Close集群安装完毕" /></p>
</li>
</ul>
<h3>3.2 Disable AMM for the ASM Instance</h3>
<blockquote><p>
  On Linux, AMM must be disabled in order to enable HugePages.</p>
<pre><code class="language-bash line-numbers"># su - grid
$ sqlplus / as sysasm
> alter system set sga_max_size=1088M scope=spfile sid='*'; 
> alter system set sga_target=1088M scope=spfile sid='*'; 
> alter system set pga_aggregate_target=1024M scope=spfile sid='*';
> alter system set memory_target=0 scope=spfile sid='*';
> alter system set memory_max_target=0 scope=spfile sid='*'; 
> alter system reset memory_max_target scope=spfile sid='*';
> alter system set processes=300 scope=spfile;

</code></pre>
<p>  Restart HAS (as the root user) for the changes to take effect.</p>
<pre><code class="language-bash line-numbers"># crsctl stop has 
# crsctl start has
</code></pre>
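<p>With AMM off and fixed SGA/PGA sizes in place, the HugePages pool can be sized from the per-instance sga_max_size values. Below is a minimal sizing sketch, assuming 2 MB hugepages and only the ASM instance's 1088 MB SGA on the node; for real sizing, use Oracle's hugepages_settings.sh script, which inspects the running instances:</p>

```shell
# Estimate vm.nr_hugepages for a 2 MB hugepage size (see Hugepagesize in /proc/meminfo).
# SGA_MB is the sum of sga_max_size over all instances on this node;
# here only the ASM instance's 1088 MB is assumed.
SGA_MB=1088
HUGEPAGE_KB=2048
NR_HUGEPAGES=$(( (SGA_MB * 1024 + HUGEPAGE_KB - 1) / HUGEPAGE_KB ))
echo "vm.nr_hugepages = ${NR_HUGEPAGES}"
```

<p>The resulting value goes into /etc/sysctl.conf as vm.nr_hugepages. Note that memory_target must remain 0, since AMM and HugePages are mutually exclusive.</p>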
<hr />
<p>  Session log:</p>
<pre><code class="language-bash line-numbers">[grid@tqdb21: ~]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Feb 13 02:52:56 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> 
SQL> alter system set sga_max_size=1088M scope=spfile sid='*'; 

System altered.

SQL> alter system set sga_target=1088M scope=spfile sid='*'; 

System altered.

SQL> alter system set pga_aggregate_target=1024M scope=spfile sid='*';

System altered.

SQL> 
SQL> alter system set memory_target=0 scope=spfile sid='*';

System altered.

SQL> alter system set memory_max_target=0 scope=spfile sid='*'; 

System altered.

SQL> alter system reset memory_max_target scope=spfile sid='*';

System altered.

SQL> alter system set processes=300 scope=spfile;

System altered.

SQL> 
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[grid@tqdb21: ~]$ 

[root@tqdb21: ~]# crsctl stop has 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'tqdb21'
CRS-2673: Attempting to stop 'ora.crsd' on 'tqdb21'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'tqdb21'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.cvu' on 'tqdb21'
CRS-33673: Attempting to stop resource group 'ora.asmgroup' on server 'tqdb21'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.chad' on 'tqdb21'
CRS-2677: Stop of 'ora.OCR.dg' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'tqdb21'
CRS-2677: Stop of 'ora.asm' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'tqdb21'
CRS-2677: Stop of 'ora.cvu' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'tqdb21'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.tqdb21.vip' on 'tqdb21'
CRS-2677: Stop of 'ora.scan1.vip' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.tqdb21.vip' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.chad' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.asmnet1.asmnetwork' on 'tqdb21'
CRS-2677: Stop of 'ora.asmnet1.asmnetwork' on 'tqdb21' succeeded
CRS-33677: Stop of resource group 'ora.asmgroup' on server 'tqdb21' succeeded.
CRS-2672: Attempting to start 'ora.qosmserver' on 'tqdb22'
CRS-2672: Attempting to start 'ora.scan1.vip' on 'tqdb22'
CRS-2672: Attempting to start 'ora.cvu' on 'tqdb22'
CRS-2672: Attempting to start 'ora.tqdb21.vip' on 'tqdb22'
CRS-2676: Start of 'ora.cvu' on 'tqdb22' succeeded
CRS-2676: Start of 'ora.tqdb21.vip' on 'tqdb22' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'tqdb22' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'tqdb22'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'tqdb22' succeeded
CRS-2676: Start of 'ora.qosmserver' on 'tqdb22' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'tqdb21'
CRS-2677: Stop of 'ora.ons' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'tqdb21'
CRS-2677: Stop of 'ora.net1.network' on 'tqdb21' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'tqdb21' has completed
CRS-2677: Stop of 'ora.crsd' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.crf' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'tqdb21'
CRS-2677: Stop of 'ora.crf' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.asm' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'tqdb21'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.evmd' on 'tqdb21'
CRS-2677: Stop of 'ora.ctssd' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.evmd' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'tqdb21'
CRS-2677: Stop of 'ora.cssd' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'tqdb21'
CRS-2677: Stop of 'ora.gipcd' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'tqdb21' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'tqdb21' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@tqdb21: ~]# 
[root@tqdb21: ~]# crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[root@tqdb21: ~]# 
[root@tqdb21: ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
               ONLINE  ONLINE       tqdb21                   STABLE
               ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       tqdb22                   STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   Started,STABLE
      2        ONLINE  ONLINE       tqdb22                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       tqdb21                   STABLE
      2        ONLINE  ONLINE       tqdb22                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       tqdb22                   STABLE
ora.qosmserver
      1        ONLINE  ONLINE       tqdb22                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       tqdb22                   STABLE
ora.tqdb21.vip
      1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
      1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[root@tqdb21: ~]# 

</code></pre>
</blockquote>
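<p>In long listings like the one above it is easy to overlook a stopped resource. Below is a rough triage filter; this is an illustrative sketch only: it matches just the numbered cluster-resource instance lines, and is fed with two sample lines here instead of the real command:</p>

```shell
# Flag numbered resource instances whose Target or State column is not ONLINE.
# The printf lines mimic the fixed-width `crsctl stat res -t` layout.
printf '%s\n' \
  '      1        ONLINE  ONLINE       tqdb21                   STABLE' \
  '      3        OFFLINE OFFLINE                               STABLE' |
awk '$1 ~ /^[0-9]+$/ && ($2 != "ONLINE" || $3 != "ONLINE") { print }'
```

<p>Against a live cluster the input would be piped from crsctl stat res -t. Intentionally OFFLINE instances (such as the unused third ASM instance slot above) will also be flagged.</p>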
<h3>3.3 DB Software Installation and Configuration</h3>
<blockquote><p>
  Unzip the software as the oracle user:</p>
<pre><code class="language-bash line-numbers">[oracle@tqdb21: /Software]$ ll
total 5922412
drwxr-xr-x 2 root   root             47 Feb 12 18:53 DB RU 19.6.0.0.200114
drwxr-xr-x 2 root   root             47 Feb 12 18:53 GI RU 19.6.0.0.200114
-rwx------ 1 oracle oinstall 3059705302 Feb 12 18:13 LINUX.X64_193000_db_home.zip
-rwx------ 1 grid   oinstall 2889184573 Feb 12 18:13 LINUX.X64_193000_grid_home.zip
-rwx------ 1 root   root      115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[oracle@tqdb21: /Software]$ echo $ORACLE_HOME
/u01/app/oracle/product/19c/dbhome
[oracle@tqdb21: /Software]$ unzip LINUX.X64_193000_db_home.zip -d $ORACLE_HOME
[oracle@tqdb21: /Software]$ 
</code></pre>
<p>  Log in as the oracle user and run the graphical installer (choose RAC; install the database software only):</p>
<pre><code class="language-bash line-numbers">[root@tqdb21: ~]# xhost +
access control disabled, clients can connect from any host
[root@tqdb21: ~]# xdpyinfo | head
name of display:    :0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    12004000
X.Org version: 1.20.4
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    7
[root@tqdb21: ~]# su - oracle
Last login: Thu Feb 13 17:56:33 CST 2020 on pts/7
[oracle@tqdb21: ~]$ export DISPLAY=192.168.6.21:0
[oracle@tqdb21: ~]$ echo $DISPLAY
192.168.6.21:0
[oracle@tqdb21: ~]$ 
[oracle@tqdb21: ~]$ cd $ORACLE_HOME
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome]$ ./runInstaller 

</code></pre>
</blockquote>
<p>DB installation screenshots:</p>
<ul>
<li>
<p>19c RAC DB Install 01<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2001.png" alt="19cRACDB安装01" /></p>
</li>
<li>
<p>19c RAC DB Install 02<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2002.png" alt="19cRACDB安装02" /></p>
</li>
<li>
<p>19c RAC DB Install 03<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2003.png" alt="19cRACDB安装03" /></p>
</li>
<li>
<p>19c RAC DB Install 04<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2004.png" alt="19cRACDB安装04" /></p>
</li>
<li>
<p>19c RAC DB Install 05: SSH equivalence<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2005%20SSH%20等效性.png" alt="19cRACDB安装05SSH等效性" /></p>
</li>
<li>
<p>19c RAC DB Install 06: SSH equivalence setup complete<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2006%20SSH%20等效性%20配置完成.png" alt="19cRACDB安装06SSH等效性配置完成" /></p>
</li>
<li>
<p>19c RAC DB Install 07: SSH equivalence verified<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2007%20SSH%20等效性%20验证通过.png" alt="19cRACDB安装07SSH等效性验证通过" /></p>
</li>
<li>
<p>19c RAC DB Install 08<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2008.png" alt="19cRACDB安装08" /></p>
</li>
<li>
<p>19c RAC DB Install 09<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2009.png" alt="19cRACDB安装09" /></p>
</li>
<li>
<p>19c RAC DB Install 10<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2010.png" alt="19cRACDB安装10" /></p>
</li>
<li>
<p>19c RAC DB Install 11<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2011.png" alt="19cRACDB安装11" /></p>
</li>
<li>
<p>19c RAC DB Install 12<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2012.png" alt="19cRACDB安装12" /></p>
</li>
<li>
<p>19c RAC DB Install 13<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2013.png" alt="19cRACDB安装13" /></p>
</li>
<li>
<p>19c RAC DB Install 14: Ignore All<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2014%20Ignore%20All.png" alt="19cRACDB安装14IgnoreAll" /></p>
</li>
<li>
<p>19c RAC DB Install 15: click Yes<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2015%20Yes.png" alt="19cRACDB安装15Yes" /></p>
</li>
<li>
<p>19c RAC DB Install 16<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2016.png" alt="19cRACDB安装16" /></p>
</li>
<li>
<p>19c RAC DB Install 17<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2017.png" alt="19cRACDB安装17" /></p>
</li>
<li>
<p>19c RAC DB Install 18: run the 2 root scripts on each node in turn<br />
<strong>Click OK only after the scripts have finished</strong><br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2018%20各个节点依次执行%202%20个%20root%20脚本.png" alt="19cRACDB安装18各个节点依次执行2个root脚本" /></p>
</li>
</ul>
<blockquote><p>
  Run the root.sh script:</p>
<pre><code class="language-bash line-numbers"># Node 1
[root@tqdb21: ~]# /u01/app/oracle/product/19c/dbhome/root.sh
Performing root user operation.

The following environment variables are set as:
 ORACLE_OWNER= oracle
 ORACLE_HOME=  /u01/app/oracle/product/19c/dbhome

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@tqdb21: ~]# 

# Node 2
[root@tqdb22: ~]# /u01/app/oracle/product/19c/dbhome/root.sh
Performing root user operation.

The following environment variables are set as:
 ORACLE_OWNER= oracle
 ORACLE_HOME=  /u01/app/oracle/product/19c/dbhome

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@tqdb22: ~]# 
</code></pre>
</blockquote>
<ul>
<li>19c RAC DB Install 19: DB software installation complete<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20DB%20安装%2019%20DB%20软件安装完成.png" alt="19cRACDB安装19DB软件安装完成" /></li>
</ul>
<h3>3.4 Upgrade: Apply the RU (Release Update) to GI and DB</h3>
<blockquote><p>
  Patch reference: Assistant: Download Reference for Oracle Database/GI Update, Revision, PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases (Doc ID 2118136.2)</p>
<table>
<thead>
<tr>
<th><strong>Description</strong></th>
<th><strong>Database Update</strong></th>
<th><strong>GI Update</strong></th>
<th><strong>Windows Bundle Patch</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>JAN2020 (19.6.0.0.0)</td>
<td><a class="wp-editor-md-post-content-link" href="https://support.oracle.com/epmos/faces/ui/patch/PatchDetail.jspx?parent=DOCUMENT&sourceId=2118136.2&patchId=30557433">30557433</a></td>
<td><a class="wp-editor-md-post-content-link" href="https://support.oracle.com/epmos/faces/ui/patch/PatchDetail.jspx?parent=DOCUMENT&sourceId=2118136.2&patchId=30501910">30501910</a></td>
<td><a class="wp-editor-md-post-content-link" href="https://support.oracle.com/epmos/faces/ui/patch/PatchDetail.jspx?parent=DOCUMENT&sourceId=2118136.2&patchId=30445947">30445947</a></td>
</tr>
<tr>
<td>OCT2019 (19.5.0.0.0)</td>
<td><a class="wp-editor-md-post-content-link" href="https://support.oracle.com/epmos/faces/ui/patch/PatchDetail.jspx?parent=DOCUMENT&sourceId=2118136.2&patchId=30125133">30125133</a></td>
<td><a class="wp-editor-md-post-content-link" href="https://support.oracle.com/epmos/faces/ui/patch/PatchDetail.jspx?parent=DOCUMENT&sourceId=2118136.2&patchId=30116789">30116789</a></td>
<td><a class="wp-editor-md-post-content-link" href="https://support.oracle.com/epmos/faces/ui/patch/PatchDetail.jspx?parent=DOCUMENT&sourceId=2118136.2&patchId=30151705">30151705</a></td>
</tr>
<tr>
<td>JUL2019 (19.4.0.0.0)</td>
<td><a class="wp-editor-md-post-content-link" href="https://support.oracle.com/epmos/faces/ui/patch/PatchDetail.jspx?parent=DOCUMENT&sourceId=2118136.2&patchId=29834717">29834717</a></td>
<td><a class="wp-editor-md-post-content-link" href="https://support.oracle.com/epmos/faces/ui/patch/PatchDetail.jspx?parent=DOCUMENT&sourceId=2118136.2&patchId=29708769">29708769</a></td>
<td>NA</td>
</tr>
<tr>
<td>APR2019 (19.3.0.0.0)</td>
<td><a class="wp-editor-md-post-content-link" href="https://support.oracle.com/epmos/faces/ui/patch/PatchDetail.jspx?parent=DOCUMENT&sourceId=2118136.2&patchId=29517242">29517242</a></td>
<td><a class="wp-editor-md-post-content-link" href="https://support.oracle.com/epmos/faces/ui/patch/PatchDetail.jspx?parent=DOCUMENT&sourceId=2118136.2&patchId=29517302">29517302</a></td>
<td>NA</td>
</tr>
</tbody>
</table>
<p>  OPatch download: https://updates.oracle.com/download/6880880.html</p>
<ul>
<li>OPatch Patch6880880_OPatch19.0.0.0.0</li>
</ul>
<p>  <img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/OPatch%20Patch6880880_OPatch19.0.0.0.0.png" alt="OPatchPatch6880880_OPatch19.0.0.0.0" /></p>
<pre><code class="line-numbers">See: https://oracleblog.org/study-note/apply-patch-dpbp-170418/
The steps are:

1. Upgrade OPatch to the latest version. Note: before OPatch 12.2.0.1.5, running opatchauto required the -ocmrf [ocm response file] parameter; from that version onward the response-file parameter is no longer needed. Also, the 170418 DPBP requires OPatch 12.2.0.1.7 or later.

2. [GRID_HOME]/OPatch/opatchauto apply [UNZIPPED_PATCH_LOCATION]/25433352. Note that this command must be run on each node in turn (not in parallel). While it runs it brings down CRS and the database, and patches both the grid home and the oracle home. Rolling node by node also reduces the downtime.

3. datapatch -verbose. As noted above, rolling node by node reduces downtime, but some downtime is still needed: the time spent running datapatch here. This step upgrades the data dictionary; since the dictionary belongs to the database as a whole, it only needs to run on one node. Note that in CDB mode you must first open all PDBs with `alter pluggable database all open`, and then run datapatch.

Document: Datapatch: Database 12c or later Post Patch SQL Automation (Doc ID 1585822.1)

​```Run on one node only
Load the SQL portion into the database from any one node
oracle$ cd $ORACLE_HOME/OPatch

For a PDB database, make sure every PDB is in READ WRITE state.
show pdbs;

oracle$ ./datapatch -verbose

Restart the db
[root@tqdb21: ~]# srvctl stop database -db tqdb
[root@tqdb21: ~]# srvctl start database -db tqdb

Check the applied patch
19:29:25 sys@TQDB(tqdb21)> set linesize 300;
19:29:34 sys@TQDB(tqdb21)> col TARGET_BUILD_TIMESTAMP for a10;
19:29:34 sys@TQDB(tqdb21)> col SOURCE_BUILD_TIMESTAMP for a20;
19:29:34 sys@TQDB(tqdb21)> col SOURCE_BUILD_DESCRIPTION for a20;
19:29:34 sys@TQDB(tqdb21)> col TARGET_VERSIONT for a20;
19:29:34 sys@TQDB(tqdb21)> col TARGET_BUILD_DESCRIPTION for a20;
19:29:34 sys@TQDB(tqdb21)> 
19:29:34 sys@TQDB(tqdb21)> select install_id, PATCH_ID,PATCH_UID,ACTION,STATUS, DESCRIPTION, SOURCE_VERSION,SOURCE_BUILD_DESCRIPTION,SOURCE_BUILD_TIMESTAMP, TARGET_VERSION, TARGET_BUILD_DESCRIPTION, to_char(TARGET_BUILD_TIMESTAMP, 'yyyy-mm-dd hh24:mi:ss') from dba_registry_sqlpatch;

INSTALL_ID   PATCH_ID  PATCH_UID ACTION          STATUS          DESCRIPTION                                                  SOURCE_VERSION  SOURCE_BUILD_DESCRIP SOURCE_BUILD_TIMESTA TARGET_VERSION  TARGET_BUILD_DESCRIP TO_CHAR(TARGET_BUIL
---------- ---------- ---------- --------------- --------------- ------------------------------------------------------------ --------------- -------------------- -------------------- --------------- -------------------- -------------------
         1   30557433   23305305 APPLY           SUCCESS         Database Release Update : 19.6.0.0.200114 (30557433)         19.1.0.0.0      Feature Release                           19.6.0.0.0      Release_Update       2019-12-17 15:50:04

19:29:36 sys@TQDB(tqdb21)> 
​```

Note: here I have only installed the database software and have not yet created a database, so there is no need to upgrade the data dictionary. (With no database created, there is no data dictionary yet.)

4. After patching, it is recommended to run a check with orachk.
</code></pre>
</blockquote>
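<p>A minimal sketch of the datapatch step quoted above, assuming a CDB. The helper name <code>pre_datapatch_sql</code> is made up for illustration; the commented sqlplus/datapatch invocations are the standard ones from the steps.</p>

```shell
#!/bin/sh
# Hypothetical helper (illustrative only): emit the SQL that should run
# before datapatch on a CDB, i.e. open every PDB read/write, then list them.
pre_datapatch_sql() {
    printf '%s\n' \
        'alter pluggable database all open;' \
        'show pdbs'
}

# Sketch of the overall step, run on ONE node only:
#   pre_datapatch_sql | sqlplus -S / as sysdba
#   cd $ORACLE_HOME/OPatch && ./datapatch -verbose
pre_datapatch_sql
```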
<h4>3.4.1 Updating OPatch</h4>
<blockquote><p>
  ==<strong>Note: update the OPatch version for both the grid and database homes on each of the two nodes.</strong>==</p>
<pre><code class="language-bash line-numbers">-- Update OPatch to the current latest version, 12.2.0.1.19
-- Run on both nodes.
-- p6880880_190000_Linux-x86-64.zip
-- Unzip into the grid user's $ORACLE_HOME (/u01/app/19c/grid)
grid$ unzip p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
  Enter: A
grid$ opatch version      ==> check the OPatch version
-- Unzip into the oracle user's $ORACLE_HOME (/u01/app/oracle/product/19c/dbhome)
oracle$ unzip p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
  Enter: A
oracle$ opatch version      ==> check the OPatch version
</code></pre>
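<p>A quick way to confirm that the freshly unzipped OPatch meets a patch README's minimum version is a <code>sort -V</code> comparison. This is a sketch assuming GNU <code>sort -V</code> is available (it is on Oracle Linux); <code>ver_ge</code> is an ad-hoc helper, not an OPatch option, and the version numbers are the ones recorded below.</p>

```shell
#!/bin/sh
# Ad-hoc helper: succeed when dotted version $1 >= $2 (relies on GNU `sort -V`).
ver_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# Example: the version reported by `opatch version` after the unzip,
# against the old version it replaced.
if ver_ge "12.2.0.1.19" "12.2.0.1.17"; then
    echo "OPatch is new enough"
fi
# prints "OPatch is new enough"
```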
<p>  Operation log:</p>
<pre><code class="language-bash line-numbers">-- 1. Check the current OPatch version for grid and oracle on both nodes
-- Check: node 1 current OPatch version for grid and oracle
​```
[grid@tqdb21: ~]$ opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
[grid@tqdb21: ~]$ 

[grid@tqdb21: ~]$ su - oracle
Password: 
Last login: Thu Feb 13 19:21:21 CST 2020 on pts/7
[oracle@tqdb21: ~]$ opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
[oracle@tqdb21: ~]$ 

-- Check: node 2 current OPatch version for grid and oracle
[grid@tqdb22: ~]$ opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
[grid@tqdb22: ~]$ su - oracle
Password: 
Last login: Thu Feb 13 18:55:38 CST 2020
[oracle@tqdb22: ~]$ opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
[oracle@tqdb22: ~]$ 
​```

-- 2. On both nodes: back up the current OPatch directory for the grid and oracle users
​```
-- Node 1 
[root@tqdb21: ~]# mv /u01/app/19c/grid/OPatch/ /u01/app/19c/grid/OPatch.bak
[root@tqdb21: ~]# mv /u01/app/oracle/product/19c/dbhome/OPatch/ /u01/app/oracle/product/19c/dbhome/OPatch.bak
[root@tqdb21: ~]# mkdir /u01/app/19c/grid/OPatch/
[root@tqdb21: ~]# chown grid:oinstall /u01/app/19c/grid/OPatch/
[root@tqdb21: ~]# mkdir /u01/app/oracle/product/19c/dbhome/OPatch/
[root@tqdb21: ~]# chown oracle:oinstall /u01/app/oracle/product/19c/dbhome/OPatch/

-- Node 2
[root@tqdb22: ~]# mv /u01/app/19c/grid/OPatch/ /u01/app/19c/grid/OPatch.bak
[root@tqdb22: ~]# mv /u01/app/oracle/product/19c/dbhome/OPatch/ /u01/app/oracle/product/19c/dbhome/OPatch.bak
[root@tqdb22: ~]# mkdir /u01/app/19c/grid/OPatch/
[root@tqdb22: ~]# chown grid:oinstall /u01/app/19c/grid/OPatch/
[root@tqdb22: ~]# mkdir /u01/app/oracle/product/19c/dbhome/OPatch/
[root@tqdb22: ~]# chown oracle:oinstall /u01/app/oracle/product/19c/dbhome/OPatch/
​```

-- 3. Replace OPatch in the grid home on both nodes
​```
-- Node 1
[root@tqdb21: /Software]# ll p6880880_190000_Linux-x86-64.zip 
-rwx------ 1 root root 115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[root@tqdb21: /Software]# 
[root@tqdb21: /Software]# chown grid:oinstall p6880880_190000_Linux-x86-64.zip 
[root@tqdb21: /Software]# ll p6880880_190000_Linux-x86-64.zip                  
-rwx------ 1 grid oinstall 115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[root@tqdb21: /Software]# 
[root@tqdb21: /Software]# su - grid
Last login: Fri Feb 14 00:35:39 CST 2020
[grid@tqdb21: ~]$ cd /Software/
[grid@tqdb21: /Software]$ ll p6880880_190000_Linux-x86-64.zip 
-rwx------ 1 grid oinstall 115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[grid@tqdb21: /Software]$ echo $ORACLE_HOME
/u01/app/19c/grid
[grid@tqdb21: /Software]$ unzip p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
[grid@tqdb21: /Software]$ du -sh $ORACLE_HOME/OPatch
252M    /u01/app/19c/grid/OPatch
[grid@tqdb21: /Software]$ opatch version
OPatch Version: 12.2.0.1.19

OPatch succeeded.
[grid@tqdb21: /Software]$ 

-- Node 2
[root@tqdb22: /Software]# ll p6880880_190000_Linux-x86-64.zip 
-rwx------ 1 root root 115653541 Feb 14 00:40 p6880880_190000_Linux-x86-64.zip
[root@tqdb22: /Software]# chown grid:oinstall p6880880_190000_Linux-x86-64.zip 
[root@tqdb22: /Software]# ll p6880880_190000_Linux-x86-64.zip                  
-rwx------ 1 grid oinstall 115653541 Feb 14 00:40 p6880880_190000_Linux-x86-64.zip
[root@tqdb22: /Software]# su - grid
Last login: Fri Feb 14 01:01:19 CST 2020
[grid@tqdb22: ~]$ cd /Software/
[grid@tqdb22: /Software]$ echo $ORACLE_HOME
/u01/app/19c/grid
[grid@tqdb22: /Software]$ unzip p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
[grid@tqdb22: /Software]$ du -sh $ORACLE_HOME/OPatch
252M    /u01/app/19c/grid/OPatch
[grid@tqdb22: /Software]$ opatch version
OPatch Version: 12.2.0.1.19

OPatch succeeded.
[grid@tqdb22: /Software]$ 
​```

-- 4. Replace OPatch in the oracle home on both nodes
​```
-- Node 1
[root@tqdb21: /Software]# ll p6880880_190000_Linux-x86-64.zip 
-rwx------ 1 grid oinstall 115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[root@tqdb21: /Software]# chown oracle:oinstall p6880880_190000_Linux-x86-64.zip 
[root@tqdb21: /Software]# ll p6880880_190000_Linux-x86-64.zip                    
-rwx------ 1 oracle oinstall 115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[root@tqdb21: /Software]# su - oracle
Last login: Fri Feb 14 00:28:55 CST 2020 on pts/7
[oracle@tqdb21: ~]$ cd /Software/
[oracle@tqdb21: /Software]$ echo $ORACLE_HOME
/u01/app/oracle/product/19c/dbhome
[oracle@tqdb21: /Software]$ unzip p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
[oracle@tqdb21: /Software]$ du -sh $ORACLE_HOME/OPatch
252M    /u01/app/oracle/product/19c/dbhome/OPatch
[oracle@tqdb21: /Software]$ opatch version
OPatch Version: 12.2.0.1.19

OPatch succeeded.
[oracle@tqdb21: /Software]$ 

-- Node 2
[root@tqdb22: /Software]# ll p6880880_190000_Linux-x86-64.zip
total 112944
-rwx------ 1 grid oinstall 115653541 Feb 14 00:40 p6880880_190000_Linux-x86-64.zip
[root@tqdb22: /Software]# chown oracle:oinstall p6880880_190000_Linux-x86-64.zip 
[root@tqdb22: /Software]# ll p6880880_190000_Linux-x86-64.zip 
-rwx------ 1 oracle oinstall 115653541 Feb 14 00:40 p6880880_190000_Linux-x86-64.zip
[root@tqdb22: /Software]# su - oracle
Last login: Fri Feb 14 00:55:36 CST 2020 on pts/1
[oracle@tqdb22: ~]$ cd /Software/
[oracle@tqdb22: /Software]$ echo $ORACLE_HOME
/u01/app/oracle/product/19c/dbhome
[oracle@tqdb22: /Software]$ unzip p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
[oracle@tqdb22: /Software]$ du -sh $ORACLE_HOME/OPatch
252M    /u01/app/oracle/product/19c/dbhome/OPatch
[oracle@tqdb22: /Software]$ opatch version
OPatch Version: 12.2.0.1.19

OPatch succeeded.
[oracle@tqdb22: /Software]$ 
​```
</code></pre>
</blockquote>
<h4>3.4.2 Applying the GI RU (Release Update) Patch</h4>
<blockquote><p>
  Note: (run on both nodes)</p>
<p>  (1) The patching process automatically stops and restarts the clusterware.</p>
<p>  (2) Patch grid on node 1 first, then grid on node 2.</p>
<pre><code class="language-bash line-numbers">-- Run on both nodes
-- 1. As the grid user, unzip the GI RU patch
root# cd /Software/19.6.0.0.0/Patch_30501910_GI_RU
root# chown -R grid:oinstall /Software/19.6.0.0.0/Patch_30501910_GI_RU
root# su - grid
grid$ cd /Software/19.6.0.0.0/Patch_30501910_GI_RU
grid$ unzip p30501910_190000_Linux-x86-64.zip 

-- 2. As root, pre-check the RU with `-analyze` to test compatibility (must be run as root, otherwise it errors out)
-- (the grid user's $ORACLE_HOME is /u01/app/19c/grid)
root# /u01/app/19c/grid/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/19c/grid -analyze

-- 3. As root, apply the GI RU
root# /u01/app/19c/grid/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/19c/grid


</code></pre>
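<p>The two root steps above take identical arguments, so the dry run and the real apply can be chained so that the apply only happens when <code>-analyze</code> succeeds. A minimal sketch: the paths mirror the commands above, and <code>OPATCHAUTO</code> is made overridable purely so the control flow can be exercised without a real Grid Infrastructure home.</p>

```shell
#!/bin/sh
# Sketch: run `opatchauto ... -analyze` first, and apply only if it passes.
GRID_HOME=${GRID_HOME:-/u01/app/19c/grid}
PATCH_DIR=${PATCH_DIR:-/Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910}
OPATCHAUTO=${OPATCHAUTO:-$GRID_HOME/OPatch/opatchauto}

analyze_then_apply() {
    if ! "$OPATCHAUTO" apply "$PATCH_DIR" -oh "$GRID_HOME" -analyze; then
        echo "analyze failed; skipping apply" >&2
        return 1
    fi
    "$OPATCHAUTO" apply "$PATCH_DIR" -oh "$GRID_HOME"
}
```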
<p>  Node 1, operation log:</p>
<pre><code class="language-bash line-numbers">-- Node 1: 1. As the grid user, unzip the GI RU patch
[root@tqdb21: /Software/19.6.0.0.0]# cd Patch_30501910_GI_RU
[root@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# cd
[root@tqdb21: ~]# cd /Software/19.6.0.0.0/Patch_30501910_GI_RU
[root@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# ll p30501910_190000_Linux-x86-64.zip 
-rwx------ 1 root root 2160976478 Feb 13 23:55 p30501910_190000_Linux-x86-64.zip
[root@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# 
[root@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# chown -R grid:oinstall /Software/19.6.0.0.0/Patch_30501910_GI_RU
[root@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# ll p30501910_190000_Linux-x86-64.zip 
-rwx------ 1 grid oinstall 2160976478 Feb 13 23:55 p30501910_190000_Linux-x86-64.zip
[root@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# su - grid
Last login: Fri Feb 14 01:38:22 CST 2020 on pts/0
[grid@tqdb21: ~]$ cd /Software/19.6.0.0.0/Patch_30501910_GI_RU
[grid@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ ll
total 2110332
-rwx------ 1 grid oinstall 2160976478 Feb 13 23:55 p30501910_190000_Linux-x86-64.zip
[grid@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ 
[grid@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ unzip p30501910_190000_Linux-x86-64.zip 
[grid@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ ll
total 2110640
drwxr-x--- 7 grid oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 grid oinstall 2160976478 Feb 13 23:55 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 grid oinstall     314753 Jan 15 03:57 PatchSearch.xml
[grid@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ 

-- Node 1: check the currently installed patches
[root@tqdb21: ~]# su - grid
Last login: Fri Feb 14 01:35:53 CST 2020
[grid@tqdb21: ~]$ opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded.
[grid@tqdb21: ~]$ 

-- Node 1: 2. As root, pre-check the RU with `-analyze` to test compatibility (must be run as root, otherwise it errors out)
​```
[root@tqdb21: ~]# /u01/app/19c/grid/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/19c/grid -analyze

OPatchauto session is initiated at Fri Feb 14 02:22:26 2020

System initialization log file is /u01/app/19c/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-14_02-22-31AM.log.

Session log file is /u01/app/19c/grid/cfgtoollogs/opatchauto/opatchauto2020-02-14_02-22-46AM.log
The id for this session is QAVQ

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19c/grid
Patch applicability verified successfully on home /u01/app/19c/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:tqdb21
CRS Home:/u01/app/19c/grid
Version:19.0.0.0.0


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-23-11AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-23-11AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-23-11AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-23-11AM_1.log



OPatchauto session completed at Fri Feb 14 02:23:28 2020
Time taken to complete the session 1 minute, 2 seconds
[root@tqdb21: ~]# 
​```

-- Node 1: 3. As root, apply the GI RU
​```
[root@tqdb21: ~]# /u01/app/19c/grid/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/19c/grid

OPatchauto session is initiated at Fri Feb 14 02:30:04 2020

System initialization log file is /u01/app/19c/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-14_02-30-09AM.log.

Session log file is /u01/app/19c/grid/cfgtoollogs/opatchauto/opatchauto2020-02-14_02-30-21AM.log
The id for this session is A77D

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19c/grid
Patch applicability verified successfully on home /u01/app/19c/grid


Bringing down CRS service on home /u01/app/19c/grid
CRS service brought down successfully on home /u01/app/19c/grid


Start applying binary patch on home /u01/app/19c/grid
Binary patch applied successfully on home /u01/app/19c/grid


Starting CRS service on home /u01/app/19c/grid
CRS service started successfully on home /u01/app/19c/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:tqdb21
CRS Home:/u01/app/19c/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-33-49AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-33-49AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-33-49AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-33-49AM_1.log



OPatchauto session completed at Fri Feb 14 02:42:10 2020
Time taken to complete the session 12 minutes, 6 seconds
[root@tqdb21: ~]# 
​```

-- Node 1: check the patches after applying the GI RU; the sqlplus login banner now shows `Version 19.6.0.0.0`
[grid@tqdb21: ~]$ opatch lspatches
30655595;TOMCAT RELEASE UPDATE 19.0.0.0.0 (30655595)
30557433;Database Release Update : 19.6.0.0.200114 (30557433)
30489632;ACFS RELEASE UPDATE 19.6.0.0.0 (30489632)
30489227;OCW RELEASE UPDATE 19.6.0.0.0 (30489227)

OPatch succeeded.
[grid@tqdb21: ~]$ 
[grid@tqdb21: ~]$ 
[grid@tqdb21: ~]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 02:51:10 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
[grid@tqdb21: ~]$ 

</code></pre>
<p>  Node 2, operation log:</p>
<pre><code class="language-bash line-numbers">-- Node 2: 1. As the grid user, unzip the GI RU patch
[root@tqdb22: /Software/19.6.0.0.0]# cd Patch_30501910_GI_RU/
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# ll p30501910_190000_Linux-x86-64.zip 
-rwx------ 1 root root 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# chown -R grid:oinstall /Software/19.6.0.0.0/Patch_30501910_GI_RU
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# ll p30501910_190000_Linux-x86-64.zip 
-rwx------ 1 grid oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# su - grid
Last login: Fri Feb 14 03:01:47 CST 2020
[grid@tqdb22: ~]$ cd /Software/19.6.0.0.0/Patch_30501910_GI_RU/
[grid@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ ll p30501910_190000_Linux-x86-64.zip 
-rwx------ 1 grid oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
[grid@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ unzip p30501910_190000_Linux-x86-64.zip 
[grid@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ ll
total 2110640
drwxr-x--- 7 grid oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 grid oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 grid oinstall     314753 Jan 15 03:57 PatchSearch.xml
[grid@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ 

-- Node 2: check the current patches; the sqlplus login banner shows `Version 19.3.0.0.0`
[grid@tqdb22: ~]$ opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded.
[grid@tqdb22: ~]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 03:06:11 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[grid@tqdb22: ~]$ 

-- Node 2: 2. As root, pre-check the RU with `-analyze` to test compatibility (must be run as root, otherwise it errors out)
​```
[root@tqdb22: ~]# /u01/app/19c/grid/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/19c/grid -analyze

OPatchauto session is initiated at Fri Feb 14 03:08:44 2020

System initialization log file is /u01/app/19c/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-14_03-08-49AM.log.

Session log file is /u01/app/19c/grid/cfgtoollogs/opatchauto/opatchauto2020-02-14_03-09-03AM.log
The id for this session is GCM2

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19c/grid
Patch applicability verified successfully on home /u01/app/19c/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:tqdb22
CRS Home:/u01/app/19c/grid
Version:19.0.0.0.0


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-09-19AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-09-19AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-09-19AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-09-19AM_1.log



OPatchauto session completed at Fri Feb 14 03:09:36 2020
Time taken to complete the session 0 minute, 52 seconds
[root@tqdb22: ~]# 
​```

-- Node 2: 3. As root, apply the GI RU
​```
[root@tqdb22: ~]# /u01/app/19c/grid/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/19c/grid

OPatchauto session is initiated at Fri Feb 14 03:10:49 2020

System initialization log file is /u01/app/19c/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-14_03-10-55AM.log.

Session log file is /u01/app/19c/grid/cfgtoollogs/opatchauto/opatchauto2020-02-14_03-11-07AM.log
The id for this session is S64Q

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19c/grid
Patch applicability verified successfully on home /u01/app/19c/grid


Bringing down CRS service on home /u01/app/19c/grid
CRS service brought down successfully on home /u01/app/19c/grid


Start applying binary patch on home /u01/app/19c/grid
Binary patch applied successfully on home /u01/app/19c/grid


Starting CRS service on home /u01/app/19c/grid
CRS service started successfully on home /u01/app/19c/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:tqdb22
CRS Home:/u01/app/19c/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-14-53AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-14-53AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-14-53AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-14-53AM_1.log



OPatchauto session completed at Fri Feb 14 03:24:56 2020
Time taken to complete the session 14 minutes, 7 seconds
[root@tqdb22: ~]# 
​```

-- Node 2: check the patches after applying the GI RU; the sqlplus login banner now shows `Version 19.6.0.0.0`
[grid@tqdb22: ~]$ opatch lspatches
30655595;TOMCAT RELEASE UPDATE 19.0.0.0.0 (30655595)
30557433;Database Release Update : 19.6.0.0.200114 (30557433)
30489632;ACFS RELEASE UPDATE 19.6.0.0.0 (30489632)
30489227;OCW RELEASE UPDATE 19.6.0.0.0 (30489227)

OPatch succeeded.
[grid@tqdb22: ~]$ 
[grid@tqdb22: ~]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 03:43:16 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
[grid@tqdb22: ~]$ 
</code></pre>
</blockquote>
<h4>3.4.3 Applying the DB RU (Release Update) Patch</h4>
<blockquote>
<blockquote><p>
    Patch 30501910: GI RELEASE UPDATE 19.6.0.0.0  (p30501910_190000_Linux-x86-64.zip)</p>
<p>    Note: the GI RU already contains the DB RU, so the same patch is reused when patching the DB in a RAC environment.
  </p></blockquote>
<p>  Note: (run on both nodes)</p>
<p>  (1) The patching process automatically stops and restarts the database services on the node being patched.</p>
<p>  (2) Patch the database home on node 1 first, then node 2.</p>
<pre><code class="language-bash line-numbers">-- Run on both nodes
-- 1. Grant the previously unzipped GI RU patch directory (`/Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/`) to the oracle user
root# chown -R oracle:oinstall /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/

-- 2. As root, pre-check the RU with `-analyze` to test compatibility
-- (the oracle user's $ORACLE_HOME is /u01/app/oracle/product/19c/dbhome)
root# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome -analyze

-- 3. As root, apply the database RU
root# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome

</code></pre>
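<p>After both homes are patched, it is handy to reduce the <code>opatch lspatches</code> output (lines of the form <code>&lt;patch id&gt;;&lt;description&gt;</code>, as recorded below) to a sorted list of patch IDs that can be diffed across nodes. <code>lspatches_ids</code> is an ad-hoc helper, not an OPatch feature:</p>

```shell
#!/bin/sh
# Ad-hoc helper: keep only the numeric patch IDs from `opatch lspatches`
# output, sorted, so inventories from two nodes can be compared with diff.
lspatches_ids() {
    awk -F';' '/^[0-9]+;/ { print $1 }' | sort
}

# Example with two lines from the logs below:
printf '%s\n' \
    '30655595;TOMCAT RELEASE UPDATE 19.0.0.0.0 (30655595)' \
    '30557433;Database Release Update : 19.6.0.0.200114 (30557433)' \
    'OPatch succeeded.' | lspatches_ids
# prints 30557433 then 30655595
```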
<p>  Node 1, operation log:</p>
<pre><code class="language-bash line-numbers">-- Node 1: 1. Grant the previously unzipped GI RU patch directory (`/Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/`) to the oracle user
[root@tqdb21: ~]# ll /Software/19.6.0.0.0/Patch_30501910_GI_RU/
total 2110640
drwxr-x--- 7 grid oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 grid oinstall 2160976478 Feb 13 23:55 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 grid oinstall     314753 Jan 15 03:57 PatchSearch.xml
[root@tqdb21: ~]# 
[root@tqdb21: ~]# chown -R oracle:oinstall /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/
[root@tqdb21: ~]# ll /Software/19.6.0.0.0/Patch_30501910_GI_RU/                               
total 2110640
drwxr-x--- 7 oracle oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 grid   oinstall 2160976478 Feb 13 23:55 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 grid   oinstall     314753 Jan 15 03:57 PatchSearch.xml
[root@tqdb21: ~]# 

-- Node 1: check the current patches; the sqlplus login banner shows `Version 19.3.0.0.0`
[root@tqdb21: ~]# su - oracle
Last login: Fri Feb 14 01:15:34 CST 2020 on pts/0
[oracle@tqdb21: ~]$ opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)

OPatch succeeded.
[oracle@tqdb21: ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 04:20:52 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> quit
Disconnected
[oracle@tqdb21: ~]$ 

-- Node 1: 2. As root, pre-check the RU with `-analyze` to test compatibility
-- (the oracle user's $ORACLE_HOME is /u01/app/oracle/product/19c/dbhome)
​```
[root@tqdb21: ~]# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome -analyze

OPatchauto session is initiated at Fri Feb 14 04:22:06 2020

System initialization log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchautodb/systemconfig2020-02-14_04-22-12AM.log.

Session log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/opatchauto2020-02-14_04-22-47AM.log
The id for this session is RC8U

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome


Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
 
OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:tqdb21
RAC Home:/u01/app/oracle/product/19c/dbhome
Version:19.0.0.0.0


==Following patches were SKIPPED:

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-23-20AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-23-20AM_1.log



OPatchauto session completed at Fri Feb 14 04:23:37 2020
Time taken to complete the session 1 minute, 31 seconds
[root@tqdb21: ~]# 
​```

-- Node 1: 3. As root, apply the database RU
​```
[root@tqdb21: ~]# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome

OPatchauto session is initiated at Fri Feb 14 04:25:02 2020

System initialization log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchautodb/systemconfig2020-02-14_04-25-07AM.log.

Session log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/opatchauto2020-02-14_04-25-26AM.log
The id for this session is Y439

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome


Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
 

Preparing to bring down database service on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
 

Performing prepatch operation on home /u01/app/oracle/product/19c/dbhome
Perpatch operation completed successfully on home /u01/app/oracle/product/19c/dbhome


Start applying binary patch on home /u01/app/oracle/product/19c/dbhome
Binary patch applied successfully on home /u01/app/oracle/product/19c/dbhome


Performing postpatch operation on home /u01/app/oracle/product/19c/dbhome
Postpatch operation completed successfully on home /u01/app/oracle/product/19c/dbhome


Preparing home /u01/app/oracle/product/19c/dbhome after database service restarted
No step execution required.........
 

Trying to apply SQL patch on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
 
OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:tqdb21
RAC Home:/u01/app/oracle/product/19c/dbhome
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-26-01AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-26-01AM_1.log



OPatchauto session completed at Fri Feb 14 04:32:37 2020
Time taken to complete the session 7 minutes, 35 seconds
[root@tqdb21: ~]# 

-- Node 1: list patches after applying the GI RU; the sqlplus banner now shows `Version 19.6.0.0.0`
[oracle@tqdb21: ~]$ opatch lspatches
30557433;Database Release Update : 19.6.0.0.200114 (30557433)
30489227;OCW RELEASE UPDATE 19.6.0.0.0 (30489227)

OPatch succeeded.
[oracle@tqdb21: ~]$ 
[oracle@tqdb21: ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 04:38:35 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> quit
Disconnected
[oracle@tqdb21: ~]$ 

</code></pre>
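<p>The <code>opatch lspatches</code> check above is easy to script when many homes have to be verified. Below is a minimal sketch (my own illustration, not part of the original procedure) that parses the <code>ID;DESCRIPTION</code> lines shown in the output above:</p>

```python
def parse_lspatches(text):
    """Parse `opatch lspatches` output into (patch_id, description) pairs.

    Patch lines look like '30557433;Database Release Update : ...';
    blank lines and trailer lines such as 'OPatch succeeded.' are skipped.
    """
    patches = []
    for line in text.splitlines():
        head, sep, desc = line.strip().partition(";")
        if sep and head.isdigit():
            patches.append((int(head), desc))
    return patches
```

<p>Feeding it node 1's output yields the two RU patches (30557433 and 30489227), which can then be compared against the expected contents of the GI RU.</p>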
<p>  Node 2, operation log:</p>
<pre><code class="language-bash line-numbers">-- Node 2: 1. chown the previously extracted GI RU patch directory (`/Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/`) to the oracle user
[root@tqdb22: ~]# ll /Software/19.6.0.0.0/Patch_30501910_GI_RU/
total 2110640
drwxr-x--- 7 grid oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 grid oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 grid oinstall     314753 Jan 15 03:57 PatchSearch.xml
[root@tqdb22: ~]# chown -R oracle:oinstall /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/
[root@tqdb22: ~]# ll /Software/19.6.0.0.0/Patch_30501910_GI_RU/                               
total 2110640
drwxr-x--- 7 oracle oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 grid   oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 grid   oinstall     314753 Jan 15 03:57 PatchSearch.xml
[root@tqdb22: ~]# 

-- Node 2: list the current patches; the sqlplus banner still shows `Version 19.3.0.0.0`
[oracle@tqdb22: ~]$ opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)

OPatch succeeded.
[oracle@tqdb22: ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 04:49:59 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> quit
Disconnected
[oracle@tqdb22: ~]$ 

-- Node 2: 2. As root, run opatchauto with `-analyze` to dry-run the RU and check applicability
-- (the oracle user's $ORACLE_HOME is /u01/app/oracle/product/19c/dbhome)
[root@tqdb22: ~]# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome -analyze

OPatchauto session is initiated at Fri Feb 14 04:51:25 2020

System initialization log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchautodb/systemconfig2020-02-14_04-51-30AM.log.

Session log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/opatchauto2020-02-14_04-51-49AM.log
The id for this session is 63EB

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome


Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
 
OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:tqdb22
RAC Home:/u01/app/oracle/product/19c/dbhome
Version:19.0.0.0.0


==Following patches were SKIPPED:

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-52-04AM_1.log

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-52-04AM_1.log



OPatchauto session completed at Fri Feb 14 04:52:19 2020
Time taken to complete the session 0 minute, 54 seconds
[root@tqdb22: ~]# 

-- Node 2: 3. As root, apply the database RU
-- (including error handling: the second RAC node hits an error while applying the DB patch)
-- Side note: the error on node 2, and how it was worked through
--------------------------------------------------------------------------------
-- The apply failed: a permissions problem
[root@tqdb22: ~]# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome

OPatchauto session is initiated at Fri Feb 14 04:53:37 2020

System initialization log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchautodb/systemconfig2020-02-14_04-53-42AM.log.

Session log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/opatchauto2020-02-14_04-53-59AM.log
The id for this session is HS71

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome


Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
 

Preparing to bring down database service on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
 

Performing prepatch operation on home /u01/app/oracle/product/19c/dbhome
Perpatch operation completed successfully on home /u01/app/oracle/product/19c/dbhome


Start applying binary patch on home /u01/app/oracle/product/19c/dbhome
Failed while applying binary patches on home /u01/app/oracle/product/19c/dbhome

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : tqdb22->/u01/app/oracle/product/19c/dbhome Type[rac]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/oracle/product/19c/dbhome, host: tqdb22.
Command failed:  /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto  apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome -target_type rac_database -binary -invPtrLoc /u01/app/oracle/product/19c/dbhome/oraInst.loc -jre /u01/app/oracle/product/19c/dbhome/OPatch/jre -persistresult /u01/app/oracle/product/19c/dbhome/opatchautocfg/db/sessioninfo/sessionresult_tqdb22_rac.ser -analyzedresult /u01/app/oracle/product/19c/dbhome/opatchautocfg/db/sessioninfo/sessionresult_analyze_tqdb22_rac.ser
Command failure output: 
==Following patches FAILED in apply:

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-54-53AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase... 'ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException: java.io.FileNotFoundException: /u01/app/oraInventory/ContentsXML/oui-patch.xml (Permission denied)' 

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Fri Feb 14 04:55:41 2020
Time taken to complete the session 2 minutes, 4 seconds

 opatchauto failed with error code 42
[root@tqdb22: ~]# 

-- rac1: on node 1, oui-patch.xml already has the group-write bit
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ ll /u01/app/oraInventory/ContentsXML/oui-patch.xml
-rw-rw---- 1 grid oinstall 174 Feb 14 04:30 /u01/app/oraInventory/ContentsXML/oui-patch.xml

-- rac2: on node 2 it does not; grant group write
[root@tqdb22: ~]# ll /u01/app/oraInventory/ContentsXML/oui-patch.xml
-rw-r--r-- 1 grid oinstall 174 Feb 14 03:19 /u01/app/oraInventory/ContentsXML/oui-patch.xml
[root@tqdb22: ~]# chmod g+w /u01/app/oraInventory/ContentsXML/oui-patch.xml
[root@tqdb22: ~]# ll /u01/app/oraInventory/ContentsXML/oui-patch.xml       
-rw-rw-r-- 1 grid oinstall 174 Feb 14 03:19 /u01/app/oraInventory/ContentsXML/oui-patch.xml
[root@tqdb22: ~]# 

-- rac2: rerun opatchauto apply after granting the permission; it still fails, this time on the other patch
[root@tqdb22: ~]# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome

OPatchauto session is initiated at Fri Feb 14 05:31:28 2020

System initialization log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchautodb/systemconfig2020-02-14_05-31-33AM.log.

Session log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/opatchauto2020-02-14_05-31-51AM.log
The id for this session is WEY7

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome


Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
 

Preparing to bring down database service on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
 

Performing prepatch operation on home /u01/app/oracle/product/19c/dbhome
Perpatch operation completed successfully on home /u01/app/oracle/product/19c/dbhome


Start applying binary patch on home /u01/app/oracle/product/19c/dbhome
Failed while applying binary patches on home /u01/app/oracle/product/19c/dbhome

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : tqdb22->/u01/app/oracle/product/19c/dbhome Type[rac]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/oracle/product/19c/dbhome, host: tqdb22.
Command failed:  /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto  apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome -target_type rac_database -binary -invPtrLoc /u01/app/oracle/product/19c/dbhome/oraInst.loc -jre /u01/app/oracle/product/19c/dbhome/OPatch/jre -persistresult /u01/app/oracle/product/19c/dbhome/opatchautocfg/db/sessioninfo/sessionresult_tqdb22_rac.ser -analyzedresult /u01/app/oracle/product/19c/dbhome/opatchautocfg/db/sessioninfo/sessionresult_analyze_tqdb22_rac.ser
Command failure output: 
==Following patches FAILED in apply:

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_05-32-23AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase... 'ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException: java.io.FileNotFoundException: /u01/app/oraInventory/ContentsXML/oui-patch.xml (Permission denied)' 

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Fri Feb 14 05:36:07 2020
Time taken to complete the session 4 minutes, 39 seconds

 opatchauto failed with error code 42
[root@tqdb22: ~]# 



-- rac2: the workaround that finally worked: copy the oneoffs inventory metadata from node 1 to node 2, then continue patching
[root@tqdb22: /Software/19.6.0.0.0]# chmod 777 Patch_30501910_GI_RU/
[root@tqdb22: /Software/19.6.0.0.0]# ll
total 0
drwxrwxrwx 3 oracle oinstall 86 Feb 14 05:20 Patch_30501910_GI_RU
drwxr-xr-x 2 oracle oinstall 47 Feb 14 02:59 Patch_30557433_DATABASE_RU
[root@tqdb22: /Software/19.6.0.0.0]# cd Patch_30501910_GI_RU/
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# ll
total 2110640
drwxr-x--- 7 oracle oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 oracle oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 oracle oinstall     314753 Jan 15 03:57 PatchSearch.xml
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# chmod 777 30501910/
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# ll
total 2110640
drwxrwxrwx 7 oracle oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 oracle oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 oracle oinstall     314753 Jan 15 03:57 PatchSearch.xml
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# 

[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory]$ cp -r oneoffs/ oneoffs.bak
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory]$ cd oneoffs.bak/
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs.bak]$ ls
29517242  29585399  30489227
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs.bak]$ cd ..
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory]$ cd oneoffs
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ ls
29517242  29585399  30489227
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ rm -rf 29517242  29585399  30489227
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ ls

-- rac1: scp the oneoffs metadata directories from node 1 to node 2
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ scp -r * oracle@tqdb22:/u01/app/oracle/product/19c/dbhome/inventory/oneoffs
actions.xml                                                                                                                                                          100%   98KB  35.8MB/s   00:00    
inventory.xml                                                                                                                                                        100%   64KB  28.4MB/s   00:00    
actions.xml                                                                                                                                                          100%  347   532.2KB/s   00:00    
inventory.xml                                                                                                                                                        100%   18KB  17.4MB/s   00:00    
inventory.xml                                                                                                                                                        100%   45KB  32.8MB/s   00:00    
actions.xml                                                                                                                                                          100%   95KB  36.9MB/s   00:00    
inventory.xml                                                                                                                                                        100%  163KB  46.2MB/s   00:00    
actions.xml                                                                                                                                                          100% 1806KB  66.9MB/s   00:00    
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ 

-- rac2: node 2 now has all four oneoffs directories
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ ls
29517242  29585399  30489227  30557433
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ ll -th
total 0
drwxr-xr-x 4 oracle oinstall 29 Feb 14 06:04 30557433
drwxr-xr-x 4 oracle oinstall 29 Feb 14 06:04 30489227
drwxr-x--- 4 oracle oinstall 29 Feb 14 06:04 29585399
drwxr-x--- 4 oracle oinstall 29 Feb 14 06:04 29517242

-- rac2: rerun opatchauto apply; both patches are now detected as already applied
[root@tqdb22: ~]# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome

OPatchauto session is initiated at Fri Feb 14 06:04:27 2020

System initialization log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchautodb/systemconfig2020-02-14_06-04-33AM.log.

Session log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/opatchauto2020-02-14_06-04-51AM.log
The id for this session is UY7B

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome


Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
 

Preparing to bring down database service on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
 

Preparing home /u01/app/oracle/product/19c/dbhome after database service restarted
No step execution required.........
 
OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:tqdb22
RAC Home:/u01/app/oracle/product/19c/dbhome
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Reason: This patch is already been applied, so not going to apply again.

Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Reason: This patch is already been applied, so not going to apply again.



OPatchauto session completed at Fri Feb 14 06:05:14 2020
Time taken to complete the session 0 minute, 48 seconds
[root@tqdb22: ~]# 
--------------------------------------------------------------------------------


-- Node 2: list patches after applying the GI RU; the sqlplus banner now shows `Version 19.6.0.0.0`
[oracle@tqdb22: ~]$ opatch lspatches
30557433;Database Release Update : 19.6.0.0.200114 (30557433)
30489227;OCW RELEASE UPDATE 19.6.0.0.0 (30489227)

OPatch succeeded.
[oracle@tqdb22: ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 06:46:12 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> quit
Disconnected
[oracle@tqdb22: ~]$ 


</code></pre>
</blockquote>
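<p>The oui-patch.xml detour above comes down to a single permission bit: opatchauto runs OPatch as <code>oracle</code>, a member of <code>oinstall</code>, but the file is owned by <code>grid</code>, so the group-write bit decides whether the apply succeeds. A small sketch (illustrative only, not part of the patch procedure) of the check that the <code>ll</code> comparisons above perform by eye:</p>

```python
import os
import stat

def has_group_write(path):
    """True if the group-write bit is set, i.e. members of the file's
    group (oinstall in the case above) may modify it; this is the bit
    that `chmod g+w oui-patch.xml` turns on."""
    return bool(os.stat(path).st_mode & stat.S_IWGRP)
```

<p>With mode <code>-rw-r--r--</code> it returns False (the failing state); after <code>chmod g+w</code>, i.e. <code>-rw-rw-r--</code>, it returns True.</p>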
<p><code>oracle $ /u01/app/oracle/product/19c/dbhome/OPatch/datapatch -verbose</code>. <span class="text-highlighted-inline" style="background-color: #fffd38;"><strong>Note</strong></span>: as mentioned above, patching the nodes one after another reduces downtime, but some downtime is still required, namely the time spent running datapatch here. This step upgrades the data dictionary of the whole database, so it only needs to run on one node. Also note that in CDB mode you must first open all PDBs with <code>alter pluggable database all open</code>, and then run datapatch.</p>
<p><span class="text-highlighted-inline" style="background-color: #fffd38;"><strong>Note</strong></span>: here I have only installed the database software and have not yet created a database, so there is no data dictionary to upgrade. (No database yet means no data dictionary.)</p>
<h3>3.5 Creating the Database</h3>
<h4>3.5.1 Creating the DATA Disk Group with asmca</h4>
<blockquote><p>
  As the grid user in a GUI session, create the disk group with asmca</p>
<pre><code class="language-bash line-numbers">[root@tqdb21: ~]# xhost +
access control disabled, clients can connect from any host
[root@tqdb21: ~]# xdpyinfo | head
name of display:    192.168.6.21:0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    12004000
X.Org version: 1.20.4
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    7
[root@tqdb21: ~]# su - grid
Last login: Fri Feb 14 07:12:53 CST 2020
[grid@tqdb21: ~]$ export DISPLAY=192.168.6.21:0
[grid@tqdb21: ~]$ echo $DISPLAY
192.168.6.21:0
[grid@tqdb21: ~]$ asmca

</code></pre>
</blockquote>
<p>asmca screenshots:</p>
<ul>
<li>19c RAC asmca: create the DATA disk group 01<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20asmca%20创建%20DATA%20磁盘组%2001.png" alt="19cRACasmca创建DATA磁盘组01" /></p>
</li>
<li>19c RAC asmca: create the DATA disk group 02<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20asmca%20创建%20DATA%20磁盘组%2002.png" alt="19cRACasmca创建DATA磁盘组02" /></p>
</li>
<li>
<p>19c RAC asmca: create the DATA disk group 03<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20asmca%20创建%20DATA%20磁盘组%2003.png" alt="19cRACasmca创建DATA磁盘组03" /></p>
</li>
<li>
<p>19c RAC asmca: create the DATA disk group 04<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20asmca%20创建%20DATA%20磁盘组%2004.png" alt="19cRACasmca创建DATA磁盘组04" /></p>
</li>
<li>
<p>19c RAC asmca: create the DATA disk group 05<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20asmca%20创建%20DATA%20磁盘组%2005.png" alt="19cRACasmca创建DATA磁盘组05" /></p>
</li>
<li>
<p>19c RAC asmca: create the DATA disk group 06 Add DATA Disk Group<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20asmca%20创建%20DATA%20磁盘组%2006%20Add%20DATA%20Disk%20Group.png" alt="19cRACasmca创建DATA磁盘组06AddDATADiskGroup" /></p>
</li>
<li>
<p>19c RAC asmca: create the DATA disk group 07<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20asmca%20创建%20DATA%20磁盘组%2007.png" alt="19cRACasmca创建DATA磁盘组07" /></p>
</li>
<li>
<p>19c RAC asmca: create the DATA disk group 08<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20asmca%20创建%20DATA%20磁盘组%2008.png" alt="19cRACasmca创建DATA磁盘组08" /></p>
</li>
</ul>
<h4>3.5.2 Creating the Database with dbca</h4>
<blockquote><p>
  As the oracle user in a GUI session, create the database with dbca</p>
<pre><code class="language-bash line-numbers">[root@tqdb21: ~]# xhost +
access control disabled, clients can connect from any host
[root@tqdb21: ~]# xdpyinfo | head
name of display:    :0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    12004000
X.Org version: 1.20.4
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    7
[root@tqdb21: ~]# su - oracle
Last login: Fri Feb 14 07:45:36 CST 2020 on pts/1
[oracle@tqdb21: ~]$ export DISPLAY=192.168.6.21:0
[oracle@tqdb21: ~]$ echo $DISPLAY
192.168.6.21:0
[oracle@tqdb21: ~]$ dbca


</code></pre>
</blockquote>
<p>dbca screenshots:</p>
<ul>
<li>19c RAC dbca: create the database 01<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2001.png" alt="19cRACdbca建库01" /></p>
</li>
<li>19c RAC dbca: create the database 02<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2002.png" alt="19cRACdbca建库02" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 03<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2003.png" alt="19cRACdbca建库03" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 04<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2004.png" alt="19cRACdbca建库04" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 05<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2005.png" alt="19cRACdbca建库05" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 06<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2006.png" alt="19cRACdbca建库06" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 07<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2007.png" alt="19cRACdbca建库07" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 08<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2008.png" alt="19cRACdbca建库08" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 09<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2009.png" alt="19cRACdbca建库09" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 10<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2010-ok.png" alt="19cRACdbca建库10-ok" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 11<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2011.png" alt="19cRACdbca建库11" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 12<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2012.png" alt="19cRACdbca建库12" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 13<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2013.png" alt="19cRACdbca建库13" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 14<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2014.png" alt="19cRACdbca建库14" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 15<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2015.png" alt="19cRACdbca建库15" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 16 SYS password<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2016.png" alt="19cRACdbca建库16" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 17<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2017.png" alt="19cRACdbca建库17" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 18 Edit Control Files<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2018%20Edit%20Control%20Files.png" alt="19cRACdbca建库18EditControlFiles" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 19 Redo 200M<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2019%20Redo%20200M.png" alt="19cRACdbca建库19Redo200M" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 20<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2020.png" alt="19cRACdbca建库20" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 21<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2021.png" alt="19cRACdbca建库21" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 22 Yes<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2022%20Yes.png" alt="19cRACdbca建库22Yes" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 23<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2023.png" alt="19cRACdbca建库23" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 24<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2024.png" alt="19cRACdbca建库24" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 25<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2025.png" alt="19cRACdbca建库25" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 26<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2026.png" alt="19cRACdbca建库26" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 27<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2027.png" alt="19cRACdbca建库27" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 28<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2028.png" alt="19cRACdbca建库28" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 29<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2029.png" alt="19cRACdbca建库29" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 30<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2030.png" alt="19cRACdbca建库30" /></p>
</li>
<li>
<p>19c RAC dbca: create the database 31<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20dbca%20建库%2031.png" alt="19cRACdbca建库31" /></p>
</li>
</ul>
<pre><code class="language-bash line-numbers">-- Check the patch information in the database
19:29:25 sys@TQDB(tqdb21)> set linesize 300;
19:29:34 sys@TQDB(tqdb21)> col TARGET_BUILD_TIMESTAMP for a10;
19:29:34 sys@TQDB(tqdb21)> col SOURCE_BUILD_TIMESTAMP for a20;
19:29:34 sys@TQDB(tqdb21)> col SOURCE_BUILD_DESCRIPTION for a20;
19:29:34 sys@TQDB(tqdb21)> col TARGET_VERSIONT for a20;
19:29:34 sys@TQDB(tqdb21)> col TARGET_BUILD_DESCRIPTION for a20;
19:29:34 sys@TQDB(tqdb21)> 
19:29:34 sys@TQDB(tqdb21)> select install_id, PATCH_ID,PATCH_UID,ACTION,STATUS, DESCRIPTION, SOURCE_VERSION,SOURCE_BUILD_DESCRIPTION,SOURCE_BUILD_TIMESTAMP, TARGET_VERSION, TARGET_BUILD_DESCRIPTION, to_char(TARGET_BUILD_TIMESTAMP, 'yyyy-mm-dd hh24:mi:ss') from dba_registry_sqlpatch;

INSTALL_ID   PATCH_ID  PATCH_UID ACTION          STATUS          DESCRIPTION                                                  SOURCE_VERSION  SOURCE_BUILD_DESCRIP SOURCE_BUILD_TIMESTA TARGET_VERSION  TARGET_BUILD_DESCRIP TO_CHAR(TARGET_BUIL
---------- ---------- ---------- --------------- --------------- ------------------------------------------------------------ --------------- -------------------- -------------------- --------------- -------------------- -------------------
         1   30557433   23305305 APPLY           SUCCESS         Database Release Update : 19.6.0.0.200114 (30557433)         19.1.0.0.0      Feature Release                           19.6.0.0.0      Release_Update       2019-12-17 15:50:04

19:29:36 sys@TQDB(tqdb21)> 

</code></pre>
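<p>The DESCRIPTION column in <code>dba_registry_sqlpatch</code> embeds the effective patch level. A tiny sketch (illustrative only) that extracts the version string from such a description:</p>

```python
import re

def ru_version(description):
    """Extract the five-part version (e.g. '19.6.0.0.200114') from a
    patch description as shown by dba_registry_sqlpatch or lspatches."""
    m = re.search(r"\d+\.\d+\.\d+\.\d+\.\d+", description)
    return m.group(0) if m else None
```

<p>For the row above it returns <code>19.6.0.0.200114</code>, confirming the database-side patch level matches the binary patch level.</p>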
<ul>
<li>19c RAC sqlplus: query patch information<br />
<img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/19c%20RAC%20sqlplus%20查询补丁信息.png" alt="19cRACsqlplus查询补丁信息" /></li>
</ul>
<p>At this point, the Oracle 19c RAC installation and RU upgrade are complete.</p>
<p>Installing and patching a single-instance database is considerably simpler than the RAC procedure, so I will not repeat those steps here.</p>
<p>In the next post, we will build the Oracle 19c MAA Maximum Availability Architecture (Oracle 19c RAC + Active Data Guard).</p>
<p>-- The End --</p>
]]></content:encoded>
					
					<wfw:commentRss>https://dbtan.com/2020/03/oracle-19c-rac-installation-and-upgrade-ru.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>An Introduction to Oracle 19c</title>
		<link>https://dbtan.com/2020/03/oracle-19c-introduce-2.html</link>
					<comments>https://dbtan.com/2020/03/oracle-19c-introduce-2.html#respond</comments>
		
		<dc:creator><![CDATA[dbtan]]></dc:creator>
		<pubDate>Mon, 16 Mar 2020 17:01:27 +0000</pubDate>
				<category><![CDATA[Oracle]]></category>
		<category><![CDATA[Oracle 19c]]></category>
		<guid isPermaLink="false">https://www.dbtan.com/?p=405</guid>

					<description><![CDATA[Oracle Database 19c is the final long-term support release of the Oracle Database 12c and 18c [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Oracle Database 19c is the final release of the Oracle Database 12c and 18c family of products, and its long-term support release.</p>
<p><img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/upgrade%20to%20this%20release%2019c.jpeg" alt="upgrade to this release 19c" /></p>
<p>As Oracle Database versions move forward, the certified operating-system versions are updated as well. Let's query the 19c OS certifications on Metalink (My Oracle Support).</p>
<p><img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/Oracle%2019c%20OS%20Certifications%201.png" alt="Oracle 19c OS Certifications 1" /></p>
<p><img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/Oracle%2019c%20OS%20Certifications%202.png" alt="Oracle 19c OS Certifications 2" /></p>
<p>The query results show that for the distributions we use most, RHEL/OL, 19c requires <code>RHEL7/OL7</code>.</p>
<p>Oracle 19c was officially released in February 2019. It introduced many significant features, such as automatic indexing, enhancements to online maintenance operations, and automatic redirection of DML issued against an ADG standby, and it drew wide attention. Meanwhile, the most widely deployed Oracle release in production today is still 11gR2, specifically 11.2.0.4.</p>
<p><img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/A%2011.2.0.4%20is%20forever.jpg" alt="A 11.2.0.4 is forever" /></p>
<blockquote><p>
  The final release of Oracle 11g, 11.2.0.4, shipped in 2013. Its official standard support period ended on January 1, 2019, when it entered paid Extended Support.</p>
<p>  That is, no public patches are released for this version from 2019 onward; only customers who purchased Extended Support can still obtain them.</p>
<p>  19c is the terminal release of the Oracle 12c family, effectively the traditional 12.2.0.3; by convention, it will be supported through 2026.</p>
<p>  <strong>Release Schedule of Current Database Releases (Doc ID 742060.1)</strong></p>
<p>  <img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyuPic/master/uPic/Database%20Release%20Roadmap%201.png" alt="Database Release Roadmap 1" />
</p></blockquote>
<p>As Oracle promotes the release, corporate IT departments will likely see a wave of upgrades. Even teams that do not upgrade right away will keep an eye on 19c and build up expertise.</p>
<blockquote><p>
  Given Oracle's release cycle and how the product is used, I expect most upgrades from 11.2.0.4 to target 19c (non-CDB).</p>
<p>  Direct upgrades into a PDB will probably be rarer; non-CDB-to-PDB migrations may well wait until 20c.
</p></blockquote>
<p>In the next few posts, I will install CentOS 7.7 + Oracle 19c in VirtualBox virtual machines and explore Oracle 19c step by step.</p>
<p>Topics will include:</p>
<ul>
<li>Oracle 19c installation and RU (Release Update) upgrade</li>
<li>Building an Oracle 19c RAC</li>
<li>Oracle 19c MAA Maximum Availability Architecture (Oracle RAC + Active Data Guard)
<ul>
<li>Switchover role transition</li>
<li>Failover role transition</li>
</ul>
</li>
</ul>
<p>Stay tuned!</p>
<p>-- The End --</p>
]]></content:encoded>
					
					<wfw:commentRss>https://dbtan.com/2020/03/oracle-19c-introduce-2.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How mishandling an interrupted pt-osc column addition caused an outage</title>
		<link>https://dbtan.com/2019/06/pt-osc-a-fault.html</link>
					<comments>https://dbtan.com/2019/06/pt-osc-a-fault.html#respond</comments>
		
		<dc:creator><![CDATA[dbtan]]></dc:creator>
		<pubDate>Thu, 27 Jun 2019 15:35:12 +0000</pubDate>
				<category><![CDATA[MySQL]]></category>
		<category><![CDATA[Percona Toolkit]]></category>
		<category><![CDATA[Trouble Shooting]]></category>
		<category><![CDATA[Online DDL]]></category>
		<category><![CDATA[pt-osc]]></category>
		<guid isPermaLink="false">https://www.dbtan.com/?p=392</guid>

					<description><![CDATA[Here's what happened... Xiaobao was using pt-osc to add a column to a table A when the row copy was interrupted, [&#8230;]]]></description>
										<content:encoded><![CDATA[

<h3>Here's what happened...</h3>
<blockquote><p>
  Xiaobao was using <code>pt-osc</code> to add a column to a table <code>A</code>. The run was interrupted while copying rows, and the column was not added. (At this point there was still no fault.)</p>
<p>  Xiaobao knew that after a failed <code>pt-osc</code> run you have to clean up the leftovers (drop the temporary table and the triggers it created).</p>
<p>  For some reason, though, he dropped only the temporary table and left the three triggers in place...</p>
<p>  That is what caused the outage: <code>insert/update/delete</code> against table <code>A</code> all failed with an error saying <code>_A_new</code> does not exist.
</p></blockquote>
<h3>Let's reproduce the scenario</h3>
<h4>1. Prepare a test table, test_log</h4>
<pre><code class="language-sql line-numbers">root@[10.141.8.203].[dbtan] 14:42:19> select count(*) from test_log;
+----------+
| count(*) |
+----------+
| 26045360 |
+----------+
1 row in set (5.14 sec)

root@[10.141.8.203].[dbtan] 14:42:31> select max(log_id) from test_log;
+-------------+
| max(log_id) |
+-------------+
|    26129680 |
+-------------+
1 row in set (0.05 sec)

root@[10.141.8.203].[dbtan] 14:42:50> show create table test_log \G
*************************** 1. row ***************************
       Table: test_log
Create Table: CREATE TABLE `test_log` (
  `log_id` int(11) NOT NULL AUTO_INCREMENT COMMENT '主键ID',
  `abcd_id` int(11) NOT NULL,
  `state` varchar(3) NOT NULL,
  `create_time` datetime NOT NULL,
  PRIMARY KEY (`log_id`)
) ENGINE=InnoDB AUTO_INCREMENT=26171841 DEFAULT CHARSET=utf8mb4
1 row in set (0.00 sec)

</code></pre>
<h4>2. Prepare to add a column <code>column1</code> to table test_log</h4>
<p>First run pt-osc with <code>--print --dry-run</code> to see the steps it will perform:</p>
<ul>
<li>Create an empty new table <code>_test_log_new</code> with the same structure as the table to be altered (the pre-alter structure)</li>
<li>Run the alter table statement against the new table <code>_test_log_new</code> (this should be fast, since the table is still empty)</li>
<li>Create three triggers on the original table <code>test_log</code>, one each for <code>insert/update/delete</code></li>
<li>Copy data in chunks of a given size from the original table <code>test_log</code> into the new table <code>_test_log_new</code>; during the copy, writes to the original table are replayed into the new table by the triggers</li>
<li>Swap the table names, swap_tables (<code>_test_log_new</code> &lt;--&gt; <code>test_log</code>): rename the original table <code>test_log</code> to <code>_test_log_old</code>, then rename the new table <code>_test_log_new</code> to <code>test_log</code></li>
<li>If foreign keys reference the table, handle the referencing tables according to the <code>alter-foreign-keys-method</code> option</li>
<li>Finally, by default, drop the old table <code>_test_log_old</code>.</li>
</ul>
<pre><code class="language-bash line-numbers">[root@test-178: ~]# pt-online-schema-change --no-version-check --user=root --password='123456' --host=localhost --chunk-size-limit=1000000 --charset=utf8 P=3306,D=dbtan,t=test_log --alter="ADD COLUMN column1 tinyint(4) DEFAULT NULL" --print --dry-run
Operation, tries, wait:
  analyze_table, 10, 1
  copy_rows, 10, 0.25
  create_triggers, 10, 1
  drop_triggers, 10, 1
  swap_tables, 10, 1
  update_foreign_keys, 10, 1
Starting a dry run.  `dbtan`.`test_log` will not be altered.  Specify --execute instead of --dry-run to alter the table.
Creating new table...
CREATE TABLE `dbtan`.`_test_log_new` (
  `log_id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'ä¸»é®ID',
  `abcd_id` int(11) NOT NULL,
  `state` varchar(3) NOT NULL,
  `create_time` datetime NOT NULL,
  PRIMARY KEY (`log_id`)
) ENGINE=InnoDB AUTO_INCREMENT=26171841 DEFAULT CHARSET=utf8mb4
Created new table dbtan._test_log_new OK.
Altering new table...
ALTER TABLE `dbtan`.`_test_log_new` ADD COLUMN column1 tinyint(4) DEFAULT NULL
Altered `dbtan`.`_test_log_new` OK.
Not creating triggers because this is a dry run.
Not copying rows because this is a dry run.
INSERT LOW_PRIORITY IGNORE INTO `dbtan`.`_test_log_new` (`log_id`, `abcd_id`, `state`, `create_time`) SELECT `log_id`, `abcd_id`, `state`, `create_time` FROM `dbtan`.`test_log` LOCK IN SHARE MODE /*pt-online-schema-change 4525 copy table*/
Not swapping tables because this is a dry run.
Not dropping old table because this is a dry run.
Not dropping triggers because this is a dry run.
DROP TRIGGER IF EXISTS `dbtan`.`pt_osc_dbtan_test_log_del`
DROP TRIGGER IF EXISTS `dbtan`.`pt_osc_dbtan_test_log_upd`
DROP TRIGGER IF EXISTS `dbtan`.`pt_osc_dbtan_test_log_ins`
2019-06-26T20:11:06 Dropping new table...
DROP TABLE IF EXISTS `dbtan`.`_test_log_new`;
2019-06-26T20:11:06 Dropped new table OK.
Dry run complete.  `dbtan`.`test_log` was not altered.
[root@test-178: ~]# 
</code></pre>
<h4>3. Simulate an abnormal interruption of <code>pt-osc</code>.</h4>
<p>While using <code>pt-osc</code> to add the column <code>column1</code> to <code>test_log</code>, we press <code>control+c</code> by hand to simulate an abnormal interruption.</p>
<pre><code class="language-bash line-numbers">[root@test-178: ~]# pt-online-schema-change --no-version-check --user=root --password='123456' --host=localhost --chunk-size-limit=1000000 --charset=utf8 P=3306,D=dbtan,t=test_log --alter="ADD COLUMN column1 tinyint(4) DEFAULT NULL" --print --execute
No slaves found.  See --recursion-method if host test-178 has slaves.
Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
Operation, tries, wait:
  analyze_table, 10, 1
  copy_rows, 10, 0.25
  create_triggers, 10, 1
  drop_triggers, 10, 1
  swap_tables, 10, 1
  update_foreign_keys, 10, 1
Altering `dbtan`.`test_log`...
Creating new table...
CREATE TABLE `dbtan`.`_test_log_new` (
  `log_id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'ä¸»é®ID',
  `abcd_id` int(11) NOT NULL,
  `state` varchar(3) NOT NULL,
  `create_time` datetime NOT NULL,
  PRIMARY KEY (`log_id`)
) ENGINE=InnoDB AUTO_INCREMENT=26171841 DEFAULT CHARSET=utf8mb4
Created new table dbtan._test_log_new OK.
Altering new table...
ALTER TABLE `dbtan`.`_test_log_new` ADD COLUMN column1 tinyint(4) DEFAULT NULL
Altered `dbtan`.`_test_log_new` OK.
2019-06-26T21:07:53 Creating triggers...
2019-06-26T21:07:53 Created triggers OK.
2019-06-26T21:07:53 Copying approximately 25059595 rows...
INSERT LOW_PRIORITY IGNORE INTO `dbtan`.`_test_log_new` (`log_id`, `abcd_id`, `state`, `create_time`) SELECT `log_id`, `abcd_id`, `state`, `create_time` FROM `dbtan`.`test_log` LOCK IN SHARE MODE /*pt-online-schema-change 8681 copy table*/
^C^C^C^C^C^C
^C^C^C^C^C^C
^C^C^C^C^C^C^C
# Exiting on SIGINT.
Not dropping triggers because the tool was interrupted.  To drop the triggers, execute:
DROP TRIGGER IF EXISTS `dbtan`.`pt_osc_dbtan_test_log_del`
DROP TRIGGER IF EXISTS `dbtan`.`pt_osc_dbtan_test_log_upd`
DROP TRIGGER IF EXISTS `dbtan`.`pt_osc_dbtan_test_log_ins`
Not dropping the new table `dbtan`.`_test_log_new` because the tool was interrupted.  To drop the new table, execute:
DROP TABLE IF EXISTS `dbtan`.`_test_log_new`;
`dbtan`.`test_log` was not altered.
[root@test-178: ~]# 
</code></pre>
<blockquote><p>
  After the manual <code>control+c</code> "interruption," the output tells us that, because the tool was interrupted, the triggers and the new table <code>_test_log_new</code> were not dropped.
</p></blockquote>
<h4>4. Inspect the new table and the triggers</h4>
<pre><code class="language-sql line-numbers">root@[10.141.8.203].[dbtan] 21:08:53> show tables;
+-----------------+
| Tables_in_dbtan |
+-----------------+
| _test_log_new   |
| test_log        |
+-----------------+
2 rows in set (0.00 sec)

root@[10.141.8.203].[dbtan] 21:08:54> show triggers \G
*************************** 1. row ***************************
             Trigger: pt_osc_dbtan_test_log_ins
               Event: INSERT
               Table: test_log
           Statement: REPLACE INTO `dbtan`.`_test_log_new` (`log_id`, `abcd_id`, `state`, `create_time`) VALUES (NEW.`log_id`, NEW.`abcd_id`, NEW.`state`, NEW.`create_time`)
              Timing: AFTER
             Created: NULL
            sql_mode: NO_AUTO_VALUE_ON_ZERO,STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION
             Definer: root@localhost
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: utf8mb4_general_ci
*************************** 2. row ***************************
             Trigger: pt_osc_dbtan_test_log_upd
               Event: UPDATE
               Table: test_log
           Statement: BEGIN DELETE IGNORE FROM `dbtan`.`_test_log_new` WHERE !(OLD.`log_id` <=> NEW.`log_id`) AND `dbtan`.`_test_log_new`.`log_id` <=> OLD.`log_id`;REPLACE INTO `dbtan`.`_test_log_new` (`log_id`, `abcd_id`, `state`, `create_time`) VALUES (NEW.`log_id`, NEW.`abcd_id`, NEW.`state`, NEW.`create_time`);END
              Timing: AFTER
             Created: NULL
            sql_mode: NO_AUTO_VALUE_ON_ZERO,STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION
             Definer: root@localhost
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: utf8mb4_general_ci
*************************** 3. row ***************************
             Trigger: pt_osc_dbtan_test_log_del
               Event: DELETE
               Table: test_log
           Statement: DELETE IGNORE FROM `dbtan`.`_test_log_new` WHERE `dbtan`.`_test_log_new`.`log_id` <=> OLD.`log_id`
              Timing: AFTER
             Created: NULL
            sql_mode: NO_AUTO_VALUE_ON_ZERO,STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION
             Definer: root@localhost
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: utf8mb4_general_ci
3 rows in set (0.00 sec)

root@[10.141.8.203].[dbtan] 21:08:58> 
root@[10.141.8.203].[dbtan] 16:43:08> show create table test_log \G
*************************** 1. row ***************************
       Table: test_log
Create Table: CREATE TABLE `test_log` (
  `log_id` int(11) NOT NULL AUTO_INCREMENT COMMENT '主键ID',
  `abcd_id` int(11) NOT NULL,
  `state` varchar(3) NOT NULL,
  `create_time` datetime NOT NULL,
  PRIMARY KEY (`log_id`)
) ENGINE=InnoDB AUTO_INCREMENT=26171841 DEFAULT CHARSET=utf8mb4
1 row in set (0.05 sec)

root@[10.141.8.203].[dbtan] 16:43:43> show create table _test_log_new \G
*************************** 1. row ***************************
       Table: _test_log_new
Create Table: CREATE TABLE `_test_log_new` (
  `log_id` int(11) NOT NULL AUTO_INCREMENT COMMENT '主键ID',
  `abcd_id` int(11) NOT NULL,
  `state` varchar(3) NOT NULL,
  `create_time` datetime NOT NULL,
  `column1` tinyint(4) DEFAULT NULL,
  PRIMARY KEY (`log_id`)
) ENGINE=InnoDB AUTO_INCREMENT=26171841 DEFAULT CHARSET=utf8mb4
1 row in set (0.00 sec)

root@[10.141.8.203].[dbtan] 16:43:57> select count(*) from test_log;
+----------+
| count(*) |
+----------+
| 26045360 |
+----------+
1 row in set (5.16 sec)

root@[10.141.8.203].[dbtan] 16:44:35> select count(*) from _test_log_new;
+----------+
| count(*) |
+----------+
| 26045360 |
+----------+
1 row in set (5.23 sec)

root@[10.141.8.203].[dbtan] 16:44:50> 
root@[10.141.8.203].[dbtan] 17:47:52> select max(log_id) from test_log;
+-------------+
| max(log_id) |
+-------------+
|    26129680 |
+-------------+
1 row in set (0.00 sec)

root@[10.141.8.203].[dbtan] 17:50:18> select max(log_id) from _test_log_new;
+-------------+
| max(log_id) |
+-------------+
|    26129680 |
+-------------+
1 row in set (0.00 sec)

root@[10.141.8.203].[dbtan] 17:50:27> 
</code></pre>
<h4>5. Simulate the fault: drop only the new table <code>_test_log_new</code>, then test insert/update/delete.</h4>
<h5>5.1. With the new table still present</h5>
<p>With the new table <code>_test_log_new</code> still in place, run <code>insert/update/delete</code> against the original table <code>test_log</code>.</p>
<p>The statements succeed, and the triggers propagate the changes to the new table <code>_test_log_new</code>.</p>
<pre><code class="language-sql line-numbers">root@[10.141.8.203].[dbtan] 18:25:15> insert into test_log(abcd_id, state, create_time) values(66668888, 'abc', now());                             
Query OK, 1 row affected (0.02 sec)

root@[10.141.8.203].[dbtan] 18:25:44> select max(log_id) from test_log ;
+-------------+
| max(log_id) |
+-------------+
|    26171841 |
+-------------+
1 row in set (0.00 sec)

root@[10.141.8.203].[dbtan] 18:26:28> select * from test_log where log_id = 26171841;
+----------+----------+-------+---------------------+
| log_id   | abcd_id  | state | create_time         |
+----------+----------+-------+---------------------+
| 26171841 | 66668888 | abc   | 2019-06-27 18:25:44 |
+----------+----------+-------+---------------------+
1 row in set (0.00 sec)

root@[10.141.8.203].[dbtan] 18:26:47> select * from _test_log_new where log_id = 26171841;
+----------+----------+-------+---------------------+---------+
| log_id   | abcd_id  | state | create_time         | column1 |
+----------+----------+-------+---------------------+---------+
| 26171841 | 66668888 | abc   | 2019-06-27 18:25:44 |    NULL |
+----------+----------+-------+---------------------+---------+
1 row in set (0.00 sec)
</code></pre>
<h5>5.2. After dropping the new table</h5>
<p>Now drop only the new table <code>_test_log_new</code>, then run <code>insert/update/delete</code> against the original table <code>test_log</code>.</p>
<p>Because the <code>AFTER</code> triggers still exist, each statement tries to update the new table <code>_test_log_new</code>, which has already been dropped.</p>
<p>So every statement fails with an error saying <code>_test_log_new</code> does not exist.</p>
<p>The sequence of events is:</p>
<ol>
<li>Open a transaction.</li>
<li>Modify the original table <code>test_log</code>.</li>
<li>Fire the <code>AFTER</code> trigger, which errors out when it touches the new table <code>_test_log_new</code> (table does not exist).</li>
<li>Roll back the change to the original table <code>test_log</code>.</li>
<li>Close the transaction.</li>
</ol>
<pre><code class="language-sql line-numbers">root@[10.141.8.203].[dbtan] 18:34:00> drop table _test_log_new;
Query OK, 0 rows affected (1.02 sec)

root@[10.141.8.203].[dbtan] 18:34:02> insert into test_log(abcd_id, state, create_time) values(66669999, 'ABC', now()); 
ERROR 1146 (42S02): Table 'dbtan._test_log_new' doesn't exist
root@[10.141.8.203].[dbtan] 18:34:08> 
root@[10.141.8.203].[dbtan] 18:34:15> update test_log set abcd_id = 66669999 where abcd_id = 66668888;
ERROR 1146 (42S02): Table 'dbtan._test_log_new' doesn't exist
root@[10.141.8.203].[dbtan] 18:34:33> delete from test_log where abcd_id = 66668888;
ERROR 1146 (42S02): Table 'dbtan._test_log_new' doesn't exist
root@[10.141.8.203].[dbtan] 18:34:58> 
</code></pre>
<h4>6. Drop the three triggers; insert/update/delete then work again.</h4>
<pre><code class="language-sql line-numbers">root@[10.141.8.203].[dbtan] 19:11:35> DROP TRIGGER IF EXISTS `dbtan`.`pt_osc_dbtan_test_log_del`;
Query OK, 0 rows affected (0.01 sec)

root@[10.141.8.203].[dbtan] 19:11:36> DROP TRIGGER IF EXISTS `dbtan`.`pt_osc_dbtan_test_log_upd`;
Query OK, 0 rows affected (0.02 sec)

root@[10.141.8.203].[dbtan] 19:11:42> DROP TRIGGER IF EXISTS `dbtan`.`pt_osc_dbtan_test_log_ins`;
Query OK, 0 rows affected (0.00 sec)

root@[10.141.8.203].[dbtan] 19:11:46> 
root@[10.141.8.203].[dbtan] 19:18:44> insert into test_log(abcd_id, state, create_time) values(66669999, 'ABC', now()); 
Query OK, 1 row affected (0.00 sec)

root@[10.141.8.203].[dbtan] 19:18:45> update test_log set abcd_id = 66668888 where abcd_id = 66669999;
Query OK, 1 row affected (16.81 sec)
Rows matched: 1  Changed: 1  Warnings: 0

root@[10.141.8.203].[dbtan] 19:19:06> delete from test_log where abcd_id = 66668888;
Query OK, 1 row affected (17.12 sec)
</code></pre>
<h3>Choosing between <code>pt-osc</code> and native 5.6 online DDL</h3>
<ul>
<li>Native <code>online ddl</code> is costly when it must <code>copy table</code>, so avoid it in that case</li>
<li><code>pt-osc</code> cannot be used when the table already has triggers</li>
<li>When changing indexes, foreign keys, or column names, prefer native <code>online ddl</code> and specify <code>ALGORITHM=INPLACE</code></li>
<li>Use <code>pt-osc</code> in the remaining cases, even though it involves a <code>copy data</code> phase</li>
<li><code>pt-osc</code> is roughly twice as slow as <code>online ddl</code>, because it throttles itself according to load</li>
<li>Whichever method you choose, run it during off-peak hours</li>
<li>In special cases, exploit replication: <code>alter</code> the replica first, switch roles, then alter the former primary</li>
</ul>
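<p>As a sketch of the "prefer native online DDL with <code>ALGORITHM=INPLACE</code>" advice (the index name here is hypothetical, chosen only for this example): requesting <code>ALGORITHM=INPLACE</code> makes MySQL fail fast with an error instead of silently falling back to a full table copy, and <code>LOCK=NONE</code> asserts that concurrent DML must remain possible.</p>
<pre><code class="language-sql line-numbers">-- Hypothetical example: add a secondary index in place.
-- If MySQL cannot satisfy the requested algorithm or lock level,
-- the statement fails immediately instead of copying the table.
ALTER TABLE `dbtan`.`test_log`
  ADD INDEX `idx_create_time` (`create_time`),
  ALGORITHM=INPLACE, LOCK=NONE;
</code></pre>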
<p><img decoding="async" src="https://raw.githubusercontent.com/tanquan/MyPicGo/master/img/ddl_flow.png" alt="Choosing the right DDL option" /></p>
<blockquote><p>
  Reference: https://www.percona.com/blog/2014/11/18/avoiding-mysql-alter-table-downtime/
</p></blockquote>
<h3>To sum up</h3>
<ul>
<li>This outage was a serious incident caused by mishandling the aftermath of an interrupted pt-osc DDL run (adding a column): the original table could no longer accept any DML.</li>
<li>Through the test, we confirmed how pt-osc works:
<ol>
<li>Create an empty new table <code>_test_log_new</code> with the same structure as the table to be altered (the pre-alter structure)</li>
<li>Run the alter table statement against the new table <code>_test_log_new</code> (this should be fast, since the table is still empty)</li>
<li>Create three triggers on the original table <code>test_log</code>, one each for <code>insert/update/delete</code></li>
<li>Copy data in chunks of a given size from the original table <code>test_log</code> into the new table <code>_test_log_new</code>; during the copy, writes to the original table are replayed into the new table by the triggers</li>
<li>Swap the table names, swap_tables (<code>_test_log_new</code> &lt;--&gt; <code>test_log</code>): rename the original table <code>test_log</code> to <code>_test_log_old</code>, then rename the new table <code>_test_log_new</code> to <code>test_log</code></li>
<li>If foreign keys reference the table, handle the referencing tables according to the <code>alter-foreign-keys-method</code> option</li>
<li>Finally, by default, drop the old table <code>_test_log_old</code>.</li>
</ol>
</li>
</ul>
<p>At bottom, the whole scheme works by replaying incremental changes through the three <code>AFTER</code> triggers.</p>
<p>So if a <code>pt-osc</code> run is interrupted, clean up in this order: first sever the link between the original table and the new table (drop the three triggers); dropping the new table comes second.</p>
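<p>Putting that order into concrete statements (using the trigger and table names pt-osc printed for our test run), a safe cleanup after an interrupted run looks like this:</p>
<pre><code class="language-sql line-numbers">-- Step 1: sever the link between the original table and the new table.
DROP TRIGGER IF EXISTS `dbtan`.`pt_osc_dbtan_test_log_del`;
DROP TRIGGER IF EXISTS `dbtan`.`pt_osc_dbtan_test_log_upd`;
DROP TRIGGER IF EXISTS `dbtan`.`pt_osc_dbtan_test_log_ins`;
-- Step 2: only then drop the leftover copy table.
DROP TABLE IF EXISTS `dbtan`.`_test_log_new`;
</code></pre>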
<p>-- The End --</p>
]]></content:encoded>
					
					<wfw:commentRss>https://dbtan.com/2019/06/pt-osc-a-fault.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Managing NetApp A700 storage with the multipath multipathing software</title>
		<link>https://dbtan.com/2019/06/netapp-a700-use-multipath.html</link>
					<comments>https://dbtan.com/2019/06/netapp-a700-use-multipath.html#respond</comments>
		
		<dc:creator><![CDATA[dbtan]]></dc:creator>
		<pubDate>Tue, 25 Jun 2019 06:50:59 +0000</pubDate>
				<category><![CDATA[Oracle]]></category>
		<category><![CDATA[存储]]></category>
		<category><![CDATA[multipath]]></category>
		<category><![CDATA[NetApp A700]]></category>
		<guid isPermaLink="false">https://www.dbtan.com/?p=382</guid>

					<description><![CDATA[Background: Last year (2018), our database happened to need a storage expansion. With coordination on all sides, we borrowed and tested a NetApp A700 [&#8230;]]]></description>
										<content:encoded><![CDATA[<blockquote><p>
  Background:</p>
<p>  Last year (2018), our database happened to need a storage expansion. With coordination on all sides, we borrowed and tested a NetApp A700 All Flash Array.</p>
<p>  The performance report from that test reflects our specific business workload and the particular hardware involved, so its numbers would not necessarily apply elsewhere; I am therefore not publishing it.</p>
<pre><code>  The three key storage performance metrics: IOPS, throughput, and latency.
  IOPS: depends on the number of drives and on the block size.
  Throughput: also related to the block size.
  Latency: the time the block storage takes to service one I/O.
  Performance numbers therefore need a baseline and, in general, a concrete test.
</code></pre>
<p>  Although for various reasons we did not purchase the array in the end, I remain very grateful for the support of my managers and colleagues at the time, and of NetApp and its reseller.</p>
<p>  I put this post, "Managing NetApp A700 storage with the multipath multipathing software," together to share with everyone.
</p></blockquote>
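<p>As a rough illustration of how the first two metrics relate (the figures below are made up for this example): at a uniform I/O size, throughput is simply IOPS multiplied by the block size.</p>
<pre><code class="language-sql line-numbers">-- Assumed figures: 20,000 IOPS at an 8 KB block size.
-- Throughput (MB/s) = IOPS * block_size_kb / 1024
SELECT 20000 * 8 / 1024 AS throughput_mb_per_s;   -- 156.25 MB/s
</code></pre>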
<p><strong>Revision    V1.4</strong></p>
<table>
<thead>
<tr>
<th>No.</th>
<th>Date</th>
<th>Author/Modifier</th>
<th>Comments</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.0</td>
<td>2018-07-20</td>
<td>谈权</td>
<td>Initial draft</td>
</tr>
<tr>
<td>1.1</td>
<td>2018-08-23</td>
<td>谈权</td>
<td>Improved the scripts that auto-generate the various configuration files</td>
</tr>
<tr>
<td>1.2</td>
<td>2018-08-24</td>
<td>谈权</td>
<td>Added Appendix 2: basic usage of <code>multipath</code></td>
</tr>
<tr>
<td>1.3</td>
<td>2018-09-03</td>
<td>谈权</td>
<td>Added Appendix 3: removing a LUN</td>
</tr>
<tr>
<td>1.4</td>
<td>2018-09-05</td>
<td>谈权</td>
<td>增加【附4：网卡配置中添加 <code>hotplug=no</code> 参数，<br />避免<code>start_udev</code>命令导致Oracle RAC 的vip漂移问题】</td>
</tr>
</tbody>
</table>

<h3>步骤1：生成 lun_info.txt 文件（格式化 <code>sanlun lun show</code> 输出，见附1）</h3>
<pre><code class="language-bash line-numbers"># sanlun lun show | awk '{a[$2]=$3;b[$2]+=!c[$3]++;d[$2]=$(NF-1)}END{for(i in a){print i,a[i],b[i],d[i]}}' | sort -n | grep -v filename | grep -v device | grep -v '[\-]' | grep -v unknown > /root/test/20180801/lun_info_20180801.txt
</code></pre>
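<p>这条 awk 一行式可以用几行假设的示例数据来理解（下面的字段、路径均为虚构，仅示意统计逻辑，真实字段以 <code>sanlun lun show</code> 实际输出为准）：</p>
<pre><code class="language-bash line-numbers"># 构造几行假设的 sanlun lun show 输出（字段依次为：SVM、LUN 路径、设备名、主机适配器、协议、容量、模式）
cat > /tmp/sanlun_sample.txt <<'EOF'
svm1 /vol/vol1/lun11 /dev/sdb host3 FCP 400g C
svm1 /vol/vol1/lun11 /dev/sdc host4 FCP 400g C
svm1 /vol/vol1/lun12 /dev/sdd host3 FCP 400g C
EOF
# a[$2]：记录每个 LUN 路径最后一次见到的设备名
# b[$2]+=!c[$3]++：设备名首次出现时计 1，累加得到每个 LUN 的路径条数
# d[$2]=$(NF-1)：取倒数第二个字段（容量）
awk '{a[$2]=$3;b[$2]+=!c[$3]++;d[$2]=$(NF-1)}END{for(i in a){print i,a[i],b[i],d[i]}}' /tmp/sanlun_sample.txt | sort
</code></pre>
<p>输出每行依次为：LUN 路径、设备名、路径条数、容量。真实环境中对 <code>sanlun lun show</code> 的输出做同样的管道处理即可。</p>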
<h3>步骤2：生成多路径配置文件 <code>/etc/multipath.conf</code></h3>
<p><code>generate_multipath.sh</code> 生成多路径配置文件（<code>/etc/multipath.conf</code>）</p>
<pre><code class="language-bash line-numbers">#!/bin/bash

lun_file=/root/test/20180801/lun_info_20180801.txt


echo "
# NetApp A700 `date '+%Y-%m-%d %H:%M:%S'`
defaults {
user_friendly_names no
max_fds max
flush_on_last_del yes
queue_without_daemon no
}

# All data under blacklist must be specific to your system.
blacklist {
devnode \"^hd[a-z]\"
devnode \"^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*\"
devnode \"^cciss.*\"
}

devices {
device {
vendor \"NETAPP\"
product \"LUN\"
path_grouping_policy group_by_prio
features \"3 queue_if_no_path pg_init_retries 50\"
prio \"alua\"
path_checker tur
failback immediate
path_selector \"round-robin 0\"
hardware_handler \"1 alua\"
rr_weight uniform
rr_min_io 128
getuid_callout \"/lib/udev/scsi_id -g -u -d /dev/%n\"
}
}

"
{
echo "multipaths {"
cat $lun_file | awk '{gsub(/.*\//,"",$1);print $1,$2}' | while read t1 t2;do a=$(/lib/udev/scsi_id --whitelisted --device=$t2); echo "
  multipath {
  wwid ${a} 
  alias netapp-${t1}
  }";done
echo "}"
} | grep -v "^$"
</code></pre>
<p>生成 <code>/etc/multipath.conf</code> 多路径配置文件</p>
<pre><code class="language-bash line-numbers">[root@dbtan21: ~/test/20180801]# sh generate_multipath.sh > /etc/multipath.conf
</code></pre>
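<p>生成配置后，可以先做一个最简单的语法自检，比如检查花括号是否配对（下面的文件路径与内容是假设的最小示例）：</p>
<pre><code class="language-bash line-numbers"># 对生成的 multipath.conf 做最简单的花括号配对检查（示意脚本）
conf=/tmp/multipath_sample.conf
cat > "$conf" <<'EOF'
defaults {
user_friendly_names no
}
multipaths {
  multipath {
  wwid 3600a09803830475a4c2b4d3059494955
  alias netapp-lun19
  }
}
EOF
# 逐字符累计 { 与 }，最终计数为 0 说明配对
awk '{for(i=1;i<=length($0);i++){ch=substr($0,i,1);if(ch=="{")d++;if(ch=="}")d--}}END{if(d==0)print "braces balanced";else print "braces unbalanced"}' "$conf"
</code></pre>
<p>真实环境中把 <code>conf</code> 指向 <code>/etc/multipath.conf</code> 即可；更可靠的验证仍以 multipathd 重载后的实际状态为准。</p>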
<h4>步骤2-1：格式化输出 <code>multipath -ll</code> 信息，方便查看链路状态</h4>
<p><code>format_multipath-ll.sh</code></p>
<pre><code class="language-bash line-numbers">[root@dbtan22: ~/test/20180808]# cat format_multipath-ll.sh     
#!/bin/bash
# 汇总 multipath -ll 的输出文件：统计每个 LUN 各路径组的活跃/失败链路数

multipath_file=/root/test/20180808/dbtan22_multipath-ll.txt


cat ${multipath_file} |awk $'{
        if($0~/^netapp/)
        {
                line=$0
                getline tmp
                line=line" "tmp
                gsub(/features=.*wp=/,"wp=",line)
                if(status!="")
                {
                        print status" active_count="int(active_count)" failed_count="int(failed_count)
                }
                status=""
                active_count=""
                failed_count=""
                print line
        }
        if($0~/status=active/ || $0~/status=enabled/)
        {
                if(status!="")
                {
                        print status" active_count="int(active_count)" failed_count="int(failed_count)
                }
                status=$0
                active_count=""
                failed_count=""
        }
        if($0~/active ready running/)
        {
                active_count++
        }
        if($0~/failed faulty running/)
        {
                failed_count++
        }

}
END{
                if(status!="")
                {
                        print status" active_count="int(active_count)" failed_count="int(failed_count)
                }
}'

[root@dbtan22: ~/test/20180808]#
</code></pre>
<p>执行 <code>format_multipath-ll.sh</code> 脚本，统计每个 LUN 主/备（active/enabled）路径组中活跃（active_count）与失败（failed_count）链路的个数</p>
<pre><code class="language-bash line-numbers">[root@dbtan22: ~/test/20180808]# sh format_multipath-ll.sh    
netapp-lun19 (3600a09803830475a4c2b4d3059494955) dm-10 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun36 (3600a09803830475a4d3f4d3072414248) dm-28 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun21 (3600a09803830475a4c2b4d3059494957) dm-12 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun68 (3600a09803830475a4c2b4d305949496a) dm-59 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun53 (3600a09803830475a4d3f4d3072414259) dm-44 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun70 (3600a09803830475a4c2b4d305949496c) dm-61 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lunvd12 (3600a09803830475a4d3f4d3072414273) dm-84 NETAPP,LUN C-Mode size=80G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun18 (3600a09803830475a4c2b4d3059494954) dm-7 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun35 (3600a09803830475a4d3f4d3072414247) dm-27 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun20 (3600a09803830475a4c2b4d3059494956) dm-11 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun67 (3600a09803830475a4d3f4d307241426c) dm-58 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun52 (3600a09803830475a4d3f4d3072414258) dm-43 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lunvd09 (3600a09803830475a4d3f4d3072414271) dm-80 NETAPP,LUN C-Mode size=80G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lunvd11 (3600a09803830475a4d3f4d3072414274) dm-83 NETAPP,LUN C-Mode size=80G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun17 (3600a09803830475a4c2b4d3059494953) dm-9 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun49 (3600a09803830475a4d3f4d3072414255) dm-40 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun34 (3600a09803830475a4c2b4d3059494969) dm-26 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun66 (3600a09803830475a4d3f4d307241426b) dm-57 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun51 (3600a09803830475a4d3f4d3072414257) dm-42 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lunvd08 (3600a09803830475a4d3f4d3072414270) dm-79 NETAPP,LUN C-Mode size=80G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lunvd10 (3600a09803830475a4d3f4d3072414272) dm-81 NETAPP,LUN C-Mode size=80G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun16 (3600a09803830475a4c2b4d3059494952) dm-8 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun48 (3600a09803830475a4d3f4d3072414254) dm-39 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun33 (3600a09803830475a4c2b4d3059494968) dm-24 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun65 (3600a09803830475a4d3f4d307241426a) dm-56 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun50 (3600a09803830475a4d3f4d3072414256) dm-41 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lunvd07 (3600a09803830475a4d3f4d307241426f) dm-78 NETAPP,LUN C-Mode size=80G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun15 (3600a09803830475a4c2b4d3059494951) dm-6 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun47 (3600a09803830475a4d3f4d3072414253) dm-38 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun32 (3600a09803830475a4c2b4d3059494967) dm-25 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun64 (3600a09803830475a4d3f4d3072414269) dm-55 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lunvd06 (3600a09803830475a4c2b4d305949497a) dm-77 NETAPP,LUN C-Mode size=80G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun29 (3600a09803830475a4c2b4d3059494964) dm-22 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun14 (3600a09803830475a4c2b4d3059494950) dm-5 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun46 (3600a09803830475a4d3f4d3072414252) dm-37 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun31 (3600a09803830475a4c2b4d3059494966) dm-23 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun63 (3600a09803830475a4d3f4d3072414268) dm-54 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lunvd05 (3600a09803830475a4c2b4d3059494979) dm-76 NETAPP,LUN C-Mode size=80G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun28 (3600a09803830475a4c2b4d3059494963) dm-19 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun13 (3600a09803830475a4c2b4d305949494f) dm-3 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun45 (3600a09803830475a4d3f4d3072414251) dm-36 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun30 (3600a09803830475a4c2b4d3059494965) dm-21 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun103 (3600a09803830475a4d3f4d307241426e) dm-71 NETAPP,LUN C-Mode size=10G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun77 (3600a09803830475a4c2b4d3059494973) dm-68 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun62 (3600a09803830475a4d3f4d3072414267) dm-53 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lunvd04 (3600a09803830475a4c2b4d3059494978) dm-75 NETAPP,LUN C-Mode size=80G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun27 (3600a09803830475a4c2b4d3059494962) dm-18 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun12 (3600a09803830475a4c2b4d305949494e) dm-2 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun59 (3600a09803830475a4d3f4d3072414264) dm-50 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun44 (3600a09803830475a4d3f4d3072414250) dm-35 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun102 (3600a09803830475a4d3f4d307241426d) dm-70 NETAPP,LUN C-Mode size=10G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun76 (3600a09803830475a4c2b4d3059494972) dm-67 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun61 (3600a09803830475a4d3f4d3072414266) dm-52 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lunvd03 (3600a09803830475a4c2b4d3059494977) dm-74 NETAPP,LUN C-Mode size=80G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun26 (3600a09803830475a4c2b4d3059494961) dm-17 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun11 (3600a09803830475a4c2b4d305949494d) dm-4 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun58 (3600a09803830475a4d3f4d3072414263) dm-49 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun43 (3600a09803830475a4d3f4d307241424f) dm-34 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun101 (3600a09803830475a4c2b4d3059494974) dm-69 NETAPP,LUN C-Mode size=10G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun75 (3600a09803830475a4c2b4d3059494971) dm-66 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun60 (3600a09803830475a4d3f4d3072414265) dm-51 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lunvd02 (3600a09803830475a4c2b4d3059494976) dm-73 NETAPP,LUN C-Mode size=80G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun25 (3600a09803830475a4c2b4d305949492f) dm-13 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun57 (3600a09803830475a4d3f4d3072414262) dm-48 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun42 (3600a09803830475a4d3f4d307241424e) dm-33 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun74 (3600a09803830475a4c2b4d3059494970) dm-65 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lunvd01 (3600a09803830475a4c2b4d3059494975) dm-72 NETAPP,LUN C-Mode size=80G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun39 (3600a09803830475a4d3f4d307241424b) dm-20 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun24 (3600a09803830475a4c2b4d305949495a) dm-16 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun56 (3600a09803830475a4d3f4d3072414261) dm-47 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun41 (3600a09803830475a4d3f4d307241424d) dm-32 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun73 (3600a09803830475a4c2b4d305949496f) dm-64 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun38 (3600a09803830475a4d3f4d307241424a) dm-29 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun23 (3600a09803830475a4c2b4d3059494959) dm-15 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun55 (3600a09803830475a4d3f4d307241422f) dm-46 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun40 (3600a09803830475a4d3f4d307241424c) dm-31 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun72 (3600a09803830475a4c2b4d305949496e) dm-63 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun37 (3600a09803830475a4d3f4d3072414249) dm-30 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun22 (3600a09803830475a4c2b4d3059494958) dm-14 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun69 (3600a09803830475a4c2b4d305949496b) dm-60 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun54 (3600a09803830475a4d3f4d307241425a) dm-45 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
netapp-lun71 (3600a09803830475a4c2b4d305949496d) dm-62 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
[root@dbtan22: ~/test/20180808]# 
</code></pre>
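<p>拿到上面的汇总后，可以进一步只筛出 failed_count 非 0 的 LUN（示例数据为假设，其中 netapp-lun19 故意构造了一条失败链路）：</p>
<pre><code class="language-bash line-numbers"># 构造一份假设的汇总输出样例
cat > /tmp/mp_summary_sample.txt <<'EOF'
netapp-lun19 (3600a09803830475a4c2b4d3059494955) dm-10 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=3 failed_count=1
netapp-lun20 (3600a09803830475a4c2b4d3059494956) dm-11 NETAPP,LUN C-Mode size=400G wp=rw
|-+- policy='round-robin 0' prio=50 status=active active_count=4 failed_count=0
`-+- policy='round-robin 0' prio=10 status=enabled active_count=4 failed_count=0
EOF
# failed_count 非 0 的行连同其前两行（即所属 LUN）一起打印
grep -B2 'failed_count=[1-9]' /tmp/mp_summary_sample.txt
</code></pre>
<p>巡检时把输入换成 <code>format_multipath-ll.sh</code> 的真实输出即可，无输出说明所有链路正常。</p>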
<h3>步骤3：生成绑定文件 <code>/etc/multipath/bindings</code>（可省略：多路径配置文件中已设置 <code>user_friendly_names no</code>，不会使用友好名）</h3>
<p><code>generate_bindings.sh</code> 生成绑定文件（<code>/etc/multipath/bindings</code>）</p>
<pre><code class="language-bash line-numbers">#!/bin/bash

lun_file=/root/test/20180801/lun_info_20180801.txt

{
echo "# NetApp A700 `date '+%Y-%m-%d %H:%M:%S'`"
cat $lun_file | awk '{gsub(/.*\//,"",$1);print $1,$2}' | while read t1 t2;do a=$(/lib/udev/scsi_id --whitelisted --device=$t2); echo "
mpath${t1} ${a}
";done
} | grep -v "^$"
</code></pre>
<p>生成 <code>/etc/multipath/bindings</code> 多路径绑定配置文件</p>
<pre><code class="language-bash line-numbers">[root@dbtan21: ~/test/20180801]# sh generate_bindings.sh >> /etc/multipath/bindings
</code></pre>
<h3>步骤4：生成 <code>99-oracle-asmdevices.rules</code> 规则</h3>
<p><code>generate_99-oracle-asmdevices.rules.sh</code></p>
<pre><code class="language-bash line-numbers">#!/bin/bash

lun_file=/root/test/20180801/lun_info_20180801.txt

{
cat $lun_file | awk '{gsub(/.*\//,"",$1);print $1,$2}' | while read t1 t2;do a=$(/lib/udev/scsi_id --whitelisted --device=$t2); echo "
KERNEL==\"dm*\",SUBSYSTEM==\"block\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"$a\", NAME=\"asm-$t1\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"
";done
} | grep -v "^$"
</code></pre>
<p>生成 <code>/etc/udev/rules.d/99-oracle-asmdevices.rules</code> 规则</p>
<pre><code class="language-bash line-numbers">[root@dbtan21: ~/test/20180801]# sh generate_99-oracle-asmdevices.rules.sh | grep -v asm-lunvd > /etc/udev/rules.d/99-oracle-asmdevices.rules 
[root@dbtan21: ~/test/20180801]# 
</code></pre>
<h3>步骤5：对比两个节点的 <code>/dev/asm-lunX</code> 块设备（block）的 wwid</h3>
<p><code>generate_ASM_wwid.sh</code></p>
<pre><code class="language-bash line-numbers">#!/bin/bash

lun_file=/root/test/20180801/lun_info_20180801.txt

{
cat $lun_file | awk '{gsub(/.*\//,"",$1);print $1,$2}' | while read t1 t2;do echo "
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-$t1
`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-$t1`
";done
} | grep -v "^$"
</code></pre>
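<p>该脚本的用法是：在两个节点分别执行并保存输出，然后逐行对比。对比过程可以用下面的示意说明（文件名与内容均为假设）：</p>
<pre><code class="language-bash line-numbers"># 假设两个节点的 generate_ASM_wwid.sh 输出分别保存为以下文件
cat > /tmp/dbtan21_wwid.txt <<'EOF'
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun11
3600a09803830475a4c2b4d305949494d
EOF
cp /tmp/dbtan21_wwid.txt /tmp/dbtan22_wwid.txt
# 两个文件完全一致，说明两节点看到的是同一批 LUN
if diff -q /tmp/dbtan21_wwid.txt /tmp/dbtan22_wwid.txt >/dev/null; then
    echo "wwid consistent on both nodes"
else
    echo "wwid mismatch, check LUN mapping"
fi
</code></pre>
<p>如有差异，通常需要回到存储侧检查 LUN 映射是否对两台主机一致。</p>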
<p>生成节点1(dbtan21) /dev/asm-lunX 块设备（block）的 wwid</p>
<pre><code class="language-bash line-numbers">[root@dbtan21: ~/test/20180801]# sh generate_ASM_wwid.sh 
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun101
3600a09803830475a4c2b4d3059494974
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun102
3600a09803830475a4d3f4d307241426d
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun103
3600a09803830475a4d3f4d307241426e
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun11
3600a09803830475a4c2b4d305949494d
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun12
3600a09803830475a4c2b4d305949494e
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun13
3600a09803830475a4c2b4d305949494f
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun14
3600a09803830475a4c2b4d3059494950
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun15
3600a09803830475a4c2b4d3059494951
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun16
3600a09803830475a4c2b4d3059494952
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun17
3600a09803830475a4c2b4d3059494953
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun18
3600a09803830475a4c2b4d3059494954
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun19
3600a09803830475a4c2b4d3059494955
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun20
3600a09803830475a4c2b4d3059494956
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun21
3600a09803830475a4c2b4d3059494957
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun22
3600a09803830475a4c2b4d3059494958
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun23
3600a09803830475a4c2b4d3059494959
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun24
3600a09803830475a4c2b4d305949495a
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun25
3600a09803830475a4c2b4d305949492f
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun26
3600a09803830475a4c2b4d3059494961
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun27
3600a09803830475a4c2b4d3059494962
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun28
3600a09803830475a4c2b4d3059494963
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun29
3600a09803830475a4c2b4d3059494964
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun30
3600a09803830475a4c2b4d3059494965
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun31
3600a09803830475a4c2b4d3059494966
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun32
3600a09803830475a4c2b4d3059494967
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun33
3600a09803830475a4c2b4d3059494968
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun34
3600a09803830475a4c2b4d3059494969
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun35
3600a09803830475a4d3f4d3072414247
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun36
3600a09803830475a4d3f4d3072414248
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun37
3600a09803830475a4d3f4d3072414249
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun38
3600a09803830475a4d3f4d307241424a
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun39
3600a09803830475a4d3f4d307241424b
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun40
3600a09803830475a4d3f4d307241424c
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun41
3600a09803830475a4d3f4d307241424d
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun42
3600a09803830475a4d3f4d307241424e
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun43
3600a09803830475a4d3f4d307241424f
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun44
3600a09803830475a4d3f4d3072414250
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun45
3600a09803830475a4d3f4d3072414251
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun46
3600a09803830475a4d3f4d3072414252
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun47
3600a09803830475a4d3f4d3072414253
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun48
3600a09803830475a4d3f4d3072414254
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun49
3600a09803830475a4d3f4d3072414255
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun50
3600a09803830475a4d3f4d3072414256
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun51
3600a09803830475a4d3f4d3072414257
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun52
3600a09803830475a4d3f4d3072414258
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun53
3600a09803830475a4d3f4d3072414259
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun54
3600a09803830475a4d3f4d307241425a
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun55
3600a09803830475a4d3f4d307241422f
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun56
3600a09803830475a4d3f4d3072414261
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun57
3600a09803830475a4d3f4d3072414262
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun58
3600a09803830475a4d3f4d3072414263
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun59
3600a09803830475a4d3f4d3072414264
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun60
3600a09803830475a4d3f4d3072414265
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun61
3600a09803830475a4d3f4d3072414266
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun62
3600a09803830475a4d3f4d3072414267
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun63
3600a09803830475a4d3f4d3072414268
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun64
3600a09803830475a4d3f4d3072414269
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun65
3600a09803830475a4d3f4d307241426a
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun66
3600a09803830475a4d3f4d307241426b
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun67
3600a09803830475a4d3f4d307241426c
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun68
3600a09803830475a4c2b4d305949496a
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun69
3600a09803830475a4c2b4d305949496b
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun70
3600a09803830475a4c2b4d305949496c
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun71
3600a09803830475a4c2b4d305949496d
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun72
3600a09803830475a4c2b4d305949496e
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun73
3600a09803830475a4c2b4d305949496f
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun74
3600a09803830475a4c2b4d3059494970
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun75
3600a09803830475a4c2b4d3059494971
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun76
3600a09803830475a4c2b4d3059494972
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun77
3600a09803830475a4c2b4d3059494973
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd01
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd02
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd03
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd04
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd05
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd06
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd07
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd08
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd09
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd10
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd11
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd12
[root@dbtan21: ~/test/20180801]# 
</code></pre>
<p>Generate the wwids of the /dev/asm-lunX block devices on node 2 (dbtan22)</p>
<pre><code class="language-bash line-numbers">[root@dbtan22: ~/test/20180801]# sh generate_ASM_wwid.sh 
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun101
3600a09803830475a4c2b4d3059494974
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun102
3600a09803830475a4d3f4d307241426d
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun103
3600a09803830475a4d3f4d307241426e
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun11
3600a09803830475a4c2b4d305949494d
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun12
3600a09803830475a4c2b4d305949494e
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun13
3600a09803830475a4c2b4d305949494f
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun14
3600a09803830475a4c2b4d3059494950
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun15
3600a09803830475a4c2b4d3059494951
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun16
3600a09803830475a4c2b4d3059494952
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun17
3600a09803830475a4c2b4d3059494953
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun18
3600a09803830475a4c2b4d3059494954
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun19
3600a09803830475a4c2b4d3059494955
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun20
3600a09803830475a4c2b4d3059494956
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun21
3600a09803830475a4c2b4d3059494957
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun22
3600a09803830475a4c2b4d3059494958
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun23
3600a09803830475a4c2b4d3059494959
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun24
3600a09803830475a4c2b4d305949495a
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun25
3600a09803830475a4c2b4d305949492f
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun26
3600a09803830475a4c2b4d3059494961
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun27
3600a09803830475a4c2b4d3059494962
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun28
3600a09803830475a4c2b4d3059494963
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun29
3600a09803830475a4c2b4d3059494964
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun30
3600a09803830475a4c2b4d3059494965
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun31
3600a09803830475a4c2b4d3059494966
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun32
3600a09803830475a4c2b4d3059494967
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun33
3600a09803830475a4c2b4d3059494968
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun34
3600a09803830475a4c2b4d3059494969
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun35
3600a09803830475a4d3f4d3072414247
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun36
3600a09803830475a4d3f4d3072414248
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun37
3600a09803830475a4d3f4d3072414249
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun38
3600a09803830475a4d3f4d307241424a
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun39
3600a09803830475a4d3f4d307241424b
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun40
3600a09803830475a4d3f4d307241424c
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun41
3600a09803830475a4d3f4d307241424d
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun42
3600a09803830475a4d3f4d307241424e
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun43
3600a09803830475a4d3f4d307241424f
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun44
3600a09803830475a4d3f4d3072414250
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun45
3600a09803830475a4d3f4d3072414251
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun46
3600a09803830475a4d3f4d3072414252
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun47
3600a09803830475a4d3f4d3072414253
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun48
3600a09803830475a4d3f4d3072414254
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun49
3600a09803830475a4d3f4d3072414255
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun50
3600a09803830475a4d3f4d3072414256
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun51
3600a09803830475a4d3f4d3072414257
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun52
3600a09803830475a4d3f4d3072414258
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun53
3600a09803830475a4d3f4d3072414259
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun54
3600a09803830475a4d3f4d307241425a
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun55
3600a09803830475a4d3f4d307241422f
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun56
3600a09803830475a4d3f4d3072414261
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun57
3600a09803830475a4d3f4d3072414262
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun58
3600a09803830475a4d3f4d3072414263
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun59
3600a09803830475a4d3f4d3072414264
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun60
3600a09803830475a4d3f4d3072414265
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun61
3600a09803830475a4d3f4d3072414266
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun62
3600a09803830475a4d3f4d3072414267
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun63
3600a09803830475a4d3f4d3072414268
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun64
3600a09803830475a4d3f4d3072414269
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun65
3600a09803830475a4d3f4d307241426a
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun66
3600a09803830475a4d3f4d307241426b
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun67
3600a09803830475a4d3f4d307241426c
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun68
3600a09803830475a4c2b4d305949496a
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun69
3600a09803830475a4c2b4d305949496b
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun70
3600a09803830475a4c2b4d305949496c
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun71
3600a09803830475a4c2b4d305949496d
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun72
3600a09803830475a4c2b4d305949496e
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun73
3600a09803830475a4c2b4d305949496f
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun74
3600a09803830475a4c2b4d3059494970
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun75
3600a09803830475a4c2b4d3059494971
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun76
3600a09803830475a4c2b4d3059494972
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lun77
3600a09803830475a4c2b4d3059494973
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd01
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd02
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd03
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd04
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd05
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd06
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd07
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd08
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd09
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd10
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd11
/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/asm-lunvd12
[root@dbtan22: ~/test/20180801]# 
</code></pre>
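<p>The <code>generate_ASM_wwid.sh</code> helper itself is not listed in this post. A minimal sketch of what it likely does, looping over the <code>/dev/asm-lun*</code> aliases and printing each <code>scsi_id</code> command followed by its output to match the log format above, could look like this; the function name and directory argument are illustrative assumptions:</p>
<pre><code class="language-bash line-numbers"># Sketch of generate_ASM_wwid.sh (assumed implementation, not the original).
# Walks dir/asm-lun*, echoes each scsi_id command line, then runs it to print
# the wwid -- reproducing the "command, then wwid" log format shown above.
print_asm_wwids() {
    dir=${1:-/dev}
    for dev in "${dir}"/asm-lun*; do
        [ -e "${dev}" ] || continue   # glob matched nothing
        echo "/sbin/scsi_id --whitelisted --replace-whitespace --device=${dev}"
        if [ -x /sbin/scsi_id ]; then
            /sbin/scsi_id --whitelisted --replace-whitespace --device="${dev}" 2>/dev/null || true
        fi
    done
}
print_asm_wwids /dev
</code></pre>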
<hr />
<h3>Appendix 1: Viewing the formatted <code>sanlun lun show</code> output</h3>
<pre><code class="language-bash line-numbers">[root@dbtan22: ~/test/20180801]# cat lun_info_20180801.txt 
/vol/dbtan2122_ocr1/lun101 /dev/sdbr 8 10g
/vol/dbtan2122_ocr2/lun102 /dev/sdbv 8 10g
/vol/dbtan2122_ocr3/lun103 /dev/sdbw 8 10g
/vol/dbtan2122_vol11/lun11 /dev/sdb 8 400.0g
/vol/dbtan2122_vol12/lun12 /dev/sdd 8 400.0g
/vol/dbtan2122_vol13/lun13 /dev/sde 8 400.0g
/vol/dbtan2122_vol14/lun14 /dev/sdf 8 400.0g
/vol/dbtan2122_vol15/lun15 /dev/sdg 8 400.0g
/vol/dbtan2122_vol16/lun16 /dev/sdh 8 400.0g
/vol/dbtan2122_vol17/lun17 /dev/sdi 8 400.0g
/vol/dbtan2122_vol18/lun18 /dev/sdj 8 400.0g
/vol/dbtan2122_vol19/lun19 /dev/sdk 8 400.0g
/vol/dbtan2122_vol20/lun20 /dev/sdl 8 400.0g
/vol/dbtan2122_vol21/lun21 /dev/sdm 8 400.0g
/vol/dbtan2122_vol22/lun22 /dev/sdn 8 400.0g
/vol/dbtan2122_vol23/lun23 /dev/sdo 8 400.0g
/vol/dbtan2122_vol24/lun24 /dev/sdp 8 400.0g
/vol/dbtan2122_vol25/lun25 /dev/sdq 8 400.0g
/vol/dbtan2122_vol26/lun26 /dev/sdr 8 400.0g
/vol/dbtan2122_vol27/lun27 /dev/sds 8 400.0g
/vol/dbtan2122_vol28/lun28 /dev/sdt 8 400.0g
/vol/dbtan2122_vol29/lun29 /dev/sdu 8 400.0g
/vol/dbtan2122_vol30/lun30 /dev/sdv 8 400.0g
/vol/dbtan2122_vol31/lun31 /dev/sdw 8 400.0g
/vol/dbtan2122_vol32/lun32 /dev/sdx 8 400.0g
/vol/dbtan2122_vol33/lun33 /dev/sdy 8 400.0g
/vol/dbtan2122_vol34/lun34 /dev/sdz 8 400.0g
/vol/dbtan2122_vol35/lun35 /dev/sdaa 8 400.0g
/vol/dbtan2122_vol36/lun36 /dev/sdab 8 400.0g
/vol/dbtan2122_vol37/lun37 /dev/sdac 8 400.0g
/vol/dbtan2122_vol38/lun38 /dev/sdad 8 400.0g
/vol/dbtan2122_vol39/lun39 /dev/sdae 8 400.0g
/vol/dbtan2122_vol40/lun40 /dev/sdaf 8 400.0g
/vol/dbtan2122_vol41/lun41 /dev/sdag 8 400.0g
/vol/dbtan2122_vol42/lun42 /dev/sdah 8 400.0g
/vol/dbtan2122_vol43/lun43 /dev/sdai 8 400.0g
/vol/dbtan2122_vol44/lun44 /dev/sdaj 8 400.0g
/vol/dbtan2122_vol45/lun45 /dev/sdak 8 400.0g
/vol/dbtan2122_vol46/lun46 /dev/sdal 8 400.0g
/vol/dbtan2122_vol47/lun47 /dev/sdam 8 400.0g
/vol/dbtan2122_vol48/lun48 /dev/sdan 8 400.0g
/vol/dbtan2122_vol49/lun49 /dev/sdao 8 400.0g
/vol/dbtan2122_vol50/lun50 /dev/sdap 8 400.0g
/vol/dbtan2122_vol51/lun51 /dev/sdaq 8 400.0g
/vol/dbtan2122_vol52/lun52 /dev/sdar 8 400.0g
/vol/dbtan2122_vol53/lun53 /dev/sdas 8 400.0g
/vol/dbtan2122_vol54/lun54 /dev/sdat 8 400.0g
/vol/dbtan2122_vol55/lun55 /dev/sdau 8 400.0g
/vol/dbtan2122_vol56/lun56 /dev/sdav 8 400.0g
/vol/dbtan2122_vol57/lun57 /dev/sdaw 8 400.0g
/vol/dbtan2122_vol58/lun58 /dev/sdax 8 400.0g
/vol/dbtan2122_vol59/lun59 /dev/sday 8 400.0g
/vol/dbtan2122_vol60/lun60 /dev/sdaz 8 400.0g
/vol/dbtan2122_vol61/lun61 /dev/sdbb 8 400.0g
/vol/dbtan2122_vol62/lun62 /dev/sdbc 8 400.0g
/vol/dbtan2122_vol63/lun63 /dev/sdbd 8 400.0g
/vol/dbtan2122_vol64/lun64 /dev/sdbe 8 400.0g
/vol/dbtan2122_vol65/lun65 /dev/sdbg 8 400.0g
/vol/dbtan2122_vol66/lun66 /dev/sdbh 8 400.0g
/vol/dbtan2122_vol67/lun67 /dev/sdbi 8 400.0g
/vol/dbtan2122_vol68/lun68 /dev/sdbj 8 400.0g
/vol/dbtan2122_vol69/lun69 /dev/sdbk 8 400.0g
/vol/dbtan2122_vol70/lun70 /dev/sdbl 8 400.0g
/vol/dbtan2122_vol71/lun71 /dev/sdbm 8 400.0g
/vol/dbtan2122_vol72/lun72 /dev/sdbn 8 400.0g
/vol/dbtan2122_vol73/lun73 /dev/sdbo 8 400.0g
/vol/dbtan2122_vol74/lun74 /dev/sdbp 8 400.0g
/vol/dbtan2122_vol75/lun75 /dev/sdc 8 400.0g
/vol/dbtan2122_vol76/lun76 /dev/sdba 8 400.0g
/vol/dbtan2122_vol77/lun77 /dev/sdbf 8 400.0g
/vol/vdbench01/lunvd01 /dev/sdbx 8 80.0g
/vol/vdbench02/lunvd02 /dev/sdby 8 80.0g
/vol/vdbench03/lunvd03 /dev/sdbz 8 80.0g
/vol/vdbench04/lunvd04 /dev/sdca 8 80.0g
/vol/vdbench05/lunvd05 /dev/sdcb 8 80.0g
/vol/vdbench06/lunvd06 /dev/sdcc 8 80.0g
/vol/vdbench07/lunvd07 /dev/sdcd 8 80.0g
/vol/vdbench08/lunvd08 /dev/sdce 8 80.0g
/vol/vdbench09/lunvd09 /dev/sdcf 8 80.0g
/vol/vdbench10/lunvd10 /dev/sdcg 8 80.0g
/vol/vdbench11/lunvd11 /dev/sdxr 8 80.0g
/vol/vdbench12/lunvd12 /dev/sdxs 8 80.0g
[root@dbtan22: ~/test/20180801]# 
</code></pre>
<h3>Appendix 2: Basic steps for configuring <code>multipath</code></h3>
<h4>1. Look up the host WWNs (on the host or the SAN switch), then map the LUNs to the target hosts on the storage array</h4>
<pre><code class="language-bash line-numbers">[root@dbtan21: ~]# cat /sys/class/fc_host/host*/port_name
0x100000109b1b2c72
0x100000109b1b2c73
0x100000109b176552
0x100000109b176553
[root@dbtan21: ~]# 
</code></pre>
<h4>2. Rescan the SCSI bus on the host; if the command is missing, install <code>sg3_utils</code> first</h4>
<pre><code class="language-bash line-numbers"># yum install sg3_utils
# rescan-scsi-bus.sh
</code></pre>
<pre><code class="language-bash line-numbers">[root@dbtan21: ~]# rpm -qa | grep sg3_utils
sg3_utils-1.28-13.el6.x86_64
sg3_utils-libs-1.28-13.el6.x86_64
[root@dbtan21: ~]# 
[root@dbtan21: ~]# ll /usr/bin/rescan-scsi-bus.sh 
-rwxr-xr-x. 1 root root 33968 Jun 19 23:22 /usr/bin/rescan-scsi-bus.sh
[root@dbtan21: ~]# 
</code></pre>
<h4>3. Verify that the mapped LUNs are visible on the host</h4>
<pre><code class="language-bash line-numbers"># fdisk -l
# lsblk -f
</code></pre>
<h4>4. Check whether <code>multipath</code> is installed</h4>
<pre><code class="language-bash line-numbers"># yum install device-mapper-multipath
</code></pre>
<pre><code class="language-bash line-numbers">[root@dbtan21: ~]# rpm -qa | grep device-mapper
device-mapper-libs-1.02.117-7.el6.x86_64
device-mapper-event-1.02.117-7.el6.x86_64
device-mapper-persistent-data-0.6.2-0.1.rc7.el6.x86_64
device-mapper-1.02.117-7.el6.x86_64
device-mapper-multipath-libs-0.4.9-93.el6.x86_64
device-mapper-multipath-0.4.9-93.el6.x86_64
device-mapper-event-libs-1.02.117-7.el6.x86_64
[root@dbtan21: ~]# 
</code></pre>
<h4>5. Copy the default <code>multipath.conf</code> into <code>/etc</code>, or generate a default template with the <code>mpathconf</code> command</h4>
<pre><code class="language-bash line-numbers"># cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/multipath.conf
# mpathconf --enable --with_multipathd y
</code></pre>
<h4>6. Look up the local storage wwid</h4>
<pre><code class="language-bash line-numbers">[root@dbtan21: ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sda
36101b5442bcc700022bf914a0cca39f5
[root@dbtan21: ~]# /lib/udev/scsi_id --whitelisted --device=/dev/mapper/netapp-lun11
3600a09803830475a4c2b4d305949494d
[root@dbtan21: ~]# 
[root@dbtan21: ~]# ll /dev/mapper/*
crw-rw---- 1 root root 10, 58 Aug 13 18:38 /dev/mapper/control
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun101 -> ../dm-69
lrwxrwxrwx 1 root root      8 Aug 16 20:50 /dev/mapper/netapp-lun102 -> ../dm-70
lrwxrwxrwx 1 root root      8 Aug 16 20:50 /dev/mapper/netapp-lun103 -> ../dm-71
lrwxrwxrwx 1 root root      7 Aug 16 20:28 /dev/mapper/netapp-lun11 -> ../dm-3
lrwxrwxrwx 1 root root      7 Aug 16 20:28 /dev/mapper/netapp-lun12 -> ../dm-4
lrwxrwxrwx 1 root root      7 Aug 16 20:28 /dev/mapper/netapp-lun13 -> ../dm-5
lrwxrwxrwx 1 root root      7 Aug 16 20:28 /dev/mapper/netapp-lun14 -> ../dm-6
lrwxrwxrwx 1 root root      7 Aug 16 20:28 /dev/mapper/netapp-lun15 -> ../dm-9
lrwxrwxrwx 1 root root      7 Aug 16 20:28 /dev/mapper/netapp-lun16 -> ../dm-7
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun17 -> ../dm-14
lrwxrwxrwx 1 root root      7 Aug 16 20:28 /dev/mapper/netapp-lun18 -> ../dm-8
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun19 -> ../dm-10
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun20 -> ../dm-15
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun21 -> ../dm-11
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun22 -> ../dm-13
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun23 -> ../dm-12
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun24 -> ../dm-17
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun25 -> ../dm-16
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun26 -> ../dm-18
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun27 -> ../dm-20
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun28 -> ../dm-19
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun29 -> ../dm-21
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun30 -> ../dm-23
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun31 -> ../dm-22
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun32 -> ../dm-24
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun33 -> ../dm-25
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun34 -> ../dm-26
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun35 -> ../dm-27
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun36 -> ../dm-28
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun37 -> ../dm-30
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun38 -> ../dm-29
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun39 -> ../dm-31
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun40 -> ../dm-34
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun41 -> ../dm-33
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun42 -> ../dm-32
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun43 -> ../dm-36
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun44 -> ../dm-38
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun45 -> ../dm-35
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun46 -> ../dm-39
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun47 -> ../dm-44
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun48 -> ../dm-41
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun49 -> ../dm-37
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun50 -> ../dm-40
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun51 -> ../dm-42
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun52 -> ../dm-43
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun53 -> ../dm-46
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun54 -> ../dm-45
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun55 -> ../dm-47
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun56 -> ../dm-49
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun57 -> ../dm-48
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun58 -> ../dm-50
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun59 -> ../dm-51
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun60 -> ../dm-52
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun61 -> ../dm-54
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun62 -> ../dm-55
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun63 -> ../dm-56
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun64 -> ../dm-58
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun65 -> ../dm-59
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun66 -> ../dm-60
lrwxrwxrwx 1 root root      8 Aug 16 20:30 /dev/mapper/netapp-lun67 -> ../dm-66
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun68 -> ../dm-62
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun69 -> ../dm-63
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun70 -> ../dm-61
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun71 -> ../dm-64
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun72 -> ../dm-65
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun73 -> ../dm-67
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun74 -> ../dm-68
lrwxrwxrwx 1 root root      7 Aug 16 20:28 /dev/mapper/netapp-lun75 -> ../dm-2
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun76 -> ../dm-53
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun77 -> ../dm-57
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun78 -> ../dm-93
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun79 -> ../dm-94
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun80 -> ../dm-95
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun81 -> ../dm-96
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun82 -> ../dm-97
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun83 -> ../dm-98
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lun84 -> ../dm-99
lrwxrwxrwx 1 root root      9 Aug 16 20:28 /dev/mapper/netapp-lun85 -> ../dm-100
lrwxrwxrwx 1 root root      9 Aug 16 20:28 /dev/mapper/netapp-lun86 -> ../dm-101
lrwxrwxrwx 1 root root      9 Aug 16 20:28 /dev/mapper/netapp-lun87 -> ../dm-102
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lunvd01 -> ../dm-72
lrwxrwxrwx 1 root root     18 Aug 16 20:28 /dev/mapper/netapp-lunvd02 -> ../vdbench-lunvd02
lrwxrwxrwx 1 root root     18 Aug 16 20:28 /dev/mapper/netapp-lunvd03 -> ../vdbench-lunvd03
lrwxrwxrwx 1 root root     18 Aug 16 20:28 /dev/mapper/netapp-lunvd04 -> ../vdbench-lunvd04
lrwxrwxrwx 1 root root     18 Aug 16 20:28 /dev/mapper/netapp-lunvd05 -> ../vdbench-lunvd05
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lunvd06 -> ../dm-77
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lunvd07 -> ../dm-78
lrwxrwxrwx 1 root root     18 Aug 16 20:28 /dev/mapper/netapp-lunvd08 -> ../vdbench-lunvd08
lrwxrwxrwx 1 root root     18 Aug 16 20:28 /dev/mapper/netapp-lunvd09 -> ../vdbench-lunvd09
lrwxrwxrwx 1 root root     18 Aug 16 20:28 /dev/mapper/netapp-lunvd10 -> ../vdbench-lunvd10
lrwxrwxrwx 1 root root      8 Aug 16 20:28 /dev/mapper/netapp-lunvd11 -> ../dm-83
lrwxrwxrwx 1 root root      8 Aug 16 20:29 /dev/mapper/netapp-lunvd12 -> ../dm-84
lrwxrwxrwx 1 root root     18 Aug 16 20:29 /dev/mapper/netapp-lunvd13 -> ../vdbench-lunvd13
lrwxrwxrwx 1 root root     18 Aug 16 20:29 /dev/mapper/netapp-lunvd14 -> ../vdbench-lunvd14
lrwxrwxrwx 1 root root     18 Aug 16 20:29 /dev/mapper/netapp-lunvd15 -> ../vdbench-lunvd15
lrwxrwxrwx 1 root root     18 Aug 16 20:29 /dev/mapper/netapp-lunvd16 -> ../vdbench-lunvd16
lrwxrwxrwx 1 root root      8 Aug 16 20:29 /dev/mapper/netapp-lunvd17 -> ../dm-89
lrwxrwxrwx 1 root root     18 Aug 16 20:29 /dev/mapper/netapp-lunvd18 -> ../vdbench-lunvd18
lrwxrwxrwx 1 root root      8 Aug 16 20:29 /dev/mapper/netapp-lunvd19 -> ../dm-91
lrwxrwxrwx 1 root root      8 Aug 16 20:29 /dev/mapper/netapp-lunvd20 -> ../dm-92
lrwxrwxrwx 1 root root      7 Aug 13 18:38 /dev/mapper/VolGroup-LogVol00 -> ../dm-1
lrwxrwxrwx 1 root root      7 Aug 13 18:38 /dev/mapper/VolGroup-LogVol01 -> ../dm-0
lrwxrwxrwx 1 root root      8 Aug 13 18:38 /dev/mapper/VolGroup-lv_app -> ../dm-82
[root@dbtan21: ~]# 
</code></pre>
<h4>7. Check the storage vendor and model</h4>
<pre><code class="language-bash line-numbers">[root@dbtan21: ~]# cat /sys/block/sdb/device/vendor 
NETAPP  
[root@dbtan21: ~]# cat /sys/block/sdb/device/model
LUN C-Mode      
[root@dbtan21: ~]# 
</code></pre>
<h4>8. Add the local disks to the <code>blacklist</code>; follow the official best practices for your particular storage and OS</h4>
<p>The multipath configuration file can now be generated automatically by script; see:<br />
[Step 2: generate the multipath configuration file <code>/etc/multipath.conf</code>]<br />
[Step 3: generate the bindings file /etc/multipath/bindings (optional, since the multipath configuration sets user_friendly_names no)]</p>
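<p>As an illustration only (the actual file should follow the storage vendor's best practice), a <code>blacklist</code> stanza in <code>/etc/multipath.conf</code> that excludes the local disk might look like the following; the wwid is the value printed for <code>/dev/sda</code> in step 6, and the <code>devnode</code> pattern is a common default, not taken from this post:</p>
<pre><code class="line-numbers">blacklist {
    # local boot disk (wwid from the scsi_id output for /dev/sda in step 6)
    wwid 36101b5442bcc700022bf914a0cca39f5
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
</code></pre>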
<h4>9. Flush the existing <code>multipath</code> maps</h4>
<pre><code class="language-bash line-numbers"># multipath -F
</code></pre>
<blockquote><p>
  Note: <code>multipath -F</code> does not flush maps that are in use.</p>
<p>  In the example below, <code>multipath -F</code> flushed everything only because none of the paths were in use at that point.</p>
<p>  After flushing, restart the <code>multipath</code> service (<code>/etc/init.d/multipathd restart</code>) to re-assemble the paths.</p>
<p>  Notice that once the <code>/etc/udev/rules.d/99-oracle-asmdevices.rules</code> rules are in place, each restart of the <code>multipath</code> service may switch the <code>/dev/mapper/netapp-lun&lt;N&gt;</code> symlinks between pointing at the <code>/dev/dm-&lt;N&gt;</code> and <code>/dev/asm-lun&lt;N&gt;</code> block devices.</p>
<p>  Further analysis showed that this symlink switching is related to running <code>partprobe</code> (e.g. <code>partprobe /dev/mapper/netapp-lun101</code>).
</p></blockquote>
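<p>For reference, one rule in <code>/etc/udev/rules.d/99-oracle-asmdevices.rules</code> might look like the line below. This is a sketch, not the author's actual file: the wwid is lun11's value from the output above, and the <code>grid</code>/<code>asmadmin</code> owner and group are assumptions:</p>
<pre><code class="line-numbers"># match the multipath map by wwid and add an asm-lunN alias with ASM-friendly permissions
KERNEL=="dm-*", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%k", RESULT=="3600a09803830475a4c2b4d305949494d", SYMLINK+="asm-lun11", OWNER="grid", GROUP="asmadmin", MODE="0660"
</code></pre>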
<pre><code class="language-bash line-numbers">[root@dbtan21: ~]# multipath -F
[root@dbtan21: ~]# multipath -ll
[root@dbtan21: ~]# ll /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Aug 13 18:38 control
lrwxrwxrwx 1 root root      7 Aug 13 18:38 VolGroup-LogVol00 -> ../dm-1
lrwxrwxrwx 1 root root      7 Aug 13 18:38 VolGroup-LogVol01 -> ../dm-0
lrwxrwxrwx 1 root root      8 Aug 13 18:38 VolGroup-lv_app -> ../dm-82
[root@dbtan21: ~]# /etc/init.d/multipathd restart
ok
Stopping multipathd daemon:                                [  OK  ]
Starting multipathd daemon:                                [  OK  ]
[root@dbtan21: ~]# ll /dev/mapper/               
total 0
crw-rw---- 1 root root 10, 58 Aug 13 18:38 control
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun102 -> ../dm-80
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun103 -> ../dm-81
lrwxrwxrwx 1 root root      7 Aug 24 15:43 netapp-lun11 -> ../dm-2
lrwxrwxrwx 1 root root      7 Aug 24 15:43 netapp-lun12 -> ../dm-3
lrwxrwxrwx 1 root root      7 Aug 24 15:43 netapp-lun13 -> ../dm-4
lrwxrwxrwx 1 root root      7 Aug 24 15:43 netapp-lun14 -> ../dm-5
lrwxrwxrwx 1 root root      7 Aug 24 15:43 netapp-lun15 -> ../dm-6
lrwxrwxrwx 1 root root      7 Aug 24 15:43 netapp-lun16 -> ../dm-7
lrwxrwxrwx 1 root root      7 Aug 24 15:43 netapp-lun17 -> ../dm-8
lrwxrwxrwx 1 root root      7 Aug 24 15:43 netapp-lun18 -> ../dm-9
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun19 -> ../dm-10
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun20 -> ../dm-11
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun21 -> ../dm-12
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun22 -> ../dm-13
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun23 -> ../dm-14
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun24 -> ../dm-15
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun25 -> ../dm-16
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun26 -> ../dm-17
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun27 -> ../dm-18
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun28 -> ../dm-19
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun29 -> ../dm-20
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun30 -> ../dm-21
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun31 -> ../dm-22
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun32 -> ../dm-23
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun33 -> ../dm-24
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun34 -> ../dm-25
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun35 -> ../dm-26
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun36 -> ../dm-27
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun37 -> ../dm-28
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun38 -> ../dm-29
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun39 -> ../dm-30
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun40 -> ../dm-31
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun41 -> ../dm-32
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun42 -> ../dm-33
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun43 -> ../dm-34
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun44 -> ../dm-35
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun45 -> ../dm-36
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun46 -> ../dm-37
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun47 -> ../dm-38
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun48 -> ../dm-39
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun49 -> ../dm-40
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun50 -> ../dm-41
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun51 -> ../dm-42
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun52 -> ../dm-43
lrwxrwxrwx 1 root root     12 Aug 24 15:43 netapp-lun53 -> ../asm-lun53
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun54 -> ../dm-45
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun55 -> ../dm-46
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun56 -> ../dm-47
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun57 -> ../dm-48
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun58 -> ../dm-49
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun59 -> ../dm-50
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun60 -> ../dm-51
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun61 -> ../dm-52
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun62 -> ../dm-53
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun63 -> ../dm-54
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun64 -> ../dm-55
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun65 -> ../dm-56
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun66 -> ../dm-57
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun67 -> ../dm-58
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun68 -> ../dm-59
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun69 -> ../dm-60
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun70 -> ../dm-61
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun71 -> ../dm-62
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun72 -> ../dm-63
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun73 -> ../dm-64
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun74 -> ../dm-65
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun75 -> ../dm-66
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun76 -> ../dm-67
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun77 -> ../dm-68
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun78 -> ../dm-69
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun79 -> ../dm-70
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun80 -> ../dm-71
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun81 -> ../dm-72
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun83 -> ../dm-74
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun84 -> ../dm-75
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun85 -> ../dm-76
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lun87 -> ../dm-78
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lunvd01 -> ../dm-83
lrwxrwxrwx 1 root root      8 Aug 24 15:43 netapp-lunvd03 -> ../dm-85
lrwxrwxrwx 1 root root      7 Aug 13 18:38 VolGroup-LogVol00 -> ../dm-1
lrwxrwxrwx 1 root root      7 Aug 13 18:38 VolGroup-LogVol01 -> ../dm-0
lrwxrwxrwx 1 root root      8 Aug 13 18:38 VolGroup-lv_app -> ../dm-82
[root@dbtan21: ~]# 
[root@dbtan21: ~]# /etc/init.d/multipathd restart
ok
Stopping multipathd daemon:                                [  OK  ]
Starting multipathd daemon:                                [  OK  ]
[root@dbtan21: ~]#
[root@dbtan21: ~]# ll /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Aug 13 18:38 control
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun101 -> ../asm-lun78
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun102 -> ../asm-lun79
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun103 -> ../asm-lun80
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun11 -> ../asm-lun18
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun12 -> ../asm-lun18
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun13 -> ../asm-lun18
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun14 -> ../asm-lun18
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun15 -> ../asm-lun18
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun16 -> ../asm-lun16
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun17 -> ../asm-lun23
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun18 -> ../asm-lun18
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun19 -> ../asm-lun19
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun20 -> ../asm-lun40
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun21 -> ../asm-lun21
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun22 -> ../asm-lun22
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun23 -> ../asm-lun23
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun24 -> ../asm-lun40
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun25 -> ../asm-lun25
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun26 -> ../asm-lun40
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun27 -> ../asm-lun40
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun28 -> ../asm-lun28
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun29 -> ../asm-lun29
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun30 -> ../asm-lun40
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun31 -> ../asm-lun31
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun32 -> ../asm-lun32
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun33 -> ../asm-lun40
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun34 -> ../asm-lun40
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun35 -> ../asm-lun35
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun36 -> ../asm-lun36
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun37 -> ../asm-lun40
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun38 -> ../asm-lun38
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun39 -> ../asm-lun39
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun40 -> ../asm-lun40
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun41 -> ../asm-lun41
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun42 -> ../asm-lun42
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun43 -> ../asm-lun43
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun44 -> ../asm-lun53
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun45 -> ../asm-lun45
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun46 -> ../asm-lun50
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun47 -> ../asm-lun53
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun48 -> ../asm-lun50
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun49 -> ../asm-lun49
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun50 -> ../asm-lun50
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun51 -> ../asm-lun51
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun52 -> ../asm-lun52
lrwxrwxrwx 1 root root      8 Aug 24 15:46 netapp-lun53 -> ../dm-44
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun54 -> ../asm-lun54
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun55 -> ../asm-lun56
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun56 -> ../asm-lun56
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun57 -> ../asm-lun57
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun58 -> ../asm-lun76
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun59 -> ../asm-lun76
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun60 -> ../asm-lun76
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun61 -> ../asm-lun76
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun62 -> ../asm-lun75
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun63 -> ../asm-lun76
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun64 -> ../asm-lun75
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun65 -> ../asm-lun76
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun66 -> ../asm-lun77
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun67 -> ../asm-lun75
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun68 -> ../asm-lun76
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun69 -> ../asm-lun77
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun70 -> ../asm-lun70
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun71 -> ../asm-lun76
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun72 -> ../asm-lun77
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun73 -> ../asm-lun76
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun74 -> ../asm-lun77
lrwxrwxrwx 1 root root     12 Aug 24 15:46 netapp-lun75 -> ../asm-lun75
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun76 -> ../asm-lun76
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun77 -> ../asm-lun77
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun78 -> ../asm-lun78
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun79 -> ../asm-lun79
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun80 -> ../asm-lun80
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun81 -> ../asm-lun81
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun82 -> ../asm-lun82
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun83 -> ../asm-lun83
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun84 -> ../asm-lun84
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun85 -> ../asm-lun85
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun86 -> ../asm-lun86
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lun87 -> ../asm-lun87
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lunvd01 -> ../asm-lun81
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lunvd02 -> ../asm-lun82
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lunvd03 -> ../asm-lun83
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lunvd04 -> ../asm-lun84
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lunvd05 -> ../asm-lun85
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lunvd06 -> ../asm-lun86
lrwxrwxrwx 1 root root     12 Aug 24 15:47 netapp-lunvd07 -> ../asm-lun87
lrwxrwxrwx 1 root root     18 Aug 24 15:47 netapp-lunvd08 -> ../vdbench-lunvd08
lrwxrwxrwx 1 root root     18 Aug 24 15:47 netapp-lunvd09 -> ../vdbench-lunvd09
lrwxrwxrwx 1 root root     18 Aug 24 15:47 netapp-lunvd10 -> ../vdbench-lunvd10
lrwxrwxrwx 1 root root     18 Aug 24 15:47 netapp-lunvd11 -> ../vdbench-lunvd11
lrwxrwxrwx 1 root root     18 Aug 24 15:47 netapp-lunvd12 -> ../vdbench-lunvd12
lrwxrwxrwx 1 root root     18 Aug 24 15:47 netapp-lunvd13 -> ../vdbench-lunvd13
lrwxrwxrwx 1 root root     18 Aug 24 15:47 netapp-lunvd14 -> ../vdbench-lunvd14
lrwxrwxrwx 1 root root     18 Aug 24 15:47 netapp-lunvd15 -> ../vdbench-lunvd15
lrwxrwxrwx 1 root root     18 Aug 24 15:47 netapp-lunvd16 -> ../vdbench-lunvd16
lrwxrwxrwx 1 root root     18 Aug 24 15:47 netapp-lunvd17 -> ../vdbench-lunvd17
lrwxrwxrwx 1 root root     18 Aug 24 15:47 netapp-lunvd18 -> ../vdbench-lunvd18
lrwxrwxrwx 1 root root     18 Aug 24 15:47 netapp-lunvd19 -> ../vdbench-lunvd19
lrwxrwxrwx 1 root root     18 Aug 24 15:47 netapp-lunvd20 -> ../vdbench-lunvd20
lrwxrwxrwx 1 root root      7 Aug 13 18:38 VolGroup-LogVol00 -> ../dm-1
lrwxrwxrwx 1 root root      7 Aug 13 18:38 VolGroup-LogVol01 -> ../dm-0
lrwxrwxrwx 1 root root      8 Aug 13 18:38 VolGroup-lv_app -> ../dm-82
[root@dbtan21: ~]# 
[root@dbtan21: ~]# ll /dev/mapper/netapp-lun10*
lrwxrwxrwx 1 root root 12 Aug 24 15:47 /dev/mapper/netapp-lun101 -> ../asm-lun78
lrwxrwxrwx 1 root root 12 Aug 24 15:47 /dev/mapper/netapp-lun102 -> ../asm-lun79
lrwxrwxrwx 1 root root 12 Aug 24 15:47 /dev/mapper/netapp-lun103 -> ../asm-lun80
[root@dbtan21: ~]# partprobe /dev/mapper/netapp-lun101
[root@dbtan21: ~]# partprobe /dev/mapper/netapp-lun102
[root@dbtan21: ~]# partprobe /dev/mapper/netapp-lun103
[root@dbtan21: ~]# 
[root@dbtan21: ~]# ll /dev/mapper/netapp-lun10*       
lrwxrwxrwx 1 root root 8 Aug 24 15:52 /dev/mapper/netapp-lun101 -> ../dm-79
lrwxrwxrwx 1 root root 8 Aug 24 15:52 /dev/mapper/netapp-lun102 -> ../dm-80
lrwxrwxrwx 1 root root 8 Aug 24 15:52 /dev/mapper/netapp-lun103 -> ../dm-81
[root@dbtan21: ~]# 
</code></pre>
<h4>10. Print diagnostic information</h4>
<pre><code class="language-bash line-numbers"># multipath -v3
</code></pre>
<h4>11. Enable the multipath daemon to start at boot</h4>
<pre><code class="language-bash line-numbers"># mpathconf --enable
</code></pre>
<h4>12. Start the multipath service</h4>
<pre><code class="language-bash line-numbers"># service multipathd start
or
# /etc/init.d/multipathd restart
</code></pre>
<h4>13. If you modify the multipath configuration file after the <code>multipath</code> daemon has started, run the following command for the changes to take effect.</h4>
<pre><code class="language-bash line-numbers"># service multipathd reload 
</code></pre>
<h4>14. Reboot the system to test</h4>
<blockquote><p>
  Note: after <code>multipath</code> is configured for the first time, the server must be rebooted.
</p></blockquote>
<pre><code class="language-bash line-numbers"># init 6
</code></pre>
<h4>15. Check the current multipath status</h4>
<pre><code class="language-bash line-numbers"># multipath -ll
</code></pre>
<h3>Appendix 3: Removing LUNs</h3>
<blockquote><p>
  Match the output of <code>lsblk -s</code>, then for each underlying path device run</p>
<pre><code class="language-bash line-numbers">echo 1 > /sys/block/sdX/device/delete
multipath -f mpathc
</code></pre>
<p>  to remove the LUN device.
</p></blockquote>
<p>In the example below, twelve test LUNs ( <code>netapp-lunvd01</code> ~ <code>netapp-lunvd12</code> ) are removed on hosts dbtan21/dbtan22.</p>
<p>Node dbtan21:</p>
<pre><code class="language-bash line-numbers"># dbtan21
echo 1 > /sys/block/sdbx/device/delete
echo 1 > /sys/block/sddl/device/delete
echo 1 > /sys/block/sdiq/device/delete
echo 1 > /sys/block/sduf/device/delete
echo 1 > /sys/block/sdfg/device/delete
echo 1 > /sys/block/sdrd/device/delete
echo 1 > /sys/block/sdnp/device/delete
echo 1 > /sys/block/sdxh/device/delete
multipath -f netapp-lunvd01

echo 1 > /sys/block/sdby/device/delete
echo 1 > /sys/block/sddw/device/delete
echo 1 > /sys/block/sdis/device/delete
echo 1 > /sys/block/sdug/device/delete
echo 1 > /sys/block/sdfh/device/delete
echo 1 > /sys/block/sdre/device/delete
echo 1 > /sys/block/sdnr/device/delete
echo 1 > /sys/block/sdxi/device/delete
multipath -f netapp-lunvd02

echo 1 > /sys/block/sdbz/device/delete
echo 1 > /sys/block/sdeh/device/delete
echo 1 > /sys/block/sdit/device/delete
echo 1 > /sys/block/sduh/device/delete
echo 1 > /sys/block/sdfi/device/delete
echo 1 > /sys/block/sdrf/device/delete
echo 1 > /sys/block/sdns/device/delete
echo 1 > /sys/block/sdxj/device/delete
multipath -f netapp-lunvd03

echo 1 > /sys/block/sdca/device/delete
echo 1 > /sys/block/sder/device/delete
echo 1 > /sys/block/sdiu/device/delete
echo 1 > /sys/block/sdui/device/delete
echo 1 > /sys/block/sdfj/device/delete
echo 1 > /sys/block/sdrg/device/delete
echo 1 > /sys/block/sdnu/device/delete
echo 1 > /sys/block/sdxk/device/delete
multipath -f netapp-lunvd04

echo 1 > /sys/block/sdcb/device/delete
echo 1 > /sys/block/sdfd/device/delete
echo 1 > /sys/block/sdiw/device/delete
echo 1 > /sys/block/sduj/device/delete
echo 1 > /sys/block/sdfk/device/delete
echo 1 > /sys/block/sdrh/device/delete
echo 1 > /sys/block/sdnv/device/delete
echo 1 > /sys/block/sdxl/device/delete
multipath -f netapp-lunvd05

echo 1 > /sys/block/sdcc/device/delete
echo 1 > /sys/block/sdfq/device/delete
echo 1 > /sys/block/sdix/device/delete
echo 1 > /sys/block/sduk/device/delete
echo 1 > /sys/block/sdfl/device/delete
echo 1 > /sys/block/sdri/device/delete
echo 1 > /sys/block/sdnw/device/delete
echo 1 > /sys/block/sdxm/device/delete
multipath -f netapp-lunvd06

echo 1 > /sys/block/sdfm/device/delete
echo 1 > /sys/block/sdrj/device/delete
echo 1 > /sys/block/sdnz/device/delete
echo 1 > /sys/block/sdxn/device/delete
echo 1 > /sys/block/sdcd/device/delete
echo 1 > /sys/block/sdgc/device/delete
echo 1 > /sys/block/sdja/device/delete
echo 1 > /sys/block/sdul/device/delete
multipath -f netapp-lunvd07

echo 1 > /sys/block/sdfn/device/delete
echo 1 > /sys/block/sdrk/device/delete
echo 1 > /sys/block/sdoa/device/delete
echo 1 > /sys/block/sdxo/device/delete
echo 1 > /sys/block/sdce/device/delete
echo 1 > /sys/block/sdgo/device/delete
echo 1 > /sys/block/sdjb/device/delete
echo 1 > /sys/block/sdum/device/delete
multipath -f netapp-lunvd08

echo 1 > /sys/block/sdfo/device/delete
echo 1 > /sys/block/sdrl/device/delete
echo 1 > /sys/block/sdob/device/delete
echo 1 > /sys/block/sdxp/device/delete
echo 1 > /sys/block/sdcf/device/delete
echo 1 > /sys/block/sdhb/device/delete
echo 1 > /sys/block/sdjd/device/delete
echo 1 > /sys/block/sdun/device/delete
multipath -f netapp-lunvd09

echo 1 > /sys/block/sdfp/device/delete
echo 1 > /sys/block/sdrm/device/delete
echo 1 > /sys/block/sdod/device/delete
echo 1 > /sys/block/sdxq/device/delete
echo 1 > /sys/block/sdcg/device/delete
echo 1 > /sys/block/sdhn/device/delete
echo 1 > /sys/block/sdje/device/delete
echo 1 > /sys/block/sduo/device/delete
multipath -f netapp-lunvd10

echo 1 > /sys/block/sdxt/device/delete
echo 1 > /sys/block/sdyb/device/delete
echo 1 > /sys/block/sdxx/device/delete
echo 1 > /sys/block/sdyf/device/delete
echo 1 > /sys/block/sdxr/device/delete
echo 1 > /sys/block/sdxz/device/delete
echo 1 > /sys/block/sdxv/device/delete
echo 1 > /sys/block/sdyd/device/delete
multipath -f netapp-lunvd11

echo 1 > /sys/block/sdxu/device/delete
echo 1 > /sys/block/sdyc/device/delete
echo 1 > /sys/block/sdxy/device/delete
echo 1 > /sys/block/sdyg/device/delete
echo 1 > /sys/block/sdxs/device/delete
echo 1 > /sys/block/sdya/device/delete
echo 1 > /sys/block/sdxw/device/delete
echo 1 > /sys/block/sdye/device/delete
multipath -f netapp-lunvd12
</code></pre>
<p>Node dbtan22:</p>
<pre><code class="language-bash line-numbers"># dbtan22
echo 1 > /sys/block/sdbt/device/delete
echo 1 > /sys/block/sdob/device/delete
echo 1 > /sys/block/sdhx/device/delete
echo 1 > /sys/block/sduf/device/delete
echo 1 > /sys/block/sdev/device/delete
echo 1 > /sys/block/sdrd/device/delete
echo 1 > /sys/block/sdkz/device/delete
echo 1 > /sys/block/sdxh/device/delete
multipath -f netapp-lunvd01

echo 1 > /sys/block/sdbu/device/delete
echo 1 > /sys/block/sdoc/device/delete
echo 1 > /sys/block/sdhy/device/delete
echo 1 > /sys/block/sdug/device/delete
echo 1 > /sys/block/sdew/device/delete
echo 1 > /sys/block/sdre/device/delete
echo 1 > /sys/block/sdla/device/delete
echo 1 > /sys/block/sdxi/device/delete
multipath -f netapp-lunvd02

echo 1 > /sys/block/sdbv/device/delete
echo 1 > /sys/block/sdod/device/delete
echo 1 > /sys/block/sdhz/device/delete
echo 1 > /sys/block/sduh/device/delete
echo 1 > /sys/block/sdex/device/delete
echo 1 > /sys/block/sdrf/device/delete
echo 1 > /sys/block/sdlb/device/delete
echo 1 > /sys/block/sdxj/device/delete
multipath -f netapp-lunvd03

echo 1 > /sys/block/sdbw/device/delete
echo 1 > /sys/block/sdoe/device/delete
echo 1 > /sys/block/sdia/device/delete
echo 1 > /sys/block/sdui/device/delete
echo 1 > /sys/block/sdey/device/delete
echo 1 > /sys/block/sdrg/device/delete
echo 1 > /sys/block/sdlc/device/delete
echo 1 > /sys/block/sdxk/device/delete
multipath -f netapp-lunvd04

echo 1 > /sys/block/sdbx/device/delete
echo 1 > /sys/block/sdof/device/delete
echo 1 > /sys/block/sdib/device/delete
echo 1 > /sys/block/sduj/device/delete
echo 1 > /sys/block/sdez/device/delete
echo 1 > /sys/block/sdrh/device/delete
echo 1 > /sys/block/sdld/device/delete
echo 1 > /sys/block/sdxl/device/delete
multipath -f netapp-lunvd05

echo 1 > /sys/block/sdby/device/delete
echo 1 > /sys/block/sdog/device/delete
echo 1 > /sys/block/sdic/device/delete
echo 1 > /sys/block/sduk/device/delete
echo 1 > /sys/block/sdfa/device/delete
echo 1 > /sys/block/sdri/device/delete
echo 1 > /sys/block/sdle/device/delete
echo 1 > /sys/block/sdxm/device/delete
multipath -f netapp-lunvd06

echo 1 > /sys/block/sdfb/device/delete
echo 1 > /sys/block/sdrj/device/delete
echo 1 > /sys/block/sdlf/device/delete
echo 1 > /sys/block/sdxn/device/delete
echo 1 > /sys/block/sdbz/device/delete
echo 1 > /sys/block/sdoh/device/delete
echo 1 > /sys/block/sdid/device/delete
echo 1 > /sys/block/sdul/device/delete
multipath -f netapp-lunvd07

echo 1 > /sys/block/sdfc/device/delete
echo 1 > /sys/block/sdrk/device/delete
echo 1 > /sys/block/sdlg/device/delete
echo 1 > /sys/block/sdxo/device/delete
echo 1 > /sys/block/sdca/device/delete
echo 1 > /sys/block/sdoi/device/delete
echo 1 > /sys/block/sdie/device/delete
echo 1 > /sys/block/sdum/device/delete
multipath -f netapp-lunvd08

echo 1 > /sys/block/sdfd/device/delete
echo 1 > /sys/block/sdrl/device/delete
echo 1 > /sys/block/sdlh/device/delete
echo 1 > /sys/block/sdxp/device/delete
echo 1 > /sys/block/sdcb/device/delete
echo 1 > /sys/block/sdoj/device/delete
echo 1 > /sys/block/sdif/device/delete
echo 1 > /sys/block/sdun/device/delete
multipath -f netapp-lunvd09

echo 1 > /sys/block/sdfe/device/delete
echo 1 > /sys/block/sdrm/device/delete
echo 1 > /sys/block/sdli/device/delete
echo 1 > /sys/block/sdxq/device/delete
echo 1 > /sys/block/sdcc/device/delete
echo 1 > /sys/block/sdok/device/delete
echo 1 > /sys/block/sdig/device/delete
echo 1 > /sys/block/sduo/device/delete
multipath -f netapp-lunvd10

echo 1 > /sys/block/sdxt/device/delete
echo 1 > /sys/block/sdyb/device/delete
echo 1 > /sys/block/sdxx/device/delete
echo 1 > /sys/block/sdyf/device/delete
echo 1 > /sys/block/sdxr/device/delete
echo 1 > /sys/block/sdxz/device/delete
echo 1 > /sys/block/sdxv/device/delete
echo 1 > /sys/block/sdyd/device/delete
multipath -f netapp-lunvd11

echo 1 > /sys/block/sdxu/device/delete
echo 1 > /sys/block/sdyc/device/delete
echo 1 > /sys/block/sdxy/device/delete
echo 1 > /sys/block/sdyg/device/delete
echo 1 > /sys/block/sdxs/device/delete
echo 1 > /sys/block/sdya/device/delete
echo 1 > /sys/block/sdxw/device/delete
echo 1 > /sys/block/sdye/device/delete
multipath -f netapp-lunvd12
</code></pre>
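<p>The repetitive per-path deletions above can be generated rather than typed by hand. Below is a minimal sketch with a hypothetical helper, <code>gen_lun_delete</code>, which only prints the commands for one LUN given its path devices (the sdX names matched from <code>lsblk -s</code>), so the list can be reviewed before being executed as root:</p>

```shell
# Hypothetical helper: print (not run) the commands that remove one
# multipath LUN -- one "echo 1 > .../delete" per underlying sdX path,
# followed by "multipath -f" to flush the now path-less map.
gen_lun_delete() {
    local mpath="$1"; shift          # multipath map name, e.g. netapp-lunvd01
    local sd
    for sd in "$@"; do               # each sdX path device of this LUN
        printf 'echo 1 > /sys/block/%s/device/delete\n' "$sd"
    done
    printf 'multipath -f %s\n' "$mpath"
}

# Review the generated commands first, then pipe them to "sh" as root:
gen_lun_delete netapp-lunvd01 sdbx sddl sdiq sduf sdfg sdrd sdnp sdxh
```

<p>The path devices of a map can also be read directly from <code>/sys/block/dm-NN/slaves/</code> instead of matching <code>lsblk -s</code> by eye.</p>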
<h3>Appendix 4: Add <code>hotplug=no</code> to the NIC configuration to keep <code>start_udev</code> from causing Oracle RAC VIP failover</h3>
<blockquote><p>
  The <code>start_udev</code> command restarts the network interfaces,</p>
<p>  which in turn causes the VIP to fail over.
</p></blockquote>
<p><strong><span class="text-highlighted-inline" style="background-color: #fffd38;">Solution</span></strong>: add the <code>hotplug=no</code> parameter to the NIC configuration.<br />
<strong><span class="text-highlighted-inline" style="background-color: #fffd38;">Note</span></strong>: if NIC bonding is in use (say the bonded interface is <code>bond0</code>), the hotplug setting must be added to the bond0 configuration file; adding it to eth0 has no effect.</p>
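<p>A forgotten HOTPLUG entry silently re-introduces the problem, so a quick audit can help. The sketch below (the helper name <code>check_hotplug</code> is made up for illustration) only reads the config files and reports every ifcfg-bond* file in a directory that does not set <code>HOTPLUG=&quot;no&quot;</code>:</p>

```shell
# Sketch (hypothetical helper name): report any ifcfg-bond* file under
# the directory $1 that does not contain the line HOTPLUG="no".
check_hotplug() {
    local dir="$1" f
    for f in "$dir"/ifcfg-bond*; do
        [ -e "$f" ] || continue                          # no bond configs found
        grep -q '^HOTPLUG="no"' "$f" || echo "missing HOTPLUG=no: $f"
    done
}

# Typical use on RHEL 6: check_hotplug /etc/sysconfig/network-scripts
```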
<blockquote><p>
  References:</p>
<p>  <a href="https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=429944236080945&id=1569028.1&_afrWindowMode=0&_adf.ctrl-state=rr78wx0yn_4">Network interface going down when dynamically adding disks to storage using udev in RHEL 6 (Doc ID 1569028.1)</a></p>
<p>  <a href="http://www.traveldba.com/archives/1077">Reloading the udev configuration</a></p>
<p>  <a href="https://blog.csdn.net/lijingkuan/article/details/68957259">Analysis and resolution of the VIP failover caused by running start_udev when expanding RAC ASM disks</a>
</p></blockquote>
<p>Test node dbtan21</p>
<pre><code class="language-bash line-numbers">[root@dbtan21: /etc/sysconfig/network-scripts]# cat ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.18.21
NETMASK=255.255.255.0
GATEWAY=192.168.18.1
HOTPLUG="no"
[root@dbtan21: /etc/sysconfig/network-scripts]# cat ifcfg-bond1
DEVICE=bond1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.118.21
NETMASK=255.255.255.0
HOTPLUG="no"
[root@dbtan21: /etc/sysconfig/network-scripts]# cat ifcfg-bond2
DEVICE=bond2
BOOTPROTO=static
ONBOOT=yes
IPADDR=172.16.18.21
NETMASK=255.255.254.0
HOTPLUG="no"
[root@dbtan21: /etc/sysconfig/network-scripts]# 
</code></pre>
<p>NIC bonding configuration</p>
<pre><code class="language-bash line-numbers">[root@dbtan21: /etc/modprobe.d]# cat modprobe.conf 
alias bond0 bonding
options bond0 miimon=100 mode=4
alias bond1 bonding
options bond1 miimon=100 mode=1
alias bond2 bonding
options bond2 miimon=100 mode=4
[root@dbtan21: /etc/modprobe.d]# 
[root@dbtan21: /etc/modprobe.d]# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 41
        Number of ports: 2
        Actor Key: 11
        Partner Key: 32816
        Partner Mac Address: 00:23:04:ee:be:64

Slave Interface: eth6
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: e4:c2:d1:f4:5f:dc
Aggregator ID: 41
Slave queue ID: 0

Slave Interface: eth8
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: e4:c2:d1:f4:5f:de
Aggregator ID: 41
Slave queue ID: 0
[root@dbtan21: /etc/modprobe.d]# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 41
        Number of ports: 1
        Actor Key: 11
        Partner Key: 1
        Partner Mac Address: 00:00:00:00:00:00

Slave Interface: eth10
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 44:6a:2e:ee:18:f0
Aggregator ID: 41
Slave queue ID: 0

Slave Interface: eth11
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 44:6a:2e:ee:18:f1
Aggregator ID: 42
Slave queue ID: 0
[root@dbtan21: /etc/modprobe.d]# cat /proc/net/bonding/bond2
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 41
        Number of ports: 2
        Actor Key: 11
        Partner Key: 833
        Partner Mac Address: 70:79:90:a8:40:91

Slave Interface: eth7
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: e4:c2:d1:f4:5f:dd
Aggregator ID: 41
Slave queue ID: 0

Slave Interface: eth9
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: e4:c2:d1:f4:5f:df
Aggregator ID: 41
Slave queue ID: 0
[root@dbtan21: /etc/modprobe.d]# ethtool bond0
Settings for bond0:
        Supported ports: [ ]
        Supported link modes:   Not reported
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 20000Mb/s
        Duplex: Full
        Port: Other
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Link detected: yes
[root@dbtan21: /etc/modprobe.d]# ethtool bond1
Settings for bond1:
        Supported ports: [ ]
        Supported link modes:   Not reported
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 10000Mb/s
        Duplex: Full
        Port: Other
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Link detected: yes
[root@dbtan21: /etc/modprobe.d]# ethtool bond2
Settings for bond2:
        Supported ports: [ ]
        Supported link modes:   Not reported
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 20000Mb/s
        Duplex: Full
        Port: Other
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Link detected: yes
[root@dbtan21: /etc/modprobe.d]# 
</code></pre>
<p>Reload and Restart the udev rules</p>
<pre><code class="language-bash line-numbers">[root@dbtan21: ~]# udevadm control --reload-rules
[root@dbtan21: ~]# udevadm trigger
[root@dbtan21: ~]# udevadm trigger --subsystem-match=block
[root@dbtan21: ~]# udevadm trigger --subsystem-nomatch=net
[root@dbtan21: ~]# start_udev 
Starting udev:                                             [  OK  ]
[root@dbtan21: ~]# 
</code></pre>
<blockquote><p>
  udevadm trigger [options]<br />
      Request device events from the kernel, mainly to replay coldplug events.<br />
  (Translator's note: at boot the kernel has already detected the system's hardware and exported the device information through the sysfs virtual filesystem. udev scans sysfs and generates hotplug events from that hardware information, then reads those events and creates the corresponding device files. Because no physical plugging or unplugging takes place, this process is called coldplug.)</p>
<pre><code>--verbose     Print the list of devices that will be triggered.
--dry-run     Do not actually trigger the events.
--type=type   Trigger a specific class of devices. Valid types: devices, subsystem, failed. Default: devices.
--action=action
              The event action to trigger. Default: change.
--subsystem-match=subsystem
              Trigger events for devices of the matching subsystem. May be given multiple times and supports shell-style pattern matching.
--attr-match=attribute=value
              Trigger events for devices with a matching sysfs attribute. If a value is given together with the attribute, shell-style pattern matching of the value is possible. If no value is given, the existence of the attribute is re-checked. May be given multiple times.
--attr-nomatch=attribute=value
              Do not trigger events for devices with a matching attribute. Pattern matching is possible; may be given multiple times.
--property-match=property=value
              Trigger events for devices whose property value matches. May be given multiple times; supports pattern matching.
--tag-match=property
              Trigger events for devices with a matching tag. May be given multiple times.
--sysname-match=name
</code></pre>
</blockquote>
<p>-- The End --</p>
]]></content:encoded>
					
					<wfw:commentRss>https://dbtan.com/2019/06/netapp-a700-use-multipath.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Soundbrenner Pulse Metronome Guide for Android</title>
		<link>https://dbtan.com/2017/10/soundbrenner-pulse-for-android.html</link>
					<comments>https://dbtan.com/2017/10/soundbrenner-pulse-for-android.html#respond</comments>
		
		<dc:creator><![CDATA[dbtan]]></dc:creator>
		<pubDate>Thu, 26 Oct 2017 18:17:08 +0000</pubDate>
				<category><![CDATA[Metronome]]></category>
		<category><![CDATA[Music]]></category>
		<category><![CDATA[Soundbrenner Pulse]]></category>
		<guid isPermaLink="false">http://www.dbtan.com/?p=334</guid>

					<description><![CDATA[Soundbrenner Pulse Metronome Guide for Android Soundbrenner Pul [&#8230;]]]></description>
										<content:encoded><![CDATA[<h3>Soundbrenner Pulse Metronome Guide for Android</h3>
<p>The Soundbrenner Pulse is a watch-style smart wearable vibrating metronome.</p>
<p><img decoding="async" src="https://farm8.staticflickr.com/7878/46724224381_9339f548e7_o.jpg" alt="soundbrenner_vedio" /></p>
<p><strong>Quick start</strong></p>
<ol>
<li>On arrival, charge it for about three hours; the light turns green when fully charged. Press and hold the dial with two fingers for 2 seconds to power it on.</li>
<li>Tap the dial twice quickly with one finger to start the tempo; the LED pulses blue and white with the beat.</li>
<li>Tap three or more times in a row to set the tempo from your own taps; rotating the bezel also adjusts the tempo.</li>
<li>To power off, press and hold the dial for two seconds.</li>
<li>The tighter you wear it, the stronger the vibration feels.</li>
</ol>
<p>Pair it with the phone app over Bluetooth to unlock more modes -- once the groove starts, it's hard to stop.<br />
Package contents: main unit x1, charging dock x1, USB charging cable x1, straps x2 (1 long, 1 short), English manual x1, Chinese manual x1, sticker x1, guitar pick x1</p>
<p><strong>Using the app (iOS and Android)</strong></p>
<ol>
<li>Download the Soundbrenner Pulse app, Metronome</li>
<li>First-time use requires registering with an email address</li>
<li>After registering, start pairing. First turn on your phone's Bluetooth. Then, in the app's Settings, tap "+Set up Soundbrenner Pulse" and keep tapping Next until "Connect a Device". Following the animation, with the watch powered on, rotate the bezel and press the dial with two fingers; the LED turns blue to show pairing is in progress. Finally, tap "SB Pulse" below to complete the connection.</li>
<li>Update the connection and firmware (Firmware): keep tapping through the upgrade (Download and install), then begin (Start Update)</li>
<li>After the upgrade, follow the prompts to pair over Bluetooth; on Success, tap Done, then Next through the animated demo.</li>
<li>After the firmware upgrade, to power on you first rotate the LED bezel, then press the dial with one finger for two seconds to switch it on or off.</li>
</ol>
<p><strong>How to pair the Soundbrenner Pulse with the Android phone app:</strong></p>
<p>Both the official promo/tutorial videos and those recorded by Chinese retailers use an iPhone for the demonstration.</p>
<p>As an Android user, I also ran into some trouble the first time I tried to pair. In the end I asked the official @Soundbrenner account on Twitter, and after following their instructions I can now pair with my Android phone. The steps are summarized below for other Android users.</p>
<ol>
<li>
<p>The Android app must be downloaded from Google Play. The current latest version is 1.7.0</p>
<p><img decoding="async" src="https://farm8.staticflickr.com/7897/45808761825_38d128cc78_o.jpg" alt="soundbrenner_android" /></p>
<blockquote>
<p>Note: the app version in Chinese app stores (such as 应用宝) is 1.1.1, which cannot complete the registration/login in step 2 below.</p>
</blockquote>
</li>
</ol>
<ol start="2">
<li>
<p>To connect the Soundbrenner Pulse to the app, you must first register an account in the app. On an Android phone, registration <strong>requires</strong> a VPN to get past the Great Firewall.</p>
</li>
<li>
<p>Before the first connection between the Soundbrenner Pulse and the app, the following two permissions must be set.</p>
<p>3.1. Allow the app to turn on Bluetooth</p>
<p><img decoding="async" src="https://farm5.staticflickr.com/4818/45999576704_894cff6a9a_o.jpg" alt="soundbrenner_android-Privilege" /></p>
<p>3.2. Enable &quot;Location&quot; in the Android system settings</p>
<p><img decoding="async" src="https://farm5.staticflickr.com/4881/31782706017_7855064ca6_o.jpg" alt="soundbrenner_android-GPS" /></p>
</li>
<li>
<p>For the first connection, simply follow the in-app animation.</p>
<p><img decoding="async" src="https://farm5.staticflickr.com/4809/32849263228_af28e573ce_o.jpg" alt="soundbrenner_connect" /></p>
</li>
</ol>
<blockquote>
<p>Note: &quot;Location&quot; is only needed for the first pairing; for subsequent connections, Bluetooth alone is enough. This is likely because Android 6.0+ requires the Location permission for Bluetooth LE scanning.<br />
<strong>However, the app's Permissions list on Google Play does not mention the &quot;Location&quot; permission at all.</strong></p>
</blockquote>
<p><img decoding="async" src="https://farm8.staticflickr.com/7839/46724326561_d4b9a24ba4_o.png" alt="soundbrenner_Permissions-1" /><br />
<img decoding="async" src="https://farm5.staticflickr.com/4912/46671475192_b97ac07b2c_o.png" alt="soundbrenner_Permissions-2" /></p>
<ol start="5">
<li>
<p>If the Soundbrenner Pulse freezes, place it upside down on the charging dock (with the dock powered); a red light comes on and the device reboots.</p>
<p><img decoding="async" src="https://farm8.staticflickr.com/7805/46724296851_e724d7f6f2_o.jpg" alt="soundbrenner_reboot" /></p>
</li>
</ol>
<p><strong>App settings</strong></p>
<p>The app now supports Chinese, so all of the settings below have Chinese descriptions as well.</p>
<ol>
<li>Metronome: choose the LED color and vibration strength for each beat: 3 bars blue, 2 bars green, 1 bar white, 0 bars turns the LED off.</li>
<li>Library: create or use existing rhythm patterns and save them (SAVE), then tap Load in the Metronome to load your pattern and play it in sync with the watch.</li>
<li>Under Settings, App Settings exposes a number of options:<br />
Silent Metronome: mute the phone app<br />
Metronome Tone: toggle the app's click sound<br />
Light theme: switch the app to a light color scheme<br />
Screen Always On: keep the phone screen awake<br />
Fullscreen Flash: flash the app's screen on the beat<br />
Beat Counting: display the beat number<br />
Camera Led Flash: sync the phone's camera flash to the beat<br />
Soundbrenner wheel: sync the color of the metronome wheel in the app with the watch's LED color.</li>
</ol>
</ol>
<p><img decoding="async" src="https://farm8.staticflickr.com/7844/46724300751_26594bc687_o.jpg" alt="soundbrenner_setting" /></p>
<p>-- The End --</p>
]]></content:encoded>
					
					<wfw:commentRss>https://dbtan.com/2017/10/soundbrenner-pulse-for-android.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>entity manager close</title>
		<link>https://dbtan.com/2017/03/entity-manager-close.html</link>
					<comments>https://dbtan.com/2017/03/entity-manager-close.html#respond</comments>
		
		<dc:creator><![CDATA[dbtan]]></dc:creator>
		<pubDate>Fri, 24 Mar 2017 05:55:31 +0000</pubDate>
				<category><![CDATA[Trouble Shooting]]></category>
		<category><![CDATA[jpa]]></category>
		<category><![CDATA[ORM]]></category>
		<guid isPermaLink="false">http://www.dbtan.com/?p=330</guid>

					<description><![CDATA[Preface: the impact of an EntityManager left unclosed. Spring Data JPA version: 1.10.1, underlying h [&#8230;]]]></description>
					<content:encoded><![CDATA[<h3>Preface</h3>
<p>The impact of an EntityManager that is never closed.</p>
<p>Spring Data JPA version: 1.10.1<br />
Underlying Hibernate: 5.1.0</p>
<h3>Scenario</h3>
<p>In a few places we needed to write raw SQL, so we used an EntityManager directly, and never explicitly called close() afterwards (==|||).</p>
<p>After going live, large numbers of requests timed out, with only the occasional fast one.</p>
<h3>Diagnosis</h3>
<p>First we confirmed the database query itself wasn't slow: traffic was still low, and running the SQL in the database's query console showed it completing in close to 0 ms.</p>
<p>Next we suspected network latency. We grabbed the parameters the frontend had sent and asked the frontend team to check their logs for the send timestamps. Nobody responded... dead end.</p>
<p>Then we got the error log from ops. One look and it clearly wasn't the network: the log was full of "unable to acquire database connection" and connection-acquisition timeouts. My heart sank.</p>
<p>We were using Spring Data JPA, so the underlying connections should be released automatically, right? The DBA checked the active connections from each application to the database: sure enough, every application was at its maximum connection count.</p>
<p>Then I remembered: somewhere we were using an EntityManager directly. One look at that code, and it was never closed... sigh.</p>
<p>After adding close(), the problem was solved.</p>
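<p>The post doesn't show the offending code, so the class, query, and names below are illustrative; this is just a minimal sketch of the fix, assuming an application-managed EntityManager obtained from an injected EntityManagerFactory:</p>
<pre><code>import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import java.util.List;

public class OrderRepository {
    private final EntityManagerFactory emf;

    public OrderRepository(EntityManagerFactory emf) {
        this.emf = emf;
    }

    public List&lt;?&gt; findRecentOrders() {
        // createEntityManager() checks a connection out of the pool;
        // if close() is never called, the connection is never returned,
        // and under load the pool eventually runs dry -- exactly the
        // "cannot acquire connection" timeouts seen in the error log.
        EntityManager em = emf.createEntityManager();
        try {
            return em.createNativeQuery("SELECT * FROM orders ORDER BY id DESC")
                     .getResultList();
        } finally {
            em.close(); // always release, even if the query throws
        }
    }
}
</code></pre>
<p>In JPA 2.x, EntityManager does not implement AutoCloseable, so try/finally is the usual shape. A container-managed EntityManager injected via @PersistenceContext avoids the problem entirely, since the container owns its lifecycle.</p>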
<h3>Postscript</h3>
<p>Nothing technically deep here, just a reminder to myself: after getting used to connections being released automatically, I need to be more careful whenever I manage them by hand.</p>
<p>-- The End --</p>
<blockquote>
<p>原文链接: <a href="http://illegalaccess.com/2017/03/24/entity-manager/" title="http://illegalaccess.com/2017/03/24/entity-manager/">http://illegalaccess.com/2017/03/24/entity-manager/</a></p>
</blockquote>
]]></content:encoded>
					
					<wfw:commentRss>https://dbtan.com/2017/03/entity-manager-close.html/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
