This is an interesting bug which has affected a number of our databases – perhaps because we are not patched to the latest levels in every case.
The MoS note is this one: Patch 20373598 – GEN0 TIMEOUT ACTION ALWAYS TRY TO INITIALIZE OCR CONTENT (p20373598_121022_Linux-x86-64).
Alternatively, the fix is included in Bug 22291127 – the 12.1.0.2.160419 (Apr 2016) Database Patch Set Update (DB PSU).
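On an individual server you can check whether either fix is already recorded in the grid home inventory with opatch. A minimal sketch, assuming GRID_HOME points at the 12c grid infrastructure home (the variable name is illustrative, and composite patch numbering can vary, so treat "no match" as a prompt to review the full lspatches output rather than proof the fix is missing):

# List the patches applied to the grid home and look for either fix
# (patch numbers are the ones quoted above: the one-off and the Apr 2016 DB PSU)
$GRID_HOME/OPatch/opatch lspatches | grep -E '22291127|20373598'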
Below is a view of top showing how a server looks when it is affected, followed further down by a SQL query that lists servers running a 12c grid home below the Apr 2016 PSU patch level.
If the server has plenty of cores assigned, the overall effect will look small, as seen here: total CPU shows as only 2.8% used, even though the asm_gen0_+ASM process is pinning a whole core at 99.6%.
Tasks: 618 total,   3 running, 614 sleeping,   0 stopped,   1 zombie
Cpu(s):  2.8%us,  0.2%sy,  0.0%ni, 96.8%id,  0.2%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  198475784k total,  80283568k used, 118192216k free,  4491100k buffers
Swap:  41680876k total,         0k used,  41680876k free, 56391976k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+  COMMAND
44361 oracle    20   0 4225m 2.8g  18m R 99.6  1.5 753924:28  asm_gen0_+ASM
24347 root      20   0  448m 5536 1348 S  4.9  0.0   7480:21  /usr/sbin/collectd -C /etc/collectd.conf -f
68317 oracle    20   0 50.3g 182m 166m S  3.3  0.1   3052:05  ora_pr00_OEMPRD2A
66023 oracle    -2   0 50.3g  19m  17m S  2.3  0.0   1113:44  ora_vktm_OEMPRD2A
34357 oracle    -2   0 1405m  17m  15m S  2.0  0.0  17424:26  asm_vktm_+ASM
67399 oracle    20   0 50.4g  99m  38m S  2.0  0.1   1885:56  oracleOEMPRD2A (LOCAL=NO)
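A quick way to spot the symptom on a single node without opening top is to look for a gen0 background process with an abnormally high CPU figure. A minimal sketch, assuming a Linux host and an ASM instance named +ASM as above (a healthy gen0 normally shows near-zero %CPU, while an affected one sits close to 100% of one core with a huge accumulated TIME):

# The [a] in the pattern stops grep from matching its own command line
ps -eo pid,pcpu,time,args | grep '[a]sm_gen0'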
This SQL is a useful way to identify which servers require patching, assuming you are using OEM (it runs against the OEM repository).
select host_name, home_name, home_location, patch_id, install_timestamp, description
  from sysman."MGMT$SOFTWARE_ONEOFF_PATCHES"
 where host_name not in
       ( select host_name
           from sysman."MGMT$SOFTWARE_ONEOFF_PATCHES"
          where patch_id in ('22291127', '20373598') )
   and home_location like '%12%grid'
   and description like '%Database Patch Set Update%'
 order by host_name, install_timestamp;
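Each row returned is a PSU recorded against a 12c grid home on a host where neither fix appears in the one-off patch inventory, so the output effectively gives you the list of servers still to be patched together with their current PSU level and install date.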