There's no correlation to be made there, let alone causation.
“The invalid assumption that correlation implies cause is probably among the two or three most serious and common errors of human reasoning.” - Stephen Jay Gould

And if you haven't read it, Tom Kyte has a great Oracle-related article on the matter. His old AskTom link is dead, but here is one from Oracle China - I think the transmission is by carrier pigeon, or a 9600 baud modem, but give it a few minutes. Similar to opening a Sensis website, really - except it's all text.
Not a DBA!!!
Nope, the first thing I did was inform my boss that my laptop had blue screened - the first time on this sucker - and not long after, I noticed Glassfish was gone and the Oracle DB simmering on the ashes.
After a quick layman's look following a hunch, I found my SYSTEM tablespace was lacking elbow room.
I thought I'd lost the original figures in another crash (opening the lid from hibernation), but I found them in my e-mail to Penny.
SQL> @free sys%
NAME                    KBytes         Used         Free   Used      Largest
----------------- ------------ ------------ ------------ ------ ------------
SYSAUX                 768,000      630,848      137,152   82.1       40,960
SYSTEM               1,249,280    1,244,032        5,248   99.6        5,120
With my uneducated eye, I deduced that SYSTEM Used 99.6% was called a "vital clue".
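For the record, @free is one of those squirrelled-away scripts rather than anything Oracle ships. If you don't have one like it, something roughly along these lines against dba_data_files and dba_free_space paints the same picture - a sketch of the idea, not the actual script:

select d.tablespace_name                                       name,
       round(sum(d.bytes)/1024)                                kbytes,
       round((sum(d.bytes) - f.free_bytes)/1024)               used,
       round(f.free_bytes/1024)                                free,
       round(100*(sum(d.bytes)-f.free_bytes)/sum(d.bytes),1)   pct_used,
       round(f.largest/1024)                                   largest
from   dba_data_files d
join   (select tablespace_name, sum(bytes) free_bytes, max(bytes) largest
        from   dba_free_space
        group  by tablespace_name) f
on     f.tablespace_name = d.tablespace_name
where  d.tablespace_name like upper('&1')  -- same sys% wildcard as above
group  by d.tablespace_name, f.free_bytes, f.largest
order  by d.tablespace_name;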
So another script I had hidden away - another little tip Penny gave me one day - was how to make a bigger canvas, so to speak:
alter tablespace system add datafile
  'C:\app\Scott\oracle\11.2.0\oradata\sw11g\SYSTEM02.dbf' SIZE 1000M;
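While poking around in there, it's also worth checking whether your datafiles autoextend - a plain dictionary query, nothing Penny-specific:

select file_name, bytes/1024/1024 size_m, autoextensible
from   dba_data_files
where  tablespace_name in ('SYSTEM', 'SYSAUX');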
And once Penny got back to me with another suggestion:

select owner, segment_name, segment_type, bytes/(1024*1024) size_m
from   dba_segments
where  tablespace_name = 'SYSTEM'
and    bytes/(1024*1024) > 1
order  by size_m desc
/

OWNER SEGMENT_NAME    SEGMENT SIZE_M
----- --------------- ------- ------
SYS   AUD$            TABLE      360
SYS   IDL_UB1$        TABLE      288
SYS   SOURCE$         TABLE      120
SYS   IDL_UB2$        TABLE       40
SYS   C_TOID_VERSION# CLUSTER     23
SYS   C_OBJ#_INTCOL#  CLUSTER     22
SYS   ARGUMENT$       TABLE       16
SYS   I_SOURCE1       INDEX       15
SYS   C_OBJ#          CLUSTER     14

She palmed me off to the relevant documentation where I could do some trimming - 11.2.0.1, for those watching at home.
Here's what I came up with:
BEGIN
  DBMS_AUDIT_MGMT.SET_DEBUG_LEVEL(DBMS_AUDIT_MGMT.TRACE_LEVEL_ERROR); -- didn't work?
  DBMS_AUDIT_MGMT.init_cleanup(
    audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
    default_cleanup_interval => 24*7 /* hours */);
END;
/

BEGIN
  DBMS_AUDIT_MGMT.create_purge_job(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
    audit_trail_purge_interval => 24*30 /* hours */,
    audit_trail_purge_name     => 'PURGE_ALL_AUDIT_TRAILS',
    use_last_arch_timestamp    => FALSE);
END;
/
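For the impatient, the same documentation says CLEAN_AUDIT_TRAIL purges on demand rather than waiting for the job, and the dictionary can confirm the job was created - this is me reading the manual, not something Penny vouched for:

BEGIN
  DBMS_AUDIT_MGMT.clean_audit_trail(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
    use_last_arch_timestamp => FALSE); -- FALSE purges everything, archived or not
END;
/

select job_name, job_status, audit_trail
from   dba_audit_mgmt_cleanup_jobs;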
I was originally having problems seeing the trace, as I didn't have enough temporary space (or something similar - I lost the actual message) in SYSAUX, so I gave that tablespace another 300M datafile.
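That was just a repeat of the earlier trick, something like this - the filename is my guess, mirroring SYSTEM02 above:

alter tablespace sysaux add datafile
  'C:\app\Scott\oracle\11.2.0\oradata\sw11g\SYSAUX02.dbf' SIZE 300M;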
Now my free space looks like I have enough room to swing a dinosaur, and I haven't had any velociraptors opening doors since.
SQL> @free sys%
NAME                    KBytes         Used         Free   Used      Largest
----------------- ------------ ------------ ------------ ------ ------------
SYSAUX               1,075,200    1,015,680       59,520   94.5       43,008
SYSTEM               2,273,280      875,840    1,397,440   38.5    1,022,976
                  ------------ ------------ ------------
sum                  3,348,480    1,891,520    1,456,960
I saw Jeff Smith's article about his ORA-3113 issue via Twitter. After locating my alert log, I couldn't track down anything wrong from around the time I first saw issues, but I'm not used to reading these logs.
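If you're in the same boat, the adrci utility that ships with 11g is one way to find and tail the thing - standard adrci commands, though your diag home will differ; it's how you'd pull up an excerpt like the one below:

adrci> show homes
adrci> set homepath diag\rdbms\sw11g\sw11g
adrci> show alert -tail 50
adrci> show incident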
Begin automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
ERROR: Unable to normalize symbol name for the following short stack (at offset 199):
dbgexProcessError()+193<-dbgeExecuteForError()+65<-dbgePostErrorKGE()+1726<-dbkePostKGE_kgsf()+75<-kgeade()+560<-kgerev()+125<-kgerec5()+60<-sss_xcpt_EvalFilterEx()+1869<-sss_xcpt_EvalFilter()+174<-.1.4_5+59<-00000000775A85A8<-00000000775B9D0D<-00000000775A91AF<-00000000775E1278<-kgllkal()+151<-kglLockCursor()+188<-kxsGetLookupLock()+146<-kkscsCheckCursor()+326<-kkscsSearchChildList()+1067<-kksfbc()+12294<-kkspsc0()+2117<-kksParseCursor()+181<-opiosq0()+2538<-kpooprx()+357<-kpoal8()+940<-opiodr()+1662<-PGOSF523_kpoodrc()+32<-rpiswu2()+2757<-kpoodr()+717<-xupirtrc()+2739<-upirtrc()+124<-kpurcsc()+150<-kpuexec()+9766<-OCIStmtExecute()+70<-kewrose_oci_stmt_exec()+79<-kewrgwxf1_gwrsql_exft_1()+407<-kewrgwxf_gwrsql_exft()+620<-kewrews_execute_wr_sql()+72<-kewrftbs_flush_table_by_sql()+210<-kewrft_flush_table()+150<-kewrftec_flush_table_ehdlcx()+454<-kewrfat_flush_all_tables()+1021<-kewrfos_flush_onesnap()+167<-kewrfsc_flush_snapshot_c()+613<-kewrafs_auto_flush_slave()+548<-kebm_slave_main()+856<-ksvrdp()+2506<-opirip()+965<-opidrv()+909<-sou2o()+98
Sun Feb 26 19:16:53 2012
Errors in file c:\app\scott\diag\rdbms\sw11g\sw11g\trace\sw11g_smon_7520.trc (incident=73309):
ORA-00600: internal error code, arguments: [25027], [2], [2965385640], [], [], [], [], [], [], [], [], []
Errors in file c:\app\scott\diag\rdbms\sw11g\sw11g\trace\sw11g_m002_7944.trc (incident=73461):
ORA-07445: exception encountered: core dump [kgllkal()+151] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x92A2D07] [UNABLE_TO_READ] []
Incident details in: c:\app\scott\diag\rdbms\sw11g\sw11g\incident\incdir_73309\sw11g_smon_7520_i73309.trc
Incident details in: c:\app\scott\diag\rdbms\sw11g\sw11g\incident\incdir_73461\sw11g_m002_7944_i73461.trc
Non-fatal internal error happenned while SMON was doing cursor transient type cleanup.
SMON encountered 1 out of maximum 100 non-fatal internal errors.
Sun Feb 26 19:17:01 2012
Trace dumping is performing id=[cdmp_20120226191701]

Maybe it might help someone one day, or someone can add further detail.