Handling "Private strand flush not complete" messages in the alert log


Fri Oct 17 19:59:51 2014

Thread 1 cannot allocate new log, sequence 4722

Private strand flush not complete

Current log# 1 seq# 4721 mem# 0: /oradata/sgomp5/redo01.log

Thread 1 advanced to log sequence 4722 (LGWR switch)

Current log# 2 seq# 4722 mem# 0: /oradata/sgomp5/redo02.log

An article addressing this issue was found in the MOS community:

Historically, every user session wrote its changes to the redo log buffer, and changes from the redo log buffer were flushed to the redo logs on disk by LGWR. As the number of users increased, so did the contention for the redo allocation and redo copy latches on the public redo buffer.

So, starting from 10g, Oracle came up with the concept of private redo (x$kcrfstrand) and in-memory undo (x$ktifp). Every session has a private redo area that it writes to; a (small) batch of changes is then written to the public redo buffer, and finally flushed from the public redo log buffer to the redo log files on disk. This mechanism reduces the gets/sleeps on the redo copy and redo allocation latches on the public redo buffer and hence makes the architecture more scalable.

It is also worth noting that Oracle falls back to the old redo mechanism when a transaction is too big (with lots of changes) and the changes made by that transaction cannot fit into the private redo buffers.
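To get a feel for how much of this machinery is active on a live instance, here is a minimal sketch (the x$ fixed tables mentioned above are visible only from a SYSDBA connection, and their layout varies by version):

SQL> conn / as sysdba
SQL> select count(*) from x$kcrfstrand; -- total redo strands (public + private)
SQL> select count(*) from x$ktifp; -- in-memory undo pools
SQL> select name, gets, misses, sleeps from v$latch where name in ('redo allocation', 'redo copy'); -- contention on the public redo latches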

When the database switches logs, all private strands must be flushed into the current log before the switch can continue. This message means that at the time of the attempted switch, not all of the redo had yet been written to the log. It is somewhat similar to "checkpoint not complete", except that it involves only the redo being written to the log: the log cannot be switched until all of that redo has been written.
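To see where a stuck switch stands, a quick sketch against the standard v$log view:

SQL> -- the group being switched into must be INACTIVE before it can be reused
SQL> select group#, sequence#, status from v$log order by sequence#;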

Private strands first appeared in 10gR2 and address contention on the redo latches (the redo allocation latch). They are a mechanism that lets processes write redo into the redo buffer more efficiently by using multiple allocation latches, and they are related to the log_parallelism parameter introduced in 9i. The strand concept exists to keep the instance's redo generation rate optimal and to allow the number of strands to be adjusted dynamically to compensate when redo contention appears. The number of strands initially allocated depends on the number of CPUs, with a minimum of two strands, one of which is used for active redo generation.
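If you are curious how many strands the instance has allowed for, the related hidden parameters can be inspected; a hedged sketch (underscore parameters are undocumented and their names vary by release, so the LIKE pattern below is an assumption):

SQL> -- run as SYSDBA
SQL> select a.ksppinm name, b.ksppstvl value from x$ksppi a, x$ksppcv b where a.indx = b.indx and a.ksppinm like '%log%parallelism%';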

On large OLTP systems, redo generation is very high, so these additional strands are activated when foreground processes run into redo contention. Shared strands always coexist with private strands. Oracle 10g made significant changes to the redo (and undo) mechanism in order to reduce contention: instead of recording redo in real time, changes are first recorded in a private area and flushed to the redo log buffer at commit. With this new mechanism, once a user process has obtained a private strand, redo is no longer staged in the PGA, so the redo copy latch step is no longer needed.
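The in-memory undo half of this mechanism shows up in the IMU statistics; a sketch, assuming a 10g or later instance:

SQL> -- non-zero IMU commits indicate the private-redo / in-memory-undo path is in use
SQL> select name, value from v$sysstat where name like 'IMU%';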

If a new transaction cannot obtain the redo allocation latch for a private strand, it falls back to the old redo buffer mechanism and requests to write into a shared strand. Under the new mechanism, when redo is written out to the logfile, LGWR must write out the contents of both the shared strands and the private strands. When a redo flush occurs, the redo allocation latches of all public strands must be acquired, the redo copy latches of all public strands must be checked, and all private strands containing active transactions must be held.

In fact, this message can simply be ignored, unless there is a significant time gap between the "cannot allocate new log" message and the "advanced to log sequence" message.
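When judging that gap, it also helps to know how often the logs are switching; a sketch using v$log_history:

SQL> -- log switches per hour over the last day; frequent switches may call for larger redo logs
SQL> select to_char(first_time, 'YYYY-MM-DD HH24') hour, count(*) switches from v$log_history where first_time > sysdate - 1 group by to_char(first_time, 'YYYY-MM-DD HH24') order by 1;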

If you want to keep "Private strand flush not complete" events out of alert.log, you can increase the db_writer_processes parameter: DBWn triggers LGWR to write redo to the logfile, and with several DBWn processes writing together, flushing the redo buffer to the redo logfile is faster.

It can be changed with the following command:

SQL> alter system set db_writer_processes=4 scope=spfile; -- static parameter; it takes effect only after the database is restarted

Note that the number of DBWR processes should roughly match the number of logical CPUs. In addition, when Oracle detects that one DB_WRITER_PROCESS cannot keep up with its work, it will increase the count automatically, provided the maximum allowed value has already been set in the initialization parameters.
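Because the parameter is static, the change only takes effect after a bounce; the remaining steps look like this (shutdown immediate implies an outage window):

SQL> shutdown immediate
SQL> startup
SQL> show parameter db_writer_processes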

Some notes on the DB_WRITER_PROCESSES and DBWR_IO_SLAVES parameters:

DB_WRITER_PROCESSES replaces the Oracle7 parameter DB_WRITERS and specifies the initial number of database writer processes for an instance. If you use DBWR_IO_SLAVES, only one database writer process will be used, regardless of the setting for DB_WRITER_PROCESSES.

That is, DB_WRITER_PROCESSES replaces the Oracle7 DB_WRITERS parameter and specifies the number of DBWR processes for the instance. When DBWR_IO_SLAVES is also configured (its default is 0), only one DBWn process is used and the rest of the setting is ignored.
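The effective combination on an instance can be confirmed with a quick check:

SQL> -- if dbwr_io_slaves is non-zero, only one DBWn runs regardless of db_writer_processes
SQL> select name, value from v$parameter where name in ('db_writer_processes', 'dbwr_io_slaves');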

DBWR_IO_SLAVES

If it is not practical to use multiple DBWR processes, then Oracle provides a facility whereby the I/O load can be distributed over multiple slave processes. The DBWR process is the only process that scans the buffer cache LRU list for blocks to be written out. However, the I/O for those blocks is performed by the I/O slaves. The number of I/O slaves is determined by the parameter DBWR_IO_SLAVES.

With a single DBWR process, Oracle can instead distribute the I/O load over multiple I/O slave processes to simulate asynchronous I/O, doing the work that would otherwise fall entirely to DBWR (writing the blocks on the buffer cache LRU list to the data files). The number of slaves is set by the DBWR_IO_SLAVES parameter.
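Configuring the slaves mirrors the DBWR case; a sketch (this is also a static parameter, so a restart is required):

SQL> alter system set dbwr_io_slaves=4 scope=spfile;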

DBWR_IO_SLAVES is intended for scenarios where you cannot use multiple DB_WRITER_PROCESSES (for example, where you have a single CPU). I/O slaves are also useful when asynchronous I/O is not available, because the multiple I/O slaves simulate nonblocking, asynchronous requests by freeing DBWR to continue identifying blocks in the cache to be written. Asynchronous I/O at the operating system level, if you have it, is generally preferred.

DBWR_IO_SLAVES is typically used in single-CPU scenarios, where configuring multiple DBWR processes brings no benefit. I/O slaves are also useful when asynchronous I/O is not available, because they simulate non-blocking, asynchronous requests and free DBWR to keep identifying blocks to be written. If the operating system offers asynchronous I/O, using that is generally preferred.

DBWR I/O slaves are allocated immediately following database open when the first I/O request is made. The DBWR continues to perform all of the DBWR-related work, apart from performing I/O. I/O slaves simply perform the I/O on behalf of DBWR. The writing of the batch is parallelized between the I/O slaves.

The DBWR I/O slaves are allocated when the first I/O request is made after the database opens. The DBWR process continues to do all of its DBWR-related work apart from the I/O itself, which the I/O slaves perform on its behalf; the writes in each batch are parallelized across the slaves.

Choosing Between Multiple DBWR Processes and I/O Slaves

Configuring multiple DBWR processes benefits performance when a single DBWR process is unable to keep up with the required workload. However, before configuring multiple DBWR processes, check whether asynchronous I/O is available and configured on the system. If the system supports asynchronous I/O but it is not currently used, then enable asynchronous I/O to see if this alleviates the problem. If the system does not support asynchronous I/O, or if asynchronous I/O is already configured and there is still a DBWR bottleneck, then configure multiple DBWR processes.

On how to choose between multiple DBWR processes and I/O slaves:

When a single DBWR process cannot keep up with the required write workload, configuring multiple DBWR processes helps. Before doing so, however, check whether asynchronous I/O is available on the OS; if it is supported but not enabled, enable it first. If the system does not support asynchronous I/O, or asynchronous I/O is already configured and there is still a DBWR bottleneck, then configure multiple DBWR processes.
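On Linux, enabling asynchronous I/O for database files usually means checking filesystemio_options; a hedged sketch (SETALL turns on both asynchronous and direct I/O, and whether it is appropriate depends on your storage):

SQL> show parameter filesystemio_options
SQL> alter system set filesystemio_options=setall scope=spfile; -- static, restart required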

Using multiple DBWRs parallelizes the gathering and writing of buffers. Therefore, multiple DBWn processes should deliver more throughput than one DBWR process with the same number of I/O slaves. For this reason, the use of I/O slaves has been deprecated in favor of multiple DBWR processes. I/O slaves should only be used if multiple DBWR processes cannot be configured.

Enabling multiple DBWR processes parallelizes both the gathering and the writing of dirty buffers, so multiple DBWn processes deliver more throughput than one DBWR process with the same number of I/O slaves. For that reason, once multiple DBWR processes are enabled, DBWR_IO_SLAVES should no longer be configured; set it to 0 if it was non-zero.
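In other words, the two parameters should not be combined; a sketch of the recommended direction:

SQL> alter system set dbwr_io_slaves=0 scope=spfile; -- turn the slaves off
SQL> alter system set db_writer_processes=4 scope=spfile; -- rely on multiple writers instead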

Summary:

DBWR_IO_SLAVES is mainly used to simulate an asynchronous environment; on an OS that does not support asynchronous I/O, it can improve I/O throughput.

Multiple DBWR processes can gather dirty blocks from the data buffer and write them to disk in parallel. With a single DBWR plus multiple I/O slaves, only the one DBWR gathers blocks from the data buffer, while the I/O slaves write in parallel. If the system supports AIO (disk_asynch_io=true), there is usually no need to configure multiple DBWRs or I/O slaves.
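To verify which mode an instance is actually running in, a sketch (v$process.pname is available in 11g and later, and the 'DBW%' pattern is an assumption about the writer process names):

SQL> show parameter disk_asynch_io
SQL> select pname from v$process where pname like 'DBW%'; -- the database writer processes actually started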

With multiple CPUs, DB_WRITER_PROCESSES is recommended, since there is no need to simulate asynchronous mode; just keep the process count no larger than the CPU count. With a single CPU, DBWR_IO_SLAVES is recommended to simulate asynchronous mode and improve database performance.

---- Official recommendation:

The default value is 1 or CPU_COUNT / 8, whichever is greater.
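Applied to a concrete machine, the rule can be computed directly from cpu_count; the GREATEST/CEIL expression below is my own rendering of the documented formula:

SQL> -- e.g. cpu_count = 32 gives greatest(1, ceil(32/8)) = 4
SQL> select greatest(1, ceil(to_number(value)/8)) default_writers from v$parameter where name = 'cpu_count';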

[oracle@orcl ~]$ vmstat 5 ---- sample current CPU statistics every 5 seconds

SQL> show parameter cpu_count; ---- check the current cpu_count

SQL> show parameter db_writer_processes; ---- check the current number of database writer processes
