[arch-general] kernel 3.14.2 hangs - VirtualBox suspected
Hi,

I recently updated the kernel to 3.14.2, and now the I/O subsystem hangs within a few minutes of starting VirtualBox 4.3.10 machines (running Windows 7). messages.log excerpt below. It looks like every process that tries to access /dev/md3 (even kworker and md3_resync) hangs forever. Has anyone had similar problems?

May 13 12:35:29 serenity kernel: kworker/u16:9 D 0000000000000000 0 112 2 0x00000000
May 13 12:35:29 serenity kernel: Workqueue: writeback bdi_writeback_workfn (flush-9:3)
May 13 12:35:29 serenity kernel: ffff8804175177c0 0000000000000046 ffff88042fc575a0 ffff880417518000
May 13 12:35:29 serenity kernel: 00000000000146c0 ffff880417517fd8 00000000000146c0 ffff880417518000
May 13 12:35:29 serenity kernel: 0000000034ca04fb ffff880417517710 ffff880412a11e10 ffff8804175177b8
May 13 12:35:29 serenity kernel: Call Trace:
May 13 12:35:29 serenity kernel: [<ffffffff81292817>] ? check_blkcg_changed+0x57/0x210
May 13 12:35:29 serenity kernel: [<ffffffff81145d35>] ? mempool_alloc_slab+0x15/0x20
May 13 12:35:29 serenity kernel: [<ffffffff8128ce7f>] ? blk_throtl_bio+0x35f/0x940
May 13 12:35:29 serenity kernel: [<ffffffff81145d35>] ? mempool_alloc_slab+0x15/0x20
May 13 12:35:29 serenity kernel: [<ffffffff81145df1>] ? mempool_alloc+0x61/0x170
May 13 12:35:29 serenity kernel: [<ffffffffa04fe523>] ? __ext4_handle_dirty_metadata+0x83/0x1a0 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffff8150b609>] schedule+0x29/0x70
May 13 12:35:29 serenity kernel: [<ffffffffa048a666>] wait_barrier+0xc6/0x190 [raid10]
May 13 12:35:29 serenity kernel: [<ffffffff810b4020>] ? __wake_up_sync+0x20/0x20
May 13 12:35:29 serenity kernel: [<ffffffffa048eca4>] make_request+0x44/0x130 [raid10]
May 13 12:35:29 serenity kernel: [<ffffffffa0458a83>] md_make_request+0x103/0x260 [md_mod]
May 13 12:35:29 serenity kernel: [<ffffffff81145d35>] ? mempool_alloc_slab+0x15/0x20
May 13 12:35:29 serenity kernel: [<ffffffff81145df1>] ? mempool_alloc+0x61/0x170
May 13 12:35:29 serenity kernel: [<ffffffff8126f728>] generic_make_request+0xf8/0x150
May 13 12:35:29 serenity kernel: [<ffffffff8126f7f8>] submit_bio+0x78/0x190
May 13 12:35:29 serenity kernel: [<ffffffff811ef800>] _submit_bh+0x140/0x230
May 13 12:35:29 serenity kernel: [<ffffffff811f1379>] __block_write_full_page+0x129/0x370
May 13 12:35:29 serenity kernel: [<ffffffff811f4c70>] ? I_BDEV+0x10/0x10
May 13 12:35:29 serenity kernel: [<ffffffff811f17d2>] block_write_full_page_endio+0xb2/0x150
May 13 12:35:29 serenity kernel: [<ffffffff811f1885>] block_write_full_page+0x15/0x20
May 13 12:35:29 serenity kernel: [<ffffffff811f53f8>] blkdev_writepage+0x18/0x20
May 13 12:35:29 serenity kernel: [<ffffffff8114e0f3>] __writepage+0x13/0x40
May 13 12:35:29 serenity kernel: [<ffffffff8114e680>] write_cache_pages+0x1e0/0x4d0
May 13 12:35:29 serenity kernel: [<ffffffff8114e0e0>] ? mapping_tagged+0x20/0x20
May 13 12:35:29 serenity kernel: [<ffffffff8114e9bd>] generic_writepages+0x4d/0x80
May 13 12:35:29 serenity kernel: [<ffffffff811504ee>] do_writepages+0x1e/0x30
May 13 12:35:29 serenity kernel: [<ffffffff811e5dc0>] __writeback_single_inode+0x40/0x2b0
May 13 12:35:29 serenity kernel: [<ffffffff811e72ca>] writeback_sb_inodes+0x26a/0x430
May 13 12:35:29 serenity kernel: [<ffffffff811e752f>] __writeback_inodes_wb+0x9f/0xd0
May 13 12:35:29 serenity kernel: [<ffffffff811e778b>] wb_writeback+0x22b/0x360
May 13 12:35:29 serenity kernel: [<ffffffff811d4361>] ? get_nr_inodes+0x51/0x70
May 13 12:35:29 serenity kernel: [<ffffffff811e7d5f>] bdi_writeback_workfn+0x33f/0x4c0
May 13 12:35:29 serenity kernel: [<ffffffff81088068>] process_one_work+0x168/0x450
May 13 12:35:29 serenity kernel: [<ffffffff81088ac2>] worker_thread+0x132/0x3e0
May 13 12:35:29 serenity kernel: [<ffffffff81088990>] ? manage_workers.isra.23+0x2d0/0x2d0
May 13 12:35:29 serenity kernel: [<ffffffff8108f2ea>] kthread+0xea/0x100
May 13 12:35:29 serenity kernel: [<ffffffff8108f200>] ? kthread_create_on_node+0x1a0/0x1a0
May 13 12:35:29 serenity kernel: [<ffffffff8151757c>] ret_from_fork+0x7c/0xb0
May 13 12:35:29 serenity kernel: [<ffffffff8108f200>] ? kthread_create_on_node+0x1a0/0x1a0
May 13 12:35:29 serenity kernel: jbd2/md3-8 D 0000000000000000 0 496 2 0x00000000
May 13 12:35:29 serenity kernel: ffff8804136e5ca0 0000000000000046 0000000000000001 ffff88041963ce80
May 13 12:35:29 serenity kernel: 00000000000146c0 ffff8804136e5fd8 00000000000146c0 ffff88041963ce80
May 13 12:35:29 serenity kernel: ffff8804136e5bf0 ffffffff8119d6e6 0000000000000046 0000000000000000
May 13 12:35:29 serenity kernel: Call Trace:
May 13 12:35:29 serenity kernel: [<ffffffff8119d6e6>] ? kmem_cache_free+0x216/0x240
May 13 12:35:29 serenity kernel: [<ffffffff810ba100>] ? cpuacct_charge+0x50/0x60
May 13 12:35:29 serenity kernel: [<ffffffff810a991c>] ? update_curr+0xec/0x1b0
May 13 12:35:29 serenity kernel: [<ffffffff810a9e8f>] ? dequeue_entity+0x13f/0x580
May 13 12:35:29 serenity kernel: [<ffffffff8150b609>] schedule+0x29/0x70
May 13 12:35:29 serenity kernel: [<ffffffffa049a7e5>] jbd2_journal_commit_transaction+0x215/0x19c0 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffff8101567f>] ? __switch_to+0x1af/0x540
May 13 12:35:29 serenity kernel: [<ffffffff810b4020>] ? __wake_up_sync+0x20/0x20
May 13 12:35:29 serenity kernel: [<ffffffff8107928e>] ? try_to_del_timer_sync+0x5e/0x90
May 13 12:35:29 serenity kernel: [<ffffffffa04a1c4c>] kjournald2+0xec/0x2a0 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffff810b4020>] ? __wake_up_sync+0x20/0x20
May 13 12:35:29 serenity kernel: [<ffffffffa04a1b60>] ? commit_timeout+0x10/0x10 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffff8108f2ea>] kthread+0xea/0x100
May 13 12:35:29 serenity kernel: [<ffffffff8108f200>] ? kthread_create_on_node+0x1a0/0x1a0
May 13 12:35:29 serenity kernel: [<ffffffff8151757c>] ret_from_fork+0x7c/0xb0
May 13 12:35:29 serenity kernel: [<ffffffff8108f200>] ? kthread_create_on_node+0x1a0/0x1a0
May 13 12:35:29 serenity kernel: md3_resync D 0000000000000000 0 1297 2 0x00000000
May 13 12:35:29 serenity kernel: ffff8800c3fb1b48 0000000000000046 ffff88042fff9d80 ffff88040634ebf0
May 13 12:35:29 serenity kernel: 00000000000146c0 ffff8800c3fb1fd8 00000000000146c0 ffff88040634ebf0
May 13 12:35:29 serenity kernel: ffffffff81b0acc0 0000000000011200 0000000000000000 ffff88040634ebf0
May 13 12:35:29 serenity kernel: Call Trace:
May 13 12:35:29 serenity kernel: [<ffffffff81190861>] ? alloc_pages_current+0xb1/0x160
May 13 12:35:29 serenity kernel: [<ffffffffa048af79>] ? r10buf_pool_alloc+0x1b9/0x2b0 [raid10]
May 13 12:35:29 serenity kernel: [<ffffffff8150b609>] schedule+0x29/0x70
May 13 12:35:29 serenity kernel: [<ffffffffa048a505>] raise_barrier+0x135/0x1a0 [raid10]
May 13 12:35:29 serenity kernel: [<ffffffff810b4020>] ? __wake_up_sync+0x20/0x20
May 13 12:35:29 serenity kernel: [<ffffffffa048b2cf>] sync_request+0x25f/0x19b0 [raid10]
May 13 12:35:29 serenity kernel: [<ffffffffa045cdb6>] ? is_mddev_idle+0x136/0x170 [md_mod]
May 13 12:35:29 serenity kernel: [<ffffffffa0460593>] md_do_sync+0x8c3/0xe40 [md_mod]
May 13 12:35:29 serenity kernel: [<ffffffff810b4020>] ? __wake_up_sync+0x20/0x20
May 13 12:35:29 serenity kernel: [<ffffffffa045ca90>] ? md_register_thread+0xe0/0xe0 [md_mod]
May 13 12:35:29 serenity kernel: [<ffffffffa045cbe5>] md_thread+0x155/0x160 [md_mod]
May 13 12:35:29 serenity kernel: [<ffffffffa045ca90>] ? md_register_thread+0xe0/0xe0 [md_mod]
May 13 12:35:29 serenity kernel: [<ffffffff8108f2ea>] kthread+0xea/0x100
May 13 12:35:29 serenity kernel: [<ffffffff8108f200>] ? kthread_create_on_node+0x1a0/0x1a0
May 13 12:35:29 serenity kernel: [<ffffffff8151757c>] ret_from_fork+0x7c/0xb0
May 13 12:35:29 serenity kernel: [<ffffffff8108f200>] ? kthread_create_on_node+0x1a0/0x1a0
May 13 12:35:29 serenity kernel: thunderbird D 0000000000000000 0 1323 1 0x00000004
May 13 12:35:29 serenity kernel: ffff8804044b3b60 0000000000000086 ffff880419674000 ffff88040634ce80
May 13 12:35:29 serenity kernel: 00000000000146c0 ffff8804044b3fd8 00000000000146c0 ffff88040634ce80
May 13 12:35:29 serenity kernel: ffffffff810b3aa4 0000000000000000 ffff8804044ef400 0000000070efd2c9
May 13 12:35:29 serenity kernel: Call Trace:
May 13 12:35:29 serenity kernel: [<ffffffff810b3aa4>] ? add_wait_queue+0x44/0x50
May 13 12:35:29 serenity kernel: [<ffffffff810b3b4d>] ? remove_wait_queue+0x4d/0x60
May 13 12:35:29 serenity kernel: [<ffffffff811ce4f3>] ? poll_freewait+0x53/0xb0
May 13 12:35:29 serenity kernel: [<ffffffff811cf6fa>] ? do_sys_poll+0x14a/0x570
May 13 12:35:29 serenity kernel: [<ffffffff8150b609>] schedule+0x29/0x70
May 13 12:35:29 serenity kernel: [<ffffffffa049708d>] wait_transaction_locked+0x8d/0xd0 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffff810b4020>] ? __wake_up_sync+0x20/0x20
May 13 12:35:29 serenity kernel: [<ffffffffa04973c2>] start_this_handle+0x262/0x610 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffff8119c95a>] ? kmem_cache_alloc+0x1fa/0x220
May 13 12:35:29 serenity kernel: [<ffffffffa0497b8b>] jbd2__journal_start+0xfb/0x210 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffffa04d233a>] ? ext4_dirty_inode+0x2a/0x60 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04fdead>] __ext4_journal_start_sb+0x6d/0x110 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04d233a>] ext4_dirty_inode+0x2a/0x60 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffff811e65d8>] __mark_inode_dirty+0x38/0x2d0
May 13 12:35:29 serenity kernel: [<ffffffff811d55e1>] update_time+0x81/0xd0
May 13 12:35:29 serenity kernel: [<ffffffff811d5729>] touch_atime+0xf9/0x170
May 13 12:35:29 serenity kernel: [<ffffffff81145b1a>] generic_file_aio_read+0x54a/0x720
May 13 12:35:29 serenity kernel: [<ffffffff811ba137>] do_sync_read+0x67/0xa0
May 13 12:35:29 serenity kernel: [<ffffffff811ba797>] vfs_read+0x97/0x160
May 13 12:35:29 serenity kernel: [<ffffffff811bb2e9>] SyS_read+0x59/0xd0
May 13 12:35:29 serenity kernel: [<ffffffff81517629>] system_call_fastpath+0x16/0x1b
May 13 12:35:29 serenity kernel: Chrome_FileUser D 0000000000000000 0 1410 1 0x00000000
May 13 12:35:29 serenity kernel: ffff8803f9a79b60 0000000000000082 ffffffff810a98a9 ffff8803fa9b2740
May 13 12:35:29 serenity kernel: 00000000000146c0 ffff8803f9a79fd8 00000000000146c0 ffff8803fa9b2740
May 13 12:35:29 serenity kernel: ffff8803fb2e7000 ffff8803fb2e6e00 ffff88042fcd4738 ffff8803fb2e6e00
May 13 12:35:29 serenity kernel: Call Trace:
May 13 12:35:29 serenity kernel: [<ffffffff810a98a9>] ? update_curr+0x79/0x1b0
May 13 12:35:29 serenity kernel: [<ffffffff810a9e8f>] ? dequeue_entity+0x13f/0x580
May 13 12:35:29 serenity kernel: [<ffffffff810a57f8>] ? __enqueue_entity+0x78/0x80
May 13 12:35:29 serenity kernel: [<ffffffff810aa3f0>] ? dequeue_task_fair+0x120/0x530
May 13 12:35:29 serenity kernel: [<ffffffff810a41a5>] ? sched_clock_cpu+0xb5/0xe0
May 13 12:35:29 serenity kernel: [<ffffffff810156c1>] ? __switch_to+0x1f1/0x540
May 13 12:35:29 serenity kernel: [<ffffffff8150b609>] schedule+0x29/0x70
May 13 12:35:29 serenity kernel: [<ffffffffa049708d>] wait_transaction_locked+0x8d/0xd0 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffff810b4020>] ? __wake_up_sync+0x20/0x20
May 13 12:35:29 serenity kernel: [<ffffffffa04973c2>] start_this_handle+0x262/0x610 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffff8119c95a>] ? kmem_cache_alloc+0x1fa/0x220
May 13 12:35:29 serenity kernel: [<ffffffffa0497b8b>] jbd2__journal_start+0xfb/0x210 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffffa04d233a>] ? ext4_dirty_inode+0x2a/0x60 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04fdead>] __ext4_journal_start_sb+0x6d/0x110 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04d233a>] ext4_dirty_inode+0x2a/0x60 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffff811e65d8>] __mark_inode_dirty+0x38/0x2d0
May 13 12:35:29 serenity kernel: [<ffffffff811d55e1>] update_time+0x81/0xd0
May 13 12:35:29 serenity kernel: [<ffffffff811d5729>] touch_atime+0xf9/0x170
May 13 12:35:29 serenity kernel: [<ffffffff81145b1a>] generic_file_aio_read+0x54a/0x720
May 13 12:35:29 serenity kernel: [<ffffffff811ba137>] do_sync_read+0x67/0xa0
May 13 12:35:29 serenity kernel: [<ffffffff811ba797>] vfs_read+0x97/0x160
May 13 12:35:29 serenity kernel: [<ffffffff811bb2e9>] SyS_read+0x59/0xd0
May 13 12:35:29 serenity kernel: [<ffffffff81517629>] system_call_fastpath+0x16/0x1b
May 13 12:35:29 serenity kernel: Chrome_CacheThr D 0000000000000000 0 1412 1 0x00000000
May 13 12:35:29 serenity kernel: ffff8803faf09ab0 0000000000000082 0000000000000001 ffff8803fa9b3ae0
May 13 12:35:29 serenity kernel: 00000000000146c0 ffff8803faf09fd8 00000000000146c0 ffff8803fa9b3ae0
May 13 12:35:29 serenity kernel: ffffffff810acfa3 00000000cb458df0 ffff880419600700 00000000000146c0
May 13 12:35:29 serenity kernel: Call Trace:
May 13 12:35:29 serenity kernel: [<ffffffff810acfa3>] ? find_busiest_group+0x143/0x8b0
May 13 12:35:29 serenity kernel: [<ffffffff8150b609>] schedule+0x29/0x70
May 13 12:35:29 serenity kernel: [<ffffffffa049708d>] wait_transaction_locked+0x8d/0xd0 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffff810b4020>] ? __wake_up_sync+0x20/0x20
May 13 12:35:29 serenity kernel: [<ffffffffa04973c2>] start_this_handle+0x262/0x610 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffff8119c95a>] ? kmem_cache_alloc+0x1fa/0x220
May 13 12:35:29 serenity kernel: [<ffffffffa0497b8b>] jbd2__journal_start+0xfb/0x210 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffffa04d233a>] ? ext4_dirty_inode+0x2a/0x60 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04fdead>] __ext4_journal_start_sb+0x6d/0x110 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04d233a>] ext4_dirty_inode+0x2a/0x60 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffff811e65d8>] __mark_inode_dirty+0x38/0x2d0
May 13 12:35:29 serenity kernel: [<ffffffff811d55e1>] update_time+0x81/0xd0
May 13 12:35:29 serenity kernel: [<ffffffff811d5840>] file_update_time+0xa0/0xf0
May 13 12:35:29 serenity kernel: [<ffffffff81144a2c>] __generic_file_aio_write+0x14c/0x3e0
May 13 12:35:29 serenity kernel: [<ffffffff81144d13>] generic_file_aio_write+0x53/0xe0
May 13 12:35:29 serenity kernel: [<ffffffffa04c6261>] ext4_file_write+0xb1/0x4e0 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffff811c3a59>] ? pipe_read+0x509/0x520
May 13 12:35:29 serenity kernel: [<ffffffff811ba1d7>] do_sync_write+0x67/0xa0
May 13 12:35:29 serenity kernel: [<ffffffff811ba91a>] vfs_write+0xba/0x1e0
May 13 12:35:29 serenity kernel: [<ffffffff811bb58a>] SyS_pwrite64+0x9a/0xc0
May 13 12:35:29 serenity kernel: [<ffffffff81517629>] system_call_fastpath+0x16/0x1b
May 13 12:35:29 serenity kernel: BrowserBlocking D 0000000000000000 0 1431 1 0x00000000
May 13 12:35:29 serenity kernel: ffff8800c0871ab8 0000000000000082 0000000000000000 ffff8803fc463ae0
May 13 12:35:29 serenity kernel: 00000000000146c0 ffff8800c0871fd8 00000000000146c0 ffff8803fc463ae0
May 13 12:35:29 serenity kernel: ffff8800cb458f40 ffff8800c0871a10 ffffffff8114323f ffff8800cb458d00
May 13 12:35:29 serenity kernel: Call Trace:
May 13 12:35:29 serenity kernel: [<ffffffff8114323f>] ? find_get_page+0x5f/0xc0
May 13 12:35:29 serenity kernel: [<ffffffff811ed988>] ? __find_get_block_slow+0xc8/0x170
May 13 12:35:29 serenity kernel: [<ffffffff811ed2b0>] ? generic_block_bmap+0x70/0x70
May 13 12:35:29 serenity kernel: [<ffffffff8150b609>] schedule+0x29/0x70
May 13 12:35:29 serenity kernel: [<ffffffff8150b8f4>] io_schedule+0x94/0xf0
May 13 12:35:29 serenity kernel: [<ffffffff811ed2be>] sleep_on_buffer+0xe/0x20
May 13 12:35:29 serenity kernel: [<ffffffff8150bce3>] __wait_on_bit+0x83/0xa0
May 13 12:35:29 serenity kernel: [<ffffffff811ed2b0>] ? generic_block_bmap+0x70/0x70
May 13 12:35:29 serenity kernel: [<ffffffff8150bd87>] out_of_line_wait_on_bit+0x87/0xb0
May 13 12:35:29 serenity kernel: [<ffffffff810b4060>] ? autoremove_wake_function+0x40/0x40
May 13 12:35:29 serenity kernel: [<ffffffff811ed3aa>] __wait_on_buffer+0x2a/0x30
May 13 12:35:29 serenity kernel: [<ffffffffa04d6b3d>] ext4_find_entry+0x3bd/0x4f0 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04d6cda>] ext4_lookup+0x6a/0x170 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffff811c46ed>] lookup_real+0x1d/0x70
May 13 12:35:29 serenity kernel: [<ffffffff811c9229>] do_last.isra.36+0x539/0xe30
May 13 12:35:29 serenity kernel: [<ffffffff811c7785>] ? path_init+0x335/0x410
May 13 12:35:29 serenity kernel: [<ffffffff811c9be7>] path_openat+0xc7/0x6e0
May 13 12:35:29 serenity kernel: [<ffffffff810a0c0f>] ? try_to_wake_up+0x1ff/0x2e0
May 13 12:35:29 serenity kernel: [<ffffffff811cb40d>] do_filp_open+0x4d/0xc0
May 13 12:35:29 serenity kernel: [<ffffffff811d82d7>] ? __alloc_fd+0xa7/0x130
May 13 12:35:29 serenity kernel: [<ffffffff811b9cbe>] do_sys_open+0x14e/0x250
May 13 12:35:29 serenity kernel: [<ffffffff811b9dde>] SyS_open+0x1e/0x20
May 13 12:35:29 serenity kernel: [<ffffffff81517629>] system_call_fastpath+0x16/0x1b
May 13 12:35:29 serenity kernel: BrowserBlocking D 0000000000000000 0 1432 1 0x00000000
May 13 12:35:29 serenity kernel: ffff8803fa099b60 0000000000000082 ffff88041f00be00 ffff8800bf7af5c0
May 13 12:35:29 serenity kernel: 00000000000146c0 ffff8803fa099fd8 00000000000146c0 ffff8800bf7af5c0
May 13 12:35:29 serenity kernel: ffff8803fa099aa8 ffffffff8109cb2a ffff880419600700 ffff88041f00be00
May 13 12:35:29 serenity kernel: Call Trace:
May 13 12:35:29 serenity kernel: [<ffffffff8109cb2a>] ? update_rq_clock.part.78+0x1a/0x130
May 13 12:35:29 serenity kernel: [<ffffffff8101f575>] ? native_sched_clock+0x35/0xb0
May 13 12:35:29 serenity kernel: [<ffffffff8101f5f9>] ? sched_clock+0x9/0x10
May 13 12:35:29 serenity kernel: [<ffffffff810a41a5>] ? sched_clock_cpu+0xb5/0xe0
May 13 12:35:29 serenity kernel: [<ffffffff810156c1>] ? __switch_to+0x1f1/0x540
May 13 12:35:29 serenity kernel: [<ffffffff8150b609>] schedule+0x29/0x70
May 13 12:35:29 serenity kernel: [<ffffffffa049708d>] wait_transaction_locked+0x8d/0xd0 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffff810b4020>] ? __wake_up_sync+0x20/0x20
May 13 12:35:29 serenity kernel: [<ffffffffa04973c2>] start_this_handle+0x262/0x610 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffff8119c95a>] ? kmem_cache_alloc+0x1fa/0x220
May 13 12:35:29 serenity kernel: [<ffffffffa0497b8b>] jbd2__journal_start+0xfb/0x210 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffffa04d233a>] ? ext4_dirty_inode+0x2a/0x60 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04fdead>] __ext4_journal_start_sb+0x6d/0x110 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04d233a>] ext4_dirty_inode+0x2a/0x60 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffff811e65d8>] __mark_inode_dirty+0x38/0x2d0
May 13 12:35:29 serenity kernel: [<ffffffff811d55e1>] update_time+0x81/0xd0
May 13 12:35:29 serenity kernel: [<ffffffff811d5729>] touch_atime+0xf9/0x170
May 13 12:35:29 serenity kernel: [<ffffffff81145b1a>] generic_file_aio_read+0x54a/0x720
May 13 12:35:29 serenity kernel: [<ffffffff811ba137>] do_sync_read+0x67/0xa0
May 13 12:35:29 serenity kernel: [<ffffffff811ba797>] vfs_read+0x97/0x160
May 13 12:35:29 serenity kernel: [<ffffffff811bb2e9>] SyS_read+0x59/0xd0
May 13 12:35:29 serenity kernel: [<ffffffff81517629>] system_call_fastpath+0x16/0x1b
May 13 12:35:29 serenity kernel: loop0 D 0000000000000000 0 1660 2 0x00000000
May 13 12:35:29 serenity kernel: ffff8803fc1d58a8 0000000000000046 ffff8803fc1d58b8 ffff88041963b110
May 13 12:35:29 serenity kernel: 00000000000146c0 ffff8803fc1d5fd8 00000000000146c0 ffff88041963b110
May 13 12:35:29 serenity kernel: 00000000000146c0 ffff88041963b110 ffff8800cbb93e68 ffff880417806f78
May 13 12:35:29 serenity kernel: Call Trace:
May 13 12:35:29 serenity kernel: [<ffffffff810a0d52>] ? default_wake_function+0x12/0x20
May 13 12:35:29 serenity kernel: [<ffffffff810b4032>] ? autoremove_wake_function+0x12/0x40
May 13 12:35:29 serenity kernel: [<ffffffff810b3915>] ? __wake_up_common+0x55/0x90
May 13 12:35:29 serenity kernel: [<ffffffff8150b609>] schedule+0x29/0x70
May 13 12:35:29 serenity kernel: [<ffffffffa048a666>] wait_barrier+0xc6/0x190 [raid10]
May 13 12:35:29 serenity kernel: [<ffffffff810b4020>] ? __wake_up_sync+0x20/0x20
May 13 12:35:29 serenity kernel: [<ffffffffa048eca4>] make_request+0x44/0x130 [raid10]
May 13 12:35:29 serenity kernel: [<ffffffffa0458a83>] md_make_request+0x103/0x260 [md_mod]
May 13 12:35:29 serenity kernel: [<ffffffff81145df1>] ? mempool_alloc+0x61/0x170
May 13 12:35:29 serenity kernel: [<ffffffff8126f728>] generic_make_request+0xf8/0x150
May 13 12:35:29 serenity kernel: [<ffffffff8126f7f8>] submit_bio+0x78/0x190
May 13 12:35:29 serenity kernel: [<ffffffff8114e02f>] ? test_set_page_writeback+0x14f/0x1e0
May 13 12:35:29 serenity kernel: [<ffffffffa04d3125>] ext4_io_submit+0x25/0x50 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04d3393>] ext4_bio_write_page+0x213/0x320 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04cad7a>] mpage_submit_page+0x5a/0x80 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04caeb0>] mpage_process_page_bufs+0x110/0x120 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04cb0a0>] mpage_prepare_extent_to_map+0x1e0/0x300 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa0497b8b>] ? jbd2__journal_start+0xfb/0x210 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffffa04cf400>] ? ext4_writepages+0x3c0/0xd20 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04fdead>] ? __ext4_journal_start_sb+0x6d/0x110 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffffa04cf42b>] ext4_writepages+0x3eb/0xd20 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffff811504ee>] do_writepages+0x1e/0x30
May 13 12:35:29 serenity kernel: [<ffffffff811445ed>] __filemap_fdatawrite_range+0x5d/0x80
May 13 12:35:29 serenity kernel: [<ffffffff8114470a>] filemap_write_and_wait_range+0x2a/0x70
May 13 12:35:29 serenity kernel: [<ffffffffa04c67a0>] ext4_sync_file+0x110/0x370 [ext4]
May 13 12:35:29 serenity kernel: [<ffffffff811eb8a6>] vfs_fsync+0x26/0x40
May 13 12:35:29 serenity kernel: [<ffffffffa0eaefb2>] loop_thread+0x392/0x6a0 [loop]
May 13 12:35:29 serenity kernel: [<ffffffffa0eae8e0>] ? __do_lo_send_write+0x120/0x120 [loop]
May 13 12:35:29 serenity kernel: [<ffffffff810b4020>] ? __wake_up_sync+0x20/0x20
May 13 12:35:29 serenity kernel: [<ffffffffa0eaec20>] ? lo_receive+0x210/0x210 [loop]
May 13 12:35:29 serenity kernel: [<ffffffff8108f2ea>] kthread+0xea/0x100
May 13 12:35:29 serenity kernel: [<ffffffff8108f200>] ? kthread_create_on_node+0x1a0/0x1a0
May 13 12:35:29 serenity kernel: [<ffffffff8151757c>] ret_from_fork+0x7c/0xb0
May 13 12:35:29 serenity kernel: [<ffffffff8108f200>] ? kthread_create_on_node+0x1a0/0x1a0
May 13 12:35:29 serenity kernel: jbd2/dm-0-8 D 0000000000000000 0 1732 2 0x00000000
May 13 12:35:29 serenity kernel: ffff8803fc331bb8 0000000000000046 0000000000000000 ffff8803fa4f6220
May 13 12:35:29 serenity kernel: 00000000000146c0 ffff8803fc331fd8 00000000000146c0 ffff8803fa4f6220
May 13 12:35:29 serenity kernel: ffff8803fc331b18 ffffffff810b3ba4 ffff88041892d200 0000000000000001
May 13 12:35:29 serenity kernel: Call Trace:
May 13 12:35:29 serenity kernel: [<ffffffff810b3ba4>] ? __wake_up+0x44/0x50
May 13 12:35:29 serenity kernel: [<ffffffffa0eae70d>] ? loop_make_request+0x11d/0x1d0 [loop]
May 13 12:35:29 serenity kernel: [<ffffffff811ed2b0>] ? generic_block_bmap+0x70/0x70
May 13 12:35:29 serenity kernel: [<ffffffff8150b609>] schedule+0x29/0x70
May 13 12:35:29 serenity kernel: [<ffffffff8150b8f4>] io_schedule+0x94/0xf0
May 13 12:35:29 serenity kernel: [<ffffffff811ed2be>] sleep_on_buffer+0xe/0x20
May 13 12:35:29 serenity kernel: [<ffffffff8150bce3>] __wait_on_bit+0x83/0xa0
May 13 12:35:29 serenity kernel: [<ffffffff811ed2b0>] ? generic_block_bmap+0x70/0x70
May 13 12:35:29 serenity kernel: [<ffffffff8150bd87>] out_of_line_wait_on_bit+0x87/0xb0
May 13 12:35:29 serenity kernel: [<ffffffff810b4060>] ? autoremove_wake_function+0x40/0x40
May 13 12:35:29 serenity kernel: [<ffffffff811ed3aa>] __wait_on_buffer+0x2a/0x30
May 13 12:35:29 serenity kernel: [<ffffffffa049bf4a>] jbd2_journal_commit_transaction+0x197a/0x19c0 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffffa04a1c4c>] kjournald2+0xec/0x2a0 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffff810b4020>] ? __wake_up_sync+0x20/0x20
May 13 12:35:29 serenity kernel: [<ffffffffa04a1b60>] ? commit_timeout+0x10/0x10 [jbd2]
May 13 12:35:29 serenity kernel: [<ffffffff8108f2ea>] kthread+0xea/0x100
May 13 12:35:29 serenity kernel: [<ffffffff8108f200>] ? kthread_create_on_node+0x1a0/0x1a0
May 13 12:35:29 serenity kernel: [<ffffffff8151757c>] ret_from_fork+0x7c/0xb0
May 13 12:35:29 serenity kernel: [<ffffffff8108f200>] ? kthread_create_on_node+0x1a0/0x1a0
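For anyone comparing notes: traces like the above are emitted by the kernel's hung-task detector, and you can also dump the blocked-task list on demand via SysRq. A minimal sketch (requires root; paths are the standard procfs ones, but verify on your kernel):

```shell
#!/bin/sh
# Dump stack traces of all tasks stuck in uninterruptible (D) sleep
# into the kernel log, the same format as the hung-task reports above.
echo 1 > /proc/sys/kernel/sysrq     # enable all SysRq functions
echo w > /proc/sysrq-trigger        # 'w' = dump blocked (D-state) tasks
dmesg | tail -n 200                 # the traces appear in the kernel log

# The automatic hung-task detector is tunable; 120 seconds is a common default.
cat /proc/sys/kernel/hung_task_timeout_secs
```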
I also use soft RAID1, so I just ran W7 on my 3.14.3 and it just works.

You are probably hitting a bad sector. The VM disk is on /dev/md3, isn't it? You may want to `cat` the VM disk file, and `dd`-read the whole drives and the md device to see whether they are readable. You may also want to inspect /proc/mdstat, /sys/block/**/bad_blocks, and the smartctl reports.

You may be facing a bug in the RAID drivers. I hit one too, which was later fixed: https://bugzilla.kernel.org/show_bug.cgi?id=68181 If it *is* a bug, the best you can do is report it and keep the current RAID mirror as long as possible, so you can test the patches the kernel developers provide.

-- 
Kind regards,
Damian Nowak
StratusHost
www.AtlasHost.eu
Yes, I had a bad sector on the md3 (RAID10) array; that disk was replaced. But it looks like that is unrelated to why my box is hanging: the hangs started after updating the kernel from 3.10 to 3.14.2 and VirtualBox to 4.3.10.

I have now updated the kernel to 3.14.3. Four hours in, it looks like my problem is gone.

Regards,
Łukasz

On 05/13/14 15:39, Nowaker wrote:
>> I recently updated the kernel to 3.14.2, and now the I/O subsystem hangs within a few minutes of starting VirtualBox 4.3.10 machines (running Windows 7).
>> messages.log excerpt below. It looks like every process that tries to access /dev/md3 (even kworker and md3_resync) hangs forever.
>> Has anyone had similar problems?
>
> I also use soft RAID1, so I just ran W7 on my 3.14.3 and it just works.
>
> You are probably hitting a bad sector. The VM disk is on /dev/md3, isn't it? You may want to `cat` the VM disk file, and `dd`-read the whole drives and the md device to see whether they are readable. You may also want to inspect /proc/mdstat, /sys/block/**/bad_blocks, and the smartctl reports.
>
> You may be facing a bug in the RAID drivers. I hit one too, which was later fixed: https://bugzilla.kernel.org/show_bug.cgi?id=68181 If it *is* a bug, the best you can do is report it and keep the current RAID mirror as long as possible, so you can test the patches the kernel developers provide.
participants (2)
- Nowaker
- Łukasz Michalski