Error 30 on /config/admin folder, is my flash drive gone bad?


sansei


Running on 6.1.4 with Sabnzbd in Docker.

 

I ran a parity check, which corrected 13 errors. Then I tried to restart Sabnzbd and it wouldn't start. Sabnzbd started acting up about two weeks ago; I had to shut the server down and restart it, and since then it won't last a night of downloading.

 

See the log below. Since the config folder is on the flash drive, does that mean it's about to kick the bucket? Or is the config folder in the Docker container's root, which lives on the cache drive?

 

2015-11-19 01:02:31,110 DEBG 'sabnzbd' stderr output:
2015-11-19 01:02:31,110::INFO::[postproc:85] Saving postproc queue
2015-11-19 01:02:31,110::INFO::[__init__:919] Saving data for postproc1.sab in /config/admin/postproc1.sab

2015-11-19 01:02:31,111 DEBG 'sabnzbd' stderr output:
2015-11-19 01:02:31,110::ERROR::[__init__:935] Saving /config/admin/postproc1.sab failed

2015-11-19 01:02:31,111 DEBG 'sabnzbd' stderr output:
2015-11-19 01:02:31,110::INFO::[__init__:936] Traceback:
Traceback (most recent call last):
File "/opt/sabnzbd/sabnzbd/__init__.py", line 922, in save_admin
_f = open(path, 'wb')
IOError: [Errno 30] Read-only file system: '/config/admin/postproc1.sab'

2015-11-19 01:02:31,115 DEBG fd 8 closed, stopped monitoring (stderr)>
2015-11-19 01:02:31,115 DEBG fd 6 closed, stopped monitoring (stdout)>
2015-11-19 01:02:31,115 INFO stopped: sabnzbd (exit status 0)
2015-11-19 01:02:31,115 DEBG received SIGCLD indicating a child quit
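For anyone hitting the same thing: Errno 30 means the filesystem behind /config was mounted (or remounted) read-only. A rough way to check this from the host is sketched below; it assumes /config is mapped somewhere under /mnt/cache, and the container name is a guess (use whatever "docker ps" shows).

# Is the cache filesystem still mounted read-write?
# ("ro" in the options field here would explain IOError Errno 30)
grep '/mnt/cache' /proc/mounts

# The kernel log usually records why a filesystem was forced read-only
dmesg | grep -iE 'reiserfs|read-only'

# Write test inside the container (container name is a guess - adjust it)
docker exec binhex-sabnzbd touch /config/.rw_test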


Thanks for the replies. The log above was produced by Sabnzbd, and I'm using binhex's Sabnzbd container.

 

The screenshot below shows the folder mappings. The cache drive hosts all the Docker containers; /config is mapped to the cache drive and /data is mapped to the user share.

 

[Screenshot: Sabnzbd container volume mappings]
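If the screenshot doesn't come through, the same mapping information can be read back from the command line; the container name below is a guess, so substitute the one listed by "docker ps".

# Print the container's volume mappings as text
docker inspect --format '{{ json .Mounts }}' binhex-sabnzbd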


The screenshot suggests there is a space in the host path for the /config mapping!  It may be an artefact of the screenshot, but if it is there I could see it causing problems.

 

Strange, there are no spaces in the path:

/mnt/cache/.docker_apps/sabnzbd/config/
/mnt/user/myshare/sab_unsorted/
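A simple way to rule out a hidden space is to list the exact strings; if a space had crept into either path, these would fail with "No such file or directory".

# Verify both host paths exist exactly as typed
ls -ld /mnt/cache/.docker_apps/sabnzbd/config/
ls -ld /mnt/user/myshare/sab_unsorted/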


You said the log is from SAB, so it must be referring to the container's /config folder. According to your volume mappings the Docker container has no access to the flash drive; its /config is on your cache drive. I don't know why it would be read-only unless the filesystem is corrupt. Go to Tools - Diagnostics and post the complete diagnostics zip.


It acted up again, so I ran reiserfsck on the cache drive from the webGUI in maintenance mode and got the output below. I'm not sure whether I should proceed with --rebuild-tree. I'd also like to know whether it's about time to replace the cache drive.

 

...
    Replaying journal: |==================================      - 84.2%  545 trans
    Trans replayed: mountid 183, transid 11309566, desc 1535, len 1, commit 1537, next trans offset 1520
    Trans replayed: mountid 183, transid 11309567, desc 1538, len 1, commit 1540, next trans offset 1523
    Trans replayed: mountid 183, transid 11309568, desc 1541, len 4, commit 1546, next trans offset 1529

    Replaying journal: |==================================      \ 84.7%  548 trans
    Trans replayed: mountid 183, transid 11309569, desc 1547, len 1, commit 1549, next trans offset 1532

                                                                                    

    Replaying journal: Done.
    Reiserfs journal '/dev/sdj1' in blocks [18..8211]: 549 transactions replayed
    Checking internal tree..  finished
    Comparing bitmaps..Fatal corruptions were found, Semantic pass skipped
    1 found corruptions can be fixed only when running with --rebuild-tree
    ###########
    reiserfsck finished at Thu Nov 19 23:18:54 2015
    ###########
    bad_directory_item: block 214633232: The directory item [3706016 3807385 0x1 DIR (3)] has a not properly hashed entry (2)
    bad_leaf: block 214633232, item 0: The corrupted item found (3706016 3807385 0x1 DIR (3), len 528, location 3568 entry count 9, fsck need 0, format old)
    bad_indirect_item: block 240603453: The item (6334118 6405803 0x4e7f001 IND (1), len 4048, location 48 entry count 0, fsck need 0, format new) has the bad pointer (311) to the block (240587365), which is in tree already
    vpf-10640: The on-disk and the correct bitmaps differs.
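For reference, the repair reiserfsck is asking for would be run against the same device while the array is in maintenance mode (cache unmounted), roughly as sketched below. Back up whatever is still readable from the cache first; --rebuild-tree rewrites the on-disk tree and should not be interrupted.

# Re-run the read-only check to confirm the damage
reiserfsck --check /dev/sdj1

# The repair the tool itself suggested
reiserfsck --rebuild-tree /dev/sdj1

# Verify afterwards
reiserfsck --check /dev/sdj1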

