sansei

Members
  • Posts: 24
Everything posted by sansei

  1. It was indeed a problem with one of the cache drives. I moved the data from the cache pool to the array, ran copy tasks individually on each drive, and found that one of the disks is acting up. I'll do the RMA then. Thanks a lot!
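If it helps when filing the RMA: a SMART report from the failing member documents the drive's state. A minimal sketch, assuming smartctl (smartmontools) is available on the box; `/dev/sdX` is a hypothetical placeholder for the suspect SSD:

```shell
# Capture SMART identity, attributes, and the error log for the suspect
# cache SSD before sending it off. /dev/sdX is a placeholder; pick the
# real device name from Unraid's Main page.
dump_smart() {
  command -v smartctl >/dev/null 2>&1 || { echo "smartctl not installed"; return 0; }
  smartctl -a "$1" || true   # -a prints identity, attributes, and error log
}
dump_smart /dev/sdX
```

Reallocated-sector and media-error counters in that output are the kind of evidence RMA desks usually ask for.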
  2. These SSDs are Crucial BX500 SATA drives rated at 500MB/s write speed: https://www.crucial.com/ssd/bx500/ct2000bx500ssd1. If the read speed is normal, why are writes so slow? I'll try testing with some old SSDs then.
  3. I've noticed this behavior since I changed the cache pool filesystem to ZFS. The pool is two mirrored 2TB SSDs; before, it was on BTRFS without issues. File copies from Windows to Unraid sometimes start out normal, then gradually drop to 0 bytes/s and eventually fail. To eliminate any network issues, I created a share that exposes a specific disk in the array; copying files from Windows directly to that share is normal, at about 60MB/s. Once that copy is done, going into the terminal and copying the files to the cache to test write speed is extremely slow: a 70GB file takes over an hour to complete. It also often blocks other activities, such as updating Docker containers; it behaves as if the copy task is blocking any other writes to the cache disks. In one severe case, the front GUI became completely unresponsive and threw a 500 server error. Reading from the cache pool is always fine; the speed is normal and can reach 115MB/s. This box has been running for about 11 years, with 32GB of non-ECC memory, of which 4GB is allocated to ZFS, as that ratio seems fine. No VMs, just Docker containers. Ran Fix Common Problems and it found no errors or warnings. Diagnostics file attached. tower-diagnostics-20240328-0607.zip
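One way to separate the pool from the network path is a raw write test directly on the box. A minimal sketch; on the server, `TARGET` would point at the cache pool mount (e.g. `/mnt/cache`), while `/tmp` below is only a safe stand-in for illustration:

```shell
# Minimal write-throughput probe, bypassing SMB entirely. Point TARGET
# at the cache pool mount on the server (e.g. TARGET=/mnt/cache).
TARGET="${TARGET:-/tmp}"
# conv=fdatasync forces the data to storage before dd reports its rate,
# so the MB/s figure reflects the disks rather than the page cache.
dd if=/dev/zero of="$TARGET/write-test.bin" bs=1M count=64 conv=fdatasync
rm -f "$TARGET/write-test.bin"
```

If dd alone reaches something near the drives' rated speed while Windows copies crawl, the bottleneck would be higher up the stack; if dd itself collapses to KB/s, a pool member is the prime suspect.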
  4. On the admin page, I also noticed the update to 27.0.1. Clicking the 'Open Updater' button gave me a 404 Page Not Found error. Is that the expected behavior?
  5. Tried to recover the data on the cache drive. The cache pool is still mountable; however, when attempting to copy files using MC, many files stall while copying. If I'm skipping them, what's the point of recovering them? The problems only occurred once I upgraded to 6.12.x. Update: I had to restart the server, then mount an individual SSD from the cache pool and copy all its files to the unassigned disk. Still waiting for the file copy to finish; so far, no more copying errors.
  6. tower-diagnostics-20230623-0945.zip First the Docker containers started to act up. I changed the network from macvlan to ipvlan; then all the containers started to drop, and the VM stopped. I deleted the Docker image and restarted the server, but the VM still failed to start, with a message that the cache disk is read-only. Tried rebooting a few times, same issue.
  7. Please consider adding Python 2 support for VideoSort, as post-processing is broken now.
  8. It had been running quite smoothly for several weeks, and then suddenly the web interface stopped responding, along with all Docker containers. Tried both warm and cold reboots; the web interface simply refused to load. File shares work fine, though. Memtest ran without issues, and 32GB of memory should be plenty. No VMs installed. Diagnostics file attached. milano-diagnostics-20160517-0018.zip
  9. Unraid is the only box running 24/7 at my place. It would be nice to have a container for this so I don't have to deal with eBay anymore: https://openbazaar.org/
  10. Thank you headnail for sharing your experience! The LAN is a dedicated Intel NIC. I do have a monitor and keyboard hooked up to it; I'll try unplugging them. I'll also try turning off Plug and Play in the BIOS. Will report how it goes.
  11. Hung again today, after my DHCP server went down. And I found that IRQ 35 is eth0.
  12. Running on 6.1.4; I usually have to restart the machine every 2-3 weeks, and every time it's due to this error: 'Disabling IRQ #35'. All remote connections are lost, and I have to physically log into the box to reboot it. Attached is the log file, dumped via the terminal method. Search for 'nobody cared' near the end of the file; there are some call traces immediately after it, though I don't have much of a clue about them. syslog-201601171800.txt
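Pulling the relevant section out of a dump like that is a one-liner with grep's trailing-context flag. A sketch using an inline sample in place of the real saved syslog; on the box, the same grep would run against syslog-201601171800.txt:

```shell
# Locate the IRQ error plus the call-trace lines that follow it.
# The here-doc is a stand-in sample; grep the real saved syslog instead.
cat > /tmp/syslog-sample.txt <<'EOF'
kernel: irq 35: nobody cared (try booting with the "irqpoll" option)
kernel: Call Trace:
kernel: Disabling IRQ #35
EOF
grep -A 2 "nobody cared" /tmp/syslog-sample.txt
# On the live system, /proc/interrupts shows which device owns IRQ 35:
# grep ' 35:' /proc/interrupts
rm -f /tmp/syslog-sample.txt
```

The `/proc/interrupts` check is how you confirm the IRQ-to-device mapping (eth0, in this case) without waiting for the next hang.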
  13. Thanks for the confirmation on 8TB drives. What firmware version are you running, P11? By any chance, have you tried any 10TB drives with it?
  14. Attached zip file. Thanks! ffserver-diagnostics-20151120-1857.zip
  15. Acted up again. Ran reiserfsck from the webGUI in maintenance mode, on the cache drive, and got the output below. Not sure if I should proceed to --rebuild-tree. Also want to know whether it's about time to replace the cache drive.

      ...
      Replaying journal: |================================== - 84.2% 545 trans
      Trans replayed: mountid 183, transid 11309566, desc 1535, len 1, commit 1537, next trans offset 1520
      Trans replayed: mountid 183, transid 11309567, desc 1538, len 1, commit 1540, next trans offset 1523
      Trans replayed: mountid 183, transid 11309568, desc 1541, len 4, commit 1546, next trans offset 1529
      Replaying journal: |================================== \ 84.7% 548 trans
      Trans replayed: mountid 183, transid 11309569, desc 1547, len 1, commit 1549, next trans offset 1532
      Replaying journal: Done.
      Reiserfs journal '/dev/sdj1' in blocks [18..8211]: 549 transactions replayed
      Checking internal tree.. finished
      Comparing bitmaps..Fatal corruptions were found, Semantic pass skipped
      1 found corruptions can be fixed only when running with --rebuild-tree
      ########### reiserfsck finished at Thu Nov 19 23:18:54 2015 ###########
      bad_directory_item: block 214633232: The directory item [3706016 3807385 0x1 DIR (3)] has a not properly hashed entry (2)
      bad_leaf: block 214633232, item 0: The corrupted item found (3706016 3807385 0x1 DIR (3), len 528, location 3568 entry count 9, fsck need 0, format old)
      bad_indirect_item: block 240603453: The item (6334118 6405803 0x4e7f001 IND (1), len 4048, location 48 entry count 0, fsck need 0, format new) has the bad pointer (311) to the block (240587365), which is in tree already
      vpf-10640: The on-disk and the correct bitmaps differs.
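On the --rebuild-tree question: the check output says this corruption is only fixable that way, but --rebuild-tree rewrites the entire on-disk tree and can destroy data if it is interrupted or the drive is failing, so copy off whatever is readable and image or back up the drive first. A hedged sketch of the sequence (the device `/dev/sdj1` is taken from the log above; the destructive step is deliberately left commented out):

```shell
# Repair sequence for the reiserfs cache drive. Re-run the check first,
# take a full backup, and only then uncomment the rebuild step.
DEV=/dev/sdj1   # cache drive device from the reiserfsck log
if command -v reiserfsck >/dev/null 2>&1; then
  reiserfsck --check "$DEV" || true
  # Destructive: rewrites the whole tree. Only after a full backup:
  # reiserfsck --rebuild-tree "$DEV"
else
  echo "reiserfsck not installed"
fi
```

Given the drive is also throwing bad_leaf/bad_indirect_item errors, rescuing the data before any rebuild is the safer order, and a SMART check would help decide whether the drive is worth keeping at all.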
  16. So far, sabnzbd is working. I'll keep an eye on this Errno 30 behavior. If the Docker container is corrupted, is simply reinstalling it enough to fix the corruption?
  17. Strange, there are no spaces in the paths: /mnt/cache/.docker_apps/sabnzbd/config/ /mnt/user/myshare/sab_unsorted/
  18. Thanks for the replies. The logs provided were produced by Sabnzbd, and I'm using binhex's Sabnzbd container. The screenshot below shows the folder mappings. The cache drive hosts all the Docker containers; /config is mapped to the cache drive, and /data is mapped to the user share.
  19. With the latest Sabnzbd running on unRaid 6.1.4, Sabnzbd runs for half a day and then stops responding; log info is provided in the link below. Is this a container issue, or is the boot flash drive about to die? https://lime-technology.com/forum/index.php?topic=44106.0
  20. Running on 6.1.4 with Sabnzbd in Docker. Ran a parity check, which corrected 13 errors. Then I tried to restart Sabnzbd and it won't start. Sabnzbd started acting up two weeks ago; I had to shut the server down and restart, and then it wouldn't last a night of downloading. See the log below. Since the config folder is on the flash drive, does that mean it's about to kick the bucket? Or is the config folder in the Docker container root, which lives on the cache drive?

      2015-11-19 01:02:31,110 DEBG 'sabnzbd' stderr output: 2015-11-19 01:02:31,110::INFO::[postproc:85] Saving postproc queue
      2015-11-19 01:02:31,110::INFO::[__init__:919] Saving data for postproc1.sab in /config/admin/postproc1.sab
      2015-11-19 01:02:31,111 DEBG 'sabnzbd' stderr output: 2015-11-19 01:02:31,110::ERROR::[__init__:935] Saving /config/admin/postproc1.sab failed
      2015-11-19 01:02:31,111 DEBG 'sabnzbd' stderr output: 2015-11-19 01:02:31,110::INFO::[__init__:936] Traceback: Traceback (most recent call last):
        File "/opt/sabnzbd/sabnzbd/__init__.py", line 922, in save_admin
          _f = open(path, 'wb')
      IOError: [Errno 30] Read-only file system: '/config/admin/postproc1.sab'
      2015-11-19 01:02:31,115 DEBG fd 8 closed, stopped monitoring (stderr)>
      2015-11-19 01:02:31,115 DEBG fd 6 closed, stopped monitoring (stdout)>
      2015-11-19 01:02:31,115 INFO stopped: sabnzbd (exit status 0)
      2015-11-19 01:02:31,115 DEBG received SIGCLD indicating a child quit
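For what it's worth, `IOError: [Errno 30]` (EROFS) in that traceback means the filesystem behind /config had gone read-only, which the kernel does to a filesystem after an I/O error; that points at the cache drive rather than the flash drive or Sabnzbd itself. A quick probe (a sketch; `/tmp` is only a stand-in here, and the cache-side path is the mapping mentioned in this thread):

```shell
# Probe whether a directory is actually writable. Errno 30 from the
# SABnzbd traceback means the kernel flipped its filesystem read-only.
check_rw() {
  probe="$1/.rw-probe-$$"
  if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    echo "$1 is writable"
  else
    echo "$1 is NOT writable"
  fi
}
check_rw /tmp   # stand-in; on the server, test the real mapping:
# check_rw /mnt/cache/.docker_apps/sabnzbd/config
```

If the cache path reports not writable, the fix starts at the filesystem (check and remount the cache drive), not inside the container.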
  21. Simple Features 1.0.5 just stopped showing the graphs after I installed phpVirtualBox. The VM is running an instance of W2K8 Server; it was working before. See the screenshot below: