Everything posted by JorgeB

  1. Both disks have the same UUID:
     Apr 11 21:47:50 NASBackup kernel: XFS (md8): Filesystem has duplicate UUID 6e5537c6-38fe-4f06-8d46-587f6c2185fe - can't mount
     Likely one was rebuilt from the other at some point in the past, so only the first one will mount. You can change the UUID on either one:
     xfs_admin -U generate /dev/sdX1
     Note the 1 at the end.
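     A minimal sketch of how to verify and fix this from the console; sdX1 is a placeholder for the actual partition:
     # list filesystem UUIDs to confirm which two partitions collide
     blkid
     # generate a new random UUID on one of them (it must be unmounted)
     xfs_admin -U generate /dev/sdX1
     # verify the change took
     blkid /dev/sdX1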
  2. That's a bad idea; if it keeps increasing there's still a problem.
  3. Apr 6 09:21:07 drogo kernel: XFS (md1): Internal error XFS_WANT_CORRUPTED_GOTO at line 1423 of file fs/xfs/libxfs/xfs_ialloc.c. Caller xfs_dialloc_ag+0xdd/0x23f [xfs]
     Yes
  4. There's filesystem corruption on disk1; check the filesystem: https://lime-technology.com/wiki/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui
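     For reference, a sketch of the command-line equivalent, assuming the array is started in maintenance mode and disk1 maps to /dev/md1:
     # dry run first: report problems without writing anything
     xfs_repair -n /dev/md1
     # if it finds errors, run the actual repair
     xfs_repair /dev/md1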
  5. The disk dropped offline. It's on a Marvell controller, and these are known to drop disks in some cases. Disk4 is on the same controller; has it been there long without issues?
  6. This type of call trace happens to several users on different unRAID releases. It seems to be mostly harmless, though the cause is unknown.
  7. If the number of errors keeps increasing, there's a problem.
  8. The fact that it's new doesn't mean there isn't a problem, but wait; as long as you don't get more you're OK.
  9. A single error is not a problem, but since it happened on all disks you might get more, and that would mean there's trouble. I would say four bad SATA cables are unlikely, so maybe it's the controller, or an enclosure if they share one.
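     One way to narrow it down is to compare each disk's SMART attributes, e.g. the UDMA CRC error count, which usually points at cabling or the controller rather than the disks themselves; the device names below are placeholders:
     # check the CRC error counter on each affected disk
     for d in sdb sdc sdd sde; do
       echo "== /dev/$d =="
       smartctl -A /dev/$d | grep -i crc
     done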
  10. When the stop array button is available. That's likely a corrupt docker image; we'd need the diagnostics to confirm, and if so just delete and recreate it.
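      If the diagnostics do confirm it, the recreate step is roughly the following; the docker.img path is an assumption on my part, check Settings > Docker for the actual location:
      # disable the Docker service first (Settings > Docker > Enable Docker: No),
      # then delete the image file; a fresh one is created when the service is re-enabled
      rm /mnt/user/system/docker/docker.img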
  11. If you're using the latest unRAID you can use the instructions in the FAQ.
  12. Agree, especially since many won't even have or look at the console.
  13. Likely because the controller doesn't support trim; connect the SSD to one of the onboard SATA ports.
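      A quick way to confirm after moving it is to run a manual trim and see whether it succeeds; /mnt/cache assumes the SSD is the cache device:
      # verbose trim; "the discard operation is not supported" means
      # TRIM still isn't being passed through to the device
      fstrim -v /mnt/cache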
  14. Like I said, I've never had a single duplicate file, and I have over 300TB of data on unRAID servers; to know what you're doing wrong we'd need more details on how and where you end up with duplicates.
  15. You're doing something wrong, but as mentioned there are various solutions for dealing with that.
  16. What do you mean caused by unRAID? I've used unRAID for over 10 years and never had any dupe issues, nor do I remember reading on the forums about anything other than user error regarding dupes.
  17. These usually work well:
      Tunable (md_num_stripes): 4096
      Tunable (md_sync_window): 2048
      Tunable (md_sync_thresh): 2000
      If your sig is correct and you're still using the SAS2LP, also change this one:
      Tunable (nr_requests): 8
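      These are normally changed in Settings > Disk Settings, but as a sketch they can also be tried from the console before committing; the mdcmd syntax here is an assumption on my part, and console changes don't persist across reboots:
      # md driver tunables for the current session only (assumed mdcmd syntax)
      mdcmd set md_num_stripes 4096
      mdcmd set md_sync_window 2048
      mdcmd set md_sync_thresh 2000
      # nr_requests is a standard block-layer queue setting, applied per disk
      echo 8 > /sys/block/sdX/queue/nr_requests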
  18. This has happened to me several times since the Terminal was included; this one is from v6.5.0. At some point the Terminal crashes and stops working, and only a reboot fixes it. I believe these are from when it stopped working:
      Mar 17 18:54:19 Tower1 nginx: 2018/03/17 18:54:19 [error] 5672#5672: *698426 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 192.168.1.130, server: , request: "GET /webterminal/ws HTTP/1.1", upstream: "http://unix:/var/run/ttyd.sock:/ws", host: "1ad7a6e4386dab50134fcb40b4490f1f43e4ce85.unraid.net"
      Mar 17 18:54:24 Tower1 unassigned.devices: Device '/dev/sdh1' script file not found. 'REMOVE' script not executed.
      Mar 17 18:54:24 Tower1 unassigned.devices: Unmounting disk 'WDC_WD20SPZX-22CRAT0_WD-WXA1E171A1KU'...
      Mar 17 18:54:24 Tower1 unassigned.devices: Unmounting '/dev/sdh1'...
      Mar 17 18:54:26 Tower1 unassigned.devices: Successfully unmounted '/dev/sdh1'
      Mar 17 18:54:26 Tower1 unassigned.devices: Disk with serial 'WDC_WD20SPZX-22CRAT0_WD-WXA1E171A1KU', mountpoint '2TB' removed successfully.
      Mar 17 18:54:29 Tower1 nginx: 2018/03/17 18:54:29 [error] 5672#5672: *698491 connect() to unix:/var/run/ttyd.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.1.130, server: , request: "GET /webterminal/ws HTTP/1.1", upstream: "http://unix:/var/run/ttyd.sock:/ws", host: "1ad7a6e4386dab50134fcb40b4490f1f43e4ce85.unraid.net"
      Mar 17 18:54:39 Tower1 nginx: 2018/03/17 18:54:39 [error] 5672#5672: *698548 connect() to unix:/var/run/ttyd.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.1.130, server: , request: "GET /webterminal/ws HTTP/1.1", upstream: "http://unix:/var/run/ttyd.sock:/ws", host: "1ad7a6e4386dab50134fcb40b4490f1f43e4ce85.unraid.net"
      Mar 17 18:54:49 Tower1 nginx: 2018/03/17 18:54:49 [error] 5672#5672: *698613 connect() to unix:/var/run/ttyd.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.1.130, server: , request: "GET /webterminal/ws HTTP/1.1", upstream: "http://unix:/var/run/ttyd.sock:/ws", host: "1ad7a6e4386dab50134fcb40b4490f1f43e4ce85.unraid.net"
      tower1-diagnostics-20180322-1714.zip
  19. On Windows 10 (and 8) you need to restart (not shut down) the VM to detect the increased space.
  20. It's a lower case L. Copy/paste the code I posted above; there will be no output after typing it, just refresh the VM page and you'll see the new size.
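      The snippet referred to above isn't quoted in this listing, so purely as an illustration, and not necessarily the exact command posted: growing a running VM's vdisk is typically done with virsh blockresize, where the domain name and path below are placeholders:
      # grow the live vdisk to 100G; the guest then needs a restart (see item 19) to see it
      virsh blockresize "Windows 10" /mnt/user/domains/Windows10/vdisk1.img 100G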