Everything posted by wishie

  1. Oh well, I'm just pointing out my experience. I've updated unRAID versions for years without issue. I did nothing different this time, and that was the result. Apparently others have reported they needed to re-make the USB image using the unRAID USB Creator, so maybe that was the reason?
  2. I understand that, but I had a working USB labelled "UNRAID", and then I did the "Update OS" from within the unRAID GUI, so I don't understand why/how it changed the volume label. There must be some quirk with the update from 6.6.7 to 6.7.
  3. So, a bit of an update: I re-imaged my USB stick using the unRAID USB Creator tool and it booted, so I then copied my config/ folder over, and I'm successfully on 6.7 now. What I did notice is that when I was doing the 'update' from 6.6.7 to 6.7, the USB label was 'UnRaid' instead of 'UNRAID'. Perhaps this was the issue? When I made the USB with the creator tool, it's 'UNRAID' again.
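In case it helps anyone else searching: the boot mount is a lookup of /dev/disk/by-label/UNRAID, and that match is case-sensitive, so a stick labelled 'UnRaid' won't be found. A minimal sketch of the check (the /dev/sdX1 path and the fatlabel fix are placeholders I haven't verified against my own server):

```shell
#!/bin/sh
# Sketch: verify the flash label matches what unRAID's boot script looks for.
# The /dev/disk/by-label/ symlink lookup is case-sensitive: "UnRaid" != "UNRAID".
check_label() {
    # $1 = label, as reported by: blkid -s LABEL -o value /dev/sdX1 (hypothetical device)
    if [ "$1" = "UNRAID" ]; then
        echo "ok"
    else
        # fatlabel is part of dosfstools; run it against the unmounted flash partition
        echo "mismatch: '$1' -- fix with: fatlabel /dev/sdX1 UNRAID"
    fi
}

check_label "UnRaid"   # the label I saw after the in-place update
check_label "UNRAID"   # what the USB Creator writes
```

After relabelling, a reboot should find the stick again without re-imaging.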
  4. It's been suggested that it might be the USB, but that doesn't explain why it boots fine with every other version of unRAID I've used, and even reverting back to 6.6.7 works perfectly. I suspect it's something to do with the USB drivers or something related in 6.7.
  5. This (6.7) is the first unRAID version I've had major issues with. When I try to boot, everything seems to be going fine until it tries to mount the UNRAID-labelled USB to /boot, where I get the following:

     waiting for /dev/disk/by-label/UNRAID (will check for 30 sec)

     It then of course fails, so /boot isn't mounted, no modules etc. are available, and the network doesn't work. For now, I've had to revert back to 6.6.7.
  6. Cool, thanks for that. I will do as you have done and move the .AppleDB files onto the cache SSD. Have you tried using Time Machine over SMB in High Sierra (if you have any Macs running High Sierra)?
  7. As the title suggests, I can no longer mount any AFP shares, and the unRAID logs show errors as follows (full diagnostics also attached to this post):

     Aug 10 13:32:30 wishie cnid_dbd[32310]: Error opening lockfile: No space left on device
     Aug 10 13:32:30 wishie cnid_dbd[32310]: main: fatal db lock error
     Aug 10 13:32:30 wishie cnid_dbd[32310]: Failed to open CNID database for volume "Time Machine"
     Aug 10 13:32:30 wishie cnid_dbd[32310]: delete_db() failed: No space left on device
     Aug 10 13:32:30 wishie cnid_dbd[32310]: reinit_db() failed: No space left on device
     Aug 10 13:32:30 wishie afpd[31924]: read: Connection reset by peer
     Aug 10 13:32:31 wishie cnid_metad[32343]: Multiple attempts to start CNID db daemon for "/mnt/user/Time Machine" failed, wiping the slate clean...

     I can't see any of my disks being anywhere near capacity. The only thing that was close was my docker image, which I've now doubled in size, but I still have the issue. Since I use the AFP share for my Time Machine backups, it's pretty important to me that it continues to work. Apparently you can do Time Machine over SMB with High Sierra, but I've yet to get that working; any help on that would also be appreciated.

     wishie-diagnostics-20180810-1334.zip
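One thing worth checking that I hadn't thought of: "No space left on device" can mean full blocks or exhausted inodes on whichever filesystem holds netatalk's CNID database (.AppleDB), and a disk can look nearly empty while being out of inodes. A quick sketch to check both (the Time Machine share path is my guess at where the database lives; "/" below is just so the sketch runs anywhere):

```shell
#!/bin/sh
# Sketch: report both block and inode usage for the filesystem behind a path,
# since ENOSPC can come from either one.
usage_pct() {
    # prints "<block%> <inode%>" for the filesystem at $1, without the % signs
    b=$(df -P  "$1" | awk 'NR==2 { gsub("%","",$5); print $5 }')
    i=$(df -Pi "$1" | awk 'NR==2 { gsub("%","",$5); print $5 }')
    echo "$b $i"
}

# On the server this would be the share path, e.g. "/mnt/user/Time Machine"
# (assumption); "/" here just makes the sketch runnable as-is.
set -- $(usage_pct /)
echo "blocks: ${1}%  inodes: ${2}%"
if [ "$1" -ge 95 ] 2>/dev/null; then echo "filesystem nearly full"; fi
if [ "$2" -ge 95 ] 2>/dev/null; then echo "inodes nearly exhausted"; fi
```

If the inode column is the one at 100%, deleting small files (or moving .AppleDB to a roomier filesystem) is the fix, not adding disk space.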
  8. Ok, thanks for the input. I ended up disabling the cache for that share and doing the transfer. That reminds me: I should re-enable the cache for that share now.
  9. I am having the same issue. I just upgraded to 6.5.1, so I'll see how I go. Is there any technical explanation of what was happening with 6.5.0? Was it only an issue with this specific container, or with others too?
  10. So I'm about to start copying a fairly large amount of data (around 2TB) to my unRAID server, and it has a 128GB SSD for a cache drive. My question is: should I leave the cache drive enabled? Once it has filled the cache drive, will unRAID automatically start writing directly to the array, or will it error out with a 'disk full' message or something? Or am I better off disabling the cache and writing directly to the array, albeit at a much slower rate?
  11. Yeah, this is going to be based on an i3-7100, which I'm quite sure supports the AES-NI instruction set.
  12. So the best solution would be a modem/router/switch that can run the VPN server itself, I guess. I am going to need to make this remote access as easy for these guys as possible (they are mostly 50+ year old women without much computer experience). For the cloud backups, I'll look into Backblaze and Duplicati. Thanks.
  13. Hi all, I've recently been asked to set up a 'file server' for a small business. They were originally going to use a laptop with an external USB hard disk as their solution. I've talked them into setting up an unRAID machine to store their documents instead, which is good. But, as we know, while unRAID helps with hardware failures, it is NOT a backup. So, I've been thinking about how to also store their data in the cloud. Things like CrashPlan Pro look promising. Are there any other services I should check out?

      Also, another requirement they have just hit me with is remote access to the files on the machine. They want several committee members to be able to connect remotely to open/edit/delete files, (hopefully) using the same username/password system that unRAID already employs across user shares. Is there a handy way to do this, or is a VPN (which will be a pain to set up and configure on the end users' machines) the only real way to go? Any advice and help appreciated.
  14. 51 errors in total. Not too bad.
  15. Well, it looks like I did almost everything wrong, and you all helped me sort it out, again. Thank you very much.
  16. Just over 2 hours in, and there have been 31 corrections. Is it fairly safe to assume these were caused by my xfs_repair on the disk itself?
  17. Running now; I'll check back in when it's finished. Thanks.
  18. So, simply going to the "Main" page, "Array Operation" tab and clicking "Check"?
  19. I tried it on the md device first. It said it was 'mountable' and 'mountable & writable', but then exited with an error:

      xfs_repair -v /dev/md5
      xfs_repair: /dev/md5 contains a mounted filesystem
      xfs_repair: /dev/md5 contains a mounted and writable filesystem
      fatal error -- couldn't initialize XFS library

      So I stopped the array and ran it on the actual device. Nothing ended up in lost+found (there is no lost+found directory on that disk).
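For the record, here's a sketch of the approach I should have taken (assuming disk 5; /dev/md5 only exists once the array has been started in Maintenance mode from Main → Stop → tick Maintenance → Start). Repairing through the md device keeps parity in sync, whereas going straight at /dev/sdd1 bypasses parity, which would explain the sync corrections my later parity check found:

```shell
#!/bin/sh
# Sketch: repair an array disk's XFS filesystem through its md device so
# parity is updated along with the data disk. /dev/md5 = disk 5 (assumption).
DEV=/dev/md5
if [ -b "$DEV" ]; then
    xfs_repair -n "$DEV"   # dry run first: report problems, change nothing
    xfs_repair -v "$DEV"   # actual repair
else
    echo "start the array in Maintenance mode first ($DEV not found)"
fi
```

In Maintenance mode the md devices exist but nothing is mounted, which avoids the "contains a mounted filesystem" fatal error above.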
  20. root@wishie:~# xfs_repair -v /dev/sdd1
      Phase 1 - find and verify superblock...
              - block cache size set to 326328 entries
      Phase 2 - using internal log
              - zero log...
      zero_log: head block 68262 tail block 68262
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
      Phase 5 - rebuild AG headers and trees...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - reset superblock...
      Phase 6 - check inode connectivity...
              - resetting contents of realtime bitmap and summary inodes
              - traversing filesystem ...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...

      XFS_REPAIR Summary    Fri Mar 16 11:39:12 2018

      Phase           Start           End             Duration
      Phase 1:        03/16 11:39:09  03/16 11:39:09
      Phase 2:        03/16 11:39:09  03/16 11:39:10  1 second
      Phase 3:        03/16 11:39:10  03/16 11:39:11  1 second
      Phase 4:        03/16 11:39:11  03/16 11:39:11
      Phase 5:        03/16 11:39:11  03/16 11:39:11
      Phase 6:        03/16 11:39:11  03/16 11:39:11
      Phase 7:        03/16 11:39:11  03/16 11:39:11

      Total run time: 2 seconds
      done
  21. Ok, so rebuild finished, 0 errors. Does this mean I got away with it?!
  22. Yeah, I should have mentioned, check power and sata cables first.. replace if you think you need to. I did all of that, but mine ended up being the power supply not providing nice clean power when all disks spun up for a parity check.
  23. Lots of this in the syslog:

      Mar 10 00:53:14 unRAID kernel: sd 4:0:2:0: [sdd] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
      Mar 10 00:53:14 unRAID kernel: sd 4:0:2:0: [sdd] tag#1 Sense Key : 0x2 [current]
      Mar 10 00:53:14 unRAID kernel: sd 4:0:2:0: [sdd] tag#1 ASC=0x4 ASCQ=0x0
      Mar 10 00:53:14 unRAID kernel: sd 4:0:2:0: [sdd] tag#1 CDB: opcode=0x8a 8a 00 00 00 00 00 74 70 8e 38 00 00 00 08 00 00
      Mar 10 00:53:14 unRAID kernel: print_req_error: I/O error, dev sdd, sector 1953533496

      followed by lots of:

      Mar 10 00:54:06 unRAID kernel: md: disk0 write error, sector=2091053720
      Mar 10 00:54:06 unRAID kernel: md: disk0 write error, sector=2091053728
      Mar 10 00:54:06 unRAID kernel: md: disk0 write error, sector=2091053736
      Mar 10 00:54:06 unRAID kernel: md: disk0 write error, sector=2091053744
      Mar 10 00:54:06 unRAID kernel: md: disk0 write error, sector=2091053752
  24. I'm no expert, but it looks like a bunch of 'drive not ready' errors, followed by a heap of write errors.. I had a similar issue a while back, and in my case, it ended up being a power supply issue.. did your system recently try to do a parity check by chance?
  25. So, back to this:

      mdcmd set invalidslot 5 29

      I get that 'invalidslot 5' likely tells unRAID that disk 5 is 'invalid' and perhaps should be rebuilt, but what does the '29' mean?