wildfire305

Everything posted by wildfire305

  1. @rix Are you in the USA? I have a spare internal SATA or external USB optical drive (choose one) I'd donate and ship to you. DM me if interested.
  2. This happened again when I upgraded the firmware on my router this week.
  3. Did you have success with this? I've tried various mappings with Docker parameters, and I got it to see the other drive, but the directory structure on output was garbage. I tried modding the .sh file and that was still a problem. I've mostly given up on multiple disks and just use something else for Blu-ray (a separate drive). @rix I think all of us need more specific instructions on how to get this to work with something other than sr0. Can you post an example of the container syntax for something like sr5/sg12? I have tried with and without privileged mode, and I have used --device=/dev/sr5:/dev/sr0 --device=/dev/sg12:/dev/sg0 as well as --device=/dev/sr0:/dev/sr5 --device=/dev/sg0:/dev/sg12, and one of them (I can't remember which) worked, but the output folder structure that led to the ripped files was all goofed up. A sketch of the direction I believe is correct follows this post.
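     If it helps anyone else, here is a minimal sketch of the direction Docker expects for --device (host device first, container device second, so the container still sees sr0/sg0). The image name here is just a placeholder, not necessarily the container from this thread:

       # host /dev/sr5 shows up inside the container as /dev/sr0
       # host /dev/sg12 shows up inside the container as /dev/sg0
       docker run -d --name=ripper \
         --device=/dev/sr5:/dev/sr0 \
         --device=/dev/sg12:/dev/sg0 \
         -v /mnt/user/rips:/output \
         some/ripper-image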
  4. I've been having some trouble with my UniFi UDM router running out of RAM and locking up since the December update. Strangely, if it reboots, sometimes it causes the Unraid server to also reboot - a dirty shutdown. I intentionally rebooted the router this morning because that temporarily solves the lockup issue for a few days. When I rebooted the router, Unraid kicked some errors into the log and also rebooted. Does this seem like a hardware or software issue? Logs attached, server description in signature. Below are some lines from the syslog that may not show up in diagnostics: cvg02-diagnostics-20220208-1503.zip
  5. Worked like a charm. Thanks! I'm assuming that since the UUID was assigned while the disk was in the array, and I had removed the disk from the array, the system was playing it safe by not letting the disk back in. Running a diff between it and the new disk 3 now for a "peace of mind" comparison.
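     For anyone curious, the comparison was just a recursive diff between the two mount points (the paths here are examples from my setup; yours will differ):

       # -r recurses into directories, -q only reports files that differ
       diff -rq /mnt/old3 /mnt/disk3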
  6. I found a clue: Jan 23 02:05:52 CVG02 kernel: XFS (sdl1): Filesystem has duplicate UUID d0ba1183-2aa6-432f-9ae6-1c029498de77 - can't mount. I assume I need to generate a new UUID somehow, since it was an array disk? Can anyone show me how to do that? Something like this: https://www.tecmint.com/change-uuid-of-partition-in-linux/ ?
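     For reference, the approach from the linked article applied to my case would look like this (the device name is from my logs; the filesystem must be unmounted first):

       # write a fresh random UUID to the unmounted XFS filesystem
       xfs_admin -U generate /dev/sdl1
       # confirm the new UUID
       blkid /dev/sdl1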
  7. I'm going to preface this by saying "I have no concern for data loss - I am backed up three ways." So please have no concern for my data; I came here to be educated on this topic. Storytime: I had a bad SFF-8087-to-4x-SATA cable that caused a bunch of UDMA CRC errors with a disk. I diagnosed it by swapping the cable to another disk and watching it build up CRC errors during a parity check. I pulled the CRC-failed disk out of caution and started rebuilding the removed disk onto a tested spare. My curiosity asked, "Hey, I wonder if I could mount this pulled Unraid disk in another computer if I had to recover data?" Sure enough, I plugged it into a Fedora VM on my laptop using a USB enclosure, and it was seen immediately and automatically mounted. I tested my skills unmounting it, running xfs_repair to check it, and so forth. I plugged it into an OMV 5 server and was able to mount it and read the directory structure just fine as well. Then I put it back in the Unraid server and attempted to mount it using Unassigned Devices - NO LOVE. Dropped into the terminal and tried manually - unsuccessful; the error was "wrong fs type". I tried again specifying -t xfs - still NO LOVE, same error. Back in the Fedora VM it works perfectly fine; I can mount it all day long. I ran some par2 checks on the stored data and it was fine. I'm going to save the disk from further testing until its replacement is rebuilt in the server. Can anyone tell me why I can't get it to mount in Unraid? I used "mount /dev/sdl1 /mnt/old3" and "mount -t xfs /dev/sdl1 /mnt/old3" and had no success - both gave me the wrong-filesystem error message. (See the workaround sketch after this post.)
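     In hindsight, given the duplicate-UUID kernel message in the post above, a workaround that should have let it mount is XFS's nouuid option, which skips the UUID check for a single mount:

       # ignore the duplicate-UUID check for this one mount
       mount -t xfs -o nouuid /dev/sdl1 /mnt/old3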
  8. Understood, that makes sense for shares. Is the minimum disk space only changeable if the array is stopped?
  9. I have no concerns about data loss on this dvr disk, but it is concerning that I cannot set the minimum free space. Have I found another bug?
  10. Minimum free space is also unchangeable on my main cache pool. Both pools are btrfs. There is one share set to only use the DVR cache pool (with one disk in it), and I cannot change the minimum free space there either. All of my other shares, whether they use the main cache pool or go directly to disk, can be changed, and I have set them all previously.
  11. Both minimum free space for the share and the disk are greyed out and not changeable.
  12. Would rebooting count towards this? If so, I've rebooted about four or five times since I made the setting. I looked at my notification logs (slack app) and it also notified me at 71% and every percent past 90. So the zero setting doesn't work as the note mentions. This is also a pool device and not part of the main storage array. I used the pool instead of unassigned devices so I could have more control over sharing.
  13. Well, setting it to 100 worked to effectively disable the notifications. It updated to say that everything was fine. Then setting it back to 0 caused it to send out disk-full warning notifications again. I'm going to leave it at 100, but perhaps the mouseover instructions could be updated to reflect that, if this isn't considered a bug to be fixed.
  14. I will try that and report back. I'm waiting on a 7TB file copy at the moment - the first backup onto new media.
  15. I have a one disk pool that I am using for the DVR for my security cameras. I expect this disk to always be full as the security camera software is managing the use of the disk. So, I set the notifications for this pool to zero as specified by the popup help. Despite this, I am still receiving notifications that the disk is approaching full. Have I done something wrong or missed an additional setting, or is this a genuine bug? Please see attached screenshot and diagnostics (I know I have a paused parity check, I accidentally rebooted with a terminal session open). cvg02-diagnostics-20220109-1339.zip
  16. https://www.pc-pitstop.com/scsat84xb If unfamiliar with serial-attached storage (SAS), keep these two terms in mind: SFF-8087 = internal 6 Gb/s connections, SFF-8088 = external 6 Gb/s connections. I also bought a Dell PERC H310 HBA RAID controller flashed to IT mode (JBOD with no RAID) from eBay (I would suggest buying a different model with external ports; I converted mine). I used the HBA controller and two SFF-8087 cables to reach an SFF-8087-to-SFF-8088 adapter in a PCI slot (this gave the H310 two external SFF-8088 ports), then connected the 9-bay box with two SFF-8088 cables. When purchasing the box, I specified that I wanted it set up with SFF-8087-to-4x-SATA cables and an 8088-to-8087 adapter. In the end I get one 6 Gb/s channel direct to each disk. The Dell HBA is a PCIe x8 card that provides 8 channels to the memory and CPU. My disk performance doing a parity check is almost 1 GB/s spanned across five data disks and one parity - about 180 MB/s per disk (6 x 180 MB/s is roughly 1.08 GB/s). Prior to that I had a two-port eSATA PCIe x4 port-multiplier card and two 4-disk boxes connected by eSATA - performance was 80 MB/s per disk doing a parity check. The port-multiplier card worked and was mostly reliable unless I put a Seagate-brand drive in the enclosure and didn't disable NCQ. Prior to that I had the same boxes connected over USB 3.0, with 40 MB/s per-disk performance, from my old StableBit DrivePool Windows server. In the last four years of upgrades, I have gained greater performance out of the same drives by changing my operating system and connection method. Let me know if you have any additional questions about the setup.
  17. I converted everything to SAS using an enclosure from pc-pitstop. Then I was able to retire the mediasonic enclosures.
  18. SANTA DELIVERED! I converted everything to an external SAS enclosure, and while I now get around 160 MB/s on a parity check across six disks, it didn't resolve the issue with the disks not staying asleep. However, after doing some more digging, I used the inotifywait command to watch what was actually going on with the disks. Problems found:
      1. I had mistakenly put a syslog server on the array instead of the SSD cache.
      2. I had configured all eight Windows computers on the network to use File History (the W10 built-in backup) to back up to the array. I had it landing on the cache, but I failed to realize that Windows would compare existing files in the backup, and that was waking the disks. I moved those shares completely to cache - they were already being backed up to the cloud daily.
      3. I re-configured a lot of daily server-maintenance tasks to take place within the same two-hour window.
      4. I temporarily stopped all VMs and Dockers and the automatic part of the File Integrity plugin.
      I'll see if I find more, but the point of this post was to mention that the inotifywait command let me quickly and easily see what was going on. The syntax I used was: inotifywait -r -m /mnt/diskx Be patient - sometimes that command took a while (several minutes) to start.
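     A slightly more informative variant of the same command (the disk path is an example; point it at whichever disk won't sleep):

       # -r recurse, -m monitor indefinitely; print time, path, and event for each access
       inotifywait -r -m --timefmt '%F %T' --format '%T %w%f %e' /mnt/disk1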
  19. I can confirm that the log webUI is working
  20. All that being said, you could create a Linux virtual machine with a mapping to the array share that you want to back up and pass it to the urbackup server. Be careful to exclude your backup location or you'll create a loop.
  21. Rsync can also be made to display all the desired statuses and send reports of the completed task with the proper syntax. You can send the output of the program to another program to do whatever you desire. Admittedly, I've not used it to send reports, because I haven't had the time to dig that deep into the syntax. However, whenever I need to copy a bunch of files or sync two folders (my offline backup system), it's my go-to tool, and probably for most of us here. It's a lot more powerful than cp and mv.
  22. No, urbackup is for supporting your users with backups of their systems. I have 9 systems backing up, and it has saved my bacon on a rogue Windows update that caused a system not to boot. If you want to back up your server, I would suggest the Duplicati docker. I use it to send encrypted backups to Backblaze B2 on a daily schedule. You could also use the luckyBackup docker (it's a GUI front-end for rsync). You can also script rsync as a cron task using the User Scripts plugin to make incremental automated backups to a backup location; it's a lot more powerful than a copy utility. Using rsync, rclone, or rsnapshot with a script is more difficult because of learning all the syntax, but at the same time much simpler than using a docker and/or GUI to accomplish the same task. A sketch of the rsync approach is below.
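     A minimal sketch of what such a User Scripts cron task could look like (all paths are placeholders; adjust to your shares):

       #!/bin/bash
       # mirror the share; anything deleted or overwritten is kept in a dated
       # folder, which gives you simple incremental history
       rsync -a --delete \
         --backup --backup-dir="/mnt/disks/backup/history/$(date +%F)" \
         /mnt/user/documents/ /mnt/disks/backup/documents/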
  23. Unless I misunderstood you, are you talking about trying to restore to a client?
  24. You can only use this software to back up clients. You're looking for something completely different, like rsync or rclone.