
Posts posted by wildfire305

  1. Sounds like you may need to look at the allocated path for the output directory on the settings page and confirm it is correct. I noticed that I have privileged turned on for mine, although I am not sure why I have that set. Perhaps I had the same issue as you and that fixed it. Correct me if I am wrong, but I think "privileged" gives the Docker container root-level access to the file system.

  2. Santa Claus might be helping me fix that this year. I'll update this thread if he does. Those eSATA enclosures have weird issues that waste a lot of my time. I added a Seagate drive once and so many errors popped up under load. Turning off NCQ fixed that problem, but speed suffered dramatically. I removed the Seagate and turned NCQ back on. That Seagate tested perfectly fine on a motherboard SATA port.
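
    In case it helps anyone searching later, this is roughly how NCQ can be toggled per drive from the terminal; sdX is a placeholder for the actual device, and I believe Unraid also exposes a global NCQ tunable under Settings > Disk Settings:

    # Check the current queue depth for a drive (sdX is a placeholder):
    cat /sys/block/sdX/device/queue_depth

    # A depth of 1 effectively disables NCQ for that drive only, instead of
    # turning it off array-wide:
    echo 1 > /sys/block/sdX/device/queue_depth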

  3. I deleted it (Krusader on disk 1), shut down all Dockers and all VMs, and turned off automatic file integrity checks. Still, all the disks pop awake about a minute after being spun down. Shfs is the thing that shows up in iotop. Where else can I look to try to find the culprit? I used to have the file caching plugin a while back, but I noticed that it was spending too much time scanning my directories and, ironically, it too was keeping the disks awake. The disks used to sleep a while ago. Would the eSATA enclosures have anything to do with the issue? I did move the other half of the disks into them a while ago, around the time I noticed the array wasn't staying spun down anymore.
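
    For my own notes, a couple of commands I've seen suggested for tracking down what touches a spun-down disk; /mnt/diskN and SomeShare are placeholders, and inotifywait comes from inotify-tools, which I think has to be installed separately on Unraid:

    # List the processes currently holding files open on a specific disk:
    fuser -vm /mnt/diskN

    # Or watch for accesses in real time; recursive watches on a whole disk
    # can take a while to set up, so aim it at a suspect share if possible:
    inotifywait -m -r -e open,access,modify /mnt/diskN/SomeShare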

  4. root@CVG02:~# du -h -d 1 /mnt/cache/appdata
    5.0G    /mnt/cache/appdata/Plex-Media-Server
    13M     /mnt/cache/appdata/krusader
    0       /mnt/cache/appdata/jsdos
    0       /mnt/cache/appdata/Shinobi
    5.2M    /mnt/cache/appdata/DiskSpeed
    464K    /mnt/cache/appdata/QDirStat
    0       /mnt/cache/appdata/MotionEye
    2.7M    /mnt/cache/appdata/HandBrake
    0       /mnt/cache/appdata/photostructure
    7.1G    /mnt/cache/appdata/duplicati
    12K     /mnt/cache/appdata/transmission
    812K    /mnt/cache/appdata/luckybackup
    60K     /mnt/cache/appdata/filebrowser
    111M    /mnt/cache/appdata/crushftp
    17M     /mnt/cache/appdata/ripper
    41G     /mnt/cache/appdata/photoprism
    98M     /mnt/cache/appdata/JDownloader2
    1.8G    /mnt/cache/appdata/mariadb-official
    63M     /mnt/cache/appdata/sabnzbd
    228K    /mnt/cache/appdata/MakeMKV
    100M    /mnt/cache/appdata/firefox
    3.8M    /mnt/cache/appdata/nzbhydra2
    6.3G    /mnt/cache/appdata/binhex-urbackup
    51M     /mnt/cache/appdata/LibreELEC
    348M    /mnt/cache/appdata/digikam
    208M    /mnt/cache/appdata/tdarr
    596K    /mnt/cache/appdata/beets
    595M    /mnt/cache/appdata/FoldingAtHome
    16K     /mnt/cache/appdata/NoIp
    333M    /mnt/cache/appdata/clamav
    18G     /mnt/cache/appdata/jellyfin
    0       /mnt/cache/appdata/PlexMediaServer
    184M    /mnt/cache/appdata/mysql
    13M     /mnt/cache/appdata/vm_custom_icons
    81G     /mnt/cache/appdata

  5. 2021-12-04 17:08:28,933 DEBG 'urbackup' stdout output:
    ERROR: Image mounting failed: Loading FUSE kernel module...
    modprobe: FATAL: Module fuse not found in directory /lib/modules/5.10.28-Unraid
    Starting VHD background process...
    Waiting for background process to become available...
    Timeout while waiting for background process. Please see logfile (/var/log/urbackup-fuse.log) for error details.
    UrBackup mount process returned non-zero return code
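
    If it helps frame the error: that log is the container failing to load the fuse kernel module against the Unraid kernel. The usual pattern I've seen for FUSE inside Docker is to load fuse on the host and pass the device through in the template's Extra Parameters; a sketch only, and I don't know whether this particular image supports it:

    # On the Unraid host (fuse may already be built into the kernel):
    modprobe fuse

    # Then in the container template's Extra Parameters, pass the device and
    # the capability FUSE mounts typically need (exact needs vary by image):
    --device=/dev/fuse --cap-add=SYS_ADMIN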
     

  6. Urbackup has a mount ability for the images built into the web GUI. However, because I get "TEST FAILED: guestmount is missing (libguestfs-tools)", that feature doesn't work. Is it safe to manually install libguestfs-tools from the docker's console, or is this something Binhex can add to the image? I also don't currently know how to install libguestfs-tools, but I'm sure it's something simple like sudo apt get libguest blah blah blah.
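
    For reference, something like the following is what I had in mind; the package names are a guess on my part, and anything installed by hand inside the container will be lost the next time the image updates:

    # On a Debian/Ubuntu-based image:
    apt-get update && apt-get install -y libguestfs-tools

    # On an Arch-based image (binhex containers are Arch-based as far as I
    # know), assuming the package is available in the configured repos:
    pacman -Sy libguestfs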

  7. On 9/11/2021 at 2:07 AM, matty2k said:

    Is it possible to define unassigned external USB device as backup target?

    Would like to backup some user shares to external USB HDD.

    regards

    You could set the mount point of the external disk as a path in the container and point the backup storage path in the urbackup settings at that container path.
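
    To sketch what I mean (the host path is a placeholder for wherever Unassigned Devices mounts the drive):

    # Equivalent docker run flag for the extra path mapping in the template;
    # /mnt/disks/usb_backup is hypothetical:
    -v /mnt/disks/usb_backup:/backups:rw,slave

    # Then point the backup storage path in the urbackup settings at the
    # container side of the mapping, i.e. /backups.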

  8. I have file integrity set to generate automatically, and it seems to keep up on a daily basis. The hashes are stored as metadata in the filesystem (if I understand the process correctly). Checking the export, if done after a build and export, should verify the hashes; mine performs thousands of checks when I run it. I also maintain separate hash catalogs and par2 files for the really, really important data. You could be safe with par2 alone, as it generates hashes too. I'm really surprised that more people aren't using par2 as part of an action plan for corruption when restoring from backup. Obviously this is only practical for archival data, not constantly modified data.
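
    For anyone curious, the par2 workflow I use looks roughly like this (par2cmdline syntax; the paths and redundancy level are just examples):

    # Create checksums plus 10% recovery data for an archival folder:
    cd /mnt/user/archive/photos2021
    par2 create -r10 photos2021.par2 *

    # Later, verify the files against the stored hashes:
    par2 verify photos2021.par2

    # If anything has rotted, attempt a repair from the recovery blocks:
    par2 repair photos2021.par2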

  9. I can pass escape characters to ddrescue and it works fine in the terminal, so the problem has something to do with the ripper.sh script. What I don't know is why it fails when the script passes $ISOPATH to it for a volume with spaces in its name. I'm not good enough in Linux to diagnose or fix that yet, and I don't know what I need to change in the script to make that work. I would like to have it replace the spaces with underscores, but I don't know the syntax to get that done.
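
    To be concrete about the syntax I'm after, something like this is what I mean; treat it as a sketch, since the actual command line in ripper.sh may look different:

    # Option 1: quote the variable so a name with spaces stays one argument:
    ddrescue /dev/sr0 "$ISOPATH"

    # Option 2: replace the spaces with underscores first
    # (bash parameter expansion), then still use it quoted:
    ISOPATH="${ISOPATH// /_}"
    ddrescue /dev/sr0 "$ISOPATH"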

  10. In attempting to solve this, my Google search may have revealed that ddrescue struggles with spaces in volume names. That isn't a problem with Ripper itself; it might be a problem with ddrescue. The data discs I was trying to rip did indeed have spaces in their volume names. How do I overcome this?

     

    Edit: maybe "struggles" isn't the right word. It's something to do with the escape characters, like "file\ with\ spaces\ in\ it".

    Maybe the ripper script is feeding ddrescue the volume name without escape characters, or ddrescue just doesn't like the spaces.

    I think I've identified the problem, but I don't know how to solve it.
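
    To make the diagnosis concrete, here is a quick sketch of how an unquoted variable with spaces behaves in a shell; VOLNAME is hypothetical, not the actual variable name in ripper.sh:

    # A disc labelled "MY DATA DISC" turns into three separate words when the
    # variable is expanded without quotes, so ddrescue sees extra arguments --
    # which would explain the "ddrescue: Too many files" line in Ripper.log:
    VOLNAME="MY DATA DISC"
    ddrescue /dev/sr0 /out/$VOLNAME.iso     # expands to several arguments
    ddrescue /dev/sr0 "/out/$VOLNAME.iso"   # quoted: a single output path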

     

  11. I'm having an unusual problem. This is the first time I'm trying to use the Ripper docker to rip an ISO; I've already used it successfully to rip audio, DVDs, and Blu-rays. When I try to rip an ISO (insert a data disc):

    The Unraid docker log shows it loading the disc

    then it "looks like it works"

    then it says it completed successfully

    then it ejects the disc

    Upon inspection of the output path, the folder structure for the disc is created in ../DATA, but no ISO or files are created in the folder.

    Inspection of the Ripper.log reveals the line: ddrescue: Too many files

     

    I opened ripper.sh in nano to inspect the command. If I manually type ddrescue /dev/sr0 iso.iso in my SSH Unraid terminal, it works and creates an ISO from the disc.

     

    What do I need to do to correct the error? Multiple data discs were tried with the same result. I am a bit of a noob, so please go easy on me; I might be missing something obvious.

  12. 11 minutes ago, JorgeB said:

    For raid0 you can't use the GUI, you can do it manually, if interested I can post the instructions.

    I would greatly appreciate it. I'm a noob, but not scared of the command line; I just grew up on the other side of the tracks with Commodore and DOS.

    I had played with the btrfs command line tools on some spare disks in a USB enclosure a little while ago, but I don't remember much. I really like that filesystem.

  13. Hopefully, this will be a quick one. I have a four-drive SSD cache pool in btrfs raid0. I want to remove one drive because it overheats and performs poorly compared to the other three. The pool is only 25% full, and I'm using Unraid 6.9.2. I'm not sure if this is a super easy GUI task or if I need to run a btrfs command from the terminal first. I read the post from 2020 about this, but I think that was for an older Unraid version and the documentation needs to be updated.

     

    Is the following procedure correct?

    1. Disable Docker and VMs (they run from said pool)

    2. Stop the array

    3. Disable the disk to be removed in the GUI

    4. Start the array

    5. Allow the balance to complete (moving the data off the removed disk automagically)

    6. Stop the array

    7. Remove the fourth disk slot from the pool

    8. Start the array

    9. Re-enable Docker and VMs

     

    My fear is that the programming may not be there to handle automated removal in the raid0 case; if not, my procedure would lead to data loss or corruption. I have a backup of all the data on the cache, so I'm not worried about that.
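
    In case it helps frame the question, this is roughly the manual route I was imagining; the device and mount point are placeholders, and I'll wait for the real instructions before touching anything:

    # Check current usage and profiles first:
    btrfs filesystem usage /mnt/cache

    # Remove the overheating SSD; btrfs relocates its chunks onto the
    # remaining drives before releasing it, which can take a while:
    btrfs device remove /dev/sdX1 /mnt/cache

    # Confirm the pool now shows three devices:
    btrfs filesystem show /mnt/cache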
