
Community Reputation

2 Neutral

About Energen

  • Rank
    Advanced Member


  1. For whatever it's worth, the SMART attributes said the drive failed ("SMART overall-health: Failed"), and when I tried to preclear the drive for removal it failed the preclear/erase as well. I've already opened a warranty claim to RMA it just to be safe, but I won't use the replacement for anything critical.
  2. So I haven't had any more issues since removing this cache drive... it seems it was the root of all the problems! I lost my VMs since I couldn't move the files over, but there was nothing essential there, and the dockers reinstalled with no major issues. My cache drive was a 7-8 month old Mushkin SSD... I guess it didn't work out too well. I'll eventually look to replace it. Thanks for the help.
  3. Ok, will try that first. Thanks for the help. I'll try to move any files off the cache drive, remove it from the array, and go from there.
  4. What about the read-only shares, though? The cache is trying to write to the shares, yet I can't find anywhere they could be set to read-only, or any reason why they would have been set that way. Appdata is somehow read-only as well. Those are my two biggest problems.
  5. I've been experiencing a number of problems within the last week or so that all seemingly started out of nowhere... last version upgrade, maybe? The GUI/server essentially crashed for some unknown reason, which was fine after a reboot, but I rebooted again last night to try to resolve some issues and ended up in a boot loop because the USB drive was not detected, or something. Got that resolved after a hard reset. I also had a number of warnings about a drive or two with read errors, yet all drives pass all checks.

     Currently my biggest problem is that some shares are read-only even though read-only was never set on any share, and again this started out of nowhere. I ran Docker Safe New Perms to go through everything and reset permissions, but I still have read-only shares. I have a number of "some or all files are unprotected" warnings on the Shares list because of these read-only issues. The Mover gets jammed up in the log: "UNRAID move: move: create_parent: /mnt/disk8/appdata error: Read-only file system". Fix Common Problems is currently giving me these two errors: "Unable to write to cache: Drive mounted read-only or completely full" and "Unable to write to Docker Image: Docker Image either full or corrupted".

     What the hell is going on here? Last week, when I was having read errors, I put the array into maintenance mode and scanned all the drives; no errors were reported, the array restarted fine, and I didn't have any problems (known ones, anyway) until now. My system log has a bunch of bad-looking stuff in it... is this all included in the diagnostics zip, if it would help to figure anything out?
     Aug 21 23:20:07 UNRAID dhcpcd[1795]: br0: failed to renew DHCP, rebinding
     Aug 21 23:30:45 UNRAID kernel: BTRFS error (device sdl1): parent transid verify failed on 857849856 wanted 13396518 found 13393366
     Aug 21 23:30:45 UNRAID kernel: BTRFS: error (device sdl1) in btrfs_run_delayed_refs:2935: errno=-5 IO failure
     Aug 21 23:30:45 UNRAID kernel: BTRFS info (device sdl1): forced readonly
     Aug 21 23:30:45 UNRAID kernel: print_req_error: I/O error, dev loop2, sector 0
     Aug 21 23:30:45 UNRAID kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 1, rd 0, flush 1, corrupt 0, gen 0
     Aug 21 23:30:45 UNRAID kernel: BTRFS warning (device loop2): chunk 13631488 missing 1 devices, max tolerance is 0 for writeable mount
     Aug 21 23:30:45 UNRAID kernel: BTRFS: error (device loop2) in write_all_supers:3716: errno=-5 IO failure (errors while submitting device barriers.)
     Aug 21 23:30:45 UNRAID kernel: BTRFS info (device loop2): forced readonly
     Aug 21 23:30:45 UNRAID kernel: BTRFS: error (device loop2) in btrfs_sync_log:3168: errno=-5 IO failure
     Aug 21 23:30:45 UNRAID kernel: loop: Write error at byte offset 17977344, length 4096.
     Aug 21 23:30:45 UNRAID kernel: print_req_error: I/O error, dev loop2, sector 35112
     Aug 21 23:30:45 UNRAID kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 2, rd 0, flush 1, corrupt 0, gen 0
     Aug 21 23:30:45 UNRAID kernel: BTRFS error (device loop2): pending csums is 12288
     Aug 21 23:30:45 UNRAID kernel: BTRFS error (device sdl1): pending csums is 4096
     Aug 21 23:30:47 UNRAID kernel: BTRFS warning (device sdl1): csum failed root 5 ino 4631484 off 131072 csum 0x1079e3d3 expected csum 0x73901347 mirror 1
     Aug 21 23:30:47 UNRAID kernel: BTRFS warning (device sdl1): csum failed root 5 ino 4631484 off 262144 csum 0xafa74aad expected csum 0xfa3d3f16 mirror 1

     So, one thing at a time: how do I fix the read-only issues? Thanks.
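For anyone who lands here with the same symptoms: the "forced readonly" mounts in a log like the one above can be confirmed from the server's console. A minimal sketch, assuming a Linux shell on the box (the exact mount points involved, such as /mnt/cache, are not confirmed here):

```shell
# List every filesystem currently mounted read-only by checking the
# mount-options field (column 4) of /proc/mounts for the "ro" flag.
awk '$4 ~ /(^|,)ro(,|$)/ {print $2 " (" $1 ") is mounted read-only"}' /proc/mounts
```

Seeing the cache device or the docker loop device in that list matches the "forced readonly" kernel messages; the filesystem then needs repair (or, as it turned out in this thread, the drive replaced) before the shares become writable again. Back up what you can first.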
  6. I was interested in playing around with any generation of one to see if I could make my own digital picture frame that performs better than my Nixplay digital frame does, but even if I managed to get it all together and mount an LCD in a not-ugly frame of some type, the software would be my issue... Ideally, I would want something that displays photos from an Unraid share, or from Google Photos, in a random order, maybe in some sort of configurable way, since there are a LOT of photos. Not sure what else I would use a Pi for... that's just something I've thought about for a while. I'll probably never do it.
  7. I had the same problem and tried a bunch of things, none of which worked. Then I gave up. I was bored this morning after waking up at 2AM and decided to give it another try. Last time, the most success I had was using the Native PC image instead of the VMware image, but I still had an issue trying to boot it and gave up on that as well. This time I did some more research, tried a few more things... and seem to have it working.

     You will need some Linux tools; I had a Debian VM installed already, so I used that. Download the DietPi Native PC (BIOS/CSM) image. If you try to create an Unraid VM with that image, you will eventually get into a loop of not being able to download updates because of no available free space (that's the hint). I used the info provided here to resize the image: https://fatmin.com/2016/12/20/how-to-resize-a-qcow2-image-and-filesystem-with-virt-resize/

     First, use qemu-img to show info about the image:

         qemu-img info DietPi_NativePC_BIOS-x86_64-Stretch.img

     It displays the disk size as 602M... not enough to be usable. I don't know what the minimum size should be; I added entirely too much at 30GB, but I'm just testing at this point so I don't care. Then grow the image, keep a copy of the original, and expand the filesystem into the new space:

         qemu-img resize DietPi_NativePC_BIOS-x86_64-Stretch.img +30G
         cp DietPi_NativePC_BIOS-x86_64-Stretch.img DietPi_NativePC_BIOS-x86_64-Stretch-orig.img
         sudo virt-resize -expand /dev/sda1 DietPi_NativePC_BIOS-x86_64-Stretch-orig.img DietPi_NativePC_BIOS-x86_64-Stretch.img

     (I had to use sudo since I wasn't logged in as root.) Now you can take that new, resized DietPi image, use it as your Unraid VM hard drive, and install DietPi. To save you some time and effort, here's a fresh 4GB image that can be used for Unraid: DietPi_NativePC-BIOS-x86_64-Stretch-4GB-UNRAID.7z - 121.7 MB
  8. I installed this docker for a couple of minutes last night and just didn't seem to get it working correctly, but I jumped in fast and didn't do much research, so that's on me. Having things run as a docker might be easier and more convenient, but what I ended up doing was installing a Debian VM and installing Pi-hole that way, and it seems to be working pretty well so far. I wonder, though: somewhere in the 25 pages of this thread, has anyone asked about DNS fallback if Unraid/docker/pihole is down? Is it just a matter of setting a secondary DNS server on the router for when it can't communicate with Pi-hole? Now on to looking for something else to do with the server......
  9. This tool has been pretty useful to me, so I'll continue to use it. Thanks for your efforts in keeping it updated and always useful. Question: would it be possible to modify it so that, even after doing all those steps, it detects whether the action is going to delete the entire array (or just do something very bad), displays a severe warning, and requires the action to be explicitly allowed? Might it be as easy as checking the path selected for deletion and making sure the path is a folder and not the whole share?
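Something like this hypothetical check could do it. A sketch, assuming the tool gets the delete target as a plain /mnt/user/<share>/... path; the function name and the depth rule are mine for illustration, not the plugin's actual code:

```shell
# Hypothetical guard: block a delete when the chosen path is a whole share
# (/mnt/user/<share>) or higher, and allow only folders inside a share.
is_dangerous_delete() {
  local p="${1%/}"                 # normalize: drop a trailing slash
  case "$p" in
    /mnt/user/*/*) return 1 ;;     # a folder inside a share: allowed
    *)             return 0 ;;     # a whole share (or higher): block
  esac
}

is_dangerous_delete "/mnt/user/Media" && echo "BLOCK: refusing to delete a whole share"
is_dangerous_delete "/mnt/user/Media/Movies" || echo "OK: subfolder delete allowed"
```

The warning dialog would then only need to fire when the guard returns "block", instead of treating every delete the same.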
  10. Ah I see, so if I set them both to /mnt/user or /mnt/user/Media, as previously mentioned, those would be considered the same mount points? Doing that then makes my movie file paths /movies/Movies/<name>? So when I change the existing movie's file path, it doesn't register the existing file. All in all, it seems to break everything. If Sonarr works fine, why does Radarr work so poorly? The Radarr devs shouldn't make such drastic changes to simple things. Not sure where to go from here; maybe start over completely.

      Edit: this makes no sense to me at all. Maybe that's my problem, but why is this so complicated? If I change Radarr's /movies path to /mnt/user/Media/Movies, the movie path is correct in Radarr... but even though the media file is there, it's not detected and says missing. Also, with this mapping (and no /downloads), I can't import from any other folder. If I map both a /downloads and a /movies, then it copies files instead of moving them quickly. What is the proper way to configure this? No way seems to be the correct way.
  11. I just started playing with Radarr and have everything working ok-enough... except for imports. My Radarr is also copying files rather than moving them, and I can't figure out why. Everything is mapped on the same share, so it "shouldn't" be an issue of mount points. And the mounts are essentially the same as they are with Sonarr, which works perfectly fine. I've tried toggling the use-hardlinks setting in Radarr with no difference. What am I missing here? Also, any ideas on why Radarr is not detecting some movie files and keeps marking them as missing? The files are there, but they are not scanned.
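If it helps anyone later: containers can only hardlink (instant-move) between two folders when both sit inside one container mapping, so they are a single filesystem from the container's point of view. A sketch of that mapping as a config fragment; the share name, mount name, and folder layout below are illustrative assumptions, not taken from this post:

```shell
# Single host mapping so downloads and the library share one filesystem
# inside the container (names are illustrative assumptions):
#   host /mnt/user/Media  ->  container /data
docker run -d --name=radarr \
  -v /mnt/user/Media:/data \
  linuxserver/radarr
# In Radarr, set root folder = /data/Movies and the download client path
# = /data/downloads. Two separate -v mappings (/downloads and /movies)
# look like two filesystems to the container, which forces a slow
# copy+delete instead of an instant move.
```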
  12. Excellent, thanks for the confirmation. My next task will be to create a cache pool, just for fun. What I didn't mention was that I actually put two SSDs in, but one of them was from a PS4 and apparently I can't get it recognized by the system to format it... either that, or I forgot to plug something in. A problem for another day, when I want to tinker with it.
  13. Added a 500GB SSD cache drive after roughly a year or so of having my server running... I didn't need to add the cache drive, it was just something to do to scratch my build-something itch... a simple SATA card and drive addition cured me for a little bit. Using the various threads and guides that I found, I'm pretty sure I have everything set the way it should be: for example, the appdata, domains, and system shares are set to Prefer the cache disk, and other shares are set to Yes for using the cache disk. All seems well; I haven't noticed anything going wrong.

      But now what? Am I supposed to make any other changes to existing dockers/VMs? How about going forward? I didn't see any 'how-to usage' of the cache drive mentioned anywhere, but I might have missed it. In essence, what I am asking is: for any existing or new dockers and/or VMs, do I use the cache disk as the location for anything, or do I continue to use the normal filesystem mappings and let Unraid handle the cache drive usage on its own? Do I point a VM's hard drive (new or existing) at /mnt/cache/domains/SomeVM and it's then mirrored to the array, or do I still map it to /mnt/user/domains/SomeVM and the cache drive is used automatically, kind of like a symbolic link? Similarly, if I'm sending data to a share that has use cache disk = Yes, for example a downloads folder, do I write to /mnt/cache/downloads or to /mnt/user/downloads? I'm sure this is covered somewhere, but I couldn't find anything on this topic. Point me in the right direction. Thanks for indulging my noob question.
  14. The only thing that seems to stand out is in Sync... the default folder location was something like /sync/Resilio Sync or /root/Resilio Sync, although neither of those is used; the shared folders are saved on shares. Those seem to be the only paths that aren't mapped to a share... could that be the issue? I read the FAQ, but except for these, everything seems to be mapped correctly... and for the most part, nothing else would be downloading enough external data to fill up a multi-GB docker image. I might delete all the Sync containers and start over anyway, as I seem to have forgotten the GUI password for one of them, lol. I have 3 Sync containers for remote and local sync stuff... I wish there were a better way to do it, or that it could all be done through one.