Energen

Everything posted by Energen

  1. For whatever it's worth, the SMART attributes said the drive failed... "SMART overall-health: Failed"... and when I tried to preclear the drive for removal, it failed the preclear/erase as well. I've already opened a warranty claim to RMA it just to be safe, but I won't use the replacement for anything critical.
  2. So I haven't had any more issues since removing this cache drive.... it seems it was the root of all problems! Lost my VMs since I couldn't move over the files but there was nothing essential there, and dockers reinstalled with no major issues. My cache drive was a 7-8 month old Mushkin SSD.... I guess it didn't work out too well. I'll eventually look to replace that. Thanks for the help.
  3. Ok will try that first. Thanks for the help. I'll try to move any files off the cache drive and remove it from the array and go from there.
  4. What about the read only shares though? The cache is trying to write to the shares, yet I can't find anywhere that they could be set to read only, or any reason why they would have been set to read only. Appdata is read only also, somehow. Those are my two biggest problems.
  5. I've been experiencing a number of problems within the last week or so that all seemingly started out of nowhere... last version upgrade, maybe? The GUI/server essentially crashed for some unknown reason, which was fine after a reboot, but I rebooted again last night to try to resolve some issues and ended up in a boot loop because the USB drive was not detected, or something; got that resolved after a hard reset. I had a number of warnings about a drive or two with read errors, yet all drives pass all checks. Currently my biggest problem is that some shares are read only even though read only was never set on any shares, and again this started out of nowhere. I ran Docker Safe New Perms to go through everything and reset permissions, but I still have read only shares, and I have a number of "some or all files are unprotected" warnings on the Shares list because of these read only issues. The Mover gets jammed up in the log: "UNRAID move: move: create_parent: /mnt/disk8/appdata error: Read-only file system". Fix Common Problems is currently giving me these two errors: "Unable to write to cache: drive mounted read-only or completely full" and "Unable to write to Docker image: Docker image either full or corrupted". What the hell is going on here? Last week, when I was having read errors, I put the array into maintenance mode and scanned all the drives; no errors were reported, the array restarted fine, and there were no (known) problems until now. My system log has a bunch of bad-looking stuff in it... is this all included in the diagnostics zip, if that would help figure anything out?
Aug 21 23:20:07 UNRAID dhcpcd[1795]: br0: failed to renew DHCP, rebinding
Aug 21 23:30:45 UNRAID kernel: BTRFS error (device sdl1): parent transid verify failed on 857849856 wanted 13396518 found 13393366
Aug 21 23:30:45 UNRAID kernel: BTRFS: error (device sdl1) in btrfs_run_delayed_refs:2935: errno=-5 IO failure
Aug 21 23:30:45 UNRAID kernel: BTRFS info (device sdl1): forced readonly
Aug 21 23:30:45 UNRAID kernel: print_req_error: I/O error, dev loop2, sector 0
Aug 21 23:30:45 UNRAID kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 1, rd 0, flush 1, corrupt 0, gen 0
Aug 21 23:30:45 UNRAID kernel: BTRFS warning (device loop2): chunk 13631488 missing 1 devices, max tolerance is 0 for writeable mount
Aug 21 23:30:45 UNRAID kernel: BTRFS: error (device loop2) in write_all_supers:3716: errno=-5 IO failure (errors while submitting device barriers.)
Aug 21 23:30:45 UNRAID kernel: BTRFS info (device loop2): forced readonly
Aug 21 23:30:45 UNRAID kernel: BTRFS: error (device loop2) in btrfs_sync_log:3168: errno=-5 IO failure
Aug 21 23:30:45 UNRAID kernel: loop: Write error at byte offset 17977344, length 4096.
Aug 21 23:30:45 UNRAID kernel: print_req_error: I/O error, dev loop2, sector 35112
Aug 21 23:30:45 UNRAID kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 2, rd 0, flush 1, corrupt 0, gen 0
Aug 21 23:30:45 UNRAID kernel: BTRFS error (device loop2): pending csums is 12288
Aug 21 23:30:45 UNRAID kernel: BTRFS error (device sdl1): pending csums is 4096
Aug 21 23:30:47 UNRAID kernel: BTRFS warning (device sdl1): csum failed root 5 ino 4631484 off 131072 csum 0x1079e3d3 expected csum 0x73901347 mirror 1
Aug 21 23:30:47 UNRAID kernel: BTRFS warning (device sdl1): csum failed root 5 ino 4631484 off 262144 csum 0xafa74aad expected csum 0xfa3d3f16 mirror 1
So, one thing at a time: how do I fix the read only issues? Thanks.
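For reference, one quick way to confirm which mounts the kernel has actually flipped to read-only is to check /proc/mounts directly. This is just a hedged sketch: the awk filter is my own, and the /mnt/cache path in the commented command is an assumption about where the cache pool is mounted, not something taken from the diagnostics.

```shell
# List every mount whose options include the ro flag. A BTRFS "forced
# readonly" (as in the sdl1 log lines above) shows up here even though
# the Unraid GUI may still believe the mount is read-write.
awk '$4 ~ /(^|,)ro(,|$)/ {print $1, $2}' /proc/mounts

# If the cache device is the one listed, its BTRFS error counters can be
# read with btrfs-progs (assumes the cache pool is mounted at /mnt/cache):
# btrfs device stats /mnt/cache
```

If the cache mount appears in that list, remounting won't stick until the underlying I/O errors are addressed, which matches the "drive mounted read-only" error from Fix Common Problems.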
  6. I was interested in playing around with any generation of one to see if I could make my own digital picture frame that performs in a better way than my Nixplay digital frame does, but even if I managed to get it all together and mount an LCD in a not-ugly frame of some type the software would be my issue..... I would want, ideally, something that would display photos from an unraid share in a random order, or from google photos. Maybe in some sort of configurable way, since there are a LOT of photos. Not sure what else I would use a pi for.. that's just something that I've thought about for a while. I'll probably never do it.
  7. I had the same problem and tried a bunch of things, none of which worked. Then I gave up. I was bored this morning after waking up at 2AM and decided to give it another try. Last time, the most success I had was with the Native PC image instead of the VMware image, but I still had an issue trying to boot it and gave up on that as well. This time I did some more research, tried a few more things... and seem to have it working. You will need some Linux tools; I had a Debian VM installed already, so I used that.
Download the DietPi Native PC (BIOS/CSM) image. If you try to create an Unraid VM with that image as-is, you will eventually get into a loop of not being able to download updates because there is no free space available (that's the hint). I used the info provided here to resize the image: https://fatmin.com/2016/12/20/how-to-resize-a-qcow2-image-and-filesystem-with-virt-resize/
Use a couple of tools to show you info about the image and then to resize it:
qemu-img info DietPi_NativePC_BIOS-x86_64-Stretch.img
This displays the disk size as 602M... not enough to be usable. I don't know what the minimum size should be; I added entirely too much at 30 gigs, but I'm just testing at this point so I don't care.
qemu-img resize DietPi_NativePC_BIOS-x86_64-Stretch.img +30G
cp DietPi_NativePC_BIOS-x86_64-Stretch.img DietPi_NativePC_BIOS-x86_64-Stretch-orig.img
sudo virt-resize -expand /dev/sda1 DietPi_NativePC_BIOS-x86_64-Stretch-orig.img DietPi_NativePC_BIOS-x86_64-Stretch.img
I had to use sudo since I wasn't logged in as root.
Now you can take that new, resized DietPi img, use it as your Unraid VM hard drive, and install DietPi. To save you some time and effort, here's a fresh 4GB image that can be used for Unraid: DietPi_NativePC-BIOS-x86_64-Stretch-4GB-UNRAID.7z - 121.7 MB
  8. I installed this docker for a couple minutes last night and just didn't seem to get it working correctly, but I jumped in fast and didn't do much research, so that's on me. Having things run as a docker might be easier and more convenient, but what I ended up doing was installing a Debian VM and installing pi-hole that way, and it seems to be working pretty well so far. I wonder, though, whether anyone in the 25 pages of this thread has asked about DNS fallback if Unraid/docker/pihole is down? Is it just a matter of setting a secondary DNS server on your router in case it can't communicate with pi-hole? Now on to looking for something else to do with the server......
  9. This tool has been pretty useful to me, so I'll continue to use it. Thanks for your efforts with keeping it updated and always useful. Question: would it be possible to modify it so that, even after doing all those steps, it detects whether the action is going to delete the entire array (or just do something very bad) and displays a severe warning that has to be explicitly acknowledged before the action is allowed? Might it be as easy as checking the path selected for deletion and making sure the path is a folder and not the whole share?
  10. Ah I see, so if I set them both to /mnt/user or /mnt/user/Media, as previously mentioned, those would be considered the same mount points? A quick test of that makes my movie file paths /movies/Movies/<name>, and when I change an existing movie's file path it doesn't register the existing file. All in all, it seems to break everything. If sonarr works fine, why does radarr work so poorly? The radarr devs shouldn't make such drastic changes to simple things. Not sure where to go from here; maybe start over completely. ---------------- Edit -- this makes absolutely no sense to me. Maybe that's my problem, but why is this so complicated? If I change radarr's /movies path to /mnt/user/Media/Movies, the movie path is correct in radarr... but even though the media file is there, it's not detected and shows as missing. Also, with this mapping (and no /downloads) I can't import from any other folder. If I map both a /downloads and a /movies, then it copies files instead of moving them quickly. What is the proper way to configure this? No way seems to be the correct way.
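The copy-vs-move behaviour above comes down to whether source and destination sit on the same mount inside the container. A minimal sketch of the difference (the /tmp/data paths are purely illustrative; the usual fix is giving the container one volume, e.g. a single /data mapping that contains both the download and movie folders):

```shell
# When both folders live under one mounted path, mv is just a rename(2)
# call: instant, and hardlinks work. Across two separate volume mounts,
# the same mv degrades to a full copy followed by a delete.
mkdir -p /tmp/data/downloads /tmp/data/movies
echo demo > /tmp/data/downloads/film.mkv
mv /tmp/data/downloads/film.mkv /tmp/data/movies/   # same filesystem: instant rename
ls /tmp/data/movies
```

With separate /downloads and /movies mappings, the container sees two filesystems even when both point at the same share on the host, which would explain the slow copies.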
  11. I just started playing with radarr and have everything working ok enough... except for imports. My radarr is also copying files rather than moving them, and I can't figure out why. Everything is mapped on the same share, so it "shouldn't" be an issue of mount points. And the mounts are essentially the same as they are with sonarr, which works perfectly. I've tried toggling the use hardlinks setting in radarr with no difference. What am I missing here? Also, any ideas on why radarr is not detecting some movie files and keeps them listed as missing? The files are there, but they are not scanned.
  12. Excellent, thanks for the confirmation. My next task will be to create a cache pool, just for fun. What I didn't mention was that I actually put 2 SSDs in, but one of them was from a PS4 and apparently I can't get it recognized by the system to format it... either that, or I forgot to plug something in. A problem for another day when I want to tinker with it.
  13. Added a 500GB SSD cache drive after roughly a year or so of having my server running... didn't need to add the cache drive, it was just something to do to scratch my build-something itch... a simple SATA card and drive addition cured me for a little bit. Using the various threads and guides that I found, I'm pretty sure I have everything set the way it should be; for example, the appdata, domains, and system shares are set to Prefer the cache disk, and other shares are set to Yes for using the cache disk. All seems well, and I haven't noticed anything going wrong. But now what? Am I supposed to make any other changes to existing dockers/VMs? How about going forward? I didn't specifically see any 'how-to usage' of the cache drive mentioned anywhere, but I might have missed it. In essence, what I am asking is: for any existing or new dockers and/or VMs, do I use the cache disk as the location for anything, or do I continue to use the normal filesystem mappings while the usage of the cache drive is handled by Unraid on its own, kind of like a symbolic link? Do I point a new (or change an existing) VM's hard drive to /mnt/cache/domains/SomeVM, which is then mirrored to the array, or do I still map it to /mnt/user/domains/SomeVM and the cache drive is used automatically? Similarly, if I'm sending data to a share that has use cache disk = yes, for example a downloads folder, do I write to /mnt/cache/downloads or to /mnt/user/downloads? I'm sure this is covered somewhere, but I couldn't find anything on this topic. Point me in the right direction. Thanks for indulging my noob question.
  14. The only thing that seems to stand out is in Sync... the default folder location was something like /sync/Resilio Sync or /root/Resilio Sync, although neither of those is used; the shared folders are saved on shares. Those seem to be the only paths that aren't mapped to a share... could that be the issue? I read the FAQ, but except for these, everything seems to be mapped correctly, and for the most part nothing else would be downloading enough external data to fill up a multi-GB docker image. I might delete all the Sync containers and start over anyway, as I seem to have forgotten the GUI password for one of them, lol. I have 3 Sync containers for remote and local sync stuff... wish there were a better way to do it, or that it could all be done through one.
  15. I just updated from 6.6.3 and everything seems ok except my shares list is empty?? unraid-diagnostics-20181109-0927.zip
  16. Ok, so I need some guidance here... why the heck is my docker image still getting full!?! I already increased it from 20GB to 60GB and now I'm getting image full again... currently 99%! I don't have a ton of containers, and as far as I can tell from my mappings, nothing should be downloading to the docker image itself... everything is mapped to a share. I 'feel' like it might be something with resilio-sync, since those are the newest containers I've added, but even there everything is mapped out. Is there nothing that will tell me where the space is being used? Here are my containers and mappings... What am I missing??
  17. That's fine and dandy also, but for any of us who don't want to run a headless instance and would rather have one with a GUI through VNC, the modifications made to allow VNC were an excellent thing. Not having an updated, modified version as new releases come out is less excellent. And if you refer back to your original thread, you only asked about the template, not necessarily only because it was headless. Your intent might have been a headless version, but it has evolved into other people wanting a VNC GUI. FWIW, one of the reasons I want/need a GUI is that every so often Kodi hangs with an error message that will not auto-close (I forget exactly what it says, but generally something about an SMB timeout and not being able to access the scan paths), and this prevents the database from updating until I log into the GUI, close the message, and scan the library. Headless = no way to know that the error message is up and the database is not updating. Further, correct me if I'm wrong, but there'd also be no way to configure Kodi's settings, install addons, etc., in headless mode.
  18. Just hoping to revive this project... any plans on updating to the latest version of LibreElec? It seems that the changes to the source code that allow for vnc never made it into the final source code.. the most recent releases of LibreElec all fail to run after installation because of the GPU issue. Can't seem to figure out how to work around it..
  19. Lately Davos has been incredibly slow with transfers. I don't know exactly where the problem is, but it seems that Davos is the culprit. Downloading from my FTP to my server with FileZilla, speeds are normal, but with Davos they are so slow that it doesn't even report the speed in the schedules tab. Anyone else having issues? Is there anywhere in Unraid that will show me the network speeds of specific dockers rather than the overall throughput? And are there any other dockers similar in function to Davos? It's so useful to me that I almost can't live without it now.
  20. It's currently 37%, after I deleted 2 dockers (Krusader and Dolphin), which also brought my btrfs subvolumes size down to 54GB. The warnings I had been getting were for docker image utilization over 70%, as high as 85% at one point.
  21. As far as I can tell, neither of those situations applies to me. Any docker app that has a directory to configure is mapped to a share. I don't think there's anything downloading to the image. Logs:
du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60
returns about 20MB of logs. Not sure what else to look at?
Edit -- I examined /var/lib/docker/btrfs/subvolumes and there are 167 directories, far more than the apps/dockers I've got installed, if that matters... some of which are quite large. Example:
1.2G 60ba2059df9b6ee30f3dac1084f7041bc5ff0018c77d34cef3d71aede6ba7076
1.2G fd00c8da84099fb7c380e06ff82feccdc87862706ef8b371a86c5afaf7e66617
985M bcd719fbcb62e15f4bb96fe56b441d005a55dede653fd49cfe9b4400f4e482f8
985M c6658bfc2643ad274c31a16229f8bb77edf257ccbd712635cd87c7f325752163
So how do I go about figuring out what is what and cleaning some of it up? I can't possibly need all of this to be here... 167? I've got 14 dockers and 4 VMs... seems excessive.
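Ranking the subvolumes largest-first makes the biggest offenders obvious. A sketch, assuming Unraid's default docker.img layout where the image is loop-mounted at /var/lib/docker:

```shell
# du -sh gives human-readable sizes per subvolume; sort -h understands
# those suffixes, so 1.2G correctly ranks above 985M.
du -sh /var/lib/docker/btrfs/subvolumes/* 2>/dev/null | sort -rh | head -10
```

Most of the 167 directories will be individual image layers rather than whole containers, which is why there are far more of them than installed dockers; each image is a stack of layers and each layer gets its own subvolume.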
  22. I've recently been getting warnings also... and it would appear that, at least in my case, btrfs is the culprit.
root@UNRAID:~# du -h -d 1 /var/lib/docker/
16M /var/lib/docker/containers
85G /var/lib/docker/btrfs
26M /var/lib/docker/image
46M /var/lib/docker/volumes
0 /var/lib/docker/trust
88K /var/lib/docker/network
0 /var/lib/docker/swarm
2.4M /var/lib/docker/unraid
0 /var/lib/docker/tmp
0 /var/lib/docker/plugins
16K /var/lib/docker/builder
85G /var/lib/docker/
85GB for... what, exactly? I've seen a number of sites talking about this issue, and it seems that most people end up rebuilding their dockers in order to reduce the size... I'm not interested in doing that, at all.
https://github.com/moby/moby/issues/27653
https://gist.github.com/hopeseekr/cd2058e71d01deca5bae9f4e5a555440
https://github.com/moby/moby/issues/9939
Short of simply increasing the size of the docker image as @Squid has outlined, is there any way to non-destructively "refresh" btrfs to reduce its size?
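One non-destructive option worth trying before any rebuild is asking Docker itself to drop layers that nothing references any more. A hedged sketch using the standard Docker CLI; whether it reclaims much depends on how many dangling layers have accumulated:

```shell
# Remove dangling image layers (no longer tagged or referenced by any
# container). Running and stopped containers are left alone, so nothing
# needs rebuilding.
docker image prune

# More aggressive: also removes stopped containers and unused networks.
# Read the confirmation prompt carefully before accepting.
# docker system prune
```

Both commands prompt before deleting anything, so they can be previewed safely; they shrink the btrfs usage inside docker.img rather than the img file itself.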
  23. Just wanted to say thanks for the work involved on this.... I had been waiting for a newer LibreElec headless docker to use so that I could update all my devices to a newer version of Kodi. Hopefully the work will continue with the latest versions of LibreElec..