syrys

Everything posted by syrys

  1. Updated the OLD Unraid server with some difficulty. But you are spot on: I can now use Unassigned Devices to mount the drive. Thank you so much for the help.
  2. Hmm, yeah, I thought it was something like that. Still, it seems a bit risky; that's one of the reasons I'm starting fresh with a new build. Are there any good tools that allow backups and rollbacks if the update doesn't work as expected? Maybe a tool to take a backup of the flash drive and restore it (assuming a rollback is that simple, hopefully)?
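For what it's worth, the Unraid flash drive is mounted at /boot, so a dated archive of it is one simple rollback point before an upgrade. A minimal sketch, assuming a `/mnt/user/backups` share exists (the demo at the bottom uses throwaway directories instead of the real /boot):

```shell
#!/bin/bash
# Sketch: archive the Unraid flash drive (mounted at /boot) so it can be
# unpacked onto a freshly prepared USB stick if an upgrade goes wrong.
# On a real server you would call:  backup_flash /boot /mnt/user/backups
backup_flash() {
    local src="$1" dest="$2"
    local stamp
    stamp="$(date +%Y%m%d)"
    mkdir -p "$dest"
    # -C changes into src so the archive holds relative paths
    tar -czf "$dest/flash-backup-$stamp.tar.gz" -C "$src" .
    echo "$dest/flash-backup-$stamp.tar.gz"
}

# Demo with throwaway directories instead of the real /boot:
demo_src="$(mktemp -d)"; echo "label=UNRAID" > "$demo_src/ident.cfg"
demo_dest="$(mktemp -d)"
backup_flash "$demo_src" "$demo_dest"
```

Restoring would be the reverse: prepare a bootable stick, then unpack the archive over it so the `config/` folder (array assignments, settings) comes back.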
  3. This is a brand new drive. I put it in the NEW server and initialised a new Unraid array with two drives (this is one of the two) and no parity. Then I unplugged it from the NEW server, plugged it into the OLD server (SATA directly to the motherboard), and used the Unassigned Devices plugin on the OLD server to click Mount (what happened then is in the post above). Long story short: the error is from the OLD server; I'm mounting a new server's drive in the old server.
  4. OK, so I moved the array disk from the NEW Unraid server to the OLD Unraid server and clicked Mount under Unassigned Devices for the 18TB drive. The button changes to "Mounting" and the Unraid loading animation pops up, then the page refreshes and the drive still has the Mount button (as if the mount failed). The system log shows the following:

Mar 10 00:31:30 karie unassigned.devices: Adding disk '/dev/sdb1'...
Mar 10 00:31:30 karie unassigned.devices: Mount drive command: /sbin/mount -t xfs -o rw,noatime,nodiratime '/dev/sdb1' '/mnt/disks/ST18000NM000J-2TV103_ZR53FQDA'
Mar 10 00:31:30 karie kernel: XFS (sdb1): Superblock has unknown read-only compatible features (0x8) enabled.
Mar 10 00:31:30 karie kernel: XFS (sdb1): Attempted to mount read-only compatible filesystem read-write.
Mar 10 00:31:30 karie kernel: XFS (sdb1): Filesystem can only be safely mounted read only.
Mar 10 00:31:30 karie kernel: XFS (sdb1): SB validate failed with error -22.
Mar 10 00:31:30 karie unassigned.devices: Mount of '/dev/sdb1' failed. Error message: mount: /mnt/disks/ST18000NM000J-2TV103_ZR53FQDA: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error.
Mar 10 00:31:30 karie unassigned.devices: Partition 'ST18000NM000J-2TV103_ZR53FQDA' could not be mounted...

Does that give any insights?
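The kernel messages above describe the situation quite precisely: the NEW server (6.12.x) formatted the disk with an XFS read-only-compatible feature flag (0x8) that the 6.8-era kernel predates, so the old kernel will only allow a read-only mount. As a diagnostic check only (the mount point is an assumption, and a read-only mount can't receive copied data):

```shell
# Assumed mount point; the 6.8-era kernel can read this filesystem
# but refuses a read-write mount (unknown RO-compat feature 0x8).
mkdir -p /mnt/disks/migrated
mount -t xfs -o ro /dev/sdb1 /mnt/disks/migrated
```

Since read-only defeats the purpose of copying data onto the drive, the practical alternatives would be formatting the disk on the OLD server first (so the older feature set is used) or copying over the network.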
  5. There are RAID controllers involved (I followed some steps to correctly configure a RAID controller according to an Unraid tutorial about 8 years back, on my old server), but I'm plugging this drive directly into a motherboard SATA port on the old server. Can you clarify: if I initialise an array on the new server (without parity), then unplug one of the drives of said array and plug it directly into the OLD Unraid server (SATA connection directly to the motherboard), should I then be able to click Mount on this migrated drive in Unassigned Devices without doing anything else on the old server (without deleting partitions, without formatting, or anything)? If this is supposed to work (it doesn't for me), please give me a confirmation and I will attempt it again tomorrow and post the actual error message here (I didn't bother before because the error felt like it was never meant to work that way; I figured the array used a different file structure/format or some config that Unassigned Devices doesn't understand). Also note that the OLD server is running Unraid 6.8.x (so its Unassigned Devices plugin is probably pretty old, likely not updated in years), and the NEW server is... well... NEW (Unraid 6.12.x).
  6. No, absolutely, I think sadly everything implies a faulty UPS. It is what it is; unfortunately it's just a bad purchase for me. At least on the bright side I can repurpose it for a non-smart use case, battery backup for my router or something (the UPS is still very functional, just not the data port). Thank you for all the sanity checks and suggestions.
  7. Current state: I have an old Unraid server with 10x 8TB drives. I just built a new Unraid server with a bunch of 16TB drives (I haven't started the array or set up any data/apps yet; it's brand new). At this moment I have both servers running at the same time. I do not intend to reuse the 8TB drives from the old server; I have enough new 16TB drives (about 5 of them) at hand for my needs. Also note: the old server is running Unraid 6.8.x and the new one 6.12.x, in case I need to watch out for anything.
What I want to do: copy about 40TB of data from the old Unraid server to the new server. What are my best options? I'd also like to minimise downtime (I will eventually fully set up the new server and just swap the two once the data is migrated and all the apps/dockers are installed), and I want to do the migration as soon as possible (no huge hurry though).
I could easily copy the data over the network, but that's limited to 1 gigabit (I only have a gigabit network, and the old server only supports gigabit even if the network were faster). I understand I can leave parity out of the new server to speed up the copy, but it's still limited to gigabit.
I tried initialising the array on the new server (without parity), then physically moved (and plugged in) one of the array drives to the old server, hoping I could mount it with Unassigned Devices and copy data onto it (copying inside the same machine at SATA speeds rather than saturating the network), but I was unable to mount the new drive in the old machine (I assume that since I initialised it as an array on the new server, the file systems are different?). What's the correct/best way here? Are there any tools I can use or steps I can follow?
  8. The cable that was in the box looks like that. Are there variants of the cable I should be concerned about? Also, the cable from my old APC UPS looks like that too (that one is maybe 8 years old), and I tested with that cable as well.
  9. I'm just playing around with it on Windows. The PowerChute software complains that it can't find the device. When I plug it in, sometimes Windows Device Manager's USB section displays nothing new, but sometimes it shows "Unknown USB Device (Device Descriptor Request Failed)".
  10. Unfortunately it's not under warranty as far as I'm aware, as I purchased it secondhand (new/unused/unopened, but I don't have any original paperwork for warranty). Is a damaged USB controller a common issue? Do you know of anything I can do to make sure that this is the case, or does my debugging basically prove it already? Or do these things have firmware upgrades that I may need to apply (I never had to do anything like that for the old UPS, but this one seems more modern than the old one I have)?
  11. Just to clarify: the new Unraid server I was testing this on has never had any other UPS plugged in (still hasn't). It's a fresh install of Unraid (a couple of days ago), so there shouldn't be any older config to worry about. The older Unraid server I used to double-check the UPS on, however, does have another UPS plugged in (and that UPS works just fine using the built-in APC UPS daemon).
  12. I just recently picked up an APC BE850G2 UPS, and I've also just built a new Unraid system. I was trying to connect the new UPS to the new system, and I'm unable to get it connected. Basically what I did was: connect the data port of the UPS to the Unraid server, then use the built-in UPS settings to enable the APC UPS daemon, and it always just says "Lost communication". Screen looks like this: I've tried different USB ports (incl. USB 2.0 ports according to the motherboard manual); still no difference. I've run `lsusb` on the command line and it doesn't show anything that implies the UPS:

root@Mounty:/etc/apcupsd# lsusb
Bus 002 Device 002: ID 174c:3074 ASMedia Technology Inc. ASM1074 SuperSpeed hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 004: ID 0951:1666 Kingston Technology DataTraveler 100 G3/G4/SE9 G2/50
Bus 001 Device 003: ID 174c:2074 ASMedia Technology Inc. ASM1074 High-Speed hub
Bus 001 Device 002: ID 0b05:19af ASUSTek Computer, Inc. AURA LED Controller
Bus 001 Device 009: ID 8087:0026 Intel Corp. AX201 Bluetooth
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

As I mentioned, I also already have an OLD Unraid server with an OLD APC UPS connected to it. So to test things, I connected the new UPS to the old server (using the same OLD cable, on the same USB port that already works on the old server), and the old server doesn't detect it either (same "Lost communication"). So it's not the USB port and it's not the USB cable; it's either the software, drivers, or a faulty UPS. The new Unraid is running 6.12.8, the old one 6.8.3. When googling around I've seen comments from other users implying that this model of UPS works fine with Unraid (for them). I've also tried using NUT (I don't think I actually need to), but whatever I try, it seems to imply that it can't find a device. Can anyone help me debug this or give me any suggestions? Assume I'm a bit of a noob.
Any help is appreciated; I'm at the end of the line at the moment. I guess the next thing to try is to plug it into a Windows machine and see if that can detect the UPS (unsure how to test this, but I'll try to figure that out over the weekend).
  13. Alright, here is my "hacky" solution to the above problem. It works for now; if someone has a better solution, let me know. Install the User Scripts plugin (if you don't have it already) and add the following script:

#!/bin/bash
mkdir /mnt/disks/rclone_volume
chmod 777 /mnt/disks/rclone_volume

Obviously you can add the -p flag to mkdir if you need nested directories or if you have issues with subdirectories not being there, but from trial and error on my Unraid setup, `/mnt/disks/` already exists at boot (before the array starts). Edit the script to include all the mount folders you want (if you have multiple mounts), and chmod 777 each of them. Set the above user script to run on every array start. Just to make sure my container doesn't start before this finishes (unsure if that can even happen?), I added a random other container above my rclone container (one that doesn't need drives to be mounted) and set a delay of 5 seconds (so the rclone container waits 5 seconds). This might be unnecessary. Hope it helps someone.
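The same idea generalises to several mounts with a loop; a sketch (the folder names here are examples, not from the original setup):

```shell
#!/bin/bash
# Sketch of the same User Scripts entry ("At Startup of Array"),
# extended to create and open up every mount point in one pass.
# Folder names are examples -- replace with your own mounts.
base="/mnt/disks"
for name in rclone_volume rclone_media; do
    mkdir -p "$base/$name"     # -p also covers nested paths
    chmod 777 "$base/$name"    # rclone container (user 911) needs write access
done
```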
  14. Hmm, I've been banging my head against the desk all day; can someone give me some advice on how to fix this? This issue was already mentioned several times, I get that, but the solution mentioned does not work after a server restart.

Executing => rclone mount --config=/config/.rclone.conf --allow-other --read-only --allow-other --acd-templink-threshold 0 --buffer-size 1G --timeout 5s --contimeout 5s my_gdrive: /data
2020/09/02 14:00:21 mount helper error: fusermount: user has no write access to mountpoint /data
2020/09/02 14:00:21 Fatal error: failed to mount FUSE fs: fusermount: exit status 1

First of all, I have the docker installed and all the settings as mentioned throughout this thread. I do also have a couple of extra rclone flags passed in, but those aren't the issue. So, let's say the mount point defined is `/mnt/disks/rclone_volume`. When I restart the server (the docker auto-starts), I see the above error. If I stop the docker and do `ls -la`, I see the ownership of `/mnt/disks/rclone_volume` is `root:root`. Alright, sure: `chmod 777` and `chown 911:911` the rclone_volume, then restart the docker, and everything works. `/mnt/disks/rclone_volume` gets mounted correctly (`ls -la` shows 911:911, great), I can browse the files, no errors in the docker logs. Sweet, everything is sorted, right? No, unfortunately not. The moment I reboot the Unraid server (remember, the docker auto-starts), I get the above error in the docker logs, and obviously the drive is not mounted. So back to `ls -la` on `/mnt/disks/rclone_volume`, and it's back to `root:root` and `755`. So basically, every time I start my server, I have to manually `chmod 777` and/or `chown 911:911` the `/mnt/disks/rclone_volume`, then start the docker? Any idea what's causing this? I can't be the only one having this issue, can I? Essentially, for this docker to successfully mount a drive, it needs the mount destination to be either `777` or owned by `911:911`.
But for whatever reason, at reboot/start of Unraid, the ownership of `/mnt/disks/rclone_volume` gets reset to `root:root` even if you had set it to `911:911` before the restart (I assume user 911 doesn't exist at the very start, so it defaults to root?). At the start of the boot, Unraid (?) also sets `/mnt/disks/rclone_volume` to 755 (even if you had it set to 777 before the restart). WTF? Could this be related to another plugin I might have?
  15. Oh wow, that sounds worrying. Alright, I'll follow the cache pool issue instructions from that link. As for the docker image recreation, are there any instructions I should be following, or is it basically just creating a new docker image manually, re-downloading all the dockers I've been using (Plex, Sonarr, etc.), and setting up the same docker settings for each of them as before?
  16. Sorry for the late response. I waited a little so the issue would trigger again and I could capture the exact error and the diagnostics file when it happens. A week ago, when the server was having similar issues, it didn't even let me download the diagnostics file, sigh. Anyway, the same error happened again. Here are some errors from the system log:

Jul 15 01:08:35 karie kernel: print_req_error: I/O error, dev sdj, sector 6078419056
Jul 17 01:05:49 karie kernel: BTRFS: error (device loop2) in btrfs_finish_ordered_io:3107: errno=-5 IO failure

Hmm, unsure what the first error is either. sdj is my Unassigned Devices drive, the one all my downloads go to (they then get picked up and moved to the array; nothing important lives on sdj). Any help is appreciated. Thanks in advance. karie-diagnostics-20200717-1249.zip
  17. Hey all, I think I need some serious help with my Unraid (cache?). My Unraid setup was running nice and smooth (minor unrelated hiccups aside) for a few years. I have about 10 drives, 1 parity, 1 Unassigned Devices drive (I use this for downloads etc.), and 2x SSDs (cache). Since a couple of months ago I've been getting some weirdness. My SABnzbd docker was the first to complain (at least that I noticed), with errors like this on every download:

Traceback (most recent call last):
  File "/usr/share/sabnzbdplus/cherrypy/_cprequest.py", line 663, in respond
    self.body.process()
  File "/usr/share/sabnzbdplus/cherrypy/_cpreqbody.py", line 989, in process
    super(RequestBody, self).process()
  File "/usr/share/sabnzbdplus/cherrypy/_cpreqbody.py", line 558, in process
    proc(self)
  File "/usr/share/sabnzbdplus/cherrypy/_cpreqbody.py", line 223, in process_multipart_form_data
    process_multipart(entity)
  File "/usr/share/sabnzbdplus/cherrypy/_cpreqbody.py", line 215, in process_multipart
    part.process()
  File "/usr/share/sabnzbdplus/cherrypy/_cpreqbody.py", line 556, in process
    self.default_proc()
  File "/usr/share/sabnzbdplus/cherrypy/_cpreqbody.py", line 715, in default_proc
    self.file = self.read_into_file()
  File "/usr/share/sabnzbdplus/cherrypy/_cpreqbody.py", line 729, in read_into_file
    fp_out = self.make_file()
  File "/usr/share/sabnzbdplus/cherrypy/_cpreqbody.py", line 512, in make_file
    return tempfile.TemporaryFile()
  File "/usr/lib/python2.7/tempfile.py", line 511, in TemporaryFile
    (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags)
  File "/usr/lib/python2.7/tempfile.py", line 244, in _mkstemp_inner
    fd = _os.open(file, flags, 0600)
OSError: [Errno 30] Read-only file system: '/tmp/tmpoyMrxR'

This happens after a day or so of the system running; the Unraid system somehow "breaks", and then no more downloads work (above error).
Simply restarting the Unraid server fixes it (restarting the docker does not; sometimes it won't even allow restarting the docker, it says something went wrong). Anyway, the last line of the error above is interesting: something is in read-only mode? My cache drive, maybe? Exploring my Unraid logs, I see things like this:

BTRFS: error (device loop2) in btrfs_finish_ordered_io:3107: errno=-5 IO failure

Some BTRFS issue? The only BTRFS usage is the 2x cache SSDs, I believe, so is something going on with those? I've googled the above error and found people suggesting the cache might be corrupted, but I wasn't really clear on how to know for sure, or how to go about fixing it. Any help will be appreciated; I am a total noob with all this, just to clarify. When I restart, everything works fine. I've disabled a couple of dockers (Radarr, Deluge) that I don't use as much, and with those disabled the weirdness/errors are less frequent. So basically: when I restart the server everything works fine (other dockers run fine: Plex, Sonarr, SABnzbd), but after a couple of days of running I notice weirdness like Sonarr no longer importing files, Plex sometimes crashing and dying, and SAB sometimes failing to download anything (the first error I mentioned).
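For what it's worth, BTRFS keeps per-device error counters and has an online scrub, so a cache pool's health can be checked without tearing anything down. These are stock `btrfs-progs` commands run against the mounted pool; `/mnt/cache` as the pool's mount point is an assumption here:

```shell
# Per-device error counters (write/read/flush/corruption/generation);
# any non-zero corruption count points at real trouble.
btrfs device stats /mnt/cache

# Online scrub: reads every block and verifies checksums. -B stays in
# the foreground and prints a summary when done.
btrfs scrub start -B /mnt/cache
```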
  18. Hmm, interesting. Thanks for the info, guys. So that means if I want 4K transcoding in my Plex docker, I need to upgrade to something like a 7th/8th-gen Intel CPU/mobo (since GPUs aren't supported)? If I were to upgrade, does it matter which CPU? Can I just go with a cheap 8th-gen i3 like the i3-8100, and would that be enough? I guess my alternative is to run the Plex server in a Windows VM; that's probably a good option. Are there any instructions on how to pass the GPU through to my Windows VM? Has anyone done this successfully?
  19. Hey guys, I'm trying to find some step-by-step instructions on how to actually install a GPU in my Unraid server and get the Plex server (running as a docker on Unraid) to use that GPU for transcoding. My Unraid server runs on an i7-4770 CPU, which isn't enough for 4K H.265 transcoding, so I just installed a GTX 970 (might move to a 1030 or 1050 later) in the box. Although the hardware is installed, I'm not sure how to: 1. Check that the GPU is correctly installed (that the OS/Unraid can actually see it)? 2. Get my Plex docker (linuxserver/plex) to actually make use of this GPU. I've seen a couple of threads about this, but the instructions weren't too clear for me (I'm a noob at these things). Can someone give me some instructions? It would be greatly appreciated. I tried to follow this. On this step:

modprobe i915
chmod -R 777 /dev/dri

I got the following error:

chmod: cannot access '/dev/dri ': No such file or directory

There were a couple of other threads about editing syslinux.cfg and the "go" file, but I have no idea where they are, how to edit them, or what to edit them to. So I'm pretty stuck. Any help appreciated. PS: I'm running Unraid 6.6.5.
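As a generic sanity check (standard Linux commands from a console/SSH session, not Unraid-specific), the PCI bus and the DRM device nodes can be listed before touching any Plex settings. Note that `modprobe i915` loads the Intel iGPU driver only; it does nothing for an Nvidia card like the GTX 970:

```shell
# Is any GPU visible on the PCI bus at all?
lspci | grep -iE 'vga|3d'

# Do any DRM render nodes exist? Empty or missing means no usable GPU
# driver is loaded yet (i915 covers Intel iGPUs, not the GTX 970).
ls -l /dev/dri 2>/dev/null || echo "no /dev/dri yet"
```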
  20. Ah, fail. I thought it used to auto-update the OS =/ Ah well, time to update.
  21. My Unraid is set to auto-update but is currently at v6.4.1. I don't see this option; I assume it's not currently in the main releases.
  22. I know you guys keep suggesting that this is a user/config error, time and time again. But after a year or so of struggling with this, my issue turned out to be Jackett. I don't really know much about docker, and no one ever gave me a good way to calculate disk usage per container, so I just deleted one container after another (in order of least important). My docker size was going up by about 2-3% per day, so I deleted one container each day to see when the increase stopped. It turned out the culprit for me was Jackett: it was likely saving logs or something somewhere it shouldn't have, as it was configured correctly. As I mentioned, I struggled with this for a long time and made sure every configurable setting in every container was correct. Something else you can try: earlier in this thread, someone answered my question about SSH-ing into a docker container. If you have time, you could SSH in, use a disk/folder-usage command to narrow down exactly which folders/files are the cause, and then submit a bug report to the maintainers. Hope this helps.
  23. The culprit is Radarr. As @John_M suggested, I ended up turning off some containers I didn't use much. The % stopped going up, and I narrowed the problem down to Radarr. So... now that I know the culprit container, how do I actually go about fixing it? I spent a lot of time making sure all the settings are correct to the best of my ability. I suspect the storage usage comes from a non-configurable path. Any suggestions?
  24. Yeah, I've thought about that; that's why I asked if anyone knows a way to check/monitor the docker containers' space usage. I still don't really have an answer to that. Sure, as you mentioned, I can turn off some of the containers and see if that stops the usage creeping up, but the culprit is likely something I heavily use/rely on (heck, I only have 8 containers running, and it's likely one of the top 4 on the list I posted). I'll leave the turning-off approach as a last resort, and am hoping that Unraid/Lime-Tech, or at least someone familiar with docker, can give me a proper way to debug this. (At 89% now.)
  25. Hey mate, thanks for the link. I did have a look in there and made sure the basic things, like the log file limit, are set. But I don't see anything in the FAQ about a way to find out which image uses how much space, or anything along those lines. So I still have no way of knowing which image is the culprit, or how to go about fixing it. As I mentioned earlier, my settings/configs should be correct, as I have been using them for years without much issue. The issue is likely coming from a recent update to one of these images that is probably saving files in a location it shouldn't. I just have no way to actually find this out? FYI: I'm at 87% now and have no idea what to do.
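For reference, the docker CLI itself can report per-container writable-layer size, which answers the "which container is filling docker.img" question without deleting anything. These are standard docker commands (run from the Unraid console; no paths assumed):

```shell
# The SIZE column shows each container's writable-layer usage -- data
# written inside the container rather than to a mapped volume, which is
# the usual culprit when docker.img keeps growing.
docker ps -a --size

# Space broken down per image, container, and volume, one line each.
docker system df -v
```

A container with a large and growing SIZE value is writing to an unmapped path; mapping that path to a host volume (or fixing the app's log/temp location) stops the growth.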