Everything posted by bnevets27

  1. Looks like I can't connect securely anymore. If I try to run a test with secure connection enabled, I get this in the dialog box and log. It looks like when secure connection is enabled, Host: comes back blank. The first log excerpt is with secure connection disabled, the second with it enabled.

     Internet bandwidth test started
     Retrieving speedtest.net configuration...
     Could not retrieve speedtest.net configuration: HTTP Error 404: Not Found

     Jun 18 19:57:19 Excelsior emhttp: cmd: /usr/local/emhttp/plugins/speedtest/scripts/speedtest-xml --verbose
     Jun 18 19:57:19 Excelsior speedtest: Internet bandwidth test started
     Jun 18 19:58:04 Excelsior speedtest: Host: Host Name Redacted (xxxx, xx) [10 m]
     Jun 18 19:58:04 Excelsior speedtest: Ping (Lowest): 524.81 ms | Download (Max): 30.64 Mbit/s | Upload (Max): 5.11 Mbit/s
     Jun 18 19:58:04 Excelsior speedtest: Internet bandwidth test completed
     Jun 18 19:58:41 Excelsior emhttp: cmd: /usr/local/emhttp/plugins/speedtest/scripts/speedtest-xml --verbose
     Jun 18 19:58:41 Excelsior speedtest: Internet bandwidth test started
     Jun 18 19:58:43 Excelsior speedtest: Host:
     Jun 18 19:58:43 Excelsior speedtest:
     Jun 18 19:58:43 Excelsior speedtest: Internet bandwidth test completed
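     For what it's worth, the 404 can be narrowed down from the server itself. speedtest-cli has historically fetched its configuration from speedtest.net/speedtest-config.php, so a hedged check (the exact URL the plugin uses may differ) would be:

        # See whether the config endpoint answers over plain HTTP vs HTTPS
        curl -sI http://www.speedtest.net/speedtest-config.php | head -n 1
        curl -sI https://www.speedtest.net/speedtest-config.php | head -n 1

     If only the HTTPS request 404s, the problem is on the site/plugin side rather than this server's connectivity.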
  2. Well, it's not just Plex; sabnzbd/sickrage etc. generate one (actually 2 or 3 with the exact same timestamp) of those entries in the log every time they move a file. From what I can tell it's harmless, and no one has said otherwise, but it does fill the log file over time. If I could just mute those errors... Or go back to /mnt/cache/? But you said going back and forth *may* cause trouble. What kind of trouble could it cause?
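     If muting turns out to be the only option: unRAID's syslog runs through rsyslog, so a drop-in filter could discard just those messages. A minimal sketch, assuming standard rsyslog property-filter syntax (the filename is arbitrary, and the change would need re-applying after a reboot since the root filesystem lives in RAM):

        # /etc/rsyslog.d/01-mute-shfs-rmdir.conf
        # Drop any message containing the noisy shfs_rmdir error
        :msg, contains, "shfs_rmdir" stop

     followed by restarting the rsyslog daemon so the filter takes effect.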
  3. I don't think it's the mover, but I can/will disable mover logging. How would you disable it, though? I can't see a setting for it. I guess more info from the log might be helpful:

     Jun 15 21:05:45 Excelsior shfs/user: err: shfs_rmdir: rmdir: /mnt/cache/system/docker/appdata/plex/Library/Application Support/Plex Media Server/Cache/Transcode/Sessions/plex-transcode-xxxxxxxxxxx (39) Directory not empty
     Jun 15 21:10:48 Excelsior shfs/user: err: shfs_rmdir: rmdir: /mnt/cache/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx (39) Directory not empty
     Jun 15 21:10:48 Excelsior shfs/user: err: shfs_rmdir: rmdir: /mnt/cache/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx (39) Directory not empty
     Jun 15 21:11:46 Excelsior shfs/user: err: shfs_rmdir: rmdir: /mnt/cache/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx (39) Directory not empty
     Jun 15 21:11:46 Excelsior shfs/user: err: shfs_rmdir: rmdir: /mnt/cache/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx (39) Directory not empty

     Seeing as none of my dockers are pointed at /mnt/cache anymore, yet /mnt/cache shows up in the logs, I figured it could be the trouble you were referring to. I got here from the following post:
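     One way to see why those rmdir calls fail is to look at what's actually left behind in one of the directories from the log (substitute a real path; this is just a generic check):

        # Anything this prints is what makes rmdir return "Directory not empty"
        find "/mnt/cache/<directory from the log>" -mindepth 1 -ls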
  4. I've had the info below spamming my log for a while now. It's not causing any real issue, but it does fill the log. I used to use /mnt/cache/ for most/all of my dockers until at some point I moved everything to /mnt/user/ (on your advice, I believe). I'm now thinking that change may have caused this issue, since you mention switching may cause trouble. What would be the way to rectify it, if making the switch has caused this? A sketch of how I'd check for leftovers is below.

     /mnt/cache/xxxxxxxxxxxxxxxxxx (39) Directory not empty
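     The leftover check mentioned above: if switching docker paths left duplicate copies behind, comparing a share's file list on the cache against the array disks should show them (share name here is a placeholder):

        # Files present on BOTH the cache and an array disk can leave shfs
        # unable to clean up directories when things are moved
        SHARE=system
        find /mnt/cache/$SHARE -type f | sed "s|^/mnt/cache/||" | sort > /tmp/on_cache.txt
        find /mnt/disk*/$SHARE -type f 2>/dev/null | sed "s|^/mnt/disk[0-9]*/||" | sort > /tmp/on_array.txt
        comm -12 /tmp/on_cache.txt /tmp/on_array.txt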
  5. Interesting. Since there is no GUI button for that in the Docker settings, I assume it has to be run from the command line. I wonder why there's a balance button for the cache drive but not for docker.img. Also, would it make any sense to run balance on a cron job? In the case of docker.img it shouldn't really be changing much, so I guess it can be run manually once the user has cleaned up the image. In the case of the cache drive, with files constantly being added and removed, it seems to make sense to run a balance on it periodically, possibly a scrub too? Something like the sketch below, maybe. Sent from my SM-N900W8 using Tapatalk
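     For reference, the command-line version would look something like this; the mount points assume a stock unRAID setup, and the cron line is just an illustration:

        # Balance the btrfs filesystem backing docker.img
        btrfs balance start -dusage=75 /var/lib/docker

        # Balance, and optionally scrub, the cache pool
        btrfs balance start -dusage=75 /mnt/cache
        btrfs scrub start /mnt/cache

        # Example cron entry: balance the cache weekly, Sundays at 03:00
        # 0 3 * * 0 /sbin/btrfs balance start -dusage=75 /mnt/cache

     The -dusage=75 filter only rewrites data chunks that are at most 75% full, which keeps the balance quick compared to a full one.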
  6. Yeah, I changed some paths and likely messed one up the other day. Yesterday I went from 30% usage to 99% in about 2 hrs. As far as I know I corrected that, and a bit of cleaning is how I got it back to 23%. Without nuking the docker.img I assume the allocated space won't ever come down. Having the allocated space at 90% of the space assigned to docker isn't a problem, I assume? Just for further understanding: I assume this is why you can't shrink the docker.img. Docker will allocate more space to itself when it needs it, kind of like expanding a partition. If data is later removed, the allocated space never comes back down, but you do free up space *within* the allocated space (the partition, in the analogy). The size set in the Docker settings is just a limit on how far that allocation can expand? Sent from my SM-N900W8 using Tapatalk
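     If that mental model is right, the two numbers can be seen side by side with standard btrfs tooling (/var/lib/docker being where docker.img is mounted):

        # "used" on the devid line = space btrfs has allocated into chunks
        # (it only grows until a balance); "bytes used" = what files occupy
        btrfs filesystem show /var/lib/docker
        btrfs filesystem df /var/lib/docker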
  7. Thanks jonnie.black. Not exactly sure what all happened; I don't think I had size = used, but I can't remember. Ran balance and scrub and everything looks to be normal. It was already removed so I couldn't look at the cable, but the log looks clean for now. Next issue: my docker.img went nuts recently. Made some adjustments to what I think was the cause. But what's going on with my docker.img? Is it at 90% usage or 23%?

     Label: none  uuid: 6b55a11a-d534-4432-aeb9-5589cd54b47a
         Total devices 1  FS bytes used 9.78GiB
         devid 1 size 50.00GiB used 41.41GiB path /dev/loop0

     Filesystem   1K-blocks  Used      Available  Use%  Mounted on
     /dev/loop0   52428800   11628636  39225460   23%   /var/lib/docker
  8. Neat, this could have saved me a ton of time in the past, and it will now likely save me a bunch in the future. Does this pose any higher risk for ransomware than a normal share (not mapped) being accessed by the same user/computer the root share would be accessed from? Sent from my SM-N900W8 using Tapatalk
  9. I thought the cache drive getting full the other day (which is what I guess happened) is what caused the corruption. Though I have no idea how it got full, it is possible. I did run a scrub after the initial hard shutdown/"full cache". Isn't the log complaining that my appdata folder is full? I didn't see the log complaining about the docker.img itself. If this is just a straight btrfs issue then this will be the final straw that gets rid of it for me. I thought a RAID 1 cache pool would protect me from headaches, not cause them. Of the 4 machines I've set up with various people, all the btrfs-formatted cache drives have had multiple issues. All on single drives, so converting to XFS is an easy solution for them. Sent from my SM-N900W8 using Tapatalk
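     For completeness, the scrub I ran was the standard btrfs one, along these lines:

        # Start a scrub on the cache pool, then check the result
        btrfs scrub start /mnt/cache
        btrfs scrub status /mnt/cache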
  10. Can I obtain it via the command line? Nvm, found out how.
  11. Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
      Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
      Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
      Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
      Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
      Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
      Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
      Jun 12 15:42:55 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
      Jun 12 15:43:16 Excelsior shfs/user: err: shfs_create: open: /mnt/cache/system/docker/appdata/plexpy/plexpy.db-shm (28) No space left on device
      Jun 12 15:43:16 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
      Jun 12 15:43:16 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
      Jun 12 15:43:45 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
      Jun 12 15:43:45 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
      Jun 12 15:43:45 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
      Jun 12 15:43:45 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device

      Filesystem    1K-blocks    Used         Available    Use%  Mounted on
      rootfs        20562224     817328       19744896     4%    /
      tmpfs         20637576     588          20636988     1%    /run
      devtmpfs      20562240     0            20562240     0%    /dev
      cgroup_root   20637576     0            20637576     0%    /sys/fs/cgroup
      tmpfs         131072       24904        106168       20%   /var/log
      /dev/sdb1     4013568      620864       3392704      16%   /boot
      /dev/md3      3905078064   3280289896   624788168    85%   /mnt/disk3
      /dev/md4      3905078064   3606664900   298413164    93%   /mnt/disk4
      /dev/md9      3905078064   3673819200   231258864    95%   /mnt/disk9
      /dev/md10     3905110812   2882537336   1022573476   74%   /mnt/disk10
      /dev/md11     3905110812   1981380952   1923729860   51%   /mnt/disk11
      /dev/md12     3905110812   3108918476   796192336    80%   /mnt/disk12
      /dev/md13     3905110812   137634948    3767475864   4%    /mnt/disk13
      /dev/md14     3905110812   1803370240   2101740572   47%   /mnt/disk14
      /dev/md15     3905078064   3924448      3901153616   1%    /mnt/disk15
      /dev/md16     3905078064   3471582932   433495132    89%   /mnt/disk16
      /dev/md22     2928835740   2849459684   79376056     98%   /mnt/disk22
      /dev/md23     2928835740   2460082128   468753612    84%   /mnt/disk23
      /dev/md24     2928835740   259663840    2669171900   9%    /mnt/disk24
      /dev/sdk1     488386552    281144908    207175988    58%   /mnt/cache
      shfs          47837451600  29519328980  18318122620  62%   /mnt/user0
      shfs          48325838152  29800473888  18525298608  62%   /mnt/user
      /dev/loop0    41943040     17939664     22928304     44%   /var/lib/docker
      /dev/loop1    1048576      17296        925776       2%    /etc/libvirt

      Complaining about no space left on the cache, but it's clearly only 58% used. What's going on?
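      One possibility worth checking, since the cache is btrfs: df only reports raw free space, and btrfs can return "No space left on device" when its metadata chunks are exhausted even though data space remains. A standard check (not a diagnosis, just where I'd look first):

         # If "Metadata" shows used nearly equal to total, ENOSPC can occur
         # despite df reporting plenty of free space; a balance usually helps
         btrfs filesystem df /mnt/cache
         btrfs filesystem usage /mnt/cache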
  12. I've used that before successfully but forgot about it. I'll try it next time. So this should hopefully bring the array offline cleanly, and the hard shutdown won't cause a parity check. I wonder if this will work with dockers/VMs running and possibly writing to the array. Thank you bjp999. Sent from my SM-N900W8 using Tapatalk
  13. I think that's the problem: it's docker hanging, and personally I think it's a big one. I tried everything to kill docker; nothing worked, and therefore I had to hard power down. So if docker or a VM gets out of control, you are forced to do a hard shutdown. And I swear in 6.2 with the powerdown plugin it didn't have any trouble with that. Or I never had docker hang. Either way it really hurts the stability of unRAID. I've used unRAID for a long time, and way back a plugin could cause it to lock up, forcing a hard shutdown. With the advent of dockers the whole idea was to isolate these additions from base unRAID and not allow them to hurt its stability. Sure, you can argue there is nothing wrong with unRAID itself and that adding "crap" (plugins, VMs, dockers) is the cause of the instability, but Limetech added those features to improve unRAID's feature set, attract more people and allow far more functionality. Therefore Limetech should handle how they affect unRAID. If a hanging docker can't be killed for some technical reason, then I guess we will have to live with that, unfortunately, but it's really painful to have unRAID's stability take a hit for it. Sent from my SM-N900W8 using Tapatalk
  14. And again. Can't shut the server down with poweroff. No matter how hung the server was, the powerdown script always powered it off cleanly. What's wrong with the built-in powerdown?
  15. Well, personally I couldn't care less about the transcoding quality; the quality of my transcodes is likely going to be terrible anyhow due to remote connections being bad. Nothing local will be transcoded anyway. And the fact that it will take some stress off my CPU will help. Cost-wise it's nothing, because I already have the GPU, so it would be a free addition and a useful purpose for the card. I agree newer hardware will benefit and work more easily, but that tends to always be the case. Sent from my SM-N900W8 using Tapatalk
  16. This may be useful to users that want to use the GPU in their CPU: https://forums.lime-technology.com/topic/53388-enabling-i915-for-host/?page=2 So it looks like the first hurdle is having the drivers for whatever you want to pass through to a docker loaded into the unRAID kernel. That's a steep first step if you can't compile kernels (that would be me). Sent from my SM-N900W8 using Tapatalk
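     For Intel graphics at least, the shape of it seems to be: get the driver loaded on the host so the device node appears, then hand that node to the container. A sketch, assuming the i915 module is present in the unRAID kernel (which is what the linked thread is about):

        # On the host: load the Intel GPU driver and confirm the node exists
        modprobe i915
        ls -l /dev/dri

        # Then add the device to the container, e.g. via Extra Parameters:
        #   --device=/dev/dri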
  17. Well Ockingshay, I'm not sure if I should be happy or upset that you posted this. I knew GPU transcoding was in the works but I hadn't looked too deeply into it. I have an older Radeon that looks like it might work, so I'll likely play with getting that going at some point. I would now also like an answer to this question. Passing the GPU through to the docker would be my preference, if that's possible; it would be nice to offload some work onto the GPU. So I'm glad you brought it up, as it may work out well, but now it's just one more thing to play with. Sent from my SM-N900W8 using Tapatalk
  18. Putting aside the actual issue that has caused my server to hang (GUI, dockers, SMB all down), I still have telnet access. Issued a powerdown (muscle memory from the plugin days) and it responds "The system is going down for system halt NOW" but doesn't actually shut down. Issued poweroff and got the same response, "The system is going down for system halt NOW", but nothing happens. Why can't unRAID be forced down via the command line? Back when using the plugin, if you had telnet access and issued powerdown it always worked. I've rarely had the built-in poweroff command work, leading to unnecessary hard shutdowns. Why doesn't it work? I have the server in the hung state currently and accessed the log; there's nothing in there indicating why it won't shut down. I can keep it up for a bit to try other commands, but it's likely going down hard. I'm just upset that it used to work and now doesn't.
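     The one thing that has a chance when userspace is wedged is the kernel's magic SysRq interface, assuming it's enabled. It bypasses a clean array stop entirely, so it's strictly a last resort over telnet before pulling the plug:

        # Enable SysRq, then ask the kernel directly:
        echo 1 > /proc/sys/kernel/sysrq
        echo s > /proc/sysrq-trigger   # sync filesystems
        echo u > /proc/sysrq-trigger   # remount read-only
        echo o > /proc/sysrq-trigger   # power off

     This likely still counts as an unclean stop as far as parity is concerned, but the sync and remount-read-only steps at least protect the filesystems.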
  19. To clarify, Zan quoted me. I said my tuner was not supported and I was not surprised it didn't work. My post was mainly to confirm whether using --device=/dev/dvb to pass through the tuner was the correct thing to do. Assuming Zan also used that, it looks like he has confirmed he has passed through the tuner but Plex doesn't see it. He hasn't mentioned what tuner he has.
  20. Has anyone passed a PCI tuner through to Plex and gotten it working?
  21. Yeah, I'm a little confused too. Linuxserver releases updates on Fridays, so you would expect that the current release isn't the latest. But I'm on Version 1.7.2.3878, the Plex changelog shows 1.7.2 as the latest, and checking for updates inside of Plex doesn't show any as available. I have my tuner passed through with this in the container's Extra Parameters: --device=/dev/dvb. Plex doesn't find it, but it's also not supported, so I was just going to try...
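     A quick sanity check that the passthrough itself worked (the container name here is a placeholder for whatever yours is called):

        # On the host: confirm the tuner's device nodes exist
        ls -lR /dev/dvb

        # Inside the container: confirm docker actually exposed them
        docker exec plex ls -lR /dev/dvb

     If the nodes show up inside the container, the remaining question is purely whether Plex supports the tuner.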
  22. Was on Sickbeard from day one, then SickRage, now Medusa, which is the natural progression. A little while ago I also installed Sonarr and have it running too. Each has its pros and cons; hopefully both will improve on the cons, and at that point there might be little difference. From what I can tell, most of the features that are on one and missing from the other are already on the roadmap of the respective project.

     Episode quality management: This is the major difference. Medusa lets you set allowed and preferred qualities. It will always try to get the preferred quality and won't stop trying till it does. But with the way Medusa works, you can't work your way up the quality ladder. So if you set HDTV 720p and WEBDL 720p as allowed, and Bluray 720p as preferred, Medusa will download HDTV 720p first (if it's the first it sees available). If a WEBDL 720p is released it won't download it, as it's merely allowed, like HDTV 720p. It will, however, download and overwrite the file if it finds Bluray 720p. Sonarr will upgrade through each quality: from HDTV 720p to WEBDL 720p, then to Bluray 720p.

     Quality control: Sonarr allows you to set size limits (minimums and maximums) for each quality. For example, you can set Bluray 720p to have a min size of 2GB and a max size of 4GB. (Medusa doesn't have this feature.) Sonarr has "Must include" and "Must not include" filters on the file/release name. These can be used to block subs or certain release groups, or to force subs/groups, among many other things. Sonarr also has tags, but I'm not well versed in how they really work. Medusa has the same "Must include" and "Must not include" filters, and additionally "Preferred" and "Undesired" (Sonarr doesn't have this feature), which is really helpful. For example, you can prefer 5.1/DD audio: if Medusa finds a release with 5.1/DD in the name it will prefer that over one without, but since it's only preferred, if it can't find a 5.1/DD file it will still download a 2.0 release. If one were to put 5.1/DD in "Must include" instead, then when no files with 5.1/DD are found, every 2.0 release is blocked and you might end up with no episodes at all.

     Torrent/newsgroups: Both support torrents and newsgroups. I know in Sonarr you can prefer one method over the other. In Medusa you can set the order in which trackers/indexers are searched. No experience as to how each does this in practice.

     Manual searching: Both Medusa and Sonarr allow you to search manually and see all releases that match the show. Looks like SickGear should be able to do this also. SickRage can't.

     Post processing: Both do failed-download handling, renaming and moving. SickRage/SickGear/Medusa use the nzbToMedia script. Sonarr has better integration with sabnzbd/nzbget (no experience with nzbget) and shows download progress from the client. Setting up post processing in Sonarr is extremely simple, though both get the job done.

     GUI: Both of course have different GUIs, and I find both have their strengths. Medusa allows a bit of customizing of what you see inside a show.

     Summary of the two:
     Sonarr: Nice looking, simple, not too much info shown at once (pro or con, depending). I find it has no less info than Medusa but requires more clicking around to get at it.
     Medusa: In some ways isn't as pretty, but I actually prefer this GUI, possibly due to starting way back on Sickbeard. I find it better for actually managing shows due to the amount of detail displayed without the popups and tabs Sonarr uses. Simple things too, like going to the details of each show without having to go back to the show list, as you have to in Sonarr.

     CP vs Radarr: That's easy. CP was working pretty poorly; Radarr is working well and is very actively developed. Not every feature from CP is in Radarr or works yet, but for the most part it's all there.
  23. There is a thread around here somewhere with someone trying to get android running on unraid. I think they partly succeeded but it wasn't really usable. On mobile so search sucks. Sent from my SM-N900W8 using Tapatalk
  24. Rsync is good for this. Once you figure out the switches/parameters you want to use and add in source and destination paths, you're good to go. If you want syncs after the fact, trigger the rsync command with cron. I'm by no means fluent in Linux, but I did manage to move all my data over to a backup server with rsync; I have yet to set up a cron job to keep it synced, though. Rsync will definitely be the most lightweight solution; roughly what I ran is below. I've said this before, and maybe it's just a pipe dream, but backing up one unRAID server to another should be something that's built into the GUI. Everyone is constantly told unRAID is not a backup plan. I'm sure it would encourage more users to buy a second unRAID license and build a second box if it were as easy as loading up another unRAID USB stick and selecting the shares that should be backed up to the other box. Sent from my SM-N900W8 using Tapatalk
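     Roughly the shape of what I ran (hostnames and share names are placeholders, and --delete makes the destination mirror deletions, so leave it off if you'd rather keep removed files):

        # One-way sync of a share to the backup server; -a preserves
        # permissions/timestamps, -v prints what's being transferred
        rsync -av --delete /mnt/user/Media/ backupserver:/mnt/user/Media/

        # Example cron entry: run the sync nightly at 02:00
        # 0 2 * * * rsync -a --delete /mnt/user/Media/ backupserver:/mnt/user/Media/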