Posts posted by jeffreywhunter
-
Bump? I'm stuck at this point. I guess I'll install a docker from another ?
-
Forgive this potentially misplaced post - I'm having problems finding the right place to get support for the PlexMediaServer docker from PlexInc. The support page from the docker app lands on the Plex website, but not a support page. I've posted the following on the Plex Forums, but no answer yet. Hoping someone in the Unraid community has a perspective on how to diagnose this odd and confusing issue.
Plex Server Version#: Version 4.76.1
Unraid Server Version#: Version 6.9.2 2021-04-07
Plex Server has been Running in Unraid Docker for several years with no issues.
Repository: plexinc/pms-docker (Last Update: May 16, 2022). The server log shows Plex starting and appears OK (not sure what "libusb_init failed" means - something to do with DVR?). Server logs attached:
[s6-init] making user provided files available at /var/run/s6/etc…exited 0.
[s6-init] ensuring user provided files have correct perms…exited 0.
[fix-attrs.d] applying ownership & permissions fixes…
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts…
[cont-init.d] 40-plex-first-run: executing…
[cont-init.d] 40-plex-first-run: exited 0.
[cont-init.d] 45-plex-hw-transcode-and-connected-tuner: executing…
[cont-init.d] 45-plex-hw-transcode-and-connected-tuner: exited 0.
[cont-init.d] 50-plex-update: executing…
[cont-init.d] 50-plex-update: exited 0.
[cont-init.d] done.
[services.d] starting services
Starting Plex Media Server.
[services.d] done.
Critical: libusb_init failed
Circumstance - Server locked up (no idea why) after running for months.
Server appears to start normally (Server Log shows: May 20 19:26:08 HunterNAS rc.docker: PlexMediaServer: started succesfully!)
Plex Docker running as usual and accessible from normal WebUI from Dashboard
THE PROBLEM: However, when I log in to the Plex app on my Unraid server from my local browser (as I have hundreds of times), I don't see my local Plex server in the Plex WebUI, nor do I see any of my local libraries. I see other shared Plex servers and I see the Plex videos, just not my server. It's like the server isn't running, but I'm accessing the Plex docker from my server's local IP address! Very confused. I don't know how to diagnose this.
Help me Obiwan!
-
I've been using Shrmn's docker for years (I actually started this thread several years ago). I have one license of GoodSync on my local PC. GoodSync for Linux is free, as is the GoodSync Connect account it requires (the Linux side of the backup conversation). You can learn more about GoodSync for Linux here: https://www.goodsync.com/for-linux The GoodSync Connect account (https://www.goodsync.com/goodsync-connect) is required to make it work. Once the docker is installed and configured it "just works". The original instructions are pretty clear: a valid GoodSync Connect account is required, and the WebUI credentials are the GoodSync Connect credentials that you set as GS_USER and GS_PWD (note these are NOT the same as your goodsync.com account credentials).
Interestingly enough, you can actually use Goodsync Connect as a private cloud from any device, although I've not tried that...
That said, the GoodSync Connect Server docker was originally created by Shrmn but was last updated in May 2019. It currently has a version issue that causes a bogus error in the log (Apr 19 10:02:55 HunterNAS kernel: netlink: 4 bytes leftover after parsing attributes in process `gs-server'.); these entries accumulate over time and eventually cause the server to crash. I've posted the question to the GitHub support page, but the docker hasn't been updated.
While GoodSync Connect works REALLY well and is very fast, I've fallen back to an SMB connection into Unraid via shares. That works well enough, but I'm disappointed, because GoodSync for Linux works really well...except for filling the log. And according to GoodSync support, the issue is with Linux, not GoodSync: the "netlink: 4 bytes leftover after parsing attributes in process" message needs to be addressed in the OS/docker somewhere.
GoodSync Connect instructions: https://help.goodsync.com/hc/en-us/articles/360007572092-Server-Advanced-Options-demystified-
Again, all that said, SMB works well enough. Unless we can find someone who knows how to create (and support!) a docker, we're stuck. Per my original post (which still stands), I'd love to see a group of us come together to support someone maintaining this docker...
-
Unraid 6.9.2, using Goodsync v11 on Windows 10 (latest build) via SMB shares to the server. Diagnostics attached.
These are the shares (My\ Backups/ is the share reporting no space)
root@HunterNAS:~# ls -lah /mnt/user
total 21G
drwxrwxrwx  1 nobody users  328 Mar 24 08:54 ./
drwxr-xr-x 19 root   root   380 Mar 24 08:44 ../
drwxrwxrwx+ 1 nobody users   26 Dec 11 23:23 Acronis/
drwxrwxrwx  1 nobody users    6 Dec  7 18:02 Downloads/
drwxrwxrwx  1 nobody users   43 Mar  8 07:45 Drive\ D\ 10TB\ Backup/
drwxrwxrwx  1 nobody users 4.0K May 20  2020 Dropbox/
drwxrwxrwx  1 nobody users  143 Mar  5  2019 EBH\ Backups/
drwxrwxrwx  1 nobody users   21 Feb 22  2017 FTP/
drwxrwxrwx  1 nobody users    6 Oct 21  2018 GSContainerPath/
-rw-r--r--  1 root   root   28M Dec 16  2017 My
drwxrwxrwx  1 nobody users  294 Mar 21 10:17 My\ Backups/
drwxrwxrwx  1 nobody users  110 Mar 11  2021 My\ Backups\ (LT)/
drwxrwxrwx  1 nobody users   19 Feb 21  2017 Pydio/
drwxrwxrwx  1 nobody users   76 Dec 24  2017 RawVideoFiles/
drwxrwxrwx  1 nobody users  328 Sep 25 14:12 appdata/
drwxrwxrwx  1 nobody users 4.0K Mar 22 01:19 archives/
drwxrwxrwx  1 nobody users  121 Feb 21  2017 cachebackup/
-rw-rw-rw-  1 nobody users  20G Mar 24 14:37 docker.img
drwxrwxrwx  1 nobody users   46 Feb  6  2018 homemovies/
drwxrwxrwx  1 nobody users   81 Oct 11 11:57 iSpy/
drwxrwxrwx  1 nobody users   99 Jun 21  2020 jwhbackup/
drwxrwxrwx  1 nobody users 4.0K Apr 29  2017 lost+found/
drwxrwxrwx  1 nobody users  173 Dec 30  2019 media/
drwxrwxrwx  1 nobody users  140 Mar 24 08:46 movies/
drwxrwxrwx  1 nobody users 8.0K Jan 26 01:30 music/
drwxrwx---  1 nobody users  148 Sep 25 14:18 nextcloud/
drwxrwxrwx  1 nobody users  289 Mar 19 22:32 pictures/
drwxrwxrwx  1 nobody users 4.0K Dec 22 03:00 tv/
drwxrwxr-x  1 nobody users   24 Mar 16  2021 web/
On User Shares page, share reporting 13.4 TB of free space. No cache applied (receives very large backup files from workstations.)
My Backups | Archive of backups (Drive E) | Public | Cache: No | Size: Compute... | Free: 13.4 TB
Error from Goodsync app
141251 Copy New 'E:/My Backups/_JWH/JWHWIN10-20201130/Samsung SSD 960 PRO 512GB 2B6QCXP7-0162.tibx'
  -> '//HUNTERNAS/My Backups/_JWH/JWHWIN10-20201130/Samsung SSD 960 PRO 512GB 2B6QCXP7-0162.tibx' (140,928,155,648)
  - ERROR: Error copying file: There is not enough space on the disk. (error 112)
141309 Error copying file: There is not enough space on the disk. (error 112)
I'm stumped as to why I'm getting a 'drive full' error. What am I missing?
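For what it's worth, my rough understanding of the arithmetic: the failing file is 140,928,155,648 bytes (about 131 GiB), which should easily fit in 13.4 TB of reported free space - unless the write lands on one nearly-full disk because of split level or included-disk settings. A minimal sketch of that per-disk check (the disk free-space numbers are made up for illustration, not from my server):

```python
# Sketch: a user share reports the SUM of free space across included disks,
# but a single file must fit entirely on ONE disk (minus the share's
# "Minimum free space" setting). Per-disk figures below are hypothetical.

FILE_BYTES = 140_928_155_648          # the .tibx file from the GoodSync error
file_gib = FILE_BYTES / 2**30         # ~131.25 GiB

disks_free_bytes = {                  # hypothetical per-disk free space
    "disk1": 9_500 * 2**30,           # ~9.3 TiB free
    "disk2": 120 * 2**30,             # only ~120 GiB free
    "disk3": 80 * 2**30,              # only ~80 GiB free
}
min_free = 0                          # share's "Minimum free space" setting

# Which disks could actually accept this file?
candidates = [d for d, free in disks_free_bytes.items()
              if free - min_free >= FILE_BYTES]

print(f"file size: {file_gib:.1f} GiB")
print("disks with room:", candidates)
# If split level or allocation forces the file onto disk2 or disk3, the copy
# fails with "not enough space" even though the share total looks huge.
```

So the share total can be enormous while the specific disk the allocator picks is too full for one 131 GiB file.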
Thanks in advance!
-
2 years later and I still have this "HunterNAS kernel: netlink: 4 bytes leftover after parsing attributes in process `gs-server'." every 60 seconds in my log. Obviously not a show stopper, but it IS an annoyance clogging up my log. Any ideas out there? GoodSync Connect is an excellent way to back up my PCs to unRaid. I just wish it didn't clog my log...
-
unRaid v6.9.2. Exporting 11 drives using the Integrity utility. Three of the drives are throwing memory errors in the GUI (see screenshot). In addition, the log is showing many bunker "no export of file" errors.
Feb 13 13:38:35 HunterNAS bunker: error: no export of file: /mnt/disk3/iSpy/CJVMF/thumbs/2_2021-09-06_15-02-19_828.jpg
Feb 13 13:38:35 HunterNAS bunker: error: no export of file: /mnt/disk3/iSpy/CJVMF/thumbs/2_2021-09-06_15-01-23_737_large.jpg
Not sure what to do about it. Diagnostics attached.
Thanks in advance!
-
I have a modest server with this configuration:
1. i5-2500K w/32GB memory (unRaid 6.9.2; I religiously keep all plugins/docker apps up to date)
2. Internal SATA controller: Intel Corporation 7 Series/C210 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04)
3. 2x Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
4. Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 09)
I have one 2TB HGST drive that I'm trying to mount outside of the array as a scratch drive available via SMB. I'm able to successfully mount the drive (see image of Main), ensuring that Read-Only is set to NO (see image of mount config). But when I try to move a file to the drive, I get an error saying the drive is write protected (see image).
The config clearly shows Read Only = No, so I must be missing a configuration somewhere.
Thanks in advance!
-
While there are a number of discussions about file transfer performance in the forum, I've not been able to find a specific discussion of transfer performance between PCs and the unRaid server across a network, and of how cache affects it. I know there are many variables that can affect performance, but I have a specific question based on recent experience.
I have a modest server with this configuration:
1. i5-2500K w/32GB memory (unRaid 6.9.2; I religiously keep all plugins/docker apps up to date)
2. Internal SATA controller: Intel Corporation 7 Series/C210 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04)
3. 2x Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
4. Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 09)
15 total drives in the system (a variety of HGST/Seagate 2-6TB drives) w/39TB total storage (2 parity), 1 512GB SSD cache
(See attached Drive list for details)
I run typical apps - MariaDB, Plex, GS Dock (GoodSync), Nextcloud, Krusader - and the server is rarely stressed.
I've been moving a lot of files (ISOs) lately, as I have acquired a new library of movies. In doing so, I ran into a problem where the cache drive filled up and, through my own misconfiguration (i.e. using 'prefer' cache for shares), caused problems.
Before turning off cache for the ISO backup share, I saw 113MB/s copying ISO files (6-8GB) to the share I'm storing them on. After I changed the ISO share to Cache=no, I no longer run into problems, but I'm seeing roughly 50% of the previous performance.
The approximate throughput for gigabit Ethernet without jumbo frames, using TCP, is around 928Mbps or 116MB/s. I'm pretty sure the 113MB/s to the SSD was limited by the gigabit network speed (plus a bit of overhead). A great discussion of the factors affecting file transfer over gigabit Ethernet can be found at https://www.cablefree.net/wireless-technology/maximum-throughput-gigabit-ethernet/. Thus, I'm getting what I paid for when I use unRaid with Cache=yes.
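That overhead figure can be sanity-checked with back-of-the-envelope arithmetic (a rough sketch only; the exact ceiling depends on TCP options, jumbo frames, and SMB's own overhead):

```python
# Back-of-the-envelope goodput for TCP over gigabit Ethernet, 1500-byte MTU.
# Each 1538 bytes on the wire (preamble 8 + Ethernet header/FCS 18 +
# 1500-byte frame + inter-frame gap 12) carries 1460 bytes of TCP payload
# (1500 minus 20 bytes IP header and 20 bytes TCP header).
LINE_RATE_MBPS = 1000
WIRE_BYTES = 1538
PAYLOAD_BYTES = 1460                  # drops to 1448 if TCP timestamps are on

goodput_mbps = LINE_RATE_MBPS * PAYLOAD_BYTES / WIRE_BYTES
goodput_mb_s = goodput_mbps / 8       # megabytes per second

print(f"theoretical goodput: {goodput_mbps:.0f} Mbps ~= {goodput_mb_s:.0f} MB/s")
# Roughly 949 Mbps, i.e. about 119 MB/s of payload before any SMB overhead -
# so a sustained 113 MB/s really is the network ceiling, not the SSD.
```

The slightly lower 928 Mbps figure from the linked article presumably folds in additional protocol overhead, but either way the conclusion holds: the cache write was network-limited.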
So this is my question: when I set the ISO share to no cache, is the 68MB/s (typical) speed I'm seeing a limit of the physical drive itself, or is it the file-management overhead of unRaid? The slowest drives I have, the 2TB HGSTs, have a sustained transfer rate of 133MB/s, so it seems they should be able to consume what's coming.
So is the difference in performance between Cache=yes and Cache=no down to unRaid file-management overhead? Is that 50% hit correctly identified, or am I missing something important...
Thanks in advance for your wisdom and education...
-
Reopening this conversation a year later. The problem has continued through a number of 6.9 updates. Now at v6.9.2, all plugins have been kept up to date, but I still see the log entries filling up my log and crashing the server.
Jan 2 14:11:05 HunterNAS kernel: netlink: 4 bytes leftover after parsing attributes in process `gs-server'.
Server locks up every few days - even the console locks up and the PC is unresponsive (it won't even respond to the NumLock key). I'm at a loss about what to do next with this annoyance. Would it make sense to delete the log files daily with a cron job (a hack, I know, but...)? Any other ideas on how to keep my server from crashing all the time...
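Rather than deleting whole log files on a cron schedule, a less destructive option might be to filter out just that one message. This is only a sketch - it assumes Unraid's rsyslog picks up drop rules from a custom conf file, and the path and filename are my guesses, not something I've verified on 6.9.2:

```
# /etc/rsyslog.d/01-drop-gs-netlink.conf  (hypothetical location)
# Discard only the noisy netlink message; everything else still logs.
:msg, contains, "4 bytes leftover after parsing attributes in process `gs-server'" stop
```

rsyslog would need a restart for the rule to take effect, and since Unraid's root filesystem lives in RAM the file would have to be re-created at boot (e.g. from the go script). Note this only hides the symptom; it wouldn't explain the lockups.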
-
On 12/10/2021 at 4:52 PM, trurl said:
Probably there is nothing to move because your user shares are set so that all files are already where they should be.
Mover ignores cache-no and cache-only shares, moves cache-yes shares from cache to array, and moves cache-prefer shares from array to cache.
Attach diagnostics to your NEXT post in this thread.
Diagnostics attached. I think I didn't fully understand some of the settings, specifically Cache: prefer - I didn't realize it would prefer to leave the files on the cache; I thought it meant use the cache unless full. I physically moved the files myself using MC, so I'm good for now. I've looked at all the shares; they are either Cache: yes (for shares that don't have huge files - documents, music, movies, etc.) or Cache: no for very large files (like PC backups), which now go directly to the array. I'm having a problem where the server runs for a few days and then locks up. I suspect it's an unrelated logging issue, but I'm not sure. I see a lot of these in the log...
Dec 24 10:00:30 HunterNAS kernel: netlink: 4 bytes leftover after parsing attributes in process `gs-server'.
The GS-Server works flawlessly and there are no issues with it running - other than the log messages, which I've not had much luck figuring out. But that's a topic for a different thread!
-
5 hours ago, trurl said:
Also, running mover more often won't help anything. Mover is intended for idle time. It is impossible to move from cache to slower array as fast as you can write to cache. If you intend to write more than cache can hold at one time, set the user share to cache-no.
My cache appears to be 'stuck' for some reason. Even when I run mover, nothing happens. How can this be?
-
So it seems then, that for shares used for moving large files (aka device backups), it makes sense to have those shares be cache-no. Otherwise, you'd have to set the minimum free to be very large.
Thanks!
-
8 minutes ago, alturismo said:
read about "min free space" to get the expected behaviour, i also wonder why you have to set it but read up into it
as note, when you fill up the array (like new setup etc or just mass moving new files) rather use /mnt/user0/... which is the share but array disks ONLY without using the cache, like this you spare the extra disk usage by moving from cache to array which is wanted anyway in this scenario.
sample with a share called "Media" with "cache: yes"
/mnt/user/Media/ <- will copy to cache first and move when mover is triggered
/mnt/user0/Media <- will copy directly to the array drives instead without using the cache
here you see the difference, i have a 2 tb cache nvme
Thanks! I was aware of user0; however, I thought that resulted in slower performance copying the files? Probably not a big deal - I get 113MB/s with the cache. Also, how would I set up the user0 path as an SMB share for Windows 10? I don't see that option in the included disks.
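On the SMB question: /mnt/user0 paths don't appear in the normal share settings, but as I understand it any path can be exported by hand through Samba's extra configuration. A sketch only - the share name and user are placeholders I made up, and I haven't tested this on 6.9.2:

```
# Added to /boot/config/smb-extra.conf (Settings > SMB > Samba extra configuration)
[Media-Array]
    path = /mnt/user0/Media
    comment = Media share, array disks only (bypasses cache)
    browseable = yes
    writeable = yes
    valid users = someuser
```

After restarting Samba (or the array), Windows should see \\HUNTERNAS\Media-Array like any other share, with writes going straight to the array disks.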
-
Latest version of unRaid. I'm copying a LOT of files via SMB (Windows 10 to unRaid share), which works great. I have a 500GB cache drive, but the drive is filling up and then the Windows copy hangs. I thought that once the cache disk fills up, writes would skip the cache and go directly (but slower) to the physical drives. Am I misinformed? Is there a setting somewhere I've missed? Mover is set to execute every 3 hours.
Here's the share settings
Share name: Acronis
Comments: Acronis Backups
Use cache pool (for new files/directories): Yes
Select cache pool: Cache
Enable Copy-on-write: Auto
Allocation method: High-water
Minimum free space: 0KB
Split level: Automatically split any directory as required
Included disk(s): All
Excluded disk(s): None
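If my understanding is right, the "Minimum free space: 0KB" line is the culprit: unRaid only diverts a cache-yes share to the array when cache free space drops below the minimum, and at 0KB that never triggers. A toy sketch of that decision (my reading of the behavior, not actual unRaid code):

```python
# Toy model of where a new file lands on a cache-yes share.
# unRaid checks free space at file creation (it can't know the final size),
# so with min_free = 0 the cache is "never full" and writes keep hitting it
# until it physically runs out mid-copy and the transfer hangs.

def write_target(cache_free: int, min_free: int) -> str:
    """Where a new file starts on a cache-yes share (simplified)."""
    if cache_free < min_free:
        return "array"        # overflow: write bypasses the cache
    return "cache"            # normal case: write to cache, mover drains later

GiB = 2**30
# With min_free = 0, even a nearly-full cache still answers "cache":
print(write_target(cache_free=1 * GiB, min_free=0))
# With min_free = 10 GiB, the same write overflows safely to the array:
print(write_target(cache_free=1 * GiB, min_free=10 * GiB))
```

The usual advice is to set Minimum free space to something larger than the biggest file you ever copy, so the overflow kicks in before the cache actually fills.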
Thanks in advance!
-
Latest Unraid (6.9.2), all plugins updated.
I'm not sure that I have this utility set up properly, or perhaps I just don't fully understand the outputs. After running through the various functions (Build, Export, Check), I saw a bunch of "bunker: error: no export of file: /mnt/..." errors in the syslog (attached). It seems like it's flagging these files? What should I do about these errors?
Thanks in advance!
-
1 minute ago, jeffreywhunter said:
Love the app, thanks for creating it. Just trying to mount and share the drive to get things started. The drive is shown on Main. It's an NTFS drive, already formatted. It mounted, I can see it in Main, and I can click on it to access files/directories. I've gone into settings and turned on SHARE. When I go back into settings I see "Mount Point: 2TB_Scratch_Disk", but I'm not seeing an SMB share available. If I go to http://SERVER/2TB_Scratch_Disk, a blank page displays. I've reread the instructions in the first post, but I must be missing something.
Thanks in advance!
Perhaps it just takes a couple of minutes to push the share out. In the time it took me to reread the instructions and post my query, the share showed up, and I can now access it just like any other share. I had opened a new File Explorer window (Win 10) to view the share, but when I went back after posting, the share was there. Thanks again, handy tool.
-
Love the app, thanks for creating it. Just trying to mount and share the drive to get things started. The drive is shown on Main. It's an NTFS drive, already formatted. It mounted, I can see it in Main, and I can click on it to access files/directories. I've gone into settings and turned on SHARE. When I go back into settings I see "Mount Point: 2TB_Scratch_Disk", but I'm not seeing an SMB share available. If I go to http://SERVER/2TB_Scratch_Disk, a blank page displays. I've reread the instructions in the first post, but I must be missing something.
Thanks in advance!
-
1 hour ago, trurl said:
what version Unraid? Newer kernel on 6.9 don't know if it would help or not
Sorry, should have said. Version 6.8.3 2020-03-05. If 6.9 has a new kernel, then perhaps that is the resolution. Is there a way to validate plugin compatibility with 6.9?
-
I continue to see this error every few minutes. I've researched the problem with the GoodSync developers, and they have identified it as needing a kernel update. Is there a formal process to get this reviewed by the UnRaid team? This is what the developers posted in response:
William replied (2020/12/20 01:23 pm EST): I googled it and think they say that you need to update kernel. you can do the same search.
You wrote (2020/12/20 12:32 pm EST): Are you aware of a solution or way to suppress the message?
William replied (2020/12/20 11:32 am EST): this is a known bug in Linux, has nothing to do with GS
Mike replied (2020/12/19 11:30 am EST): Hello, The error you are receiving is related to Linux and not GoodSync. Linux - netlink: 4 bytes leftover after parsing attributes in process. You will need to address this with your device directly. Thank you.
You wrote (2020/12/19 09:55 am EST): Using Goodsync Connect on Linux. I'm seeing entries in my Linux log (see below). Shows up every 4 minutes. Any idea what it is or how to turn this off? Config issue? Bug? Thanks in advance!
================ example lines from log:
Dec 19 08:02:30 HunterNAS bunker: exported 277738 files from /mnt/disk1. Duration: 00:01:27
Dec 19 08:02:40 HunterNAS bunker: exported 408250 files from /mnt/disk11. Duration: 00:01:37
Dec 19 08:05:37 HunterNAS kernel: netlink: 4 bytes leftover after parsing attributes in process `gs-server'.
Dec 19 08:09:37 HunterNAS kernel: netlink: 4 bytes leftover after parsing attributes in process `gs-server'.
Dec 19 08:13:37 HunterNAS kernel: netlink: 4 bytes leftover after parsing attributes in process `gs-server'.
Thanks a bunch. This is one of those things that just fills up the log... I'm not sure of a way to suppress the log entry from the APP.
-
24 minutes ago, JorgeB said:
Maybe a firmware glitch? It should never go up.
I guess I'll just watch it. I have 2 parity disks, so if I lose one, no big deal...
-
16 minutes ago, jonathanm said:
But helium ALWAYS goes up. Right?🤣
LOL!
-
7 hours ago, JorgeB said:
Not going to help with the helium level, if it keeps dropping the disk is going to stop working.
Duh, now that you put it that way... I ran an extended test last night. Helium is at 100? How is that possible?
22 Helium level 0x0023 100 100 025 Pre-fail Always Never 100
-
6 hours ago, JorgeB said:
Yep, not a good sign; keep an eye on that attribute, though since this is the second time you have issues with this disk it might be really failing anyway.
Would it help to run the disk through Preclear a couple times?
-
18 hours ago, Squid said:
The read error that's also listed on the Main page. Without diagnostics before you reboot, it's hard to say why
Apologies, attached! I've not rebooted since the error was displayed.
Problems with Plex from plexinc/pms-docker - Plex docker running, but local server not available
in General Support
Posted
Just discovered something interesting. I have several users on my system and have shared this server with others. They can all see the local server, but my user, which is the main account, isn't able to see the server. How can this be?