gubbgnutten

Everything posted by gubbgnutten

  1. Since you get "69.9MB/s according to Windows", at least something is working. What does your network look like? What is your "essentially, a 100MB link"? A power line adapter or something? How does everything fit together? Network switches? Hubs? Routers? Firewalls? Wifi? Carrier pigeons? If you copy from your workstation to downstairs instead, do you experience the same problems?
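To separate raw network throughput from disk behaviour, a plain TCP test is useful. A minimal sketch, assuming iperf3 is installed on both machines (it is not part of stock unRAID; the IP address is a placeholder):

```shell
# On the unRAID server (placeholder IP 192.168.1.10): start a listener
iperf3 -s

# On the workstation: measure raw TCP throughput to the server
iperf3 -c 192.168.1.10
# A healthy gigabit link reports roughly 900+ Mbits/sec; a result around
# 94 Mbits/sec points at a 100 Mbps hop (switch port, cable, powerline adapter).
```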
  2. My first guess would be that the docker image is full, probably due to one or more misconfigured dockers writing stuff to the image instead of the intended mapped paths. Post diagnostics (see forum stickies).
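A quick way to confirm a full docker image from the console; this sketch assumes the image's default mount point of /var/lib/docker:

```shell
# Check how full the docker image (loopback filesystem) is
df -h /var/lib/docker

# Show per-container writable-layer sizes; a large SIZE usually means a
# container is writing inside the image instead of to a mapped path
docker ps -s
```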
  3. Probably should be mentioned that the command "diagnostics" can be executed from the console (if Telnet/SSH is working or a keyboard/screen is available). If the web interface is not reachable, it is problematic to use the web interface to gather diagnostics. :-)
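For reference, running it from the console looks something like this (the exact output file name and path may differ between versions):

```shell
# From the local console or an SSH session on the unRAID server:
diagnostics
# Writes a zip such as /boot/logs/tower-diagnostics-YYYYMMDD-HHMM.zip
# onto the flash drive, which can then be attached to a forum post.
```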
  4. The syslog shows that you tried to restart emhttp manually; it does not work that way. Instead, we need to figure out why the original instance is not responding. Probably it is stuck waiting for something. In any case, the syslog also says that you are running 6.1.3. Try the current version first.
  5. Thanks, you're the man!! I'm at work now but was able to do a quick test with a trial key on a server with 4GB RAM. Screencap 1 uses the default values, screencap 2 uses: sysctl vm.dirty_background_ratio=80 and sysctl vm.dirty_ratio=90. And yes, my server is on a UPS, would not do it otherwise. @strike On v6, md_write_limit can only be changed on the flash drive: flash/config/disk.cfg. Alrighty then, in my case, with default settings: transfers of around 4GiB (with 16GiB RAM) are limited by the gigabit network, then a bit after that the parity-related limit kicks in. Large enough for most casual transfers, which explains why I didn't really notice it earlier, so thanks for pointing it out. With turbo writes I get 90-100MB/s throughout the transfer of a 10GB file. With the two dirty settings quoted, full network speed throughout the 10GB file (should've generated a larger dummy file)...
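For anyone wanting to reproduce the test, the two settings can be applied as below; since unRAID's root filesystem is rebuilt at every boot, persisting them via /boot/config/go is an assumption about a stock setup. Large dirty ratios let more unwritten data sit in RAM, so only consider this on a UPS-backed server:

```shell
# Apply immediately (lost on reboot); lets more writes buffer in RAM
sysctl vm.dirty_background_ratio=80
sysctl vm.dirty_ratio=90

# To persist across reboots, append the same commands to /boot/config/go:
echo 'sysctl vm.dirty_background_ratio=80' >> /boot/config/go
echo 'sysctl vm.dirty_ratio=90' >> /boot/config/go
```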
  6. Strange. I was under the impression one didn't have to do anything to make it use any free RAM for buffering. Will have to confirm tonight and see if I'm mistaken.
  7. Since 6.1.4 includes a fix for the parity check problem affecting many with that card, it is not surprising that you are not seeing a problem. While looking at "Estimated speed" during a parity check can be relevant early on in the process just to confirm that there are no strange bottlenecks, the interesting measurement in my opinion is the duration of a complete parity check (or the average speed for a complete check). The current speed will vary a lot during the check, especially in your case with a mix of 2TB, 3TB, 4TB and 5TB drives.
  8. Not exactly. If you create a share, it will be a top-level folder on cache and/or one or more of the array data disks, depending on the share settings. Conversely, if you create a top-level folder on cache and/or one or more of the array data disks, it will be a share whether you explicitly create it as a share or not. If you don't configure a share it will have default settings. I rather meant that it is not necessary to use a top-level folder, leaving unassigned devices as an option. In any case, there seem to be at least two potential problems here. One is that Plex is expecting its configuration to be located under /mnt/cache/appdata when the share is not cache-only, and then we have SMART.
  9. If you create a share it will be a share. If you use a folder that is not a share, well, it won't be a share. I have set up my appdata as a cache only share, with the plex configuration stored there. The main reason for me is that basic operations and browsing in a plex client won't require any of the array disks to be spun up.
  10. First of all - your appdata IS a user share. That is not a problem. What could be a problem on the other hand is that you have configured the Plex container to look for its configuration on the cache drive, without having set the appdata share to "cache only". Setting "Use cache disk" to "yes" will cause the files to be moved away from the cache disk to the array whenever Mover runs. Still, you should absolutely investigate those SMART warnings...
  11. Sure looks like you are trying to copy files from the array before the array has been started. How about copying them from the flash drive instead?
  12. Good to hear that you found the problem, but shouldn't you say that in the thread you started instead?
  13. Cool. The plugin update method actually worked properly for me for the first time in a couple of releases. Looking forward to seeing how the parity check speed turns out now, and I'm sure the 6.2 beta will be yummy.
  14. Great news! Just curious, are the tweaks specific for SAS2LP, or will for example other Marvell users benefit?
  15. Not necessarily. I've had the same symptoms once, and others have as well. Posts scattered around the forum. Basically, the system initially boots just fine but the config folder is completely empty except for freshly generated ssh files, so nothing works or works with default settings. Could possibly be some race condition if the USB drive is slow to mount and the ssh files are created first, preventing mounting of the flash drive. Perhaps DayDreamer could try another USB drive/port?
  16. Probably, there are some rough guidelines on Plex Support. I went for the i5 in a similar situation, the difference in price over the estimated lifetime of the system wasn't that big in my case. Pretty sure the i3 would have been sufficient.
  17. Interesting, you picked the permission fix suggestion from a member with 1 post in the forum over the suggestion of the one with 490 posts. Shouldn't prevent Plex from seeing files though, but could cause problems writing stuff so consider the new permissions script.
  18. As jonathanm asked, what instructions are you referring to? If you only map folders on disk1 to the Plex docker container, then quite predictably only media in those folders on disk1 will be available to Plex. If your media is spread out over more disks, then it is not strange at all that half of your library is missing from Plex... I can't see any reason not to map the media shares through user shares. Just don't confuse "/mnt/user" with "/usr", the latter is off limits :-) Previously I just assumed that you referred to Tools, New Permissions, but now I'm not so sure anymore. How did you try to "redo permissions"?
  19. Are all of your media files really only on disk1? In my case, I have too many media files for all of them to fit on one disk, so I'm using user shares and simply map "/mnt/user/Movies" to Plex.
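On unRAID such a mapping is normally set up via the Docker tab, but expressed as a plain docker command it would look roughly like this sketch; the image name and container-side paths are assumptions for illustration:

```shell
# Container config on a (preferably cache-only) appdata share, and the
# Movies user share (spanning all data disks) mounted read-only.
# Image name "linuxserver/plex" and container paths are illustrative.
docker run -d --name plex \
  --net=host \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/Movies:/movies:ro \
  linuxserver/plex
```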
  20. "Is this what you want to see? See attachment" No. Go to the Docker tab and then your Plex Media Server container. There you'll find the relevant volume mappings.
  21. Is the computer connected to the same switch as the unRAID server? What does your network look like? Are you sure the Windows computer is connected with 1000Mbps? Have you tried restarting the switch? That has solved similar issues in the past, believe it or not.
  22. The Task Manager screenshot says 91.9 Mbps send, which would be roughly 11 MB/s. Seems consistent. Edit: Is the Task Manager showing the graph with 100 Mbps as max because it looks good, or because the Windows computer's network is only running at 100 Mbps? Not familiar with that particular version.
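The conversion is easy to sanity-check; a quick one in Python (the 91.9 Mbps figure is from the screenshot):

```python
# Convert the observed send rate from megabits to megabytes per second.
mbps = 91.9
mb_per_s = mbps / 8  # 8 bits per byte
print(round(mb_per_s, 1))  # → 11.5, i.e. roughly the ~11 MB/s observed
```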
  23. Post diagnostics, see the sticky "Need help? Read me first!". Have you verified that the unRAID server is getting 1000Mbit/s and not 100Mbit/s? In the web-UI, check Dashboard, System status (if I recall correctly). Edit: Interpreting "the same connection that the unraid machine is connected" as just having tested the same connection with some other device, not unRAID.
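One quick console check of the negotiated speed, assuming the server's NIC is eth0 (a sketch; the interface name may differ):

```shell
ethtool eth0 | grep -i speed
# Expect "Speed: 1000Mb/s"; "Speed: 100Mb/s" points at a cable or
# switch-port problem rather than unRAID itself.
```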
  24. If you copy (not move) the data from your current array to your new build, sure. Writing data to a server without parity protection is quite fast.
      • Set up the new server with data disks, but don't assign parity
      • Start array on new server
      • Copy data from old server
      • Stop array on new server
      • Assign parity
      • Start array on new server
      • Wait for the parity build to finish
      • Verify data with checksums if possible
      • Do whatever you want with the old server. Keep in mind that backups are nice, though
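The copy step above can be sketched with rsync over SSH; the hostname, share name and paths here are assumptions, adjust for your own shares:

```shell
# Pull one share from the old server onto the new, parity-less array.
# The trailing slash on the source copies the folder's contents.
rsync -avh --progress root@oldserver:/mnt/user/Movies/ /mnt/user/Movies/

# Later, to verify with checksums, build a sum list on each server:
find /mnt/user/Movies -type f -exec md5sum {} + | sort -k 2 > /tmp/sums.txt
# ...then compare the two files, e.g. with diff.
```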
  25. Zero replies sounds really strange, I've always received prompt replies from Tom. Sounds like something's wrong somewhere.