Xaero

Members
  • Posts: 413
  • Days Won: 3

Everything posted by Xaero

  1. Use Android-x86 (https://www.android-x86.org/releases.html) in a normal x86 VM; ARM is not required for this. @OP - are there any specific limitations that prevent you from running the native x86 versions of those packages or configurations in, say, an Arch Linux or Debian VM? I mean, testing RPi images is a fine use case for it, but cross-compiling for ARM is faster on native x86 if you are planning on using it for any sort of dev purposes.
  2. On the note of the bolded line - this is as easy as using /etc/modprobe.d/blacklist.conf (or in some situations /etc/modules-load.d/blacklist.conf) to blacklist the nouveau, nvidia, fglrx, amdgpu, etc. modules, and attaching that to a toggle in the WebUI. Have this toggle default to the "OFF" (blacklist) position so that no drivers are loaded by default. Allow granularity so users can choose which drivers should and should not load, but default to NONE loading out of the box. That makes it painless for VM users, and a single setting for Docker and bare-metal graphics acceleration users. And on the note of the "new driver" issue - DKMS would take the brunt of this. Have the officially packaged Unraid version bundled up neat and ready to go, with community-driven DKMS releases for the bleeding edge. There's literally zero reason to update your NVIDIA driver every single time a new one comes out. But people are going to argue over that, too.
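The toggle idea above boils down to writing one blacklist line per disabled driver. A minimal sketch of generating that file from a list of toggled-off drivers (on a real system the target would be /etc/modprobe.d/blacklist.conf; /tmp is used here purely for illustration):

```shell
#!/bin/bash
# Sketch: build a modprobe blacklist file from a list of disabled drivers.
# /tmp/blacklist.conf stands in for /etc/modprobe.d/blacklist.conf.
conf=/tmp/blacklist.conf
drivers=(nouveau nvidia fglrx amdgpu)
: > "$conf"                        # truncate/create the file
for d in "${drivers[@]}"; do
    echo "blacklist $d" >> "$conf"
done
cat "$conf"
```

A WebUI toggle would simply control which names end up in the `drivers` array before the file is regenerated.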
  3. How do you mean? You can use command substitution if you want the warning to display the output of a command:
     WarningText="$(somecommand -someoption somefile.someextension)"
     Then use "$WarningText" where you would like the output of that command.
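A runnable demonstration of the same mechanism (`uname -s` is just a stand-in for any command):

```shell
#!/bin/bash
# Command substitution: the command's stdout becomes the variable's value.
WarningText="$(uname -s)"
echo "Warning: $WarningText"
```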
  4. This is good advice. Also, I note that your speeds might indicate the parity disks negotiating a SATA II (300 MB/s) link, as that is roughly the ballpark of the speeds you are seeing. Currently my server is limited in this exact way due to using an HP port expander that supports SAS 6G but only SATA II.
  5. One thing that sticks out to me is consistent errors being logged for a bad mount on a docker container in your docker.log:
     time="2020-02-10T03:00:28.206543882-06:00" level=warning msg="8bef78bb005287a7ed6759abcaba9f88efd104e42233fcce69b723c3b9544115 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8bef78bb005287a7ed6759abcaba9f88efd104e42233fcce69b723c3b9544115/mounts/shm, flags: 0x2: no such file or directory"
     We can also see in the ps output (line 327) the alarming memory usage here:
     nobody 3913 21.7 92.8 64691356 61261236 ? Ssl Feb10 510:05 | \_ /usr/lib/plexmediaserver/Plex Media Server
     Could you post the docker run command for the Plex container? I thought I had asked for it before, but I was mistaken. I don't see anything suggesting "why" Plex is using so much RAM yet. It also looks like the Plex docker log is not included in this diagnostics zip, unfortunately.
  6. Nothing - I never got this fully working. I haven't had much time to look at it lately, and I was kind of stuck since I couldn't justify using fuser -kv as a solution, since it would kill other processes besides just Plex. Ultimately, the permanent solution needs to come from Plex. EDIT: This script should work to kill the processes that need to be killed. You can probably add a call to it in a wrapper for the Plex Transcoder, and it would kill any offending processes as long as no transcodes were already keeping the priority locked. You could also just run it on a schedule and it would probably curb the issue a little bit: https://forums.plex.tv/t/stuck-in-p-state-p0-after-transcode-finished-on-nvidia/387685/43
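A hedged sketch of the "kill only the offenders" idea (this is not the script from the linked thread): instead of fuser -kv on /dev/nvidia*, which kills every process holding the device, target only leftover transcoder processes by name.

```shell
#!/bin/bash
# Sketch: kill only leftover "Plex Transcoder" processes by name.
# The [P] bracket trick keeps pgrep -f from matching this script's own
# command line. The process name here is an assumption; check yours.
pids=$(pgrep -f "[P]lex Transcoder" || true)
if [ -n "$pids" ]; then
    echo "killing leftover transcoder pids: $pids"
    kill $pids
else
    echo "no leftover transcoders found"
fi
```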
  7. Sanity check here; I want to use this container to take advantage of DiskWarrior. I don't own any Apple devices, but I have family members that do, and they have issues with HFS+ volumes that are corrupt. The directory tree is broken and fsck can't fix it; DiskWarrior supposedly can. I also don't have terabytes of storage to do this sort of thing locally on my machine. Enter my Unraid server. Can I:
     • Access Unraid shares within the OSX guest?
     • Pass through the USB disks or the physical SATA volumes to this OSX guest?
     If I edit the VM, add the things I want, and save, it no longer boots. I get to the Clover screen, but am dead stopped there. I'm at work currently, so it's difficult to obtain the VM XML or logs, since we have a DPI firewall that prevents access over most protocols (SSH, VNC, RDP - even Chrome Remote Desktop is blocked by DPI, and ports for common VPNs are just completely disabled).
  8. FoldersToCreate=\{"Folder1/SubFolder","Folder2","Folder3","Folder4"\}
     eval mkdir -p /mnt/user/public/test/"$FoldersToCreate"
     This works. You need to store the braces in the variable, and then also evaluate those braces. Originally, you weren't storing the braces, so that whole long chunk was treated as a single string (note my strings are separated by the commas). In my case, I store the braces. mkdir -p takes its arguments literally, which means it includes special characters and doesn't do any bash handling. eval concatenates its arguments and evaluates the result as a shell command line, so it expands "$FoldersToCreate" to {Folder1/SubFolder,Folder2,Folder3,Folder4} and then performs the brace expansion. We use double quotes to support spaces in the folder names.
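An alternative sketch that avoids eval entirely: store the folder names in a bash array and loop, which handles spaces in names via ordinary quoting (/tmp/test stands in for /mnt/user/public/test here):

```shell
#!/bin/bash
# Array-based alternative to the eval + brace-expansion one-liner.
dest=/tmp/test
folders=("Folder1/SubFolder" "Folder2" "Folder 3 with spaces")
for f in "${folders[@]}"; do
    mkdir -p "$dest/$f"           # quoting preserves spaces in names
done
ls "$dest"
```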
  9. Post the docker log. Thanks for reminding me to read. I reinstalled the docker - which got rid of my version=latest setting, so it was falling back on the docker's bundled version. I set it to latest and it updated. It hung after updating (nothing in the log, other than complaining about not being able to delete the init directory - just the usual services startup followed by "Done." at the end, but no WebUI). Restarting the container again fixed that. A bit of strange behavior, but it seems to be up and working again.
  10. https://www.dell.com/community/PowerEdge-HDD-SCSI-RAID/c2100-backplane-speed-6gb-s-with-ssd-drives/td-p/4525346 https://www.dell.com/community/PowerEdge-HDD-SCSI-RAID/c2100-expander-question/td-p/4061496 Dell support also says they are SATA 3 Gb/s.
  11. The reported version is read from the drive's controller as the supported link speed; the current is the negotiated link speed. Cable length, quality, and controller support all matter for the negotiated link. According to the spec sheet for the C2100 (https://www.dell.com/downloads/global/products/pedge/en/PoweEdge_C2100_Spec_Sheet_122710.pdf) it's SATA II on board. It has support for SAS 6 Gb/s and SATA II (so 3 Gb/s), much like the HP port expander.
  12. Correct, /mnt/user is the array itself, including cache. It's all merged together into one big drive there. You could also just map /mnt/moves over and it will grab all movies from both the DVD and Blu-Ray folders. It goes as deep as the filesystem nests.
  13. Interestingly, the current update doesn't seem to be happening on my server as it has in the past. I'm currently running 1.18.4.2171, and Plex is reporting an update is available to 1.18.5.2309. I've restarted, force updated, uninstalled, and reinstalled. I imagine this issue will probably end up resolving itself eventually.
  14. His cables have a SATA power connector on the back of them. I was suggesting he do the mod there, rather than on the drive.
  15. "The PWDIS function responds to the presence of 2.1-3.6V on pin 3 of the plug and presence of this voltage shuts down a hard drive with this function." The White Label drives had this feature supported. That's why the Pin 3 mod applied to them. It would be worth testing this with one cable and seeing if the issue is resolved (since you can modify your cable, rather than your drive, in this case).
  16. The primary takeaway: you can connect SATA drives to a SAS port and it is expected to work. You cannot connect SAS drives to a SATA port. Quoted from: https://www.ixsystems.com/community/resources/dont-be-afraid-to-be-sas-sy.48/ EDIT: Disregard - you are using SAS-to-SAS cables, so this shouldn't be the problem. EDIT2: Those cables are reported as not working with drives that have the PWDIS feature. Looking at the data sheet for those hard drives, I see Power Disable is listed on the connector pinout, making me believe that PWDIS is supported by the drive. You may be able to do the Pin 3 mod to a cable to see if you can get a single drive to connect.
  17. I have a personal preference for Intel 10GbE NICs, as they usually support individual stubbing of each port. For example, on my Unraid rig I am planning to do an OPNsense VM as well. I have a dual-port 10GbE Intel X540 (if memory serves) and each port has its own IOMMU group. That means I can technically have one go to OPNsense and one go to Unraid if I wanted to.
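Because each port sits in its own IOMMU group, it can be stubbed individually by PCI address. A sketch of what that looks like on recent Unraid releases (the file path is Unraid's vfio-pci config; the address is a placeholder - take yours from lspci, and note the exact syntax has varied between Unraid versions):

```
# /boot/config/vfio-pci.cfg (sketch)
# 0000:03:00.0 is a placeholder for one port of the dual-port NIC;
# the sibling port (0000:03:00.1) is left unbound for Unraid itself.
BIND=0000:03:00.0
```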
  18. Indeed - it would require shipping a DKMS-capable environment, or possibly even offering a DKMS plugin could be an option.
  19. I think the better feature request would be DKMS support for out-of-tree (OOT) drivers. That would allow LS.IO (or anyone, really) to build DKMS packages for basically any hardware without requiring a full rebuild of the bzimage.
  20. There isn't enough information in this post to accomplish what you are trying to do. PYTHONIOENCODING=utf8 is an environment variable for the Python interpreter. Are you trying to set this environment variable locally on Unraid, or remotely on the server the code will end up running on? Here's an example using a `heredoc` for the script to be run on the SSH target:
      ssh -q user@host << ENDSSH
      PYTHONIOENCODING=utf8 /path/to/script.sh
      ...
      ...
      ENDSSH
      You can get fancier with stuff like capturing the return code, etc., but this basic example shows how you can use data in a heredoc to effectively execute an entire script. Everything between << ENDSSH and the closing ENDSSH is sent as commands over the established SSH connection.
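The heredoc mechanism can be demonstrated locally without an SSH target - here a child bash stands in for the remote shell, and receives everything between the markers on stdin exactly as ssh would deliver it:

```shell
#!/bin/bash
# Local demo of the heredoc mechanism: the child shell reads the lines
# between << 'ENDSSH' and ENDSSH from stdin, as a remote shell would.
bash << 'ENDSSH'
export PYTHONIOENCODING=utf8
echo "encoding set to $PYTHONIOENCODING"
ENDSSH
```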
  21. You can use:
      set sftp:connect-program 'ssh -o StrictHostKeyChecking=no'
      to disable the host key checking, but this isn't super secure - someone could (in theory) MITM your seedbox and you'd be none the wiser. It's probably not a huge deal, but it could be. The best way would be to instead use:
      set sftp:connect-program 'ssh -o UserKnownHostsFile=/boot/config/known_hosts'
      and copy your known_hosts file from /root/.ssh/known_hosts to /boot/config. As far as getting stuck on sync, I'm not sure, as I'm not super familiar with mirror and lftp.
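Putting the two steps together, a sketch of what this looks like as a persistent lftp setup (the rc file location is an assumption; the copy is a one-time step so the known_hosts file survives Unraid reboots):

```
# one-time: persist the known_hosts file on the flash drive
#   cp /root/.ssh/known_hosts /boot/config/known_hosts
#
# then in lftp's rc file (e.g. ~/.lftprc):
set sftp:connect-program 'ssh -o UserKnownHostsFile=/boot/config/known_hosts'
```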
  22. You'll need to post the log while memory is full or near full. Keep an eye on memory usage, and before it reaches the point of being unusable, try capturing the diagnostics. Sadly, no process seems to be using an abnormal amount of memory in these logs, and I don't see any kernel OOM entries logged in the syslog:
      MiB Mem : 64420.0 total, 60521.1 free, 1782.2 used, 2116.6 buff/cache
      MiB Swap: 0.0 total, 0.0 free, 0.0 used. 61152.4 avail Mem
      It may not be the Plex process itself eating all your RAM - it could be one of its child processes.
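The "capture before it becomes unusable" step can be put on a schedule. A minimal sketch, assuming Unraid's built-in `diagnostics` CLI command exists on the box (the `command -v` guard keeps the script harmless elsewhere, and the threshold is an arbitrary example):

```shell
#!/bin/bash
# One-shot low-memory check (run from cron or the User Scripts plugin):
# capture diagnostics once available memory falls below a threshold.
THRESHOLD_MB=2048
avail_mb=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
if [ "$avail_mb" -lt "$THRESHOLD_MB" ] && command -v diagnostics >/dev/null; then
    diagnostics        # Unraid's built-in diagnostics capture command
fi
echo "available: ${avail_mb} MiB"
```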
  23. Correct, and the data will show up in both locations because /mnt/user is a merged filesystem of User0 (the parity array) and Cache (the cache). This way you can (for example) have a complete set of data, with small files that are frequently read/written sitting on the cache SSD, next to files that are large sequential reads or writes, or hardly get used, on the platters. The OS is none the wiser to the fact that the files are physically located in two different places.