Report Comments posted by John_M

  1. 5 hours ago, soulskill said:

    Which module should I enable for my AMD R9 Nano (Fiji Arch.)?

     

    I think the radeon module would be the one for this GPU.
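
    For what it's worth, you can test without rebooting by loading the module from a console and checking whether the /dev/dri devices appear. A rough sketch (the prompt and output are illustrative, and "radeon" is just my suggestion above):

    root@Tower:~# modprobe radeon
    root@Tower:~# ls /dev/dri
    by-path/  card0  renderD128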

     

    5 hours ago, soulskill said:

    How to see if AMD is working within dockers?

    How to add the AMD GPU to a docker?

     

    Both depend on what the docker container is trying to do and how it's built. Obviously, software support needs to be built into the container. In the case of a container that wants to leverage the compute capability of an AMD GPU (such as a FoldingAtHome or BOINC container), the OpenCL subsystem - essentially a compiler - would need to be included. That's included in the amdgpu-pro driver package. For a container aimed at transcoding video (such as Handbrake), the Video Acceleration API (VAAPI) needs to be included. From a software point of view that's libva and its associated libraries, and from a hardware point of view that's the Direct Rendering Infrastructure (DRI) devices in /dev/dri.
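
    On the hardware side, giving a container access to the GPU is normally just a matter of mapping the DRI devices into it. A minimal sketch (the image name is a placeholder, and the container still needs the matching userspace - VAAPI or OpenCL - baked in):

    docker run -d --name=transcoder --device=/dev/dri:/dev/dri some/transcoding-image

    In the Unraid GUI the usual equivalent is to add --device=/dev/dri to the container template's Extra Parameters field.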

     

    Until now I haven't been aware of any containers that are able to make use of AMD GPUs. Very recently, both a Plex container ( https://forums.plex.tv/t/got-hw-transcoding-to-work-with-libva-vaapi-on-raden-apu-ryzen-7-4700u/676546 ) and a Jellyfin container ( https://forums.unraid.net/topic/102787-support-ich777-jellyfin-amdintelnvidia/ ) have been released that can make use of some AMD GPUs. The Plex container's GUI indicates when hardware transcoding is in use. The Jellyfin one doesn't, but it's easy to see the difference in CPU usage between software and hardware transcoding.

     

     

  2. 1 hour ago, greyday said:

    I was able to do a diagnostics dump. Is there anything I should delete before posting the log?

     

    I suggest you start a new thread in the General Help section and include your diagnostics zip file exactly as you retrieved it. It is anonymised by default. You're likely to have problems specific to your setup and you can get help in a separate thread without the noise of all the other conversations going on here.
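
    For what it's worth, the diagnostics can be produced either from Tools -> Diagnostics in the GUI or from a console. A quick sketch (I believe the zip is written to the logs folder on the flash drive, but check where your version puts it):

    root@Tower:~# diagnostics
    root@Tower:~# ls /boot/logs/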

  3. For a long time there was only one pool and it was called "cache". The ability to have multiple pools was only added in the beta phase of 6.9. I've explained how it is, but I'm not going to argue about why. To understand it better you might find this article useful:

     

     

    It was written before multiple pools were possible. The concept of the cache pool was extended to allow for multiple pools. You can cache files being written to a given user share in any one of the pools you have available. Traditionally there was only one and it was called "cache"; now it can be called something else. Additionally, a pool doesn't have to be used exclusively, or even partially, for caching. In fact, the typical use for a pool (even the one called "cache") is to store application data for Docker containers and virtual disk images for VMs. The terminology currently in use could probably be improved.

     

    You can move files from a pool to the array and from the array to a pool (the same or different - you can choose) by careful use of the "Cache:" and "Pool:" options and running the Mover.
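
    For example, a rough sketch of the idea (assuming the 6.9 terminology; the share and pool names are placeholders and the exact GUI labels may differ slightly):

    • Array -> pool: set the share's "Use cache pool" setting to "Prefer", select the pool you want, then run the Mover.
    • Pool -> array: set the share's "Use cache pool" setting to "Yes", then run the Mover.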

     

     

  4. 11 minutes ago, adams95 said:

    I previously had my IOMMU group containing the APU bound to VFIO at boot, and had forgotten to disable this. 

     

    As a matter of great interest, you presumably did that in an attempt to pass the GPU through to a VM? Did you have any success? I haven't and I don't think it's possible but I'd be happy to be proved wrong!

     

    As another matter of great interest :) have you found a docker container that can actually make use of the Radeon Vega's video encode/decode or compute capability? I know that the Linux version of Plex isn't built to make use of it and therefore the dockerized version can't, either. It's recognised as a GPU by the FoldingAtHome container but the necessary OpenCL subsystem isn't included so it can't be used. Maybe a Handbrake container could make use of it?

     

  5. I don't think you're "boned" at all. Your files are all still where you put them, they are just lost in the confusion of all the renaming.

     

    It's obvious from what you wrote that you're not quite clear how user shares work and how multiple pools operate. So let's go through each step.

     

    Steps 1 to 3. Ok, everything is as expected.

     

    In Step 4, you create a new share called "data" with the settings cache:only and pool:docker. What that does is create a new, empty directory in the root of the "docker" pool. That's what a new user share is - an empty directory in the root of one of your data disks or pools. Your files are still there, in the directory called "appdata".

     

    Step 5, therefore, is as expected.

     

    Step 6 is as expected, and explained under Step 4, above.

     

    Step 7 is confusing because you mention renaming a share called "app" to "appdata". That's the first time you mentioned the "app" share, so I don't know what it contains. You say you need to share the "appdata" folder, but you're already doing that: in Step 3 you say that the new "docker" pool contains a directory called "appdata".

     

    Step 8 is not surprising, because you renamed your "appdata" share to "appcache" in Step 7 and renamed the "app" share to "appdata". So your containers are now expecting to find their files in the directory that was previously called "app", whose contents (to me, at least) are unknown.

     

    In Step 9 you discover that the "appdata" share is empty. That implies that the previously named "app" share was empty. At this point, the files you want are safe inside the "appcache" share.

     

    Step 10. As expected and explained above.

     

    Step 11. Changing the name of a share doesn't move data. It just renames the corresponding top-level folder on each disk and pool where it exists.

     

    Step 12. More renaming. The "appdata2" share now contains what the "app" share originally contained - nothing. The "appdata" share is back as it was.

     

    You can't unlink the "/mnt/docker/appdata" directory from the "appdata" share because the "appdata" share is a union of the following directories, some of which may not be present and some of which may be present but empty:

    • /mnt/cache/appdata
    • /mnt/docker/appdata
    • /mnt/any-other-pool/appdata
    • /mnt/disk1/appdata
    • /mnt/disk2/appdata
    • /mnt/diskN/appdata

    If you rename the "appdata" share, as you did, to "appcache", then all of these root-level directories that exist get renamed.
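
    You can see which of those directories actually exist from a console with something like this (a sketch - the output is illustrative and depends on which disks and pools hold a copy of the folder):

    root@Tower:~# ls -d /mnt/*/appdata
    /mnt/docker/appdata  /mnt/user/appdata

    (/mnt/user/appdata in that output is the user share itself, i.e. the union of all the others.)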

     

    So, after all the renaming your appdata files are still where you originally put them, in the /mnt/docker/appdata directory and, by definition of what a user share is, in the "appdata" user share. You just need to do a couple of things:

     

    1. Reconfigure each of your docker templates so that they map to /mnt/user/appdata (or /mnt/docker/appdata, if you wish) instead of to the location on your unassigned device that you previously used (which was probably something like /mnt/disks/unassigned-disk-name/appdata).
    2. Tidy up. On the Shares page of the GUI click the "Compute All" button and look at the row corresponding to the "appdata" share. It will show you which disks and pools are involved. Keep the "appdata" directory on the "docker" pool but delete the ones on other disks/pools - obviously check their contents first to make sure they are either empty or that any files are duplicates of the ones on the "docker" pool (a quick way to check from a console is sketched below). You'll want to remove the spurious empty "data" and "appdata2" shares that you created too, but again, make sure you didn't accidentally save anything you want to keep in them. It might be worth creating a temporary user share, moving anything you're not sure about into it, and deleting it later at your leisure.
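
    A quick way to check what each copy contains before deleting anything (a sketch - adjust the paths to match your own disks and pools):

    root@Tower:~# du -sh /mnt/disk*/appdata /mnt/docker/appdata
    root@Tower:~# ls -A /mnt/disk1/appdata    # no output means the directory is empty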

     

    So, not a bug at all.

     

    I have a server with a Ryzen APU. I'm not in a position to reboot it at the moment because it's busy, but if I load the amdgpu module manually I get a /dev/dri folder:

     

    root@Pusok:~# ls -lR /dev/dri
    /bin/ls: cannot access '/dev/dri': No such file or directory
    root@Pusok:~# modprobe amdgpu
    root@Pusok:~# ls -lR /dev/dri
    /dev/dri:
    total 0
    drwxr-xr-x 2 root root        80 Feb  9 20:38 by-path/
    crw-rw---- 1 root video 226,   0 Feb  9 20:38 card0
    crw-rw---- 1 root video 226, 128 Feb  9 20:38 renderD128
    
    /dev/dri/by-path:
    total 0
    lrwxrwxrwx 1 root root  8 Feb  9 20:38 pci-0000:09:00.0-card -> ../card0
    lrwxrwxrwx 1 root root 13 Feb  9 20:38 pci-0000:09:00.0-render -> ../renderD128
    root@Pusok:~# lspci | grep 09:00.0
    09:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Raven Ridge [Radeon Vega Series / Radeon Vega Mobile Series] (rev cb)
    root@Pusok:~# 

     

    It usually runs headless but I'll do some tests with GUI mode when I get an opportunity.

     

  7. 5 hours ago, nathan47 said:

    smbd keeps panicking while Time Machine from my Mac attempts to discover and connect to the share, just to enable Time Machine with the share.

     

    If it only happens when Time Machine initiates a connection, I wonder if it's "fruit" related (Samba's vfs_fruit module, which is what the "Enhanced macOS interoperability" option enables). I gave up using network shares as Time Machine backup destinations some time ago and have since disabled the "Enhanced macOS interoperability" option in SMB Settings.
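
    For reference, with that option enabled the generated Samba configuration includes something along these lines (a sketch - the exact parameters Unraid writes may differ):

    [share]
        vfs objects = catia fruit streams_xattr
        fruit:metadata = stream
        fruit:resource = file

    Turning the option off removes the fruit module from the share definitions, which is why it's a reasonable thing to rule out when smbd only crashes on connections from Macs.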

     

  8. 1 minute ago, theruck said:

    Sorry, but given the SMB issues, what would be the reason to remove AFP support in the new release? It will just piss off more Mac users.

     

    I regret the loss of AFP too, as I have a collection of older Macs that don't work so well with SMB, but I understand that Netatalk has always been a bit of a dog. Have you considered using NFS instead? macOS is Unix-like and connects to NFS shares if you start the NFS service on the server first.
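
    If you want to try it, mounting from the Mac's Terminal looks something like this (a sketch - the server name and share path are examples, and macOS generally needs the resvport option to talk to a Linux NFS server):

    mkdir ~/unraid-share
    sudo mount -t nfs -o resvport,rw tower.local:/mnt/user/share ~/unraid-share

    You can also connect from the Finder with Go -> Connect to Server and an address like nfs://tower.local/mnt/user/share.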

  9. 3 hours ago, mgutt said:

    I don't think the SHFS overhead will be solved by a newer kernel.

     

    I wasn't suggesting that it would. I was just saying that all the development effort was going into moving forward to 6.9, rather than fixing remaining issues with 6.8, but it is taking longer than anticipated because the move to the new kernel has been problematic.

     

    Aspects of SHFS have been improved though, such as the dereferencing of paths to VM vdisks.

     

  10. 25 minutes ago, nathan47 said:

    This was not an issue with 6.8.3 with the exact same configuration. Won't booting without my plugin disable my ZFS pools?

     

    Yes, it will. But that's the point. This area of the forum is for reporting bugs in pre-release versions of Unraid so it must be tested clean, without any plugins. Plugins need to work with Unraid, not the other way round. What version of the plugin are you using? It seems it was updated very recently. Have you tried the new version?

     

     

  11. 9 hours ago, SavellM said:

    Or is it automatically installed and enabled when it detects a NVIDIA GPU?

    The driver isn't automatically installed because some people want to pass the GPU through to VMs. To install the driver, use the Nvidia Driver plugin - just search for it in Community Applications.
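
    Once the plugin is installed you can confirm the driver is loaded from a console, and containers are then given access to the GPU via their templates. A rough sketch (the exact fields to use come from each container's own documentation):

    root@Tower:~# nvidia-smi       # shows the driver version and GPU utilisation
    root@Tower:~# nvidia-smi -L    # lists the GPU and its UUID

    # Typical container template additions:
    #   Extra Parameters:                    --runtime=nvidia
    #   Variable NVIDIA_VISIBLE_DEVICES:     <the GPU UUID reported by nvidia-smi -L>
    #   Variable NVIDIA_DRIVER_CAPABILITIES: all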

    I think if self-tests were run automatically at intervals I'd be more concerned about it but, in fact, they are entirely optional. There's nothing to be gained by banging a drum and declaring that "this" bug is more important than "that" bug. Looking at other reports, it's clear that the whole spin-up, spin-down and SMART temperature reading mechanism has been rewritten recently and there are a few issues with it. I'm sure they'll all be sorted out together in the next rc.

     

  13. 2 hours ago, Strayer said:

    I disabled the spindown just to be sure and will see what happens tomorrow.

     

    Disabling the spin-down allows the self-test to complete. If yours doesn't, then you have some other problem.
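
    If you want to check from a console, you can start and monitor a test with smartctl (a sketch - substitute the correct device for /dev/sdX):

    root@Tower:~# smartctl -t long /dev/sdX      # start an extended self-test
    root@Tower:~# smartctl -l selftest /dev/sdX  # check its progress and result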

     


     

    2 hours ago, Strayer said:

    I don't think this is an annoyance. While it is certainly not urgent, I think it is quite critical that the system isn't able to reliably run SMART tests on the drives.

     

    This bug doesn't cause data loss or crash the server, and it doesn't affect functionality either, because long self-tests are run infrequently and, on the occasions when one does need to be run, you can work around it by disabling the spin-down. By the Priority Definitions (on the right), it is therefore an Annoyance.

     

     
