Community Reputation

4 Neutral

About jmbrnt

  • Rank
    Advanced Member

  • Location
    Cambridge, UK
  • Personal Text
    i7 6700k @ 4.6g | 32GB DDR4 | GTX 1070 | Mini-ITX


  1. I have only seen this DRM message on Linux (Budgie 20.20) using Firefox. It still gives me the heebie-jeebies.
  2. I admit I don't really know what 'DRM features' in Firefox actually means, but here is this:
  3. Today when I went to use the web interface for Plex, Firefox asked me to enable DRM features. Yuck. Not keen at all. Apparently it's something to do with Plex partnering with Warner Bros. I am increasingly aware that Plex isn't the media streaming solution for me any more - but where can I go? What do you use, if not Plex, and how does it compare? I use Plex mainly on iPad and PS4, and very rarely on the PC (web).
  4. Thanks for the tip; I am now running a scrub, and it looks OK.
  5. Hi. Woke up this morning to no DNS (I run unbound on my Unraid server). Checked, and most of my containers had died. Tried to stop and start them; no go. Followed a quick Google search and tried to remove and regenerate docker.img, which seems to have given me some grace (all containers were deleted from the GUI, but I can re-add them easily and the data persists)... However, it seems the problem is one of my cache drives. It's not showing errors in the GUI, but btrfs commands show errors galore. It's a Dell R720, so I don't think it's a SAS cable, as many of the other drives
  6. This might be unrelated, as I am running the official Plex docker container (rather than binhex or linuxserver) - but - without my doing a thing, the past couple of days/weeks my Plex has been wack. 1) The port that was set to be forwarded via my firewall spontaneously changed (it had been stable for 2 years). 2) Something seems to have changed with TLS - my PS4 lost the ability to talk to Plex, as did some external folks who share my library. I had to tweak the TLS settings under Settings > Network and change the secure connection setting to preferred; somehow it had defaulted to strict.
  7. Something seems really broken with this container, or at least the job F@H is sending me. It folds for a couple of seconds, then dies. Log below.
     21:21:13:WU00:FS00:0xa7:ERROR:-------------------------------------------------------
     21:21:13:WU00:FS00:0xa7:ERROR:Program GROMACS, VERSION 5.0.4-20191026-456f0d636-unknown
     21:21:13:WU00:FS00:0xa7:ERROR:Source code file: /host/debian-stable-64bit-core-a7-avx-release/gromacs-core/build/gromacs/src/gromacs/mdlib/domdec.c, line: 6902
     21:21:13:WU00:FS00:0xa7:ERROR:
     21:21:13:WU00:FS00:0xa7:ERROR:Fatal error:
     21:21:
  8. Thanks, Squid. I have 96 cores running this dang thing and didn't want to miss the boat.
  9. Anyone know how to force this to fold for covid-19? I have updated the container, but no go...
  10. Thanks for checking that. You can see which driver your NICs use under Settings > Network Settings > Interface Rules; the driver name is in parentheses. I am wondering if there are variants (or older/incompatible versions) of the driver in use on Unraid that don't play nice with a CentOS VM. Narrowing down the problem in the absence of any advice from the Unraid developers might take a few shots. For the record, I don't think it's a problem with Unraid 6.8.2 or CentOS, as it works flawlessly for me without needing to pass through the PCI device - simply setting the VM
  11. The driver I'm referring to is on the Unraid host system, nothing to do with the VM. My theory, based on comparing a working and a non-working system, is that the problem comes down to a difference in the Broadcom drivers in Unraid OS itself. I also suggested trying the XML edits you were asking about; did they have any effect?
  12. Hi @eds I have just had a look at this issue. The XML for my test CentOS 8 VM I just fired up (on Unraid 6.8.1):
     <interface type='bridge'>
       <mac address='52:54:00:e0:a3:3c'/>
       <source bridge='br0'/>
       <target dev='vnet1'/>
       <model type='virtio'/>
       <alias name='net0'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
     </interface>
     That is the default, generated by Unraid on my behalf (I selected br0 as my NIC to pass to the VM, as per usual).
  13. You beaut, that seems to have solved it. Thanks very much.
  14. I just changed it to point at /mnt/disk1, and Docker is all go again. Should I just wipe the cache and re-add it, then copy the stuff from /mnt/disk1 back to the cache? The logic here is that I did set it to /mnt/cache and it still didn't work.
  15. OK - I thought they were on the cache... I'm still not sure why anything changed, though; it was all working before I fixed the XFS. I'll give moving to /mnt/disk1 a shot and see.
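Post 12 shows the default interface stanza Unraid generates. For readers experimenting with the NIC-model edits discussed in post 11, the usual libvirt experiment is swapping the model line for an emulated NIC. A hypothetical variant (the e1000 choice is an assumption, not confirmed in the thread; `<target>` and `<alias>` are omitted because libvirt regenerates them):

```xml
<!-- Hypothetical edit: same bridge interface, with the paravirtual virtio
     model swapped for an emulated Intel e1000 NIC. MAC and PCI address are
     copied from the stanza in post 12. -->
<interface type='bridge'>
  <mac address='52:54:00:e0:a3:3c'/>
  <source bridge='br0'/>
  <model type='e1000'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
```

An emulated model avoids the guest's virtio driver entirely, so if the VM's networking starts working after this change, the problem sits in the virtio path rather than the bridge or host NIC.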
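For anyone landing on post 5 with a similar cache problem: the btrfs error counters that the GUI doesn't surface can be read from the shell. A minimal sketch, assuming /mnt/cache as the pool mount point (substitute your own path; the parsing helper is mine, not a command from the thread):

```shell
#!/bin/sh
# Sketch of checking btrfs device error counters from the CLI.
# /mnt/cache is an assumed mount point -- substitute your cache pool path.
POOL="${POOL:-/mnt/cache}"

# Reads `btrfs device stats` output on stdin; exits non-zero if any
# error counter (write_io_errs, read_io_errs, corruption_errs, ...) is > 0.
parse_btrfs_stats() {
    awk '/_errs/ { if ($2 > 0) bad = 1 } END { exit bad }'
}

if command -v btrfs >/dev/null 2>&1 && [ -d "$POOL" ]; then
    if btrfs device stats "$POOL" | parse_btrfs_stats; then
        echo "no device errors recorded on $POOL"
    else
        echo "device errors on $POOL; consider: btrfs scrub start -B $POOL"
    fi
else
    echo "skipping: btrfs-progs not installed or $POOL not mounted"
fi
```

Non-zero corruption_errs usually points at a failing device or cabling; a scrub (as in post 4) then verifies checksums across the pool.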
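Post 10 reads the driver name from the Unraid GUI; the same information is available from sysfs on the host, which is handy over SSH. A sketch (the loop and the virtual-NIC fallback are my assumptions, not from the thread):

```shell
#!/bin/sh
# Sketch: print each network interface and the kernel driver bound to it --
# the same driver name the Unraid GUI shows in parentheses under
# Settings > Network Settings > Interface Rules. Interface names vary by host.
list_nic_drivers() {
    for dev in /sys/class/net/*; do
        name=$(basename "$dev")
        [ "$name" = "lo" ] && continue
        if [ -e "$dev/device/driver" ]; then
            # Physical NIC: the driver symlink points at e.g. .../drivers/tg3
            drv=$(basename "$(readlink -f "$dev/device/driver")")
        else
            drv="(virtual)"   # bridges, bonds, veth pairs, etc.
        fi
        echo "$name: $drv"
    done
}

list_nic_drivers
```

On a Dell with Broadcom NICs you would typically see tg3 or bnx2 here, which is the driver difference post 11 is speculating about.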