NeoDude

Members
  • Posts
    298
  • Joined
  • Last visited

Converted

  • Gender
    Male
  • URL
    http://spotify.dune-v.com
  • Location
    Scotland

NeoDude's Achievements

Contributor (5/14)

Reputation: 7

  1. No more errors with my new CPUs
  2. Ok, I think I'm missing something here. I have had this setup for a while, as per the Spaceinvaderone video, with all of my appdata folders as datasets, using the script to create snapshots and replicate them to a ZFS disk in the array. Today is the first time I've had to try a rollback/restore and I'm at a loss.

     My calibre-web install appears to have reverted to a fresh install, and I can't even log in. No problem, I thought, I'll roll back to a snapshot from last week, when I knew it was working. Doing this results in an empty appdata folder. Weird, I thought, but no problem, I have these replicated.

     And so my first issue: there is no documentation anywhere on how I get my replicated snapshots from Disk1 back to the Cache. Do I even need to do this? Can I not just restore the appdata folder from Disk1? And any ideas why rolling back the snapshots on Cache results in empty folders? So confused.
  3. What's the general consensus or best practice on whether to have recordings go via the SSD cache pool or straight to the array? Is SSD wear from constant writes still a consideration with modern drives?
  4. Thanks for the reply, nothing extra in the Enhanced Log. I'm going to upgrade to a pair of E5-2697s in the next month, so I'll see what happens then.
  5. Perhaps, but it's been happening consistently up until today. I'll keep an eye on it
  6. Weird. When I rebooted into safe mode, everything unmounted fine. Both rebooting into safe mode, and then rebooting normally. No issues.
  7. This seems to be an ongoing issue. Whenever I try to stop the array, I get this repeating in the log...

     Aug 7 12:49:52 Gandalf root: cannot unmount '/mnt/cache/system': pool or dataset is busy
     Aug 7 12:49:52 Gandalf emhttpd: shcmd (336): exit status: 1
     Aug 7 12:49:52 Gandalf emhttpd: Retry unmounting disk share(s)...

     This also results in a parity check after every reboot. I've seen others with the issue solve it by updating to the latest version, but I'm already on the latest version. Any ideas? Diags attached.

     gandalf-diagnostics-20230807-1248.zip
  8. I'm getting this error repeated in my syslog every minute or two; nothing else seems to be affected and the server is rock solid stable. I have also carried out an overnight memory test, without issue. Any ideas...

     Jul 15 09:11:51 Gandalf kernel: mce: [Hardware Error]: Machine check events logged
     Jul 15 09:11:51 Gandalf mcelog: Running trigger `bus-error-trigger' (reporter: bus)
     Jul 15 09:11:51 Gandalf mcelog: CPU 8 on socket 1 received Bus and Interconnect Errors in Other-transaction
     Jul 15 09:11:51 Gandalf mcelog: Location: CPU 8 on socket 1

     Diagnostics attached. Thanks in advance for any insights.

     gandalf-diagnostics-20230715-1026.zip
  9. Just a minor one. I have 2 GPUs in my system, a Quadro P600 and a GeForce GTX 1050. I have the 1050 successfully working with Frigate. 'nvidia-smi' shows that the 1050 is being used by ffmpeg, but the GUI in Frigate has the P600 listed... Any ideas?
  10. Think I found the issue. There was a missing underscore in the "NVIDIA_VISIBLE_DEVICES" key. Not sure if this is the default on the container or if it's something I've accidentally done, probably the latter.
  11. I've deleted the unrequired VFIO bindings. These weren't checked in the GUI, so I don't know why they were in there. I have also disabled privileged mode (this was a recent thing, to see if it made a difference). After a reboot, Plex is now using the correct GPU, but Tdarr is not. Here's the Docker run for Tdarr...

     docker run -d --name='tdarr' --net='br0.50' --ip='172.16.50.250' \
       --cpuset-cpus='2,3,4,5,18,19,20,21' \
       -e TZ="Europe/London" -e HOST_OS="Unraid" -e HOST_HOSTNAME="Gandalf" \
       -e HOST_CONTAINERNAME="tdarr" \
       -e 'serverIP'='172.16.50.250' -e 'TCP_PORT_8266'='8266' -e 'TCP_PORT_8265'='8265' \
       -e 'PUID'='99' -e 'PGID'='100' -e 'internalNode'='true' \
       -e 'NVIDIA_VISIBLE DEVICES'='GPU-04dd732e-60ad-a070-80b2-a0c4f284a9c1' \
       -e 'NVIDIA_DRIVER_CAPABILITIES'='all' \
       -e 'nodeIP'='0.0.0.0' -e 'nodeID'='Gandalf' -e 'TCP_PORT_8264'='8264' \
       -l net.unraid.docker.managed=dockerman \
       -l net.unraid.docker.webui='http://[IP]:[PORT:8265]' \
       -l net.unraid.docker.icon='https://raw.githubusercontent.com/selfhosters/unRAID-CA-templates/master/templates/img/tdarr.png' \
       -v '/mnt/user/appdata/tdarr/server':'/app/server':'rw' \
       -v '/mnt/user/appdata/tdarr/configs':'/app/configs':'rw' \
       -v '/mnt/user/appdata/tdarr/logs':'/app/logs':'rw' \
       -v '/mnt/user0/media/':'/media':'rw' \
       -v '/mnt/cache/appdata/tdarr/temp/':'/temp':'rw' \
       --runtime=nvidia 'haveagitgat/tdarr_acc:dev'
     2f5017a5896ff9f586419bf25d1a736256d750b6e2e8c97a2fb2f96b22597c2a
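For the snapshot rollback/restore question above (appdata datasets replicated from the Cache to Disk1), a command-line sketch of the usual ZFS workflow. The dataset names (`cache/appdata/calibre-web`, `disk1/appdata/calibre-web`) and the snapshot name are hypothetical placeholders; substitute the real ones from `zfs list`.

```shell
# Hypothetical dataset and snapshot names -- find yours with:
zfs list -t snapshot -o name,creation cache/appdata/calibre-web
zfs list -t snapshot -o name,creation disk1/appdata/calibre-web

# Before rolling back, peek inside the target snapshot to confirm it
# actually contains data (snapshots are browsable via the hidden .zfs dir):
ls /mnt/cache/appdata/calibre-web/.zfs/snapshot/autosnap_2023-07-30/

# If a rolled-back folder looks empty, check whether the dataset is even
# mounted -- an unmounted dataset just shows the bare mountpoint directory:
zfs get mounted cache/appdata/calibre-web
zfs mount cache/appdata/calibre-web

# Roll back in place (-r also discards any snapshots newer than the target):
zfs rollback -r cache/appdata/calibre-web@autosnap_2023-07-30

# Or restore from the Disk1 replica by sending the replicated snapshot back
# to the cache dataset (-F forces the target back to the received state):
zfs send disk1/appdata/calibre-web@autosnap_2023-07-30 | \
    zfs receive -F cache/appdata/calibre-web
```

Either way, stop the container first so nothing writes to the dataset while it is being restored.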
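For the "pool or dataset is busy" unmount failure when stopping the array, a sketch for identifying what still has files open under the mount; the dataset name `cache/system` is assumed from the `/mnt/cache/system` path in the log.

```shell
# List processes holding the filesystem open (run while the unmount loop is
# failing); -v shows command names, -m treats the path as a mountpoint:
fuser -vm /mnt/cache/system

# lsof gives more detail (PID, user, exact open path); +D recurses the tree
# and can be slow on large directories:
lsof +D /mnt/cache/system

# Once the offending processes have been stopped, the dataset should release:
zfs unmount cache/system
```

The system share typically holds the Docker image and libvirt files, so a container or VM service that has not fully stopped is a common culprit.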
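The machine-check lines above have a regular shape, so a quick tally shows whether the errors cluster on one CPU/socket pair, which is useful to compare before and after a CPU swap. The sample log below is a stand-in for the real syslog.

```shell
# Build a small sample in the same format as the mcelog syslog lines:
cat > /tmp/mce_sample.log <<'EOF'
Jul 15 09:11:51 Gandalf mcelog: CPU 8 on socket 1 received Bus and Interconnect Errors in Other-transaction
Jul 15 09:13:02 Gandalf mcelog: CPU 8 on socket 1 received Bus and Interconnect Errors in Other-transaction
Jul 15 09:14:40 Gandalf mcelog: CPU 3 on socket 0 received Bus and Interconnect Errors in Other-transaction
EOF

# Count events per CPU/socket pair:
awk '/mcelog: CPU [0-9]+ on socket/ {
         for (i = 1; i <= NF; i++)
             if ($i == "CPU") { key = "CPU " $(i+1) " socket " $(i+4); break }
         count[key]++
     }
     END { for (k in count) print count[k], k }' /tmp/mce_sample.log
# -> prints "2 CPU 8 socket 1" and "1 CPU 3 socket 0" (order may vary)
```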
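On the two-GPU posts: the docker run above passes a GPU UUID, and post 10 traces the problem to a space where `NVIDIA_VISIBLE_DEVICES` needed an underscore. A quick way to cross-check which UUID belongs to which card and what the running container actually received ('tdarr' is the container name from the run command):

```shell
# Map each GPU index and name to its UUID, so the right UUID goes into
# NVIDIA_VISIBLE_DEVICES:
nvidia-smi --query-gpu=index,name,uuid --format=csv

# Show the NVIDIA_* variables the container actually received -- a typo such
# as 'NVIDIA_VISIBLE DEVICES' (space instead of underscore) shows up here:
docker exec tdarr env | grep NVIDIA

# Inside the container, only the selected GPU should be listed:
docker exec tdarr nvidia-smi -L
```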