sivart

Everything posted by sivart

  1. That is what I was looking for! For some reason, those settings did not show up right away. I reset the BMC and suddenly they appeared. Many thanks!
  2. How does one do this? I am new to IPMI on SuperMicro and am at a loss on how to configure the minimum fan speed.
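For reference, community guides for Supermicro boards describe fan control through raw IPMI commands rather than a named BMC setting. A sketch that only prints the commands instead of running them, so they can be reviewed first; the raw byte values, zone numbers, and mode codes are assumptions taken from community documentation and should be verified against the board manual before use:

```shell
#!/bin/sh
# Hedged sketch: print the raw ipmitool commands that community guides
# suggest for Supermicro X10/X11-era fan control, without executing them.
# All raw byte values are assumptions -- verify against your board's docs.

ZONE=0    # fan zone: 0 = CPU/system zone, 1 = peripheral zone (assumed)
DUTY=50   # target duty cycle in percent (0-100)

# 1) put the BMC into "Full" fan mode so a manual duty setting sticks
echo "ipmitool raw 0x30 0x45 0x01 0x01"

# 2) set the duty cycle for the chosen zone (zone and duty passed as hex)
printf 'ipmitool raw 0x30 0x70 0x66 0x01 0x%02x 0x%02x\n' "$ZONE" "$DUTY"
```

Dropping the `echo`/`printf` wrappers would issue the commands for real, either locally or via `ipmitool -H <bmc-address> -U <user> -P <pass>`.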
  3. I believe it is hung doing a soft reboot because it cannot unmount /mnt/user. Even trying to access /mnt/user from the terminal hangs.
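One way to probe a possibly-hung mount without wedging another terminal is to wrap the access in `timeout`. A minimal sketch, with the path as a parameter; note that a process stuck in uninterruptible sleep on a truly wedged kernel-side mount may still block, but slow or erroring mounts return promptly:

```shell
#!/bin/sh
# Hedged sketch: probe a mount point without tying up the terminal.
# A healthy mount answers quickly; an erroring or slow one fails within
# the timeout instead of leaving the shell stuck on an un-Ctrl-C-able ls.
MNT="${1:-/mnt/user}"   # path to probe; /mnt/user is the share root here

if timeout 5 ls "$MNT" >/dev/null 2>&1; then
    echo "$MNT responded"
else
    echo "$MNT did not respond within 5s (or does not exist)"
fi
```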
  4. belfast-diagnostics-20220418-1839.zip Diagnostic data available after a hard reset.
  5. My /mnt/user share basically stops responding about 18 hours after I boot. I am able to access each disk that makes up the /mnt/user shfs, but if I try to `ls /mnt/user`, it hangs to the point where I cannot even ctrl-c out of it.

     root@Belfast:/mnt# mount
     proc on /proc type proc (rw)
     sysfs on /sys type sysfs (rw)
     tmpfs on /dev/shm type tmpfs (rw)
     tmpfs on /var/log type tmpfs (rw,size=128m,mode=0755)
     /dev/sdk1 on /boot type vfat (rw,noatime,nodiratime,flush,dmask=77,fmask=177,shortname=mixed)
     /boot/bzmodules on /lib/modules type squashfs (ro)
     /boot/bzfirmware on /lib/firmware type squashfs (ro)
     hugetlbfs on /hugetlbfs type hugetlbfs (rw)
     overlay on /lib/modules type overlay (rw,lowerdir=/lib/modules,upperdir=/var/local/overlay/lib/modules,workdir=/var/local/overlay-work/lib/modules)
     overlay on /lib/firmware type overlay (rw,lowerdir=/lib/firmware,upperdir=/var/local/overlay/lib/firmware,workdir=/var/local/overlay-work/lib/firmware)
     /mnt on /mnt type none (rw,bind)
     tmpfs on /mnt/disks type tmpfs (rw,size=1M)
     tmpfs on /mnt/remotes type tmpfs (rw,size=1M)
     tmpfs on /mnt/rootshare type tmpfs (rw,size=1M)
     nfsd on /proc/fs/nfs type nfsd (rw)
     nfsd on /proc/fs/nfsd type nfsd (rw)
     /dev/md1 on /mnt/disk1 type xfs (rw,noatime)
     /dev/md2 on /mnt/disk2 type xfs (rw,noatime)
     /dev/md3 on /mnt/disk3 type xfs (rw,noatime)
     /dev/md4 on /mnt/disk4 type xfs (rw,noatime)
     /dev/md5 on /mnt/disk5 type xfs (rw,noatime)
     /dev/md6 on /mnt/disk6 type xfs (rw,noatime)
     /dev/md7 on /mnt/disk7 type xfs (rw,noatime)
     /dev/md8 on /mnt/disk8 type xfs (rw,noatime)
     /dev/md9 on /mnt/disk9 type xfs (rw,noatime)
     /dev/md10 on /mnt/disk10 type xfs (rw,noatime)
     /dev/sdl1 on /mnt/cache type btrfs (rw,noatime,space_cache=v2,discard=async)
     /dev/nvme0n1p1 on /mnt/nvme type xfs (rw,noatime)
     shfs on /mnt/user0 type fuse.shfs (rw,nosuid,nodev,noatime,allow_other)
     shfs on /mnt/user type fuse.shfs (rw,nosuid,nodev,noatime,allow_other)
     /mnt/cache/system/docker/docker.img on /var/lib/docker type btrfs (rw,noatime,space_cache=v2)

     I tried to run diagnostics, but it hung as well. All my drives are spun up too... This is my second UnRAID install and the first one went great. This one is not going well at all.
  6. @limetech I apologize. I think it was the telegraf docker image. Even though it was reported as not running, I think it was continuously trying to restart and issuing SMART commands. Disabling telegraf allowed the drives to spin down.
  7. Correct, but the logs I posted after that were in safe mode. When I get home, I will post the diags in safe mode.
  8. I don't think so. I booted into safe mode and had the same issue.
  9. No joy. I forced the drives to spin down and they came back up almost immediately.

     Dec 10 20:52:40 ion root: unRAID Safe Mode (unraidsafemode) has been set
     <snip>
     Dec 10 20:53:40 ion emhttpd: spinning down /dev/sdf
     Dec 10 20:53:41 ion emhttpd: spinning down /dev/sdj
     Dec 10 20:53:42 ion emhttpd: spinning down /dev/sdk
     Dec 10 20:53:42 ion emhttpd: spinning down /dev/sdl
     Dec 10 20:53:43 ion emhttpd: spinning down /dev/sdd
     Dec 10 20:53:43 ion emhttpd: spinning down /dev/sdm
     Dec 10 20:53:44 ion emhttpd: read SMART /dev/sdj
     Dec 10 20:53:50 ion emhttpd: read SMART /dev/sdm
     Dec 10 20:53:50 ion emhttpd: read SMART /dev/sdk
     Dec 10 20:53:50 ion emhttpd: read SMART /dev/sdd
     Dec 10 20:53:50 ion emhttpd: read SMART /dev/sdf
     Dec 10 20:53:50 ion emhttpd: read SMART /dev/sdl
     Dec 10 20:54:01 ion emhttpd: spinning down /dev/sdh
     Dec 10 20:54:02 ion emhttpd: read SMART /dev/sdh
     Dec 10 20:56:20 ion emhttpd: spinning down /dev/sdg
     Dec 10 20:56:20 ion emhttpd: read SMART /dev/sdg
     Dec 10 20:56:24 ion emhttpd: spinning down /dev/sdi
     Dec 10 20:56:24 ion emhttpd: spinning down /dev/sde
     Dec 10 20:56:25 ion emhttpd: read SMART /dev/sde
     Dec 10 20:56:25 ion emhttpd: read SMART /dev/sdi
     Dec 10 20:56:29 ion emhttpd: spinning down /dev/sdf
     Dec 10 20:56:29 ion emhttpd: spinning down /dev/sdj
     Dec 10 20:56:30 ion emhttpd: read SMART /dev/sdj
     Dec 10 20:56:30 ion emhttpd: read SMART /dev/sdf
     Dec 10 20:56:33 ion emhttpd: spinning down /dev/sdk
     Dec 10 20:56:33 ion emhttpd: spinning down /dev/sdl
     Dec 10 20:56:34 ion emhttpd: spinning down /dev/sdd
     Dec 10 20:56:34 ion emhttpd: spinning down /dev/sdm
     Dec 10 20:56:35 ion emhttpd: read SMART /dev/sdm
     Dec 10 20:56:35 ion emhttpd: read SMART /dev/sdk
     Dec 10 20:56:35 ion emhttpd: read SMART /dev/sdd
     Dec 10 20:56:35 ion emhttpd: read SMART /dev/sdl
     Dec 10 20:56:40 ion emhttpd: spinning down /dev/sdh
     Dec 10 20:56:45 ion emhttpd: read SMART /dev/sdh
  10. I uninstalled disklocation and tried to put all my disks to sleep, and it immediately read SMART status...

      Dec 10 20:41:45 ion emhttpd: spinning down /dev/sdg
      Dec 10 20:41:46 ion emhttpd: spinning down /dev/sdi
      Dec 10 20:41:46 ion emhttpd: spinning down /dev/sde
      Dec 10 20:41:47 ion emhttpd: spinning down /dev/sdf
      Dec 10 20:41:47 ion emhttpd: spinning down /dev/sdj
      Dec 10 20:41:48 ion emhttpd: spinning down /dev/sdk
      Dec 10 20:41:48 ion emhttpd: spinning down /dev/sdl
      Dec 10 20:41:49 ion emhttpd: spinning down /dev/sdd
      Dec 10 20:41:49 ion emhttpd: spinning down /dev/sdm
      Dec 10 20:41:50 ion emhttpd: read SMART /dev/sdm
      Dec 10 20:41:50 ion emhttpd: read SMART /dev/sdj
      Dec 10 20:41:50 ion emhttpd: read SMART /dev/sdk
      Dec 10 20:41:50 ion emhttpd: read SMART /dev/sdg
      Dec 10 20:41:50 ion emhttpd: read SMART /dev/sdd
      Dec 10 20:41:50 ion emhttpd: read SMART /dev/sde
      Dec 10 20:41:50 ion emhttpd: read SMART /dev/sdf
      Dec 10 20:41:50 ion emhttpd: read SMART /dev/sdl
      Dec 10 20:41:50 ion emhttpd: read SMART /dev/sdi
      Dec 10 20:41:56 ion emhttpd: spinning down /dev/sdh
      Dec 10 20:42:00 ion emhttpd: read SMART /dev/sdh
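The pattern in these logs, a `read SMART` arriving seconds after each `spinning down`, is easier to see when the lines are paired per device. A small awk sketch over syslog lines in the format shown above (field layout assumed from the excerpts):

```shell
#!/bin/sh
# Hedged sketch: pair each "spinning down" syslog line with the next
# "read SMART" line for the same device and print the gap in seconds.
# Assumed field layout: "Dec 10 20:41:49 ion emhttpd: spinning down /dev/sdm"
awk '
/spinning down/ {
    split($3, t, ":")                         # $3 is HH:MM:SS
    down[$NF] = t[1]*3600 + t[2]*60 + t[3]    # $NF is the device path
}
/read SMART/ {
    if ($NF in down) {
        split($3, t, ":")
        printf "%s woke %ds after spindown\n", $NF, t[1]*3600 + t[2]*60 + t[3] - down[$NF]
        delete down[$NF]
    }
}
' <<'EOF'
Dec 10 20:41:49 ion emhttpd: spinning down /dev/sdm
Dec 10 20:41:50 ion emhttpd: read SMART /dev/sdm
Dec 10 20:41:56 ion emhttpd: spinning down /dev/sdh
Dec 10 20:42:00 ion emhttpd: read SMART /dev/sdh
EOF
```

Feeding it the full excerpt (e.g. `grep emhttpd /var/log/syslog | awk ...`) shows every drive being re-polled for SMART within seconds of spinning down.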
  11. See attached. Thanks ion-diagnostics-20201210-2015.zip
  12. Interesting, I did not see this with beta35. My drives don't stay asleep with RC1. SMART queries shouldn't wake up the device.
  13. My drives (SATA) are not spinning down. When I manually spin down a drive, it spins up right away. I can see in the log that it does in fact try to spin down the drive. This was working on beta35.
  14. Thanks. I think the real issue is that the version of Ubuntu this Docker image is built on (bionic) does not have ffmpeg and libav libraries with CUDA support. If and when this image gets upgraded to 20.04, it should 'just work', but I realize that is probably very low priority and I am OK with that.
  15. I am new to all this and keep seeing ES referenced. What is that?
  16. Yeah, it looks like that PPA doesn't include the libraries that ZoneMinder uses. When I tried to force CUDA mode in ZM using the ffmpeg option, it just reports this...
  17. So this repo has an ffmpeg build that should update the libraries too, but zmc still isn't using the video card: https://launchpad.net/~savoury1/+archive/ubuntu/ffmpeg4?field.series_filter=bionic
  18. That is what I am looking at. It appears the ffmpeg-4 PPA there doesn't have NVIDIA support... but someone has to have one!
  19. I am thinking there must be a Debian PPA that has an ffmpeg with CUDA support. It looks like the one in the base image for ZoneMinder doesn't have it, even though it is 4.3.1. People are reporting that Ubuntu 20.04 works out of the box, probably because ffmpeg is built with CUDA support in that release... so I am gonna see if I can install that.
  20. I followed this post and now I see CUDA listed in my nvidia-smi output. For those wondering, I had to add three options to the Zoneminder config:
      • extra parameters need to have "--runtime=nvidia"
      • the NVIDIA_VISIBLE_DEVICES Docker variable needs to be set to your video card's GUID
      • the NVIDIA_DRIVER_CAPABILITIES Docker variable needs to be set to 'all'
      I still don't see 'zmc' using the video card though, and ffmpeg doesn't list CUDA as an accelerator... so now I think I am having the same problem...
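Expressed as plain `docker run` flags, those three settings would look something like the sketch below. The GPU GUID is a placeholder; the real one comes from `nvidia-smi -L` on the host.

```shell
# Sketch of the three settings above as docker run flags.
# GPU-xxxxxxxx is a placeholder GUID -- substitute the output of `nvidia-smi -L`.
docker run -d --name='Zoneminder' \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES='GPU-xxxxxxxx' \
  -e NVIDIA_DRIVER_CAPABILITIES='all' \
  'dlandon/zoneminder'
```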
  21. Yup, they line up. I did indeed upgrade to the 6.9 beta because the nvidia package was pulled from the CE repo, and I had just installed Unraid like a week ago, so this was the 'recommended' solution.

      CUDNN_RUN=libcudnn8_8.0.5.39-1+cuda11.1_amd64.deb
      CUDNN_DEV=libcudnn8-dev_8.0.5.39-1+cuda11.1_amd64.deb
      CUDA_TOOL=cuda-repo-ubuntu1804-11-1-local_11.1.1-455.32.00-1_amd64.deb
      CUDA_PIN=cuda-ubuntu1804.pin
      CUDA_KEY=/var/cuda-repo-ubuntu1804-11-1-local/7fa2af80.pub
      CUDA_VER=11.1

      The versions look right... except 455.32 vs 455.38. Is that the issue? Plex must get the right version from someplace. I'll take a look at the Plex docker image; that might give me some insight. Thanks!
  22. I am having the same issue. I am running UnRAID 6.9b35. I started my docker with almost all the same parameters I use to start my Plex container (where I do have CUDA support).

      docker run -d --name='Zoneminder' --net='bridge' --privileged=true \
        -e TZ="America/New_York" -e HOST_OS="Unraid" \
        -e 'PUID'='99' -e 'PGID'='100' \
        -e 'INSTALL_HOOK'='1' \
        -e 'INSTALL_TINY_YOLOV3'='0' -e 'INSTALL_YOLOV3'='0' \
        -e 'INSTALL_TINY_YOLOV4'='0' -e 'INSTALL_YOLOV4'='1' \
        -e 'INSTALL_FACE'='0' \
        -e 'NVIDIA_VISIBLE_DEVICES'='GPU-b94fe274-0c08-930f-c3c3-7acbb123d8f3' \
        -e 'SHMEM'='50%' \
        -p '18443:443/tcp' -p '19000:9000/tcp' \
        -v '/mnt/user/appdata/Zoneminder':'/config':'rw' \
        -v '/mnt/user/appdata/Zoneminder/data':'/var/cache/zoneminder':'rw' \
        --log-opt max-size=50m --log-opt max-file=1 \
        --runtime=nvidia --gpus=1 'dlandon/zoneminder'

      Inside my Zoneminder container:

      +-----------------------------------------------------------------------------+
      | NVIDIA-SMI 455.38       Driver Version: 455.38       CUDA Version: N/A      |
      |-------------------------------+----------------------+----------------------+
      | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
      | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
      |                               |                      |               MIG M. |
      |===============================+======================+======================|
      |   0  GeForce GTX 105...  Off  | 00000000:2B:00.0 Off |                  N/A |
      |  0%   40C    P0    N/A /  72W |      0MiB /  4037MiB |      0%      Default |
      |                               |                      |                  N/A |
      +-------------------------------+----------------------+----------------------+

      +-----------------------------------------------------------------------------+
      | Processes:                                                                  |
      |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
      |        ID   ID                                                   Usage      |
      |=============================================================================|
      |  No running processes found                                                 |
      +-----------------------------------------------------------------------------+

      And inside my Plex container:

      +-----------------------------------------------------------------------------+
      | NVIDIA-SMI 455.38       Driver Version: 455.38       CUDA Version: 11.1     |
      |-------------------------------+----------------------+----------------------+
      | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
      | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
      |                               |                      |               MIG M. |
      |===============================+======================+======================|
      |   0  GeForce GTX 105...  Off  | 00000000:2B:00.0 Off |                  N/A |
      |  0%   40C    P0    N/A /  72W |      0MiB /  4037MiB |      0%      Default |
      |                               |                      |                  N/A |
      +-------------------------------+----------------------+----------------------+

      +-----------------------------------------------------------------------------+
      | Processes:                                                                  |
      |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
      |        ID   ID                                                   Usage      |
      |=============================================================================|
      |  No running processes found                                                 |
      +-----------------------------------------------------------------------------+

      I can't quite figure out what is different between the Plex container and the Zoneminder container, or why the Zoneminder one does not have CUDA.