GG73

Members
  • Posts: 40
  • Joined
  • Last visited

Everything posted by GG73

  1. Since upgrading to 6.12.6, Unraid randomly locks up, becomes unresponsive, and I am unable to communicate with or log in to the GUI. I had to roll back to 6.12.4. glens-server-diagnostics-20240112_2208.zip
  2. I've been running Unraid now for a few years. I started off with an old Dell PowerEdge T110 II with an Intel® Xeon® E3-1230 V2 @ 3.30GHz and 32 GB RAM. I haven't really been able to fault it for what it cost me (apart from the HDDs); I paid around £80. But I now have no space for any additional drives, so I thought it a good time to upgrade to something newer, faster and more power efficient, with space for a minimum of 10 drives. I currently have 5 HDDs + parity + SSD cache. I don't want to go overkill and spend too much, but on the other hand I don't want to go too cheap either; I want it to last a good few years. My use case: mainly running Plex, so I would like it to be able to transcode 4K streams without fuss (max 4-5 streams at any one time). I don't run VMs, and I'm currently running 13 dockers (all the 'arrs, one of them being Shinobi Pro, a CCTV docker). After spending quite a bit of time looking for something suitable, the more I look the more undecided I become. One thing I have decided on is the case: a Fractal Design Define 7 XL, as I have plenty of room for it. Things I can't decide on:
     • CPU - 13th Gen Intel (but which one? There are too many to choose from)
     • PSU - again, which one? Plus a cooler recommendation
     • RAM - DDR4 or DDR5? Is there much difference?
     • M.2 NVMe - any advice on these would be nice!
     • Motherboard - again, any recommendation would be good
     • NIC - I currently have a 1 Gb up/down service. Is 2.5 GbE enough, or would you recommend a 10 Gb card?
     • Other things: would a new-style motherboard run 10+ HDDs, or would I need something else added on?
     Any advice is good advice. Many thanks
  3. Changed Status to Open. Changed Priority to Minor.
  4. I think I have found the culprit: the Shinobi Pro docker! Even now that I have rolled back, I'm still having issues with it pinning my CPU at 100% and using up 100% of the remaining RAM. I've stopped it for now and the server is back ticking over at 6% CPU with all the rest of my dockers running (a quick way to confirm which container is responsible is sketched after this list).
  5. This is more than annoying. Now that I have rolled back to the earlier version, it's happening again! I knew I should have left it as it was and not upgraded.
  6. Anyway, I've given up for now and rolled back to the previous Unraid version. If anyone can give me a heads-up I will give it another try.
  7. How do you do that? When I download the diagnostics, if I drag and drop the folder, the above happens and all the individual files upload?
  8. Well, I stopped the Docker service, updated to 6.12.0 and rebooted. It locked up again with everything maxed out. I managed to stop the parity check after a while and disabled the Docker service again, then restarted it and turned on the dockers one by one... Still the same: OK for a few minutes, then it maxes out and becomes unresponsive. It takes about a minute or so just to load a GUI page??? HELP lol (the one-by-one restart can also be done from the terminal; see the sketch after this list). [Individual files from the diagnostics zip attached: share and disk .cfg files, per-drive SMART reports, syslog, and system info dumps such as top.txt, ps.txt and docker.txt from the unraid-6.12.0 diagnostics.]
  9. I will later on. I have rolled back at the moment and several people are watching Plex in the house, so I will try that later and report back. Thanks.
  10. Thanks for the reply. I turned all the dockers off and still see really high CPU usage compared to normal with all dockers on. Or is that not the same thing?
  11. Updated to the latest 6.12.0 stable version, logged on to my GUI and literally everything was maxed out, struggling even to load the GUI pages. It took me a good 5 minutes to be able to turn off my dockers one by one. Even with all dockers turned off, the server was "idling" at approx 30%, when it is usually about 5% with all dockers turned on. I typed top at the command line and shfs was the top process taking all the resources (the exact commands are sketched after this list). Unfortunately I had to roll back a version! Attached: dhcplog.txt docker.txt syslog.txt wg-quick.txt
  12. Is anyone else getting this error? I'm signed in, I've checked the network settings and confirmed the server can be reached remotely, and I've tried signing in/out (no difference). I used the unraid-api report -v command and it reports the following (see also the command sketch after this list):

        <-----UNRAID-API-REPORT----->
        SERVER_NAME: XXXX-SERVER
        ENVIRONMENT: production
        UNRAID_VERSION: 6.11.5
        UNRAID_API_VERSION: 2.54.0
        UNRAID_API_STATUS: running
        API_KEY: valid
        MY_SERVERS: authenticated
        MY_SERVERS_USERNAME: XXXX
        CLOUD: STATUS: [ok] IP: [52.40.54.163]
        RELAY: STATUS: [connected]
        MINI-GRAPH: STATUS: [disconnected]
        SERVERS: ONLINE: OFFLINE: GLENS-SERVER[owner="XXXX"]
        ALLOWED_ORIGINS: http://localhost, http://IPV4ADDRESS, https://IPV4ADDRESS:5443, http://XXXX-server, https://XXXX-server:5443, http://XXXX-server.local, https://XXXX-server.local:5443, https://IPV4ADDRESS.HASH.myunraid.net:5443,
        NCHAN_MODE: nchan
        </----UNRAID-API-REPORT----->
        root@XXXX-SERVER:~#

      If I go to the My Servers drop-down on the GUI page there is a warning triangle stating "not connected to mothership", and if I then go to the My Servers dashboard it briefly shows me as connected (green) for maybe a second or two before going red and saying offline. Thanks for any input. Cheers
  13. Hi all, apologies in advance for my noob status! I've been trying to get to the bottom of my issue: I have high CPU utilisation even when all dockers and my VM are turned off (16%), and a lot higher when they are turned on. I have tried using the 'top' command and shfs seems to be the culprit. I have looked on the forum but can't seem to find a definitive answer other than turning off a plugin (which I don't have installed anyway). It only recently started doing this. Any help much appreciated, many thanks. unraid-6.10.0-rc4.txt
  14. If you go back a few pages, I posted a similar thing. It's been covered several times. I have had no joy as yet getting it to work. I set my Windows VM up no problem, but the Mac one is not playing ball!
  15. Thanks for your time and effort. I will try again later.
  16. Much appreciated, it’s probably totally fine and it’s more than likely my error but if you could that would be great. Many thanks
  17. Once again, thanks, but I still can't get it to work. I don't see how you can use OpenCore Configurator if the VM won't boot, it being a macOS program. I've done the above like you said (which seemed to be working until the OpenCore part), but I can't use OpenCore Configurator because it's a macOS program. Surely Big Sur needs to be able to boot before I can open a browser and use OpenCore? Forgive my lack of knowledge and understanding.
  18. Hi guys, I've just installed Big Sur through Macinabox using SpaceInvaderOne's tutorial. Everything seemed fine until the very end, after running the Macinabox helper script. When I start the VM and run noVNC I get the following rather than it booting up: before I had a chance to select anything it then proceeded to the image below: and that's as far as it would go. Any help much appreciated. I have attached the docker log file and also the diagnostics file: Macinabox BigSur.xml
  19. Thank you. I managed to delete it using Krusader in the end.
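
Command sketch for post 4 (finding which container is pinning the CPU): this uses only the standard docker stats and docker stop commands from the Unraid terminal; the Shinobi Pro container name is an assumption and will depend on how the template was installed.

    # One-off snapshot of CPU and memory usage per running container
    docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"

    # If one container stands out, stop just that one (name is an example)
    docker stop ShinobiPro

Re-running the first command after stopping the suspect container should show overall load dropping back towards the ~6% idle figure mentioned in the post.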
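
Command sketch for post 8 (re-enabling containers one at a time when the GUI is too slow to respond): a minimal sketch using standard Docker CLI commands only; <container-name> is a placeholder.

    # List all containers, running or not
    docker ps -a --format "table {{.Names}}\t{{.Status}}"

    # Start one container, wait a few minutes, and watch the load before starting the next
    docker start <container-name>
    uptime    # 1/5/15-minute load averages should stay close to normal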
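
Command sketch for posts 11 and 13 (confirming that shfs is what is eating the CPU): shfs is the Unraid user-share filesystem process, so these are just the generic batch-mode top and ps invocations for seeing which processes are busiest; nothing here is specific to any one release.

    # One batch-mode snapshot of the busiest processes, sorted by CPU
    top -b -n 1 -o %CPU | head -20

    # The same view via ps
    ps aux --sort=-%cpu | head -15

If shfs stays at the top even with every container and VM stopped, that usually points at something outside Docker (or shfs itself) still working the user shares.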
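
Command sketch for post 12 (My Servers "not connected to mothership"): unraid-api report -v is taken straight from the post; the unraid-api restart step is an assumption about the plugin's CLI rather than something confirmed in the thread.

    # Re-check the My Servers / unraid-api status (as used in post 12)
    unraid-api report -v

    # Assumed follow-up: restart the API service, then check the report again
    unraid-api restart
    unraid-api report -v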