All Activity

  1. Past hour
  2. Turn off "Enable SMB Multi Channel". It is difficult to set up correctly and can cause issues. I'm still going through the diagnostics.
  3. This is from your Diagnostics File. (File path and name /home-diagnostics-20240428-1045.zip/home-diagnostics-20240428-1045/shares/shareDisks.txt) I have added the last two columns to this file, the ones with the red 'plus' symbol. What share are you trying to connect to? How are you connecting? I hope you are using Windows File Explorer, also called Windows File Manager. Do you have any Mapped Drives to this server? It is also important that you restart the Windows computer every time you make a change to either your server or the Windows computer. (Many times, once Windows SMB is set up, it refuses to make any changes to its configuration without a restart.) Troubleshooting SMB/SAMBA issues is very difficult. We are dealing with two totally different software packages: SMB on the Windows side, SAMBA on the Linux side. They are supposed to talk to each other. When things work right, everyone is happy. When they do not work right, the problem is often difficult to fix. SMB is the creation of Microsoft and the code base is proprietary. SAMBA was reverse engineered to work with Microsoft's SMB. (Microsoft did release a set of specifications a few years back that describes how things are supposed to work, but they also have total control over that specification and I suspect that they have provided only the bare essentials.) Fixing things is complex, with lots of landmines that have to be avoided. That is why @Batter Pudding and I put together that paper to assist folks in setting things up the way Microsoft expects SMB to be used. There are hacks that one can use to avoid doing things the way Microsoft wants, but those hacks can put one into that minefield I spoke of earlier!
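     If it helps with the "mapped drives" question: a minimal sketch (assuming Python is installed on the Windows client) that simply shells out to the built-in "net use" command, which lists current SMB connections and mapped drive letters.

     # Hypothetical helper: print the Windows client's current SMB connections
     # and mapped drive letters by running the built-in "net use" command.
     import subprocess

     def list_mapped_drives():
         # "net use" with no arguments lists existing network connections.
         result = subprocess.run(["net", "use"], capture_output=True, text=True, check=False)
         return result.stdout

     if __name__ == "__main__":
         print(list_mapped_drives())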
  4. It's 2024 and the problem is still there. I'm not able to do anything in the Fedora command line. Copy/paste is not working and the "|" character is impossible to type. Has really no one figured this out?
  5. Hello, I have a NAS with a Ryzen 9 5950X, 64 GB ECC memory, 5 fans, 12 disks plus 2 SATA SSDs, and a 1660 Super for GPU passthrough in a VM. Idle power usage is always above 170 W with no VM and most of the Docker containers turned off, and about 230 W with the VM and Docker up. Is there any better way to reduce power a bit, or did I make a wrong choice with the processor?
  6. After memtest was finished my helper booted the server up and it sat all night with the array stopped. When I went to start the array this morning, this is what happened. I wonder if the appdata backup plugin maybe ran while the array was stopped and created that folder? I don't know how else it would have been created before array start. If I can get the server back up, do I just need to delete that folder with the array stopped? Or how do I remove the "bad" version of the folder without affecting the actual share that contains my data? Will the folder persist in /mnt/user/ after the reboot if something isn't automatically recreating it at boot? I have no automated processes that touch /mnt/user/ (besides built-in Unraid stuff), so I would hope that a reboot would be all I need to resolve this.
  7. Mine got solved; at least I can start the VM again. I updated the Nvidia driver on my Unraid server and restarted it. I also changed the machine type to Q35, not sure if that last one helped. But after it came back up I could start my VM again.
  8. No, because it already exists before array start, so something else is creating that folder.
  9. I'm not seeing any OOM error, but IIRC this will trigger FCP: Apr 27 11:17:43 Hal-9000 emhttpd: cmd: /usr/local/emhttp/plugins/user.scripts/showLog.php Out Of Memory Check - Hrly. Change that script name.
  10. But that's just a share on my array? It's not just a random folder I created. It's a share on my other server as well and it isn't causing me any problems. It's from/for Squid's community app data backup plugin before it was taken over by the new dev.
  11. You have something creating this folder before the array starts: Apr 28 04:40:42 Node emhttpd: error: malloc_share_locations, 7199: Operation not supported (95): getxattr: /mnt/user/CommunityApplicationsAppdataBackup You need to correct that, or the shares won't work correctly.
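     A minimal sketch for checking where that folder actually exists (only at the user-share level, or also on individual disks/pools), assuming Python is available on the server and using the folder name from the log line above.

     # Hypothetical check: list every /mnt location that contains the stray folder.
     import glob
     import os

     SHARE = "CommunityApplicationsAppdataBackup"  # folder name from the log line above

     def locations(share):
         # Array disks (/mnt/disk1, ...), pools such as /mnt/cache*, and the user share view.
         candidates = glob.glob("/mnt/disk*") + glob.glob("/mnt/cache*") + ["/mnt/user"]
         return [os.path.join(d, share) for d in candidates if os.path.isdir(os.path.join(d, share))]

     if __name__ == "__main__":
         for path in locations(SHARE):
             print(path)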
  12. Hey guys, I woke up this morning to an OOM and I have no clue what's wrong. I've never had one as far as I can remember, and I wasn't doing anything extra on the NAS, so any help would be appreciated. The only other thing I have been doing is trying to find the cause of high CPU usage recently. I have been seeing high CPU usage from the 'lsof' process. I tracked down lots of old posts pointing to cache_dirs and file activity plugins, so I uninstalled a bunch of plugins that I don't really use: Active Streams, cache dirs, file activity, system stats, gui links, open files. Maybe removing all those without a reboot caused this OOM? Is there something known about high CPU usage? In my hunt for answers I found myself here. At first I was seeing 'SH' listed in htop, which is how I found out about cache_dirs, but after removing that plugin the problem persisted and changed to 'lsof'. I don't see high CPU from any of my dockers, although I have noticed that SABnzbd doesn't register CPU usage for all processes: when an nzb has been downloaded and is being unpacked, Docker reports 3-5% but my system is using 90-100%, then once the unpack has finished it's down to normal usage again. Unless I'm having the lsof issue as well. This was all before this morning's reboot though, so maybe it's gone now? I don't know yet. Thank you in advance. hal-9000-diagnostics-20240428-0841.zip
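     For narrowing down the high CPU usage described above, a minimal sketch that samples per-process CPU for one second and prints the top consumers; it assumes Python and the third-party psutil package are installed on the server.

     # Rough sketch: sample per-process CPU over one second and print the top users.
     import time
     import psutil

     def top_cpu(n=10, interval=1.0):
         procs = list(psutil.process_iter(["pid", "name"]))
         for p in procs:
             try:
                 p.cpu_percent(None)   # prime the counter; the first call returns 0.0
             except psutil.NoSuchProcess:
                 pass
         time.sleep(interval)          # measurement window
         usage = []
         for p in procs:
             try:
                 usage.append((p.cpu_percent(None), p.info["pid"], p.info["name"]))
             except psutil.NoSuchProcess:
                 pass
         return sorted(usage, reverse=True)[:n]

     if __name__ == "__main__":
         for cpu, pid, name in top_cpu():
             print(f"{cpu:6.1f}%  {pid:>7}  {name}")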
  13. The bad stick was located and the array booted back up. Things have taken a turn for the weird. Sonarr was throwing all kinds of "access denied to path" errors, I'm missing a bunch of shares, and there are strange errors in the log I've never seen before. It's throwing the below for most if not all of my shares. Node emhttpd: error: malloc_share_locations, 7199: Operation not supported (95): getxattr: /mnt/user/CommunityApplicationsAppdataBackup The data appears to still be there on the individual disks, and Unraid reported the array configuration was valid before I started it. I tried to reboot the server and am now stuck in an unresolvable retry-unmounting-shares loop. It claims /mnt/user is not empty, but I don't see any open files preventing unRAID from stopping the array. Help! EDIT: OK, I managed to force a reboot. Waiting for it to come back up now. EDIT2: F*CKKKKKK, it's not coming back online. I'm going to have to get someone to go over there and see what the hell is going on. In the meantime, if anyone can shed some light on wtf happened to my shares, that would be great. I've never seen this happen before and I don't think testing and removing a single RAM stick would cause all this nonsense. node-diagnostics-20240428-0615.zip
  14. So, just uninstall the original and install this one?
  15. Today
  16. To quote myself: "Again, as a preliminary note: I did NOT buy this mainboard for unraid..." For me this mainboard is not going into continuous operation under unraid, and so I have not run into such problems.
  17. Interesting, same thing with my build using the same board + CPU, also no deep C-states. (i5 12400, ASRock Z690 Pro RS, 2x 16 GB G.Skill RipJaws V 3200 MHz, be quiet! 550 W Pure Power 12 M) What also happened in my case was that it did not run stably: as soon as I enabled ASPM + C-states, every few days (at most 4 days) the server switched itself off and back on. The logs showed nothing, as if someone had simply pulled the plug and plugged it back in. Since I disabled ASPM + C-states in the BIOS again, it has been running without problems for weeks. Have you experienced a similar issue? Or was it something on my end?
  18. Due to issues with Docker and my cache drive (see here) I had to switch from using Docker with a directory to using it with the default BTRFS vDisk. I had originally used a directory due to Nextcloud AIO using volumes and storing its database and such there. Now, obviously, with a vDisk the location of volumes has changed. I've been trying to locate the new location for the past two days to copy my existing data to. Has anyone had to go through this before and knows more than I was able to find out? Thanks a lot!
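     One way to locate where a named volume's data now lives is to ask Docker for the volume's mountpoint. A minimal sketch, assuming the docker CLI is reachable and using a placeholder volume name (list the real names with "docker volume ls" first).

     # Hypothetical lookup: print the mountpoint of a named Docker volume. On Unraid
     # this path sits under /var/lib/docker, which is backed by the docker vDisk
     # while the Docker service is running.
     import subprocess

     def volume_mountpoint(volume):
         out = subprocess.run(
             ["docker", "volume", "inspect", "--format", "{{ .Mountpoint }}", volume],
             capture_output=True, text=True, check=True,
         )
         return out.stdout.strip()

     if __name__ == "__main__":
         print(volume_mountpoint("nextcloud_aio_database"))  # placeholder volume name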
  19. The parity check completed without errors. Now it is running a SMART scan as well; I am spared nothing. Let's see how much longer that holds me up today.
  20. Yes, once Tremendous added that information it was clear. Not me, Tremendous. After a "New Config" of the array you can be offered that when you add parity disks. So it can indeed be triggered by the "new config" I suspected. See pictures 1, 2 and 3.
  21. @Squid the 2nd instance works great and installs a 2nd Docker container just fine. I configured DIFFERENT ports for this 2nd container and I have proper web UI access on the manually configured port (8081), BUT I keep seeing the SAME PORTS as container 1 under the Docker allocations... Why is that?
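     To confirm which host ports the two containers are actually bound to (independent of what the allocations page displays), a minimal sketch assuming the docker CLI is available; it just prints each running container's name and published ports.

     # Quick check: print name and published ports for every running container.
     import subprocess

     def port_mappings():
         out = subprocess.run(
             ["docker", "ps", "--format", "{{.Names}}\t{{.Ports}}"],
             capture_output=True, text=True, check=True,
         )
         return out.stdout.strip().splitlines()

     if __name__ == "__main__":
         for line in port_mappings():
             print(line)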
  22. I'm currently looking into this.
  23. Using a trial does not prevent you from later using a paid license.
  24. The diags are from after rebooting, so we can't see what happened, but SMART looks fine. Check/replace/swap cables to rule that out and re-sync parity, and if it happens again, save the diags before rebooting. https://docs.unraid.net/unraid-os/manual/storage-management#rebuilding-a-drive-onto-itself
  25. Are you sure the SMART error indicates a disk problem? Some errors (e.g. CRC errors) have nothing to do with the drive itself but are due to external factors.
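     For the CRC case specifically, the counter is easy to read out. A minimal sketch, assuming smartctl is installed and with the device path left as a placeholder to replace with the real disk; a rising UDMA_CRC_Error_Count usually points at cabling rather than the drive itself.

     # Sketch: pull the raw UDMA_CRC_Error_Count value (SMART attribute 199) for a disk.
     import subprocess

     def crc_error_count(device="/dev/sdX"):  # replace /dev/sdX with the real device
         out = subprocess.run(["smartctl", "-A", device], capture_output=True, text=True, check=False)
         for line in out.stdout.splitlines():
             if "UDMA_CRC_Error_Count" in line:
                 return line.split()[-1]   # raw value is the last column of smartctl -A output
         return None

     if __name__ == "__main__":
         print(crc_error_count())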