Everything posted by theunraidhomeuser

  1. Hi there, I thought the issue had solved itself, but with the recent Unraid update it's unfortunately back:

     Dec 22 09:00:04 SUPER-NAS root: /etc/libvirt: 24 MiB (25128960 bytes) trimmed on /dev/loop3
     Dec 22 09:00:04 SUPER-NAS root: /var/lib/docker: 59.9 GiB (64322048000 bytes) trimmed on /dev/loop2
     Dec 22 09:00:04 SUPER-NAS root: /mnt/vm: 17.7 GiB (19046002688 bytes) trimmed on /dev/sdi1
     Dec 22 09:00:04 SUPER-NAS root: /mnt/cache: 1.5 GiB (1661894656 bytes) trimmed on /dev/sdj1
     Dec 22 09:00:07 SUPER-NAS crond[1513]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
     Dec 22 10:00:02 SUPER-NAS crond[1513]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
     Dec 22 10:00:04 SUPER-NAS root: /etc/libvirt: 24 MiB (25128960 bytes) trimmed on /dev/loop3
     Dec 22 10:00:04 SUPER-NAS root: /var/lib/docker: 59.9 GiB (64321851392 bytes) trimmed on /dev/loop2
     Dec 22 10:00:04 SUPER-NAS root: /mnt/vm: 17.7 GiB (18955059200 bytes) trimmed on /dev/sdi1
     Dec 22 10:00:04 SUPER-NAS root: /mnt/cache: 1.5 GiB (1661894656 bytes) trimmed on /dev/sdj1

     The syslog entries above are the very last ones logged before the machine went down. I can't see anything that would explain what happened; it all looks pretty normal to me except for the error the mover threw (although it throws that every hour, even when the machine doesn't crash). I'm attaching the diagnostics once more in case anyone has an idea. There was that issue with AMD CPUs and Unraid, and I think this could be linked to it. Any ideas? Thanks all! super-nas-diagnostics-20221223-0857.zip
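     Since the hourly cron entry discards the mover's output (&> /dev/null), a quick way to see why it reports exit status 1 is to run it once in a shell, the same way cron invokes it. A minimal sketch; depending on the Unraid version the mover may log via syslog rather than stdout, but the exit status is still visible:

     # run the mover in the foreground instead of via cron
     /usr/local/sbin/mover
     echo "mover exit status: $?"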
  2. Hi there, I have a problem I could use some help with, please. My NAS, a self-built server with quite decent specs, keeps crashing after a few days, and I'm not sure why or where to start troubleshooting. A few months ago somebody suggested removing the RAM overclock, even though the modules and system should be able to run at 3200 MHz. I did that, and things improved: the server used to crash after around 24 hours, and now it stays up for 4-5 days before it crashes. What do I mean by crashed? The GUI is no longer accessible and services like VMs don't respond anymore. I appreciate your thoughts and suggestions. I'm currently not at the server location but can troubleshoot remotely. Thanks! super-nas-diagnostics-20220724-1407.zip
  3. I can't find anything on power supply idle control; other forums mention this applies to older motherboards only, and this board is only around 18 months old. No mention of C-states either; the manual is attached. Is MSI maybe calling this something different? I appreciate your time and effort in supporting me with this! Maybe worth adding that I had this same server running (and plotting Chia coin) for nearly 10 months, 24 hours per day and with XMP settings enabled. It's also running a VM with constant data traffic; these issues only started in the last 3-4 months or so. MSI MAG B550M MORTAR (01).pdf
  4. I thought the above tips for AMD machines were the solution, but just a few minutes ago my Unraid machine crashed again. I had gone into the BIOS and disabled the XMP settings, with the consequence that my RAM was running at 2400 MHz despite being rated for 3600 MHz. CPU overclocking wasn't enabled anyway, so that wasn't it. I couldn't find anything on C-states in the BIOS, though. Any other suggestions, maybe also on how to troubleshoot this in the first place? Thanks!
  5. Hi there, I need some help please. I have three Unraid servers, and one of them keeps crashing after a few days. I've not been able to get to the bottom of it, partly because I don't know where to look in the logs. Could somebody please point me in the right direction? When Unraid crashes, the VMs stop working and the GUI doesn't open anymore either. I then have to do a forceful power-off to get the system to shut down. Thanks! super-nas-diagnostics-20220505-1938.zip
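     One way to catch what happens right before a hard crash is to keep a copy of the syslog somewhere that survives the reboot. A minimal sketch, assuming "Mirror syslog to flash" is enabled under Settings > Syslog Server and the mirrored file ends up under /boot/logs (the exact filename may differ by Unraid version):

     # after the forced power-off and reboot, read the last lines written before the crash
     tail -n 200 /boot/logs/syslog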
  6. Can you share the details of the part you bought? Sent from my iPhone using Tapatalk
  7. Hi there, I got the above message after my Chia plotter probably used too much RAM on my 128 GB machine and the system started killing processes. My other issue is that I can NOT reboot the system right now, as the motherboard requires a screen to be attached and I am currently traveling (there is no screen attached to the NAS at the moment). First: is there a way to run the startup sequence that normally executes from the USB stick on reboot, but WITHOUT actually rebooting? I imagine that would fix things. Second: I know most of the mount points live in RAM, but is there a way to limit the RAM used by a temp drive to 110 GB by mounting it somewhere else? Thanks everyone!
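     For the second question, a size-capped tmpfs might do the trick. A minimal sketch; the mount point name is just an example, and tmpfs only consumes RAM for the data actually stored in it:

     # RAM-backed scratch space limited to 110 GB
     mkdir -p /mnt/plottemp
     mount -t tmpfs -o size=110G tmpfs /mnt/plottemp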
  8. Hi, I have a few external HDD enclosures, Terramaster and Orico USB 3. However, these rather low-cost (and low-quality) devices don't pass the UUIDs through for the UD plugin to identify the drives. As a result, UD gets utterly confused when I attach two cases with 5 disks each: it can't distinguish them and goes into an endless loop trying to mount the devices. Is there a way I can edit /etc/fstab and mount these drives (I have the UUIDs) into other folders, e.g. /media/drive1 and so on? I saw somebody mention using the User Scripts plugin, but I'd feel better if I could just edit /etc/fstab... I just can't find it in the /boot folder. Thanks everyone!
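     For reference, this is the kind of thing I have in mind; a rough sketch with placeholder UUIDs and paths. Note that on Unraid the root filesystem lives in RAM, so an edited /etc/fstab does not survive a reboot, and the same commands would have to be re-applied at boot (e.g. from the go file or a user script):

     # placeholders: substitute the real UUIDs reported by blkid
     mkdir -p /media/drive1 /media/drive2
     mount UUID=XXXXXXXXXXXXXXXX /media/drive1
     mount UUID=YYYYYYYYYYYYYYYY /media/drive2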
  9. Hi there, I'd had my Machinaris running without issues for 3 weeks. This morning I got: "Internal Server Error - The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application." I followed everything here, unfortunately with no luck: https://github.com/guydavis/machinaris/issues/75 Does anybody have an idea? I understand this is an NGINX error, but I can't seem to see what's causing it. Diagnostics attached (super-nas-diagnostics-20210828-1448.zip), thanks for the help in advance!
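     In case it helps anyone landing on the same error page: the container log usually shows the actual exception behind the generic 500 response. A small sketch; "machinaris" is whatever the container happens to be named on your system:

     # show the most recent log output from the Machinaris container
     docker logs --tail 200 machinaris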
  10. Yes, but I am mounting with the UUIDs of the disks, so it's always consistent. You couldn't see the edit to my question (the one you responded to); I added the "PS" section 🙂 but you were faster.
  11. Hi everyone, can anybody please help me make the above changes permanent? I need to make permanent changes to /etc/exports and /etc/fstab. Thanks! PS: To those of you suggesting the UD plugin: that won't work (see above). To those of you suggesting a user script: please explain how you would do that, as I'm not great with bash scripting. To those of you suggesting the go file: kindly advise how I can add the required information for both files above!
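     For anyone following along, this is roughly what I understand "add it to the go file" to mean; a sketch only, untested, with the UUID, path and export options taken from my earlier post as placeholders. /boot/config/go runs once at every boot, but since it runs early, a User Scripts job set to run at array start may be the more reliable place for the exportfs call:

     # appended to /boot/config/go (placeholders; adjust to the real devices)
     mkdir -p /mnt/sdt
     mount UUID=0B340F6C0B340F6C /mnt/sdt
     echo '"/mnt/sdt" -async,no_subtree_check,fsid=301 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)' >> /etc/exports
     exportfs -ra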
  12. Yes, I am testing now and will read / write to these mounts with non-critical data for a few days to see how robust this solution is. Key is that these IDs do not change.. why should they...
  13. Hi everyone, I have a Terramaster D5-300 USB enclosure with 5 bays, connected to my Unraid server through USB 3 Type-C. In a previous post (linked below), I concluded that the Terramaster does not pass the UUIDs of the devices on to Unraid, which makes it virtually impossible to mount the drives as "single drives", which is what I am trying to do. Looking into it a bit more (and because it's really hard to find other enclosures right now where I live), I realized that each drive's serial number IS actually passed through, but the UUIDs look like this:

     sudo mount UUID=0B340F6C0B340F6C /mnt/sdt
     sudo mount UUID=0B3507380B350738 /mnt/sdu
     sudo mount UUID=0B35150F0B35150F /mnt/sdv
     sudo mount UUID=0B360EF20B360EF2 /mnt/sdw
     sudo mount UUID=0B3705760B370576 /mnt/sdx

     So without using the Unassigned Devices plugin, which can't handle this for some reason, I am able to mount all my drives "manually". The next step is that I want to make them available as public NFS shares (no security needed). I have therefore edited /etc/exports as follows:

     "/mnt/sdt" -async,no_subtree_check,fsid=301 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
     "/mnt/sdu" -async,no_subtree_check,fsid=302 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
     "/mnt/sdv" -async,no_subtree_check,fsid=303 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
     "/mnt/sdw" -async,no_subtree_check,fsid=304 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
     "/mnt/sdx" -async,no_subtree_check,fsid=305 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)

     Note: I assigned the fsid values manually and these IDs are not in use by other mounts. Then I export the shares with exportfs -a, which doesn't throw any errors, so I am assuming it works. On the client, I mount the NFS shares in /etc/fstab as follows:

     192.168.10.8:/mnt/sdt /mnt/sdt nfs defaults 0 0
     192.168.10.8:/mnt/sdu /mnt/sdu nfs defaults 0 0
     192.168.10.8:/mnt/sdv /mnt/sdv nfs defaults 0 0
     192.168.10.8:/mnt/sdw /mnt/sdw nfs defaults 0 0
     192.168.10.8:/mnt/sdx /mnt/sdx nfs defaults 0 0

     Then I run mount -a; again, no errors, and on the client I can now "see" the mounts. So two questions to the experts, please: Is this the right way to do it, or will there be any issues down the line with the UD (Unassigned Devices) plugin? Am I missing anything? And what would I need to do, once this is working on the host, to make it permanent, i.e. surviving a reboot on the host? Both the initial mounts and the NFS shares. Thanks in advance for your time! The initial post referenced above:
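     To double-check that the exports are really live rather than assuming "no errors = it works", the export list can be queried explicitly. A small sketch, using my server's IP from above:

     # on the client: list what the server is currently exporting
     showmount -e 192.168.10.8
     # on the host: show the active export table with its options
     exportfs -v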
  14. Just for future visitors to this thread: the Terramaster D5-300 works fine if you use it in a RAID config; for single disks, however, as mentioned above, it will not function reliably over time (you can get it working initially by formatting the drives to NTFS under Windows, but I wouldn't rely on that as a safe or permanent solution).
  15. Hi everyone, I just bought a Terramaster D5-300 and attached it to my Unraid server via a USB Type-C cable. All five disks are recognized, and the 5-bay enclosure is not running its software RAID (I mounted the bays in Windows, ran the Terramaster software and removed any RAID). My aim is to mount 5 single disks in Unraid, as I want to use the maximum space and don't care if I lose one disk (no redundancy required). When trying to mount the NTFS-formatted disks, they all end up having the same mount point, like this:

     sdt 65:48 0 14.6T 0 disk
     └─sdt1 65:49 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103
     sdu 65:64 0 14.6T 0 disk
     └─sdu1 65:65 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103
     sdv 65:80 0 14.6T 0 disk
     └─sdv1 65:81 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103
     sdw 65:96 0 14.6T 0 disk
     └─sdw1 65:97 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103
     sdx 65:112 0 14.6T 0 disk
     └─sdx1 65:113 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103

     When I try to change the mount point for one drive, it changes for all drives. Why could that be, and is there a way I can mount these drives as independent disks? In Windows (Computer Management) I was able to see 5 individual NTFS-formatted disks, but in Unraid something isn't working quite right. I wanted to format the drives to XFS with mount points like /mnt/disks/sdt1. Not sure where I took the wrong turn, and I'm still not entirely sure whether Unraid can even get through to the individual disks behind this Terramaster enclosure (all disks report the same serial number, which is suspicious in itself). Thanks team!!
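     A quick way to check whether the enclosure really presents five distinct partitions to Linux despite the duplicated serial number is to list the filesystem UUIDs. A small sketch using the device names shown above:

     # each NTFS partition should still carry its own filesystem UUID
     lsblk -o NAME,SIZE,SERIAL,UUID
     blkid /dev/sdt1 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1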
  16. Yes, I'm aware; I'm talking about speeds like 30 MB/s, so far below that. I'd be happy if I managed a sustained 110 MB/s.
  17. Hi there, I have an Unraid server that I've been using for a couple of years now. Everything used to work fine until recently: the GUI is often unresponsive and the whole thing just crashes, requiring a reboot. Long story short, no fun. I've been trying to troubleshoot this, and the main suspect seems to be my VMs, as I have one machine requiring 120 GB of my 128 GB total RAM. However, this VM used to run fine (now it's corrupted because of one of those crashes). What would you suggest? Clone the USB key and transfer the license, in the hope that's the issue? I can't post my diagnostics file right now, as the machine just crashed again; I will do so once I regain access.
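     Once I regain access, the first thing I plan to check is whether the host is simply running out of memory with the VM taking 120 of the 128 GB. A minimal sketch:

     # look for out-of-memory kills and see how much RAM the host has left
     grep -i "out of memory" /var/log/syslog
     free -g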
  18. Iperf yields 933 Mbit/sec which is in line with expectations. I think the next bottleneck would be the SATA controller or BIOS settings (MSI board).
  19. I don't get anywhere close to those speeds... hm, I need to troubleshoot over the weekend. It can't even be competing drives, as only one drive at a time (two with parity) is being written to. My drives are getting a bit warm, so I'll try a different case to see if that helps. Thanks!
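     Part of the weekend troubleshooting: measuring raw sequential write speed on a single array disk, to separate what the disks can do from what the network delivers. A sketch; the disk path and file name are just examples, and writing to a parity-protected disk includes the parity overhead, which is exactly the number that matters here:

     # write 4 GiB directly to one data disk, bypassing the page cache
     dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=4096 oflag=direct status=progress
     rm /mnt/disk1/ddtest.bin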
  20. Thanks folks. I've had turbo write enabled since the very first day, but it's still not a big win. Disabling the cache will solve the I/O bottleneck but could still end up slower than the gigabit LAN connection, as the write speeds are somewhat slow and, again, go to just one disk at a time. I really didn't think about this impact of the parity drive at the beginning, as I assumed the array would cope with data being written to multiple disks at a time. So I guess I'll just test without the cache for now. Thanks everyone.
  21. That's a great idea, I didn't realize the disks could cope with that 1 Gbit influx. Will try it and report back.
  22. Setup:

     8 x 14 TB Seagate Exos, 1 of which is a parity drive
     1 Samsung 970 Pro 1 TB cache SSD
     1 PCIe SATA controller
     Asus 5 3600 CPU
     MSI ATX 490 motherboard
     64 GB DDR4 RAM

     Hi there, I occasionally move large files to the NAS, and then the cache drive fills up because the mover can't move the files to the HDDs quickly enough. I noticed that the mover only writes to one disk at a time while the other disks sit idle. Would it not be fair to assume that if the faster SSD were to write to multiple disks at the same time, the mover would be far more efficient? I feel I'm missing the benefit of having 8 drives spinning, and the I/O bottleneck is quite annoying, as it always interrupts my file operations because the cache drive fills up too quickly. I'm on a 1 Gbit/s LAN, so files move IN to the NAS at a reasonable 100 MB/s, yet I feel the SSD only writes at about half that speed to a single hard drive. Has anybody else had this issue and found a solution? I use the onboard SATA for 4 drives and a SATA controller for the other 4; usually everything works fine, and the speed of the mover is pretty much the same on the motherboard SATA controller and the PCIe one. Is this a performance issue by design? I use 1 parity drive to keep things in sync; is that maybe the reason why only one hard drive can be written to at any time? Either way it's really frustrating. Thanks folks, appreciate the help as always!
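     One way to see the effect described above while the mover runs is to watch per-disk throughput: every write to any data disk also has to update the single parity disk, so total array write speed is capped at roughly what that one drive can sustain, no matter how many data disks are involved. A sketch; iostat comes from the sysstat package, which may need to be installed separately on Unraid:

     # extended per-device stats every 2 seconds while the mover is running;
     # the parity disk stays busy for writes to any of the data disks
     iostat -xm 2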
  23. It would be helpful to know which plugins, in the spirit of helping others.