theunraidhomeuser

Members
  • Posts

    33
  • Joined

  • Last visited



  1. Hi there, I've had my Machinaris instance running without issues for 3 weeks now. This morning I got: "Internal Server Error. The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application." I followed everything here, unfortunately with no luck: https://github.com/guydavis/machinaris/issues/75 Does anybody have an idea? I understand this is an NGINX error, but I can't see what's causing it. Diagnostics attached (super-nas-diagnostics-20210828-1448.zip), thanks for the help in advance!
  2. Yes, but I am mounting with the UUIDs of the disks, so it's always consistent. You couldn't see the edit to my question (the one you responded to); I added the "PS" section 🙂 but you were faster.
  3. Hi everyone, can anybody help me make the above changes permanent, please? I need to make permanent changes to /etc/exports and /etc/fstab. Thanks! PS: To those of you suggesting the UD plugin: that won't work (read above). To those suggesting a user script: please explain how you would do that, I'm not great with bash stuff. To those suggesting the go file: kindly advise how I can add the required information to both files above!
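     For readers with the same question: Unraid rebuilds /etc in RAM on every boot, so direct edits to /etc/exports or /etc/fstab are lost at shutdown. One common pattern is to re-apply them from the go file on the flash drive. A minimal sketch, reusing the UUID, mount point, and export line from later in this thread as examples only (adjust to your own disks):

     ```shell
     # Appended to /boot/config/go on the flash drive (runs once per boot).
     # Sketch only: the UUID, mount point, and export options are the
     # examples from this thread, not prescriptions.

     # Recreate the mount point and mount the enclosure disk by filesystem UUID
     mkdir -p /mnt/sdt
     mount UUID=0B340F6C0B340F6C /mnt/sdt

     # Re-add the custom NFS export and reload the export table
     echo '"/mnt/sdt" -async,no_subtree_check,fsid=301 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)' >> /etc/exports
     exportfs -ra
     ```

     Since the go file runs before the array (and the NFS daemon) is fully up, some people instead put the mount/export steps in a User Scripts task scheduled "At First Array Start Only".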
  4. Yes, I am testing now and will read/write to these mounts with non-critical data for a few days to see how robust this solution is. The key is that these IDs do not change, and why should they?
  5. Hi everyone, I have a Terramaster D5-300 USB enclosure with 5 bays, connected to my Unraid server via USB 3 Type-C. In a previous post (linked below), I concluded that the Terramaster does not pass the UUIDs of the devices through to Unraid, which makes it virtually impossible to mount the drives as "single drives", which is what I am trying to do. Looking into it a bit more (and because it's really hard to find other enclosures right now where I live), I realized that each drive's serial number actually IS passed through, and the UUIDs look like this:

     sudo mount UUID=0B340F6C0B340F6C /mnt/sdt
     sudo mount UUID=0B3507380B350738 /mnt/sdu
     sudo mount UUID=0B35150F0B35150F /mnt/sdv
     sudo mount UUID=0B360EF20B360EF2 /mnt/sdw
     sudo mount UUID=0B3705760B370576 /mnt/sdx

     So without the Unassigned Devices plugin, which can't handle this for some reason, I am able to mount all my drives "manually". The next step is that I want to make them available as public NFS shares (no security needed). I have hence edited /etc/exports as follows:

     "/mnt/sdt" -async,no_subtree_check,fsid=301 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
     "/mnt/sdu" -async,no_subtree_check,fsid=302 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
     "/mnt/sdv" -async,no_subtree_check,fsid=303 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
     "/mnt/sdw" -async,no_subtree_check,fsid=304 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
     "/mnt/sdx" -async,no_subtree_check,fsid=305 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)

     Note: I assigned the fsid values manually, and these IDs are not in use by other mounts. Exporting the shares with exportfs -a doesn't throw any errors, so I am assuming it works.

     On the client, I mount the NFS shares in /etc/fstab as follows:

     192.168.10.8:/mnt/sdt /mnt/sdt nfs defaults 0 0
     192.168.10.8:/mnt/sdu /mnt/sdu nfs defaults 0 0
     192.168.10.8:/mnt/sdv /mnt/sdv nfs defaults 0 0
     192.168.10.8:/mnt/sdw /mnt/sdw nfs defaults 0 0
     192.168.10.8:/mnt/sdx /mnt/sdx nfs defaults 0 0

     Then I run mount -a. Again, no errors, and on the client I can now "see" the mounts. So two questions to the experts, please: Is this the right way to do it, or will there be any issues down the line with the UD (Unassigned Devices) plugin? Am I missing anything? And what would I need to do, once this is working on the host, to make it permanent, i.e. surviving a reboot on the host? Both the initial mounts as well as the NFS shares. Thanks in advance for your time! The initial post referenced above:
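     A small optional hardening of the client-side fstab entries above (the option names are standard nfs/fstab mount options, but using them here is my suggestion, not part of the original setup):

     ```shell
     # /etc/fstab on the client: same mounts as above, with two extra options.
     # _netdev waits for the network before mounting; nofail lets the client
     # finish booting even if the server is unreachable.
     192.168.10.8:/mnt/sdt  /mnt/sdt  nfs  defaults,_netdev,nofail  0  0
     192.168.10.8:/mnt/sdu  /mnt/sdu  nfs  defaults,_netdev,nofail  0  0
     192.168.10.8:/mnt/sdv  /mnt/sdv  nfs  defaults,_netdev,nofail  0  0
     192.168.10.8:/mnt/sdw  /mnt/sdw  nfs  defaults,_netdev,nofail  0  0
     192.168.10.8:/mnt/sdx  /mnt/sdx  nfs  defaults,_netdev,nofail  0  0
     ```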
  6. Just for future visitors to this thread: the Terramaster D5-300 works fine if you choose to use it in a RAID config; however, for single disks, as mentioned above, it will not function reliably over time (you can get it working initially by formatting the drives under Windows to NTFS, but I wouldn't rely on that as a safe or permanent solution).
  7. Hi everyone, I just bought a Terramaster D5-300 and attached it to my Unraid server via a USB Type-C cable. All five disks are recognized, and the 5-bay enclosure is not running its software RAID (I mounted the bays in Windows, ran their software and removed any RAID). My aim is to mount 5 single disks in Unraid, as I want to use the maximum space and don't care if I lose one disk (no redundancy required). When trying to mount the NTFS-formatted disks, they all end up having the same mount point, like this:

     sdt 65:48 0 14.6T 0 disk
     └─sdt1 65:49 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103
     sdu 65:64 0 14.6T 0 disk
     └─sdu1 65:65 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103
     sdv 65:80 0 14.6T 0 disk
     └─sdv1 65:81 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103
     sdw 65:96 0 14.6T 0 disk
     └─sdw1 65:97 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103
     sdx 65:112 0 14.6T 0 disk
     └─sdx1 65:113 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103

     When I try to change the mount point for one drive, it changes it for all of them. Why could that be, and is there a way I can mount these drives as independent disks? In Windows (Computer Management) I was able to see 5 individual NTFS-formatted disks, yet in Unraid something isn't working quite right. I wanted to format the drives to XFS with mount points like /mnt/disks/sdt1. Not sure where I took the wrong turn, and still not entirely sure whether Unraid can maybe not get through to the individual disks through this Terramaster enclosure (all disks report the same serial number, which is suspicious in itself). Thanks team!!
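     For anyone hitting the same symptom: a quick way to check whether the enclosure exposes anything unique per disk is to compare serials and filesystem UUIDs. These are read-only diagnostic commands; the sdt..sdx device names are the ones from this post and will differ on your system:

     ```shell
     # Read-only diagnostics, assuming the sdt..sdx names from the post above.
     # If SERIAL is identical across bays but the filesystem UUIDs differ,
     # mounting by UUID (as later posts in this thread do) is a way out.
     lsblk -o NAME,SIZE,SERIAL,MOUNTPOINT
     blkid /dev/sdt1 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1
     ```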
  8. Yes, I'm aware; I'm talking about speeds like 30 MB/s, so far below that amount. I'd be happy if I managed a sustained 110 MB/s.
  9. Hi there, I have an Unraid server that I've been using for a couple of years now. Everything used to work fine until recently: the GUI is often unresponsive, and the whole thing just crashes, requiring a reboot. Long story short, no fun. I've been trying to troubleshoot this, and the main suspect seems to be my VMs, as I have one machine requiring 120GB of my 128GB total RAM. However, this VM used to run fine (now it's corrupted because of one of those crashes). What would you suggest? Clone the USB key and transfer the license in the hope that's the issue? I can't post my diagnostics file right now, as the machine just crashed again; I will once I regain access.
  10. iperf yields 933 Mbit/s, which is in line with expectations. I think the next bottleneck would be the SATA controller or BIOS settings (MSI board).
  11. I don't get anywhere close to those speeds... hm, I need to troubleshoot over the weekend. It can't even be competing drives, as only one drive at a time (two with parity) is written to. My drives are getting a bit warm, so I'll try a different case to see if that helps. Thanks!
  12. Thanks folks, I have had turbo write enabled since the very first day, but it's still not a big win. Disabling the cache will solve the I/O bottleneck but could potentially still be slower than the gigabit LAN connection, as the write speeds are somewhat slow and, again, go to just one disk at a time. I really didn't think about this impact of the parity drive at the beginning, as I thought it would be able to cope with data being written to multiple disks at a time. So I guess I'll just test without the cache for now. Thanks everyone!
  13. That's a great idea, I didn't realize the disks could cope with that 1 Gbit influx. Will try it and report back.