Everything posted by theunraidhomeuser

  1. Hi, I have a few external HDD enclosures, Terramaster and Orico USB 3. However, these rather low-cost (and low-quality) devices don't pass the drive UUIDs on for the UD plugin to see. As such, UD gets utterly confused when I attach 2 cases with 5 disks each: it can't distinguish them and goes into an endless loop trying to mount the devices. Is there a way I can edit /etc/fstab and mount these drives (I have the UUIDs) into another folder, e.g. /media/drive1 etc., roughly as sketched below? I saw somebody mention using the User Scripts plugin, but I'd feel better if I could edit /etc/fstab... I just can't find it in the /boot folder. Thanks everyone!
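     Roughly what I have in mind, as a sketch (the UUID and mount point below are placeholders, not my real ones):
       mkdir -p /media/drive1
       mount UUID=0123456789ABCDEF /media/drive1   # placeholder UUID; the real one comes from blkid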
  2. Hi there, my Machinaris had been running without issues for 3 weeks. This morning I got: "Internal Server Error. The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application." I followed everything here, unfortunately with no luck: https://github.com/guydavis/machinaris/issues/75 Does anybody have an idea? I understand this is an NGINX error, but I can't seem to see what's causing it. Diags attached (super-nas-diagnostics-20210828-1448.zip), thanks for the help in advance!
  3. Yes, but I am mounting with the UUIDs of the disks, so it's always consistent. You couldn't see the edit to my question (the one you responded to): I added the "PS" section 🙂 but you were faster...
  4. Hi everyone, can anybody help me make the above changes permanent, please? I need to make permanent changes to /etc/exports and /etc/fstab. Thanks! PS: To those of you suggesting the UD plugin: that won't work (read above). To those of you suggesting a user script: please explain how you would do that, I'm not great with bin bash stuff... To those of you suggesting the go file: kindly advise how I can add the required information to both files above (a sketch of what I mean follows below)!
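     For anyone who finds this later, here is roughly what I mean, as an untested sketch: on Unraid, /boot/config/go runs at every boot, so appending something like this there should survive a reboot (mount and export lines taken from my other post; whether Unraid rewrites /etc/exports itself is an open question, so treat this as an assumption):
       # sketch only - lines appended to /boot/config/go
       mkdir -p /mnt/sdt
       mount UUID=0B340F6C0B340F6C /mnt/sdt
       echo '"/mnt/sdt" -async,no_subtree_check,fsid=301 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)' >> /etc/exports
       exportfs -a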
  5. Yes, I am testing now and will read/write to these mounts with non-critical data for a few days to see how robust this solution is. The key is that these IDs do not change... and why should they?
  6. Hi everyone, I have a Terramaster D5-300 USB enclosure with 5 bays, connected to my UNRAID server via USB 3 Type-C. In a previous post (linked below), I concluded that the Terramaster does not pass the UUIDs of the devices on to Unraid, which makes it virtually impossible to mount the drives as "single drives", which is what I am trying to do. Looking into it a bit more (and because it's really hard to find other enclosures where I live right now), I realized that each drive's serial number IS actually passed through, and the filesystem UUIDs look like this:

     sudo mount UUID=0B340F6C0B340F6C /mnt/sdt
     sudo mount UUID=0B3507380B350738 /mnt/sdu
     sudo mount UUID=0B35150F0B35150F /mnt/sdv
     sudo mount UUID=0B360EF20B360EF2 /mnt/sdw
     sudo mount UUID=0B3705760B370576 /mnt/sdx

     So without using the Unassigned Devices plugin, which can't handle this for some reason, I am able to mount all my drives "manually". The next step is that I want to make them available as public NFS shares (no security needed). I have hence edited the /etc/exports file as follows:

     "/mnt/sdt" -async,no_subtree_check,fsid=301 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
     "/mnt/sdu" -async,no_subtree_check,fsid=302 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
     "/mnt/sdv" -async,no_subtree_check,fsid=303 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
     "/mnt/sdw" -async,no_subtree_check,fsid=304 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
     "/mnt/sdx" -async,no_subtree_check,fsid=305 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)

     Note: I assigned the fsid values manually, and these IDs are not in use by other mounts. Exporting the shares with "exportfs -a" doesn't throw any errors, so I am assuming it works. On the client, I mount the NFS shares in /etc/fstab as follows:

     192.168.10.8:/mnt/sdt /mnt/sdt nfs defaults 0 0
     192.168.10.8:/mnt/sdu /mnt/sdu nfs defaults 0 0
     192.168.10.8:/mnt/sdv /mnt/sdv nfs defaults 0 0
     192.168.10.8:/mnt/sdw /mnt/sdw nfs defaults 0 0
     192.168.10.8:/mnt/sdx /mnt/sdx nfs defaults 0 0

     Then I run "mount -a". Again, no errors, and on the client I can now "see" the mounts. So two questions to the experts, please: 1) Is this the right way to do this, or will there be any issues down the line with the UD (Unassigned Devices) plugin? Am I missing anything? 2) What would I need to do, once this is working on the host, to make it permanent, i.e. surviving a reboot on the host? Both the initial mounts and the NFS shares... Thanks in advance for your time! The initial post referenced above:
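     For completeness, this is how I sanity-check it from the client side (standard NFS tooling; the IP is my server's):
       showmount -e 192.168.10.8   # list what the server actually exports
       df -h /mnt/sdt              # confirm a share is mounted and shows the right size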
  7. Just for future visitors to this thread: the Terramaster D5-300 works fine if you choose to use it in a RAID config; however, for single disks, as mentioned above, it will not function reliably over time (you can get it working initially by formatting the drives under Windows to NTFS, but I wouldn't rely on that as a safe or permanent solution).
  8. Hi everyone, I just bought a Terramaster D5-300 and attached it to my Unraid server via a USB Type-C cable. All five disks are recognized, and the 5-bay enclosure is not running its software RAID (I mounted the bays in Windows, ran Terramaster's software and removed any RAID). My aim is to mount 5 single disks in Unraid, as I want to use the maximum space and don't care if I lose one disk (no redundancy required). When trying to mount the NTFS-formatted disks, they all end up having the same mount point, like this:

     sdt 65:48 0 14.6T 0 disk
     └─sdt1 65:49 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103
     sdu 65:64 0 14.6T 0 disk
     └─sdu1 65:65 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103
     sdv 65:80 0 14.6T 0 disk
     └─sdv1 65:81 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103
     sdw 65:96 0 14.6T 0 disk
     └─sdw1 65:97 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103
     sdx 65:112 0 14.6T 0 disk
     └─sdx1 65:113 0 14.6T 0 part /mnt/disks/ST16000N_M000J-2TW103

     When I try to change the mount point for one drive, it changes for all drives. Why could that be, and is there a way I can mount these drives as independent disks? In Windows (Computer Management), I was able to see 5 individual NTFS-formatted disks; however, in Unraid, something isn't working quite right... I wanted to format the drives to XFS, with mount points like /mnt/disks/sdt1. Not sure where I took the wrong turn, and I'm still not entirely sure whether Unraid can even get through to the individual disks through this Terramaster enclosure and its RAID layer (all disks report the same serial number, which is suspicious in itself...). Thanks team!!
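     For reference, this is the kind of check I ran to see what the enclosure actually reports per drive (standard lsblk/blkid, nothing Unraid-specific):
       lsblk -o NAME,SIZE,SERIAL,UUID,MOUNTPOINT   # compare serials and filesystem UUIDs across the bays
       blkid /dev/sdt1                             # show the filesystem UUID of one partition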
  9. Yes, I'm aware; I'm talking about speeds like 30 MB/s, so far below that amount. I'd be happy if I managed a sustained 110 MB/s.
  10. Hi there, I have an Unraid server that I've been using for a couple of years now. Everything used to work fine until recently: the GUI is often unresponsive, and the whole thing just crashes, requiring a reboot. Long story short, no fun. I've been trying to troubleshoot this, and the main suspect seems to be my VMs, as I have one machine requiring 120GB of my 128GB total RAM. However, this VM used to run fine (now it's corrupted because of one of those crashes). What would you suggest? Clone the USB key and transfer the license, in the hope that's the issue? I can't post my diagnostics file right now, as the machine just crashed again... will do once I regain access.
  11. iperf yields 933 Mbit/s, which is in line with expectations. I think the next bottleneck would be the SATA controller or BIOS settings (MSI board).
  12. I don't get anywhere close to those speeds… hm, I need to troubleshoot over the weekend. It can't even be competing drives, as only one drive at a time (2 with parity) is written to… my drives are getting a bit warm, so I'll try a different case to see if that helps… thanks!
  13. Thanks folks. I've had turbo write enabled since the very first day, but it's still not a big win... Disabling the cache will solve the I/O bottleneck but could potentially still be slower than the gigabit LAN connection, as the write speeds are somewhat slow and, again, go to just one disk at a time… I really didn't think about this impact of the parity drive at the beginning, as I thought the array would be able to cope with data being written to multiple disks at a time... so I guess I'll just test without the cache for now. Thanks everyone.
  14. That's a great idea, I didn't realize the disks could cope with that 1 Gbit influx... will try it and report back.
  15. Setup:
     - 8 x 14 TB Seagate Exos, 1 of which is a parity drive
     - 1 Samsung 970 Pro 1 TB cache SSD
     - 1 PCIe SATA controller
     - AMD Ryzen 5 3600 CPU
     - MSI ATX 490 motherboard
     - 64 GB DDR4 RAM

     Hi there, I occasionally move large files to the NAS, and then the cache drive fills up, as the mover can't move the files to the HDDs quickly enough. I noticed that the mover only writes to one disk at a time while the other disks are idle. Would it not be fair to assume that if the faster SSD were to write to multiple disks at the same time, the mover would be far more efficient? I feel I'm missing the benefit of having 8 drives spinning, and the I/O bottleneck is quite annoying, as it always interrupts my file operations when the cache drive fills up too quickly. I'm on a 1 Gbit/s LAN, so I'm moving files INTO the NAS at a reasonable 100 MB/s, and I feel the SSD only writes at about half that speed to a single hard drive... Has anybody else had this issue and found a solution? I use the onboard SATA for 4 drives and a SATA controller for the other 4. Usually everything works fine, and the speed of the mover is pretty much the same on the motherboard SATA controller and the PCIe one... Is this a performance issue by design? I use 1 parity drive to keep things in sync; is that maybe the reason why only one hard drive can be written to at any time? Either way, it's really frustrating. Thanks folks, appreciate the help as always!
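     For future readers: the setting usually mentioned for the parity write penalty is Unraid's "turbo write" (reconstruct write). A sketch of toggling it from the CLI, assuming the stock md driver (the same tunable is also exposed in the Disk Settings GUI):
       /usr/local/sbin/mdcmd set md_write_method 1   # enable reconstruct ("turbo") write
       /usr/local/sbin/mdcmd set md_write_method 0   # back to the default read/modify/write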
  16. It would be helpful to know which plugins... in the spirit of helping others.
  17. Hi there, I was test-driving the new 6.9.0-beta25 and everything works great; finally, my AMD CPU and MB temperatures are also recognized, awesome! One thing that is causing some sort of problem, though (and I couldn't observe this on my stable release), is a connection loss to the UPS. When I go into the UPS settings, I can see my device and driver etc.; however, there seems to be some sort of issue in the comms. I've attached a screenshot of my UPS settings. I'm not 100% sure this has to do with the beta release, but it's clearly not working here. Also, the communication is ALWAYS down on the main dashboard, i.e. it's not just failing occasionally. Thanks for all your work on the 6.9 release, can't wait for the stable release! Cheers, T data-nas-diagnostics-20200808-2329.zip
  18. Wow. I think I may have posted this in the wrong forum category? 0 replies, thoughts, or comments. Hmm. I've actually also posted this with the folks at UNTANGLE, as I felt network-related stuff might be best suited there. I'm currently stuck, as my Windows Server 2019 VM won't accept any incoming connections despite port forwarding and lots (!) of troubleshooting. I'll cross-reference the two posts once I find a solution, in case anybody stumbles across my post in the future and has the same challenge.
  19. Great, thanks a lot; I wasn't aware of the user0 thing, and it all makes sense now. The script worked well, and while I also know my way around the CLI, the script was more reassuring and made it easier to select the various shares. As a little follow-up: the appdata and system folders should all be chown'ed by root, right?
  20. Current setup: [WAN]---[dedicated Untangle FW machine]---[LAN]---[UNRAID NAS]

     Q: I'd like to set up an FTP server over SSH (SFTP) on the UNRAID NAS to store webserver backups every night. My NAS has 32GB RAM and a 6-core AMD Ryzen 5 3600. I was not sure whether it's "better" to spin up a little Ubuntu VM and run an FTP server with a dedicated share, or rather use a Docker container. Are there any massive pros or cons that you can think of, beyond the ones I list below? My Untangle firewall would map a custom SFTP port to port 22 of the VM/Docker container.

     REQUIREMENTS (a minimal sshd_config sketch follows below)
     - setting up a few users (4-5) for different FTP projects
     - each user can only access their own folder
     - key-based authentication for some, password for others

     UBUNTU VM (or another distro like XUBUNTU)
     + GUI-based, easy to manage
     + unattended upgrades ensure safety
     + UFW could be another layer of protection
     + fail2ban is easy to configure
     + isolation, with access to only one shared folder
     + could be used for other purposes
     + easy to establish bandwidth limitations if need be
     + maybe a little simpler for a less experienced user to set up KEY authentication
     - needs more resources than Docker

     DOCKER
     + needs fewer resources
     + dedicated to one task and one task only
     + isolation, with access to only one shared folder
     - much more command line
     - I've got less experience with Docker and its security implications

     Any other thoughts or considerations? Thanks everyone!
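     A minimal sketch of the chrooted-SFTP piece, regardless of VM vs. Docker (standard OpenSSH directives; the group name and paths are just examples):
       # /etc/ssh/sshd_config - example group/path names
       Subsystem sftp internal-sftp
       Match Group sftponly
           ChrootDirectory /srv/ftp/%u
           ForceCommand internal-sftp
           AllowTcpForwarding no
     Note that OpenSSH requires the ChrootDirectory itself to be root-owned and not group/world-writable, so each user gets a writable subfolder inside their chroot.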
  21. Hi everyone, I have a NAS with 2 x 14 TB IronWolf and 1 x 512GB NVMe as cache. I deliberately chose NO parity, as I back up the VM to remote storage via rsync every night. So here is my question about shares and drives. When I check the /mnt folder structure, I see:

     disk1 = 14 TB IronWolf #1
     disk2 = 14 TB IronWolf #2
     disks = mounted external USB drives (unassigned devices)
     user = not sure
     user0 = not sure either

     A few questions:

     Q1: My hope was that the 2 x 14 TB drives would be mounted as an array of 1 disk rather than 2. Is that not the case? Should I worry about that at all, or just continue using my SMB mounts, which DO show the total available space? I.e. leave everything to Unraid and forget about the two disks?

     Q2: Why are there two user folders? I DO have an Unraid user called "user", but that explains only one of the two folders. Also, there are no folders for the other users I have created. The user sub-dir shows a few SMB shares, excluding the system folders (i.e. only the shares I manually created); the user0 sub-dir shows all shares. Sorry if this has been asked a thousand times, but searching the forum for things containing "user" leads to less-than-optimal results...

     Q3: More related to permissions. As I am using rsync to restore my data as the root user, I understand that all the data on the NAS is now owned by root. Would a simple "chown -R nobody:users /mnt/user/*" be enough to sort this out, or should I apply the SMB directives "force user" and "force group" (a sketch follows below)? That is what I used to do on my Ubuntu server, and it worked very well. I'm just not sure whether that would be messing around with UNRAID too much? I've used the "New Permissions" script on the shares for now, and that seems to have worked OK. Any concerns, given that I didn't upgrade from UNRAID 5 but still used it? Thanks everyone!
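     For Q3, this is the kind of Samba snippet I mean (standard smb.conf directives; the share name is an example, and on Unraid this kind of thing goes into the SMB extra configuration rather than a plain smb.conf):
       [myshare]
           path = /mnt/user/myshare
           force user = nobody
           force group = users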
  22. Guys, I think it's just the kernel that doesn't support these ASUS motherboards. I've got a ROG STRIX B450-I. The temperature sensors are detected by sensors-detect, but nothing happens in UNRAID. UNRAID admins: please have a look at supporting AMD a bit more! Thanks!
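     For anyone comparing notes, this is what I ran from the CLI (standard lm-sensors tooling; the module below is only an example - load whichever driver sensors-detect actually suggests for your board):
       sensors-detect    # answer the prompts and note the chip/driver it suggests
       modprobe nct6775  # example module only, a common Super I/O driver suggestion
       sensors           # check whether readings appear outside the Unraid GUI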
  23. Hi everyone, I managed to spin up a Win 10 VM and pass through the GPU and audio with the great YouTube tutorials from SpaceInvader One. The VM works, the GPU was detected with no issues, and I installed the NVIDIA drivers. I then installed HandBrake and HBBatchBeast and ran it on a small folder of video files (5 files, each a few hundred MB). In HandBrake, I created a preset that uses "H.264 NVIDIA" (NVENC) encoding. I then took the .json with the presets and linked to it in the HBBatchBeast app: --preset-import-file "C:\Users\user\Desktop\hbbatchbeast-Windows-v2.1.5\presets.json" -Z "TH" The preset in the JSON is present and is called TH. I found the GPU load to be very low; does anybody have any idea what I am doing wrong here? In return, the CPU load was very high... I had higher hopes for the GPU helping out with video conversion, aware that it's a small GPU (but I'm not in a rush :-). The only thing is that I don't want to destabilize my UNRAID PC with these 100% CPU cycles... Cheers!
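     One way to take HBBatchBeast out of the equation and confirm the preset really selects NVENC, assuming HandBrakeCLI is available (HBBatchBeast drives it under the hood anyway; the file paths are placeholders):
       HandBrakeCLI.exe --preset-import-file "C:\path\to\presets.json" -Z "TH" -i "C:\in.mkv" -o "C:\out.mp4"
       HandBrakeCLI.exe -i "C:\in.mkv" -o "C:\out.mp4" --encoder nvenc_h264   (forces NVENC explicitly, for comparing GPU load)
     Note that even with NVENC, decoding and any filters still run on the CPU, so some CPU load alongside a quiet GPU "3D" graph is expected; NVENC activity shows under "Video Encode" in Task Manager, not "3D".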