Everything posted by HNGamingUK

  1. Hello,
     UPDATE: It seems that because I am unable to set static routes on the ISP-provided router, I will not be able to get this working. As a temporary measure I have put the server back into the working VLAN. I have contacted my ISP to confirm whether there is actually a way to set static routes on it; if not, I will need to see whether I can put it into modem mode and get a separate wireless router (such as a TP-Link) that supports static routes.
     I recently got a new Ubiquiti EdgeSwitch 48 Lite and it seems that other subnets are unable to access the internet. Example: my unRAID server is on VLAN 3 with the IP 10.0.0.2, and the switch is 10.0.0.1. My ISP-provided router is 192.168.0.1 and is connected to port 1 of the switch, which is part of VLAN 2 and has an IP of 192.168.0.2. I have routing enabled on the switch and have set up a route for each VLAN, however it is still not working. Unsure if someone here can help? unRAID network page: EdgeSwitch Route Table:
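     For illustration, what the ISP router (or a replacement that supports static routes) needs is a return route for the 10.0.0.0/24 subnet pointing back at the switch. The exact syntax depends on the device, but on a Linux-style CLI it would look roughly like:
         ip route add 10.0.0.0/24 via 192.168.0.2
     (i.e. destination 10.0.0.0, netmask 255.255.255.0, gateway 192.168.0.2, all taken from the addresses above.)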
  2. I mean, if you are comparing it to something like a Dell storage array, I can tell you those don't have file-moving facilities either. Besides, you can move/copy files with the Linux mv and cp commands respectively, or there is Midnight Commander.
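     As a rough illustration (the paths here are made up), moving or copying a folder between disks from the console looks like:
         cp -r /mnt/disk1/Movies /mnt/disk2/    # copy
         mv /mnt/disk1/Movies /mnt/disk2/       # move
     Midnight Commander is launched with mc and gives a two-pane view for the same job.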
  3. If you want to get the contents of your docker share back onto the cache, the best method would be to set the share to "Prefer" and then run the mover; that way any of its files on the array disks will be moved to the cache. It is then up to you whether you set it to "Only" or leave it on "Prefer".
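     If you prefer the console to the WebUI button, the mover can (as far as I know) also be started by hand:
         mover
     It will only move files for shares whose cache setting tells it to, e.g. "Prefer" pulls array copies back onto the cache.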
  4. Yeah, I'm unsure why it waited for the parity sync before displaying a notification about the removed disk... Would need a more experienced unRAID user to see why...
  5. Ah, so likely due to the parity sync running. Although I don't think that should be the correct course of action by unRAID...
  6. Ah okay, I'm unsure how it would deal with losing a drive during a parity sync. As far as I'm aware most single-parity hardware RAID (say RAID 5) would just die, and I don't have a system to test that with myself.
  7. If you set up the notification system under Settings > Notification Settings, then it should have notified you of the errors on the drive... I would need to test it myself though. Edit: after pulling my disk 5, I got a notification within 5 seconds (Discord). This is what my main screen looks like too: After this I stopped the array, unassigned the device, and started the array. Then I stopped the array again, reassigned the device, and started the array. The data is now rebuilding fine.
  8. I haven't tested it myself, however when removing a disk without shutting the array down etc., it should detect this and then emulate the storage using parity data. I would expect it to also show a notification stating something along the lines of "disk not detected".
  9. From what I am aware there are still limitations with disk sizes and with adding disks to a pool. unRAID does have the nice feature of mixing disk sizes, so if unRAID added the feature for multiple arrays it would definitely be a selling point for more people to use unRAID.
  10. Yeah, GlusterFS on top of ZFS pools is an option. However, people who are less skilled with Linux probably wouldn't know how to create ZFS pools via the command line and then make shares etc. If unRAID supported more than 30 drives (in one array or by having multiple arrays) then it would allow less Linux-skilled people to have larger storage/more disks.
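     Just to illustrate the kind of command-line work involved (device and pool names here are made up), creating a pool and a dataset to share looks roughly like:
         zpool create tank raidz2 sdb sdc sdd sde sdf sdg
         zfs create tank/media
     and you would still have to export that over SMB/NFS yourself, which is exactly the part unRAID takes care of for you.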
  11. The whole idea is that a parity drive has to be the largest (or equal to the largest) drive in the array. In your case, if you wanted dual parity (and didn't want to buy more drives) then you would need to make both of the 10TB drives parity and use the 1TB and 4TB drives for data. This, however, would reduce your total capacity from 15TB to 5TB. To keep the same amount of storage you currently have, you could buy a third 10TB drive for the dual parity.
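     Spelling that out (assuming the current layout is one 10TB parity plus 10TB + 4TB + 1TB data):
         Now (single parity):        10TB parity,     10 + 4 + 1 = 15TB usable
         Dual parity, same drives:   2x 10TB parity,  4 + 1      =  5TB usable
         Dual parity + third 10TB:   2x 10TB parity,  10 + 4 + 1 = 15TB usable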
  12. I know it's bad but I am probably not the only one impatient for 6.8
  13. +1 (just because it would provide more features). However, from looking through the VMware forums etc., it seems that NFS is becoming the more commonly suggested method for connecting to your storage...
  14. Did not know Krusader existed... In which case this would be solved with the docker container...
  15. Personally I would say this is not a needed feature. From my experience using web-based file explorers for website management (cPanel, etc.), the performance is not very good. The best option would be to use FTP, SMB, etc., as they will have much better performance than a web-based solution. Not to mention the amount of work it would entail to make a web-based version work.
  16. I would say that the best way, like you state, is to have multiple arrays: max 30 drives per array as usual, however you can spread a user share across multiple arrays. This could easily increase the max capacity from ~448TB to ~896TB (with 16TB drives) using 2 arrays.
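     Rough numbers behind that, assuming two of the 30 drives in each array are parity:
         28 data drives x 16TB = 448TB per array
         2 arrays x 448TB      = 896TB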
  17. Ah, in which case that last part makes more sense as to why this does not work. The agent provided uses the latest tag, and the latest tag is currently agent version 4.2, which isn't backwards compatible with 4.0.*. I would suggest updating your Zabbix server to 4.2 minimum and it should work. (My Zabbix server is on 4.2.6.) Sorry for not making this clear in the main thread; I have updated the main thread post to state that this is Agent 4.2 and the minimum server version.
  18. Hello, reading through your reply I can confirm the template I use is "Template OS Linux". However, one thing to note is that filesystem and networking items are handled under discovery (by default Zabbix runs discovery every 1hr). If you would like to force a discovery check, you can do the following from the Zabbix WebUI: Configuration > Hosts > select the relevant host > Discovery rules. You should then see the below: You can then tick the box for each rule you want to force-check (in your case I would just select both). Once selected, click the "Check now" button. This will force Zabbix to complete a discovery check on your host. Please do let me know if this works 😉
  19. Hello, I am looking at the backup solutions available and wanted to know which services you use or think are the best? I have looked at CrashPlan PRO and Google (using rsync) so far. The only issue I foresee is that I only have 35Mbps upload, and by my calculations backing up my current data (~14TB) will take 41 days or more at that speed.
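     For reference, the rough maths behind the 41 days (treating 14TB as ~14TiB and ignoring protocol overhead):
         14TiB x 8 ≈ 123,000,000 megabits
         123,000,000 Mb ÷ 35 Mbps ≈ 3,500,000 seconds ≈ 41 days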
  20. I have shown my friend this, and will be discussing options with him tonight.
  21. My guess is that this is how it will work; I personally don't see any better way it could work.
  22. I'm assuming that there will be dark mode login too? 😉
  23. Okay I see, are you using any kind of user shares, or just disk shares? I would be careful deleting directly off a disk if user shares are in play, as you will only remove the files on that disk. In any case the correct command would be: rm -r /mnt/disk1/TV As you are root you do not need to use sudo, and you need the -r flag to go recursive (i.e. delete the directory, sub-directories and files).
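     If the folder is also a user share and the aim is to remove it from every disk in one go, the user-share path would be the one to target instead (assuming the share is indeed named TV):
         rm -r /mnt/user/TV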
  24. Root would simply be / What files in particular are you trying to remove? I don't tend to touch any files in the root or any created by unRAID.
  25. Just double checking, were you ever able to SSH into your unRAID machine? AND is the following setting enabled (it is located in Settings > Management Access):