Everything posted by Conmyster

  1. Hello, I have 2 scripts that automatically move files from local storage to GDrive. I currently have the logs going to a user share under /mnt/user/rclone/logs. I have also set up logrotate in /etc/logrotate.d to rotate the logs daily and keep 7 of them. I wanted to know if it's okay to store the logs in /var/log instead?
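     A minimal example of what such a /etc/logrotate.d entry could look like (the filename pattern and options here are illustrative, assuming the logs live under /mnt/user/rclone/logs):

     # rotate daily and keep 7 compressed copies
     /mnt/user/rclone/logs/*.log {
         daily
         rotate 7
         missingok
         notifempty
         compress
     }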
  2. One thing I just noticed: when you echo the current PID into the pid file you use the full path instead of referencing it with $PIDFILE? Likewise, once the script has finished I should put rm -f $PIDFILE at the end.
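     In other words, something along these lines (a rough sketch; the PIDFILE path is just an example):

     PIDFILE=/var/run/rclone_move.pid   # example path only

     # record this run's PID so a later run can see the script is active
     echo $$ > "$PIDFILE"

     # ... move the files ...

     # clean up so the next run doesn't find a stale pid file
     rm -f "$PIDFILE"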
  3. Sorry, I'm failing to understand how exactly that if statement works. If I'm understanding it correctly, it checks whether the pid file exists and, if it does, compares the current PID with the PID in the pid file. However, I'm unsure what it does after that...
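     My rough understanding of that kind of check, written out (just the general pattern, not the exact script):

     # if a pid file exists and the process it points to is still alive,
     # another copy of the script is running, so bail out
     if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
         exit 1
     fi

     # otherwise claim the pid file for this run
     echo $$ > "$PIDFILE"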
  4. Hello, How would you go about doing this exactly?
  5. Okay, so I added the following to the start of the script; that should work, right?

     if pidof -o %PPID -x "$0"; then
         exit 1
     fi

     Just thought I should add that I'm planning on having 2 scripts: one for my TV Shows and one for Movies.
  6. Hello, I have a script which moves TV shows from local storage to my Google drive crypt. Can someone explain how the scheding works? For example I want to set the move script to run every 10mins. However if there are a large amount of files in the directory it could take a while to finish. How does user scripts handle this? Will it just not run the script or do I need to add some code in my script to handle this? Hope this makes sense.
  7. I love how easy it is to add more storage capacity compared to regular RAID. I also love how much unraid has improved over the years. In 2020, I would like to see multiple arrays and possibly multiple cache pools. It would also be nice to have a share span those arrays. That way someone could have 60 or more drives and store close to or more than 1PB in a 4U chassis.
  8. Hello,

     UPDATE: As I am unable to set static routes on the ISP-provided router, I cannot get this working. As a temporary measure I have put the server back into the working VLAN. I have contacted my ISP to confirm whether there is actually a way to set static routes on it; if not, I will need to see if I can put it into modem mode and get a separate wireless router (such as a TP-Link) which supports static routes.

     I recently got a new Ubiquiti EdgeSwitch 48 Lite and it seems that other subnets are unable to access the internet. Example: my unRAID server is on VLAN 3 with the IP 10.0.0.2, and the switch is 10.0.0.1. My ISP-provided router is 192.168.0.1 and is connected to port 1 of the switch, which is part of VLAN 2 and has an IP of 192.168.0.2. I have enabled routing on the switch and set up routing for each VLAN, however it is still not working. Unsure if someone here can help?

     unRAID network page and EdgeSwitch route table screenshots are attached.
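     For reference, the static route I would need on the ISP router is one pointing the VLAN 3 subnet back at the switch. On a router (or Linux box) that allows it, the equivalent would be something like the following (assuming VLAN 3 uses a 10.0.0.0/24 subnet):

     # send traffic for the VLAN 3 subnet via the switch's VLAN 2 address
     ip route add 10.0.0.0/24 via 192.168.0.2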
  9. I mean, if you are thinking of it as equivalent to, say, a Dell storage array, then I can tell you that those don't have file-moving facilities. Not to mention you can move/copy files with the Linux mv and cp commands respectively, or there is Midnight Commander.
  10. If you want to get things within your docker share back onto the cache, the best method would be to set the share to "Prefer" and then run the mover; this way any of its files on the array disks will be moved to the cache. It is then up to you whether you set it to "Only" or leave it on "Prefer".
  11. Yeah, I'm unsure why it waited for the parity sync before displaying a notification about the removed disk... Would need a more experienced unraid user to see why...
  12. Ah, so it's likely due to the parity sync running. Although I don't think that should be the correct course of action by unraid...
  13. Ah okay, I'm unsure how it would deal with losing a drive during a parity sync. As far as I'm aware, most single-parity hardware RAID (say RAID 5) would just die, and I don't have a system to test that with myself.
  14. If you set up the notification system under Settings > Notification Settings, then it should have notified you of the errors on the drive... I would need to test it myself though.

      Edit: After pulling my disk 5, I got a notification within 5 seconds (Discord). This is what my main screen looks like too (screenshot attached). After this I stopped the array, unassigned the device, and started the array. Then I stopped the array, reassigned the device, and started the array again. Data is now rebuilding fine.
  15. I haven't tested it myself, however when removing a disk without shutting the array down etc., it should detect this and then emulate the storage using parity data. I would expect it to also show a notification stating something along the lines of "disk not detected".
  16. From what I'm aware, there are still limitations with disk sizes and adding disks to a pool. unraid does have the nice feature of mixing disk sizes, so if unraid added support for multiple arrays it would definitely be a selling point for more people to use unraid.
  17. Yeah, GlusterFS with ZFS pools is an option. However, people who are less skilled with Linux probably wouldn't know how to create ZFS pools via the command line and then make shares etc. If unraid supported more than 30 drives (in one array or by having multiple arrays) then it would allow less Linux-skilled people to have larger storage/more disks.
  18. The whole idea is that a parity drive has to be the largest (or equal to the largest) drive in the array. In your case, if you wanted dual parity (and didn't want to buy more drives) then you would need to make both of the 10TB drives parity and use the 1TB and 4TB for data. However, this would reduce your total capacity from 15TB (10 + 4 + 1 with a single 10TB parity) to 5TB (4 + 1 with both 10TB drives as parity). To keep the same amount of storage as you currently have, you could buy a third 10TB drive for the dual parity.
  19. I know it's bad but I am probably not the only one impatient for 6.8
  20. +1 (just because it would provide more features). However, from looking through the VMware forums etc., it seems that NFS is becoming the more commonly suggested method of connecting to your storage...
  21. Did not know Krusader existed... In which case it would be solved with the docker container...
  22. I would say that this is not a needed feature. From my experience using web-based file explorers for website management (cPanel, etc.), the performance is not very good. The best option would be to use FTP, SMB, etc., as they will have much better performance than a web-based solution. Not to mention the amount of work it would entail to make a web-based version work.
  23. I would say that the best way, like you state, is to have multiple arrays: max 30 drives per array as usual, however you could spread a user share across multiple arrays. This could easily increase the max raw capacity from ~448TB to ~896TB with 2 arrays (28 data drives per array × 16TB drives).
  24. Ah, in which case that last part explains why this does not work. The agent provided uses the latest tag, and the latest tag is currently agent version 4.2, which isn't backwards compatible with 4.0.*. I would suggest updating your Zabbix server to at least 4.2 and it should work (my Zabbix server is on 4.2.6). Sorry for not making this clear in the main thread; I have updated the main thread post to state that this is agent 4.2 and that 4.2 is the minimum server version.
  25. Hello, Reading through your reply I can confirm the template I use is "Template OS Linux". However, one thing to note is that the filesystem and networking items are created under discovery (by default Zabbix runs discovery every 1 hour). If you would like to force a discovery check, you can do the following from the Zabbix WebUI: Configuration > Hosts > select the relevant host > Discovery rules. You should then see the discovery rules listed. Tick the box for each rule you want to force a check on (in your case I would just select both), then click the "Check now" button. This will force Zabbix to complete a discovery check on your host. Please do let me know if this works 😉