
Conmyster


Posts posted by Conmyster


  1. 16 hours ago, ezhik said:

    This is a solid feature and I can attest to the importance of it. TOTP can be used with Google Auth, but I would strongly recommend Authy as it allows backing up the seeds and encrypting it. There is also multi-device support.

     

    Can we have TOTP for SSH as well? https://github.com/google/google-authenticator-libpam .

     

NOTE: This will obviously have an impact on 'not-so-tech-savvy' users, but those who sleep in tinfoil hats will definitely appreciate it.

From what I have seen, anything that says "Google Auth" will also work with Authy.


  2. Just now, Squid said:

    Doubt if ACS override will help here, but it won't hurt.  The problem is that your video card also includes a USB controller which must also be passed through to the VM.  

Hmm, how would I go about passing the GPU's USB controller to the VM? That doesn't look to be an option within the GUI, so I'm guessing I will need to switch to XML view and add it?
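In XML view, each passed-through PCI function gets its own hostdev entry. A minimal sketch, assuming the card's USB controller shows up as function .2 at 01:00 (verify the real address against your IOMMU listing or lspci first):

```xml
<!-- Hypothetical hostdev entry for the GPU's USB controller;
     the 01:00.2 address is an assumption - check lspci for yours -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x2'/>
  </source>
</hostdev>
```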


  3. Hello,

     

    I am getting the following error when trying to pass through my RTX 2060 Super to my virtual machine:

     

    "internal error: qemu unexpectedly closed the monitor: 2020-02-01T14:27:59.749795Z qemu-system-x86_64: -device vfio-pci,host=0000:01:00.0,id=hostdev0,x-vga=on,bus=pci.0,addr=0x6: vfio 0000:01:00.0: group 1 is not viable Please ensure all devices within the iommu_group are bound to their vfio bus driver."

     

    Please see image of my iommu groups:

[screenshot attached]

     

    Virtual Machine configuration:

[screenshot attached]

     

Looking at some other posts, it seems I will need to turn on the PCIe ACS override.

     

I wanted to see if anyone had any other suggestions, as I'm currently running a parity sync (I'm adding a second parity drive) and can't reboot at the moment.
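Since I can't reboot mid-sync, at least I can confirm from a terminal what else shares the GPU's group. A sketch using the standard sysfs layout (lspci from pciutils assumed present):

```shell
# List each PCI device with its IOMMU group (sysfs layout is standard;
# lspci comes from the pciutils package)
for d in /sys/kernel/iommu_groups/*/devices/*; do
  g=${d#*/iommu_groups/}; g=${g%%/*}
  printf 'IOMMU group %s: ' "$g"
  lspci -nns "${d##*/}"
done
```

Everything listed in the same group as 01:00.0 has to be bound to vfio-pci (or the group split with the ACS override) before passthrough will work.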

     


  4. 2 minutes ago, BRiT said:

/var/log/ is in RAM.

    It will all be removed on reboots.

Since it's in memory, the more you store the more RAM it uses. If you don't have enough memory it will lead to OOM (Out of Memory) errors, and the kernel will pick more or less at random which programs to kill.

Thought as much; one of the scripts has been running since 8:00 this morning and its log is at 7.2KB.

     

    Will need to watch it over the next 7 days and see how large the files and rotated logs get.

     

All the script stores in the log is when the move starts and when it finishes, with how long it took in seconds, so two lines. The script runs every 10 minutes unless a move is already in progress, so it has the potential to write two lines every 10 minutes.
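As a rough worst-case bound on growth (the ~100 bytes per line is a guess, everything else is from the schedule above):

```shell
# Worst case: 2 lines every 10 minutes, ~100 bytes per line (assumed)
lines_per_day=$(( 2 * 6 * 24 ))               # 288 lines/day
bytes_week=$(( lines_per_day * 7 * 100 ))     # across the 7 rotated logs
echo "$lines_per_day lines/day, about $bytes_week bytes over 7 days"
```

That's on the order of 200KB for a full week, nowhere near enough to cause memory pressure in /var/log.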


  5. Hello,

     

    I have 2 scripts that automatically move files from local storage to GDrive. I currently have the logs going to a user share under /mnt/user/rclone/logs

     

I have also set up logrotate in /etc/logrotate.d to rotate the logs daily and keep 7 of them.

     

    I wanted to know if it's okay to store logs in /var/log instead?
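For reference, a minimal sketch of that daily-rotate/keep-7 setup, assuming the logs live under /mnt/user/rclone/logs as above (the filename is illustrative):

```
# Hypothetical /etc/logrotate.d/rclone entry: rotate daily, keep 7
/mnt/user/rclone/logs/*.log {
    daily
    rotate 7
    missingok
    notifempty
    compress
}
```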


  6. 3 hours ago, Squid said:

    Taken from the mover script,

    
    PIDFILE="/var/run/mover.pid"
    
    if [ -f $PIDFILE ]; then
      if ps h $(cat $PIDFILE) ; then
        exit 1
      fi
    fi
    
    echo $$ >/var/run/mover.pid
    
    .
    .
    .

     

One thing I just noticed: when you echo the current PID into the PID file, you use the full path instead of referencing $PIDFILE?

     

Equally, once the script has run, I should put rm -f $PIDFILE at the end.
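Putting both points together, a sketch of the guard with the variable used consistently and cleanup handled by a trap (the script path is illustrative):

```shell
#!/bin/bash
# PID-file guard sketch (adapted from the mover snippet; path illustrative)
PIDFILE="/var/run/myscript.pid"

# exit if the PID recorded in the file is still a live process
if [ -f "$PIDFILE" ] && ps -p "$(cat "$PIDFILE")" >/dev/null 2>&1; then
  exit 1
fi

echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT   # removes the PID file even if the script errors out

# ... actual work goes here ...
```

The trap means you don't have to remember the rm -f at every exit point.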


  7. 35 minutes ago, Squid said:

    Taken from the mover script,

    
    PIDFILE="/var/run/mover.pid"
    
    if [ -f $PIDFILE ]; then
      if ps h $(cat $PIDFILE) ; then
        exit 1
      fi
    fi
    
    echo $$ >/var/run/mover.pid
    
    .
    .
    .

     

Sorry, I'm failing to understand how exactly that if statement works.

     

I think, if I am understanding it correctly, it checks whether the PID file exists, and if it does, it compares the current PID with the PID in the PID file. However, I'm unsure what it does after that...


  8. 6 minutes ago, Squid said:

    What I usually do is

     

    • See if /var/run/scriptName.pid exists
    • If it doesn't, write the pid to that file and execute the script
• If it does, read the PID from the file and see if that PID is actually running. If it's not, then execute the script

    Hello,

     

    How would you go about doing this exactly?


  9. 8 minutes ago, Squid said:

    this

Okay, so I added the following to the start of the script; that should work, right?

    if pidof -o %PPID -x "$0" > /dev/null; then
       exit 1   # another copy of this script is already running
    fi

     

Just thought I should add that I'm planning on having two scripts: one for my TV shows and one for my movies.

     


  10. Hello,

     

I have a script which moves TV shows from local storage to my Google Drive crypt. Can someone explain how the scheduling works?

     

For example, I want to set the move script to run every 10 minutes. However, if there are a large number of files in the directory, it could take a while to finish.

     

How does User Scripts handle this? Will it just not run the script, or do I need to add some code in my script to handle this?

     

    Hope this makes sense.


  11. I love how easy it is to add more storage capacity compared to regular RAID. I also love how much unraid has improved over the years.

     

In 2020, I would like to see multiple arrays and possibly multiple cache pools. It would be nice to have a share span those arrays too; that way someone could have 60 or more drives and store close to (or more than) 1PB in a 4U chassis.

     

     


  12. Hello,

     

UPDATE: It seems that, as I am unable to set static routes on the ISP-provided router, I can't get this working. As a temporary measure I have put the server back into the working VLAN. I have contacted my ISP to confirm whether there is actually a way to set static routes on it; if not, I will need to see if I can put it into modem mode and get a separate wireless router (such as a TP-Link) that supports static routes.

     

    I recently got a new Ubiquiti EdgeSwitch 48 Lite and it seems that other subnets are unable to access the internet.

     

    Example:

     

My unRAID server is on VLAN 3 with the IP 10.0.0.2, and the switch is 10.0.0.1.

     

My ISP-provided router is 192.168.0.1 and is connected to port 1 of the switch, which is part of VLAN 2 and has an IP of 192.168.0.2.

     

I have enabled routing on the switch and set up routes for each VLAN; however, it is still not working. Unsure if someone here can help?
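For what it's worth, the piece the ISP router is usually missing in this setup is a return route for the VLAN subnets. On a router that supports static routes it would be something like the following (addresses taken from the post; exact syntax varies by vendor):

```
! Send traffic for the VLAN 3 subnet back to the switch's VLAN 2 address
ip route 10.0.0.0 255.255.255.0 192.168.0.2
```

Without that, the router can deliver replies only to its local 192.168.0.0/24 network, so hosts on 10.0.0.0/24 never get answers back from the internet.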

     

    unRAID network page:

[screenshot attached]

     

    EdgeSwitch Route Table:

[screenshot attached]


  13. 15 hours ago, PhiPhi said:

I do use Krusader, and sometimes the performance is really good but inconsistent, especially when moving or copying large amounts of data, which can take many more hours than, for example, copying the same data within a Windows server running on lesser hardware.

     

Strange when you think about it: Unraid is a storage/NAS OS but doesn't have a reliable, fast and full-featured file mover. Sitting at your client unable to close the Krusader tab for 15 hours, or to shut down the client machine, is far from a good solution.

I mean, if you are comparing it to, say, a Dell storage array, then I can tell you that they don't have file-moving facilities either. Not to mention you can move/copy files with the Linux mv and cp commands.

     

Or there is Midnight Commander.
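A quick illustration of the mv/cp route, using temporary stand-in paths (on unraid the real paths would be /mnt/user/... shares):

```shell
# Stand-in paths; on unraid these would be /mnt/user/... shares
src=$(mktemp -d); dst=$(mktemp -d)
echo demo > "$src/film.mkv"
mv "$src/film.mkv" "$dst/"        # move (instant within one filesystem)
cp -a "$dst/film.mkv" "$src/"     # copy, preserving attributes
ls "$src" "$dst"
```

Run from a screen/tmux session on the server itself, this keeps going even if you close your client machine.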


  14. 2 hours ago, nfwolfpryde said:

    Thanks, that makes sense. My cache disk failed a week or so ago, and I replaced it. Must have rebuilt it wrong. I’ll experiment tonight with the fix!!

If you want to get things within your docker share back onto the cache, the best method is to set the share to "Prefer" and then run the mover; that way any files on your disks will be moved to the cache. It is then up to you whether you set the share to "Only" or leave it on "Prefer".


  15. 2 minutes ago, Ancan said:

     

No, I think there's room for improvement here. A bit worried about the lack of notifications.

Yeah, I'm unsure why it waited for the parity sync before displaying a notification about the removed disk...

     

Would need a more experienced unraid user to say why...


  16. 8 minutes ago, Ancan said:

    Parity is now green as well. Just as green as Toshiba disk on my desk here beside me. Perhaps I should post on the bug-forum instead?

     

[screenshot attached]

     

    I'll stop the array now, and replace the Toshiba with a new disk instead. Let's hope Unraid will let go of it then.

     

     

    Edit: About time!!!

     

[screenshot attached]

     

Ah, so likely due to the parity sync running.

     

Although I don't think that should be the correct behaviour from unraid...


  17. 17 minutes ago, Ancan said:

     

    Kudos on commitment for pulling a drive!

    I've got notifications set to "browser" only while I'm testing, and haven't seen anything except the "errors" popup. Disk still green in the GUI, even though it's sitting here on my desk.

I should mention that my parity is not fully synchronized yet, and was not when I pulled the disk. I'm being cruel, I know. I validate enterprise storage installations as part of my job, and am probably a bit damaged by that.

     

Ah okay, I'm unsure how it would deal with losing a drive during a parity sync. As far as I'm aware, most single-parity hardware RAID (say, RAID 5) would just die.

     

    And I don't have a system to test that with myself.


  18. 27 minutes ago, Ancan said:

    Nothing yet. Except a "*" on the disk temperature and up to 32 errors now. Still green otherwise. By now I ideally should have had a mail in my inbox and be on my way to the store for a replacement. Hmm...

     

[screenshot attached]

     

Edit: I *love* how you can just paste images from the clipboard into the forum here (OK, I somehow get double attachments when I do it, but it still 10x beats saving the image and uploading it)

If you set up the notification system under Settings > Notification Settings, then it should have notified you of the errors on the drive... I would need to test it myself though.

     

Edit: After pulling my disk 5, I got a notification (via Discord) within 5 seconds. This is what my main screen looks like too:

[screenshot attached]

     

After this I stopped the array, unassigned the device, and started the array. Then I stopped the array again, reassigned the device, and started the array.

     

    Data is now rebuilding fine.


  19. 8 minutes ago, Ancan said:

    Hi all,
    I'm trying out Unraid as a platform for my new NAS, and pulled a disk to see the impact on the array.

     

I'm a bit surprised actually, because I pulled the disk 30 minutes ago and so far nothing has happened except a popup about twenty or so read errors. You would think completely removing a disk from an array would cause some distress, but it's still green in the interface and I can browse the structure (but not download). What's the expected behaviour here, really? Because I thought I'd get a big red warning about a dead disk.

     

I haven't tested it myself; however, when removing a disk without shutting the array down, it should detect this and then emulate the disk using parity data.

     

    I would expect it to also show a notification stating something along the lines of "disk not detected"


  20. 2 hours ago, Helmonder said:

ZFS is meh... at least for now. It brings back several limitations that we currently do not have with unraid and that I really like not having, like not having to decide up front how big your pools are, and no need for same-type/same-size disks. You also need an amount of RAM per TB in your server, and that adds up quickly.

     

    My info is from a few years back so stuff might be different..

From what I am aware, there are still limitations with disk sizes and adding disks to a pool; unraid has the nice feature of mixing disk sizes.

     

So if unraid added support for multiple arrays, it would definitely be a selling point for more people to use unraid.


  21. 9 hours ago, Necrotic said:

    I am not sure how to make this work with Unraid, but Linus just talked about using GlusterFS to make multiple separate things show up as a single share.

     

Yeah, GlusterFS with ZFS pools is an option. However, people who are less skilled with Linux probably wouldn't know how to create ZFS pools via the command line and then make shares, etc.

     

If unraid supported more than 30 drives (in one array, or by having multiple arrays), then it would allow less Linux-skilled people to have larger storage/more disks.


  22. 7 hours ago, SLNetworks said:

    Alright, I understand that. And I see where the 4TB drive becomes invalid. 4 can take care of the 1, 10 can take care of 1, but 4 don't take care of 10. *buries head in cables*

The whole idea is that the parity drive has to be the largest drive (or equal to the largest) in the array.

     

In your case, if you wanted dual parity (and didn't want to buy more drives), you would need to make both of the 10TB drives parity and then use the 1TB and 4TB drives for data. This, however, would reduce your total capacity from 15TB to 5TB.

     

To keep the same usable capacity as you currently have, you could buy a third 10TB drive for the dual parity.
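The arithmetic, spelled out (drive sizes taken from the post):

```shell
# Drives from the post: 1TB + 4TB + 10TB + 10TB = 25TB raw.
# Parity must be at least as large as the largest data drive,
# so dual parity has to use both 10TB drives.
total=$(( 1 + 4 + 10 + 10 ))
single_parity=$(( total - 10 ))        # one 10TB as parity  -> 15TB usable
dual_parity=$(( total - 10 - 10 ))     # both 10TB as parity -> 5TB usable
echo "single parity: ${single_parity}TB usable, dual parity: ${dual_parity}TB usable"
```

Adding a third 10TB drive as the second parity instead keeps the full 15TB of data capacity.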