
Posts posted by Conmyster


  1. 15 hours ago, PhiPhi said:

    I do use Krusader, and sometimes the performance is really good but inconsistent, especially when moving or copying large amounts of data, which can take many more hours than, for example, copying the same data within a Windows server running on lesser hardware.

     

    Strange when you think about it: Unraid is a storage/NAS OS, but it doesn't have a reliable, fast, full-featured file mover. Sitting at your client, unable to close the Krusader tab or shut down the client machine for 15 hours, is far from a good solution.

    I mean, if you are expecting it to be equivalent to, say, a Dell storage array, then I can tell you that those don't have file-moving facilities either. Not to mention you can move/copy files with the Linux mv and cp commands respectively (see the sketch below).

     

    Or there is Midnight Commander.
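    A rough sketch of the mv/cp approach: a long move can be run over SSH inside a screen session, so it keeps going after you close your client machine (the paths and session name are placeholders, and screen may need installing first, e.g. via the NerdPack plugin):

        # start a named session, run the move, then detach with Ctrl-A then D
        screen -S bigmove
        mv /mnt/disk1/Media /mnt/disk2/Media
        # reattach later with: screen -r bigmove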


  2. 2 hours ago, nfwolfpryde said:

    Thanks, that makes sense. My cache disk failed a week or so ago, and I replaced it. Must have rebuilt it wrong. I’ll experiment tonight with the fix!!

    If you want to get the contents of your docker share back on the cache, the best method is to set the share to "Prefer" and then run the mover; that way any of its files on the array disks will be moved to the cache. It is then up to you whether you set it to "Only" or leave it on "Prefer".
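    To check where the share's files actually live before and after running the mover, something like this over SSH works (assuming the share is named "docker"; adjust to your setup):

        # list every location holding part of the docker share
        ls -d /mnt/disk*/docker /mnt/cache/docker 2>/dev/null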


  3. 2 minutes ago, Ancan said:

     

    No, I think there's room for improvement here. A bit worried about the lack of notifications.

    Yeah, I'm unsure why it waited for the parity sync before displaying a notification about the removed disk...

     

    It would take a more experienced Unraid user to see why...


  4. 8 minutes ago, Ancan said:

    Parity is now green as well. Just as green as the Toshiba disk on my desk here beside me. Perhaps I should post on the bug forum instead?

     

    [screenshot attached]

     

    I'll stop the array now, and replace the Toshiba with a new disk instead. Let's hope Unraid will let go of it then.

     

     

    Edit: About time!!!

     

    [screenshot attached]

     

    Ah so likely due to the parity sync running.

     

    Although I don't think that should be the correct behaviour from Unraid...


  5. 17 minutes ago, Ancan said:

     

    Kudos on the commitment for pulling a drive!

    I've got notifications set to "browser" only while I'm testing, and haven't seen anything except the "errors" popup. Disk still green in the GUI, even though it's sitting here on my desk.

    I should mention that my parity is not fully synchronized yet, and was not when I pulled the disk. I'm being cruel, I know. I validate enterprise storage installations as part of my job, and am probably a bit damaged by that.

     

    Ah okay, I'm unsure how it would deal with losing a drive during a parity sync. As far as I'm aware, most single-parity hardware RAID (say, RAID 5) would just die.

     

    And I don't have a system to test that with myself.


  6. 27 minutes ago, Ancan said:

    Nothing yet. Except a "*" on the disk temperature and up to 32 errors now. Still green otherwise. By now I ideally should have had a mail in my inbox and be on my way to the store for a replacement. Hmm...

     

    [screenshot attached]

     

    Edit: I *love* how you can just paste images from the clipboard into the forum here (OK, I somehow get double attachments when I do it, but it beats saving the image and uploading it ten times over).

    If you set up the notification system under Settings > Notification Settings, then it should have notified you of the errors on the drive... I would need to test it myself though.

     

    Edit: After pulling my disk 5, I got a notification within 5 seconds (via Discord). This is what my main screen looks like too:

    [screenshot attached]

     

    After this I stopped the array, unassigned the device, and started the array. Then I stopped the array again, reassigned the device, and started the array.

     

    Data is now rebuilding fine.


  7. 8 minutes ago, Ancan said:

    Hi all,
    I'm trying out Unraid as a platform for my new NAS, and pulled a disk to see the impact on the array.

     

    I'm a bit surprised actually, because I pulled the disk 30 minutes ago, and so far nothing has happened except a popup about twenty or so read errors. You would think completely removing a disk from an array would cause some distress, but it's still green in the interface and I can browse the structure (but not download). What's the expected behaviour here, really? Because I thought I'd get a big red warning about a dead disk.

     

    I haven't tested it myself; however, when a disk is removed without the array being shut down, Unraid should detect this and then emulate the missing disk using parity data.

     

    I would expect it to also show a notification stating something along the lines of "disk not detected".


  8. 2 hours ago, Helmonder said:

    ZFS is meh... At least for now. It brings back several limitations that we currently do not have with Unraid and that I really like not having... like not having to decide how big your pools are, and no need to have same-type/same-size disks. You also need an amount of RAM per TB in your server, and that adds up quickly...

     

    My info is from a few years back so stuff might be different..

    From what I am aware, there are still limitations around disk sizes and adding disks to a pool. Unraid does have the nice feature of mixing disk sizes.

     

    So if Unraid added support for multiple arrays, it would definitely be a selling point for more people to use Unraid.


  9. 9 hours ago, Necrotic said:

    I am not sure how to make this work with Unraid, but Linus just talked about using GlusterFS to make multiple separate things show up as a single share.

     

    Yeah, GlusterFS on top of ZFS pools is an option. However, for people who are less skilled with Linux, I don't think they would know how to create ZFS pools via the command line and then make shares, etc.
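    Just to illustrate what that involves (a rough sketch only; the pool name, layout, and device names are made up):

        # create a single-parity raidz pool from four disks, then a dataset on it
        zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
        zfs create tank/media

    And on top of that you would still need to export the dataset over SMB or NFS yourself.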

     

    If Unraid supported more than 30 drives (in one array, or by having multiple arrays), it would allow less Linux-skilled people to have larger storage/more disks.


  10. 7 hours ago, SLNetworks said:

    Alright, I understand that. And I see where the 4TB drive becomes invalid. 4 can take care of the 1, 10 can take care of 1, but 4 don't take care of 10. *buries head in cables*

    The whole idea is that the parity drive has to be the largest (or equal to the largest) drive in the array.

     

    In your case, if you wanted dual parity (and didn't want to buy more drives), you would need to make both of the 10TB drives parity and then use the 1TB and 4TB drives for data. This, however, would reduce your total capacity from 15TB to 5TB.

     

    To keep the same amount of storage as you currently have, you could buy a third 10TB drive for the dual parity.
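    To spell out the arithmetic:

        Current (single parity):   10TB parity | 10TB + 4TB + 1TB data = 15TB usable
        Dual parity, same drives:  10TB + 10TB parity | 4TB + 1TB data = 5TB usable
        Dual parity + third 10TB:  10TB + 10TB parity | 10TB + 4TB + 1TB data = 15TB usable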


  11. +1 (Just because it would provide more features)

     

    However, from looking through the VMware forums etc., it seems that NFS is becoming the more commonly suggested method of connecting to your storage...


  12. 6 minutes ago, itimpi said:

    For those who want a GUI-based file manager hosted on Unraid, the Docker/Krusader containers already provide a solution that performs better than a browser-based version.

    Did not know Krusader existed...

     

    In which case it would be solved with the docker container...


    I would say that this is not a needed feature. From my experience using web-based file explorers for website management (cPanel, etc.), the performance is not very good.

     

    The best option would be to use FTP, SMB, etc., as they will have much better performance than a web-based solution. Not to mention the amount of work it would take to make a web-based version work well.


  14. On 9/10/2019 at 1:23 AM, Xaero said:

    This feature would need to be implemented cautiously. Parity check times for large volumes with many disks are already high enough. And with the current parity/array ratio limitations, anything beyond the current 28+2 is imho reckless. I'd imagine we would see multiple arrays and array pooling before we would see a larger configuration. I'd rather see the 28+2 changed to be more flexible with additional parity disks; and multiple array and array pooling myself, as it would be a much more flexible system overall. You could also do simultaneous parity checks on the multiple arrays; and the pooling would make everything still appear as one "big logical volume"

    I would say that the best way, as you state, is to have multiple arrays: max 30 drives per array as usual, but with the ability to spread a user share across multiple arrays.

     

    This could easily increase the max raw capacity from ~448TB to ~896TB with two arrays (28 data drives × 16TB each per array).


  15. 46 minutes ago, Sic79 said:

     

    Ok, thanks for helping.

     

     

    I checked the settings now under discovery rules, and both of them say "Not supported". There is an info box explaining why it's not working; it says "Cannot find the "data" array in the received JSON object."

     

    Does that error say something to you?

     

    Edit: My Zabbix version is 4.0.8

    Ah, in that case that last part (your Zabbix version) explains why this does not work.

     

    The agent provided uses the latest tag, which is currently agent version 4.2, and 4.2 isn't backwards compatible with 4.0.*.

     

    I would suggest updating your Zabbix server to 4.2 at minimum and it should work. (My Zabbix server is on 4.2.6.)
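    If you want to double-check the versions on both ends, the agent binary reports its own version (the server's version is shown in the footer of the Zabbix web UI):

        # run inside the agent container
        zabbix_agentd -V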

     

    Sorry for not making this clear in the main thread; I have updated the main thread post to state that this is agent 4.2 and to note the minimum server version.


  16. 50 minutes ago, Sic79 said:

    Hi, just tried this out but can't seem to get my disks visible in Zabbix. Would also like it to monitor the network on my unRAID server, but that is not showing either. Everything else that is important seems to be monitored nicely though.

     

    I followed your instructions for adding the disks and made sure the docker was privileged.

    Maybe I use the wrong template (Template OS Linux); which template do you use in Zabbix?

     

    Thanks for the docker, great to finally have a zabbix agent in unRAID

    Hello,

     

    Reading through your reply, I can confirm the template I use is "Template OS Linux".

     

    However, one thing to note is that filesystem and network monitoring are handled under discovery (by default Zabbix runs discovery every hour).

     

    If you would like to force a discovery check, you can do the following from the Zabbix web UI:

     

    Configuration > Hosts > Select relevant host > Discovery rules

     

    You should then see the below:

    [screenshot: Discovery rules page]

     

    You can then tick the box for each rule you want to force-check (in your case I would just select both).

     

    Once selected, click the "Check now" button. This will force Zabbix to run a discovery check on your host.
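    You can also query the agent's discovery keys directly from the Zabbix server, which is a quick way to see what the agent is actually returning (replace the IP with your Unraid host's address):

        # test the filesystem and network discovery keys against the agent
        zabbix_get -s 192.168.1.10 -k vfs.fs.discovery
        zabbix_get -s 192.168.1.10 -k net.if.discovery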

     

    Please do let me know if this works 😉


  17. Hello,


    I am looking at the backup solutions available and wanted to know which services you use or think are the best.

     

    I have looked at CrashPlan PRO and Google (using rsync) so far. The only issue I foresee is that I only have 35Mbps upload, and by my calculations, backing up my current data (~14TB) will take 41 days or more at that speed.
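    For anyone checking the maths, a back-of-the-envelope estimate (ignoring protocol overhead and downtime):

        # 14TB is roughly 112,000,000 megabits; at 35Mbps:
        echo $(( 112000000 / 35 / 86400 ))   # prints 37, i.e. ~37 days of continuous upload

    With some overhead and interruptions, 41 days or more is a realistic figure.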


  18. 5 minutes ago, yanksno1 said:

    Deleted that post by accident. So I'm trying to remove some directories in my /mnt/disk1/ folder. I tried this but it gave me this response. 

     

    root@Tower:~# sudo rm /mnt/disk1/TV

    rm: cannot remove '/mnt/disk1/TV': Is a directory

     

    Am I doing anything wrong there? These were folders that were created by the Channels DVR docker that I switched directories to. 

    Okay, I see. Are you using any kind of user shares, or just disk shares?

     

    I would be careful deleting directly off the disk if user shares are in play, as you will only remove the files held on that one disk.

     

    In any case the correct command would be:

    rm -r /mnt/disk1/TV

    As you are root you do not need to use sudo, and you need the -r flag to delete recursively (i.e. the directory, its sub-directories, and the files within).
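    Before deleting, it is worth checking whether other disks also hold part of the same folder as a user share, for example:

        # any of these paths that exist hold part of the TV folder
        ls -d /mnt/disk*/TV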


  19. 16 minutes ago, yanksno1 said:

    Thanks for the help so far. Well, that explains what happened when I tried to log in with my other user, haha. I'm definitely new at SSH! So now, trying the root user: I changed the password on that user to make sure it wasn't that. It kicked me out and made me log back in, so I know it shouldn't be the password, right? But now when I try to ssh as root I get "Permission denied, please try again". Tried looking at my System Log, but didn't see anything obvious. Anything I should look for? Attaching another screenshot just for you guys to look at to make sure I'm not doing anything wrong. Hopefully I can get it going.

     

     

    [screenshot attached]

    Just double-checking: were you ever able to SSH onto your Unraid machine?

     

    AND

     

    Is the following setting enabled (It is located in Settings > Management Access):

    [screenshot of the relevant setting in Management Access]
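    If it still refuses the password after checking that, running the SSH client in verbose mode usually shows which authentication methods are being tried and why the login fails (replace the IP with your server's address):

        ssh -v root@192.168.1.10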