macmanluke

Posts posted by macmanluke

  1. Actually had a bit of a look at it last night and I noticed in the syslog I'm getting:

    Sep 15 18:45:05 Vault file.activity: Starting File Activity
    Sep 15 18:45:05 Vault file.activity: File Activity inotify starting
    Sep 15 18:45:16 Vault inotifywait[29573]: Failed to watch /mnt/disk3; upper limit on inotify watches reached!
    Sep 15 18:45:16 Vault inotifywait[29573]: Please increase the amount of inotify watches allowed per user via `/proc/sys/fs/inotify/max_user_watches'.
    Sep 15 18:45:17 Vault file.activity: File Activity inotify exiting

     

    After a bit of searching I ended up increasing the inotify watches. Each time I raised it, it took longer before it exited, but eventually, when I upped it to just over 4 million, I started to get:

    Sep 13 20:46:47 Vault file.activity: Starting File Activity
    Sep 13 20:46:47 Vault file.activity: File Activity inotify starting
    Sep 15 20:49:18 Vault inotifywait[23057]: Couldn't watch /mnt/disk3: No such file or directory
    Sep 15 20:49:20 Vault file.activity: File Activity inotify exiting

     

    After that I ran out of time to keep looking into it,

    but /mnt/disk3 is there and is healthy, so I'm not sure why I'd get that error.

     

    edit:

    Playing with it again tonight, I turned off 'include cache/pools' and now I'm getting:

    Sep 16 18:46:00 Vault inotifywait[28045]: Failed to watch /mnt/disk4; upper limit on inotify watches reached!

    Seems like just increasing the watches is masking another problem? (For reference, the way I've been checking and raising the limit is below.)
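
    This is roughly how the limit can be checked and raised from the terminal (the value is just the ~4 million I mentioned; it resets on reboot unless you persist it somewhere, e.g. Unraid's /boot/config/go script - adjust if your setup differs):

    # check the current limit
    cat /proc/sys/fs/inotify/max_user_watches
    # raise it for the running system (does not survive a reboot)
    sysctl -w fs.inotify.max_user_watches=4194304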

  2. Thanks, just upgraded from 6.9.x to 6.10.3 and was having a horrible time with SMB shares on Mac (slow, crashing etc.).

    This seems to have solved it - seems like something that should be configured out of the box?

     

    edit: actually it's better (as in, working) but still seems slower than it should be, especially for a batch of small files. (A rough sketch of the kind of Mac-related Samba settings involved is below.)
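
    I'm not certain these are the exact settings the fix above uses, but the usual macOS-oriented Samba tuning looks roughly like this, appended to Unraid's SMB extras file (the /boot/config/smb-extra.conf path and the option values below are the commonly suggested ones - double-check against your own setup before applying, then restart SMB):

    # sketch: append macOS-friendly vfs_fruit options to Unraid's SMB extras file
    printf '%s\n' \
      '[global]' \
      '   vfs objects = catia fruit streams_xattr' \
      '   fruit:metadata = stream' \
      '   fruit:model = MacSamba' \
      '   fruit:posix_rename = yes' >> /boot/config/smb-extra.conf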

  3. Hello all

     

    Just upgraded my Unraid server from a B365M board + 9400 to a Z590 + 10400 (needed more expansion and it made sense to update the platform at the same time).

    Everything changed over pretty smoothly, but I've got one issue: when I boot I get display output right through the boot process, up to the point where it would normally drop you at the login terminal, but then the display just switches off.

    I'm using the iGPU and had i915 blacklisted, but removing that made no change.
    Not a big deal, it just seems strange, and I can't think of what might be causing it.

     

    Any suggestions? (A couple of basic checks I can run from SSH are in the edit below.)

     

    thanks
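
    edit: for completeness, these are the standard (nothing Unraid-specific) commands I'd use to check what the iGPU driver is doing once the local display drops out - exact output will obviously vary:

    # confirm whether the i915 driver actually loaded now that it's no longer blacklisted
    lsmod | grep i915
    # look for display/console messages around the point the screen goes blank
    dmesg | grep -iE 'i915|drm|fbcon' | tail -n 40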

  4. Hello All

    About 2 weeks ago my Unraid server crashed/rebooted on its own - the first issue I've had in around 2 years; it's normally super stable.
    It triggered a parity check, but all good otherwise.
    Then it did it again 2 days later.

    That was just before I was due to go on holidays, so I just left it running with slightly reduced load (the VM with a 3060 that mines ETH while idle, etc., turned off).

    It seemed good and went a bit over a week with no issue, but yesterday it happened again.

    I'm not familiar enough with Unraid to know where to look for logs etc. for clues - everything I've managed to find seems to be post-reboot - are there any logs that persist over a reboot? (One idea I'm looking at is sketched at the end of this post.)

    I'm partly suspecting it might be power-supply related, so I'm considering swapping that with a spare I have, but I don't like just replacing parts if I can help it.

    The server is an Intel i5 9400, 32GB RAM, 5x HDDs, dual 1TB NVMe cache, a 3060 GPU passed through to a VM, and it sits on a UPS.

     

    thanks

    Luke
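
    edit: the idea mentioned above - I believe Unraid's Settings > Syslog Server page can mirror the syslog to the flash drive and/or ship it to another machine, which is probably the cleaner route. The generic rsyslog equivalent would be roughly this (the IP is an example target that keeps logs, and the rc script path is my assumption for Unraid's Slackware base):

    # forward everything to another machine listening for syslog on UDP 514
    echo '*.* @192.168.1.50:514' >> /etc/rsyslog.conf
    # reload rsyslog so the rule takes effect (assumed Slackware-style rc script)
    /etc/rc.d/rc.rsyslogd restart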

  5. 8 hours ago, guy.davis said:

     

    Good day!  Please take a look at the log files for each Plotman plotting job.  Found in /mnt/user/appdata/machinaris/plotman/logs on Unraid.  Hope this helps!

     

    They don't seem to say anything useful - they just end without finishing.

    I had 3 finish yesterday, but it failed again last night (looks like right at the start of a plot).
    Interestingly, on both nights it looks like it stopped just after 3am.
    When I come back in the morning the web browser window has also disconnected and needs a refresh. (For reference, how I'm checking those logs is below.)
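
    Just standard shell commands against the path mentioned above - assuming the job logs end in .log; adjust the glob if yours are named differently:

    # newest plotman job log
    ls -t /mnt/user/appdata/machinaris/plotman/logs/*.log | head -n 1
    # show the last lines of that newest log, where a failed job just cuts off
    tail -n 50 "$(ls -t /mnt/user/appdata/machinaris/plotman/logs/*.log | head -n 1)"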

  6. So I set this up last night.

     

    I started it plotting and it was at around 170GB across 2 plots. This morning it had stopped plotting and there was no sign of the plots.
    The drive still has a bunch of plot files on it (still on the SSD).

    How can I find out what went wrong?

    I guess I have to manually remove the files and can't resume? (A rough cleanup sketch is at the end of this post.)


    edit: Ran a single plot today and it seemed to complete OK; it was also after the sync had completed, so not sure if that made a difference. I'll just keep them running one at a time for now, I guess.
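
    The cleanup sketch mentioned above - the temp directory here is just an example (use whatever plotting temp path you set in Machinaris/plotman), and make sure no plot job is running before deleting anything:

    # example temp dir - substitute your own plotting temp path
    TMPDIR=/mnt/cache/chia_plots_tmp
    # list the abandoned temp files first to see what would go
    find "$TMPDIR" -name '*.tmp' -ls
    # then remove them (only when no plotting job is active!)
    find "$TMPDIR" -name '*.tmp' -delete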

  7. 6 minutes ago, lnxd said:

    Thanks for fielding that question for me. For some reason, even though I have it set to email me every time there's a new post, it doesn't :P 

    Also thanks for letting me know about the procedure with Nvidia too; I have no way to test my work as I only have AMD cards. Do the overclock arguments from PhoenixMiner work for your 1070, or do you get errors?

     

    If this was XMRig, I'd say you're looking at a ~10% increase in hash power. But the performance of this container in theory should be around the same as running PhoenixMiner from a VM as long as there are no other bottlenecks (ie. pinning the CPU incorrectly).  You might save a couple of watts from not having a display connected to your GPU anyway. The only way to know for sure would be test it (easy), or research the performance impacts of using a GPU via VFIO passthrough (basically voodoo).

     

    The main reason I made this container vs. mining from an NHOS VM in the first place was stability with AMD cards; it's not affected by the vfl / vendor-reset issue. And then coincidentally it just turned out it's super convenient if you want to use the same GPU with multiple docker containers either simultaneously or separately. Which makes it arguably more convenient for Nvidia as well if you're more interested in using your GPU for Plex transcoding or one of the awesome gaming containers on CA for eg. rather than GPU intensive VM workloads.

    So maybe a question that's just as valid is whether you prefer to have your card bound to VFIO or not 😅


    Yeah, might be worth a play.

    I believe it would be bound to VFIO (the box is checked in the VM settings?) - a quick way to confirm is in the edit below.

    It's an Nvidia 3060 and it's hashing pretty much the same as it did in a dedicated box (48MH/s).

    I currently use Quick Sync for Plex, but the Nvidia would be slightly better, I'd guess.

    Also, no monitor connected, just an HDMI dummy plug.

    Funnily enough, I searched for a docker before setting up the VM and came up with nothing, so I went the VM route.
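
    edit: the VFIO check mentioned above - standard lspci commands, with 01:00.0 as an example address (use whatever the first command reports for the 3060):

    # find the GPU's PCI address
    lspci -nn | grep -i nvidia
    # see which kernel driver has claimed it; "Kernel driver in use: vfio-pci" means it's bound to VFIO
    lspci -nnk -s 01:00.0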

  8. Hello

     

    Looking to upgrade to 6.9 shortly and do the SSD fix - the cache SSD pool has written 190TB in a year...

     

    In the instructions I noticed it says to use mover to transfer the cache to the array, but onto a btrfs drive - is there any reason for this?
    My array is XFS.

    If it's required, is there any way of converting a drive without messing up the array/parity etc.?
    I currently have one drive that's empty (and excluded from all shares globally), as I moved its contents off when a drive started failing recently (since replaced, and the array rebuilt).

     

    thanks!

  9. Just set this up and have a strange issue

     

    If I use lancache I'm getting sub-2MB/s downloads in the Battle.net launcher, but if I bypass it I get my usual 15-20MB/s.

    Any suggestions?

     

    A quick test in Steam does not seem to have the same issue.

     

    thanks!

     

     

    edit: looks like it's recommended to change the slice size for Blizzard - where are the config files for this docker? (My guess at the relevant setting is below.)
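
    I haven't confirmed this against the template yet, but I believe the lancachenet/monolithic image exposes the slice size as a container environment variable (CACHE_SLICE_SIZE) rather than a config file you edit directly - the variable names, mount path and values below are my assumptions, so check the image's docs / the Unraid template:

    # sketch of running the monolithic container with an explicit slice size
    docker run -d --name lancache \
      -e CACHE_SLICE_SIZE=1m \
      -e CACHE_DISK_SIZE=500g \
      -v /mnt/user/appdata/lancache:/data/cache \
      -p 80:80 -p 443:443 \
      lancachenet/monolithic:latest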

  10.  

    On 3/16/2020 at 8:38 PM, tjb_altf4 said:

    Passkey will come eventually, I had mine turn up 12-24 hours later... servers are being crushed with requests from new folders... which is fantastic.

    Same applies with handing out WU, they simply are struggling to keep up.

    I wonder if the software trying to get a WU so often is also causing issues, as if I leave it running I get nothing, but if I stop it overnight I'll get a new WU straight away when it's restarted.

  11. Installed it yesterday and it was working fine all day

    Today I noticed it was doing nothing.

    Searching the logs I see:


    10:59:14:WARNING:WU01:FS00:Failed to get assignment from '<different ips>': No WUs available for this configuration
    10:59:14:ERROR:WU01:FS00:Exception: Could not get an assignment

    It's just repeating that. I got it to do something by restarting the docker, but it was soon doing nothing again.
    Is there something wrong (or is there actually just nothing for it to do)?