Posts posted by nlash

  1. Sorry, last questions (hopefully).

     

    What's the best sequence of steps to protect the data on the cache from corruption when this happens? I've moved my cache-only shares to the array and I have appdata backups happening nightly. 

     

    I'm not familiar enough with how the cache pool system works to know what NOT to do in this scenario. Can I simply power down, replace the cable, and scrub? 
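
    After replacing the cable, something like this is what I had in mind, assuming the pool is mounted at /mnt/cache (the usual Unraid location); please correct me if any step is wrong or out of order:

    btrfs device stats /mnt/cache      # check the per-device error counters first
    btrfs scrub start -B /mnt/cache    # -B runs the scrub in the foreground and prints a summary
    btrfs scrub status /mnt/cache      # check progress/results if the scrub is run in the background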

     

     

  2. 4 hours ago, JorgeB said:

    Those errors mean that device sdb dropped offline at some point in the past, but before this last boot; to reset the fs errors, see here.

     

    Ah, resetting the errors was the step I was missing; thank you. Would sdb dropping offline be due to cables or something else?
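
    For anyone finding this later, the reset itself came down to something like the following, assuming the pool is mounted at /mnt/cache (if the linked procedure differs, follow that instead):

    btrfs device stats /mnt/cache       # show the current error counters
    btrfs device stats -z /mnt/cache    # print the counters, then reset them to zero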

  3. The last two times I've rebooted, the server has started a parity check and reported cache pool errors on start-up. The first time was after an OS upgrade (months ago); the most recent time (today) was after a CPU pin reassignment. I have a script that I grabbed from here that does hourly BTRFS checks and reports when it finds errors. Nothing has been reported since the last time this happened and I rebuilt my pool. I also have monthly BTRFS scrubs scheduled.

     

    Diagnostics attached. 

     

    Any help as to why this is happening would be appreciated. 

    unraidserver-diagnostics-20220825-1704.zip
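
    For context, the hourly check the script performs is essentially along these lines (my paraphrase rather than the exact script; it assumes the pool is mounted at /mnt/cache and uses Unraid's notify helper):

    #!/bin/bash
    # Alert if any btrfs device error counter on the cache pool is non-zero.
    # 'btrfs device stats -c' exits non-zero when any counter is non-zero.
    if ! btrfs device stats -c /mnt/cache > /dev/null; then
        /usr/local/emhttp/webGui/scripts/notify -i warning \
            -s "btrfs errors detected on cache pool" \
            -d "$(btrfs device stats /mnt/cache)"
    fi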

  4. I just added a second SSD to my existing cache to create a cache pool. Mover moved everything off of cache to the array.

     

    I installed the second drive in the pool and formatted it.

     

    Screen Shot 2022-01-19 at 7.35.52 AM.png

     

    Switching the shares back to Prefer, then running mover yields these errors:

     

    Screen Shot 2022-01-19 at 7.38.22 AM.png

     

    I can move things back to the cache manually using Unbalance, but I don't understand why Mover won't do it.

     

    Screen Shot 2022-01-19 at 7.39.56 AM.png

    Screen Shot 2022-01-19 at 7.36.38 AM.png
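
    In the meantime, the manual workaround boils down to something like this (the share and disk names are only examples; Unbalance is doing essentially the same copy for me):

    # Copy a share's files from an array disk back onto the cache pool,
    # removing each source file once it has transferred successfully.
    rsync -avh --remove-source-files /mnt/disk1/appdata/ /mnt/cache/appdata/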

  5. 1 hour ago, ljm42 said:

     

    Everything in those screenshots looks good! Only one thing I can think to check if you reload the Settings -> Management Access -> My Servers page, is "Allow Remote Access" still set to "Yes"? It is possible that you can set it up, test the connection, but not hit Apply, in which case the change will not be saved.

    It was set to "Yes", but I did No > Apply, then Yes > Apply, and now it seems to be working.

     

    Thanks!

  6. It was working for a while, but now I cannot seem to get remote access working.

     

    Port is forwarded. The check passes. It shows that it's connected to the mothership and I've since rebooted the server. 

     

    Any ideas?

     

     

    Screen Shot 2021-05-27 at 7.23.34 AM.png

    Screen Shot 2021-05-27 at 7.22.54 AM.png

    Screen Shot 2021-05-27 at 7.23.16 AM.png

  7. On 1/8/2021 at 10:50 AM, joggs said:

    Very interesting!

    I have exactly the same problem.

    I could not add more than 2 cores before the stuttering began in Windows, but I had not tested the 3,5,7 combo, which actually works, so now I have 3 working cores. Thank you for that.

    Also, macOS runs like butter with more cores for me as well.

    Any new findings on this matter, since the last post is over a year old?

     

     

    I've actually since upgraded to a 3900x. 

     

    But before the upgrade, the problem went away and I was able to add 4 cores. I don't know what updated or what changed, but something fixed it.

     

    I currently have 8 cores passed to both VMs (not simultaneously) and everything runs fine. 

     

    No clue what changed. 
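
    If it helps anyone picking core combinations, the thread-to-core pairing can be checked from the Unraid console before pinning (the VM name below is only an example):

    lscpu -e=CPU,CORE,SOCKET     # list each logical CPU against the physical core it belongs to
    virsh vcpupin "Windows 10"   # show a VM's current vCPU pinning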

  8. 7 minutes ago, alturismo said:

    OK, I tried now by adding the OpenCore and install images again; it still can't boot.

     

    So you say it works on your end: you removed the 2 entries from the VM template and the VM boots fine directly?

    In case you didn't remove the 2 entries and still have to hit Enter to boot, then be careful about removing them ... 

    My intention was a direct boot, like Big Sur for example ...

     

    I give up for now.

    Yes, I removed the other two disks. I just have the High Sierra installation, which auto-boots.
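
    For anyone else wanting the direct boot, the two entries I removed were the <disk> blocks for the OpenCore and install images in the VM's XML. A rough sketch of the edit from the console follows (the VM name is only an example; the same change can be made in the Unraid VM editor's XML view):

    virsh list --all                     # find the exact VM (domain) name
    virsh edit "Macinabox HighSierra"    # delete the two <disk> blocks pointing at the OpenCore
                                         # and install images, keeping only the High Sierra system disk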

  9. 41 minutes ago, alturismo said:

    Maybe a question about the EFI edit on High Sierra (it worked as described on Big Sur), as I would like to test with a GPU now and I'm Nvidia-only.

     

    When I walk through the video, at minute 12 it mounts the EFI partitions and copies over the files, but on High Sierra I'm missing the target partition ...

     


     

    Do I just live with it, with no way to autostart without the other partitions, or is there maybe a different way?

     

    Thanks in advance.

     

    I'm running High Sierra. I just mounted the OpenCore EFI partition, copied the EFI and NVRAM over to the EFI partition on the macOS disk, and it works and boots fine.

     

    GPU passthrough, on the other hand... if you get that working with your Nvidia GPU, please let me know how. I'm still trying to get a 1080 Ti passed through without also having to use VNC.
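
    Roughly what the copy looked like from the macOS Terminal, in case it helps (the disk identifiers below are only examples; check diskutil list first):

    diskutil list                                                    # identify the two EFI slices (usually diskXs1)
    sudo mkdir -p /Volumes/OC_EFI /Volumes/MAC_EFI
    sudo diskutil mount -mountPoint /Volumes/OC_EFI /dev/disk0s1     # EFI of the OpenCore image (example identifier)
    sudo diskutil mount -mountPoint /Volumes/MAC_EFI /dev/disk2s1    # EFI of the macOS disk (example identifier)
    sudo cp -R /Volumes/OC_EFI/* /Volumes/MAC_EFI/                   # copy the EFI and NVRAM contents across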

  10. On 12/17/2020 at 10:03 AM, SpaceInvaderOne said:

    You shouldn't need to do anything in the OpenCore config.

    You will just need to download the Nvidia drivers once booted into macOS.

    Run this in Terminal and it will download the correct Nvidia drivers for High Sierra:

    
    bash <(curl -s https://raw.githubusercontent.com/Benjamin-Dobell/nvidia-update/master/nvidia-update.sh)

     

    Yeah, nothing I do will let me use the 1080 Ti. The only way High Sierra will boot is if I use VNC. I have the GPU passed through and the web drivers installed. If I add the GPU after VNC (meaning both are added to the VM template), I can VNC into the OS, see the web drivers loaded, and see the 1080 Ti listed, but if I remove VNC, the OS will not boot.

     

    I'm kind of at a loss. I've had this GPU passed through to various High Sierra VMs and cannot get it working with this most recent one. Is anyone else out there able to successfully boot into High Sierra with an Nvidia card on this latest Macinabox?
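
    While it's booted via VNC, I'm confirming the state with the standard macOS checks (nothing Macinabox-specific, just verifying the drivers and the card are visible):

    kextstat | grep -i nvda                 # the web-driver kexts (e.g. NVDAStartupWeb) should show as loaded
    system_profiler SPDisplaysDataType      # the 1080 Ti should be listed as a display adapter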

  11. Has anyone done a recent install of High Sierra on 6.9 beta 35?

     

    I'm getting random hard resets within the VM for no discernible reason. Sometimes attempting to install packages triggers it; other times it starts a reboot loop. This is on a fresh install with no App Store updates to the OS, the Lulu firewall installed, and a 1080 Ti passed through with the Nvidia web drivers. FWIW, the OS lives on the cache drive.

     

    I had the exact same installation running rock solid from the introduction of Macinabox (on 6.8.3) and now can't get it to play nice. 

  12. I've had a High Sierra installation working with a 1070 Ti passed through for months. Today, upon booting the VM, it no longer grabs the GPU. The Nvidia web drivers are still installed, and the GPU boots fine into a Windows VM.

     

    Clover still has web drivers checked and it's still a 14,1 Mac.

     

    How can I get this VM to grab the GPU again? What are the correct steps so that I don't have to start all over?
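
    One standard thing I can still check from inside the OS is whether the web-driver NVRAM flag survived (this is the generic web-driver check, nothing specific to my setup):

    nvram nvda_drv           # should report a value of 1 if the web drivers are enabled
    sudo nvram nvda_drv=1    # set it again manually if it's missing, then reboot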
