gyto6

Posts posted by gyto6

  1. 22 hours ago, Holle88 said:

    unraid-porn (speaking in phrases of food-porn),

    i downgraded my server today from a 3700x and 32GB RAM with GTX1070 to my mini-ITX J3455 for energy saving purpose.

    (not used the streaming machine at all and my 2 websites and 10 docker containers can be easy driven be 4 cores with not SMT)

    Idle power goes down from 130W to 50W  (drives not spinning)

    🤣

     

    That’s nice indeed! With a lot of work, I was able to drop the power consumption to 70W. It’s really tricky, but interesting!

     

    However, I forgot that I had to finish this post, so thanks for the reminder ;)

    • Like 1
  2. Tomorrow morning, after the coffeemaker has finished booting, so at 1 PM.

     

    No expected delay has come to my ears. As for the content, I wish that more about ZFS cache/special drives and properties were integrated into the GUI.

     

    It's impossible for untrained users to get straight through ZFS's specificities, so links to solid ZFS documentation should be provided from the Web GUI.

  3. 5 minutes ago, DCJona said:

    ok but how can i update the plugin without strating the array and access? 

     

    From SSH or physically on your server:
     

    plugin remove nameOfplgFile.plg

     

    The default folder is config/plugins on your OS drive.

     

    Then reboot your server and reinstall fresh copies of the plugins.
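    A minimal sketch of the steps above, assuming SSH access and that the Unraid flash drive is mounted at /boot (nameOfplgFile.plg is a placeholder file name):

    ```shell
    # Remove the broken plugin through Unraid's plugin manager
    plugin remove nameOfplgFile.plg

    # If the plugin manager itself is unusable, the .plg file can be
    # deleted by hand; on Unraid the flash drive is mounted at /boot
    rm /boot/config/plugins/nameOfplgFile.plg
    ```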

  4. 13 minutes ago, DCJona said:

    do anyone has issue accessing to the gui ? i can't access anymore .. 

    Quote

    Obsolete/Broken plugins

    There are a few plugins which are known to be incompatible with Unraid 6.12, and upon boot will not be installed. You will get a notification for each plugin that is affected and can review the list by going to Plugins/Plugin File Install Errors.

    disklocation-master version 2022.06.18 (Disk Location by olehj, breaks the dashboard)

    Update this plugin before upgrading the OS

    plexstreams version 2022.08.31 (Plex Streams by dorgan, breaks the dashboard)

    Update this plugin before upgrading the OS

    corsairpsu version 2021.10.05 (Corsair PSU Statistics by Fma965, breaks the dashboard)

    Update this plugin before upgrading the OS

    gpustat version 2022.11.30a (GPU Statistics by b3rs3rk, breaks the dashboard)

    Switch to GPU Statistics by SimonF

    ipmi version 2021.01.08 (IPMI Tools by dmacias72, breaks the dashboard)

    Switch to IPMI Tools by SimonF

    nut version 2022.03.20 (NUT - Network UPS Tools by dmacias72, breaks the dashboard)

    Switch to NUT - Network UPS Tools by SimonF

    NerdPack version 2021.08.11 (Nerd Tools by dmacias72)

    Switch to NerdTools by UnRAIDES

    upnp-monitor version 2020.01.04c (UPnP Monitor by ljm42, not PHP 8 compatible)

    ZFS-companion version 2021.08.24 (ZFS-Companion by campusantu, breaks the dashboard)

    Some of the affected plugins have been taken over by different developers, we recommend that you go to the Apps page and search for replacements. Please ask plugin-specific questions in the support thread for that plugin.

     

  5. 15 minutes ago, dopeytree said:

    Ah ok so best to backup to another disk (perhaps in array) or other pool. Then wipe the cache pool. Then set up as a new pool with all drives present & ready to go? Then copy data back

    Out of interest has anyone done any performance tests of btrfs mirror pool vs zfs mirror pool. 2x nvme ssd's.

    Concerning this situation, you can simply wipe a disk and set it up as a single-disk ZFS pool. Then copy your data back onto it, wipe your second disk, and attach it as a mirror. Your zpool will then be a two-device mirror.

     

    As for performance, don't expect ZFS to speed up your NVMe drives. Since you're supposed to store videos on them, the "simplest" tweaks to use are the "ashift" and "recordsize" parameters, which apply respectively to the zpool and the dataset.

    ZFS's only benefit here is its ARC read-cache mechanism, so if you're editing your videos, there's no gain.
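    The steps above can be sketched with plain ZFS commands; pool, dataset, and device names (tank, /dev/sdb, /dev/sdc) are placeholders:

    ```shell
    # Create a single-disk pool with 4K sectors (ashift=12)
    zpool create -o ashift=12 tank /dev/sdb

    # A large recordsize suits big sequential video files
    zfs create -o recordsize=1M tank/videos

    # ...copy the data back, then attach the second disk:
    # the vdev becomes a two-way mirror and resilvers automatically
    zpool attach tank /dev/sdb /dev/sdc
    ```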

     

    • Like 1
  6. 15 hours ago, dopeytree said:

    -... ..- -  -..-.  ..-. .. .-. ... -  -..-.  ..- ..-. --- .----. ...  -..-.  .... .- ...- .  -... . . -.  -..-.  -.. .. ... -.-. --- ...- . .-. . -..  -..-.  ... - . .- .-.. .. -. --.  -.-- --- ..- .-.  ... --- -.-. -.- ...  -..-. 

    ..- ..-. --- / -.. .. -.. -. .----. - / ... - . .- .-.. / -- -.-- / ... --- -.-. -.- ... .-.-.- .-.-.- .-.-.- / .. .----. -- / .- / .... --- -... -... .. - --..-- / .. / .- .-.. .-- .- -.-- ... / .-- .- .-.. -.- / ..-. --- --- - / -. .- -.- . -.. .-.-.-

  7. 20 hours ago, JonathanM said:

    Soon™ is feeling more and more imminent. Possibly in the next few days.

     

    Perhaps it may even be time to start up the Soon™ 6.13 series thread with whatever speculations and rumours you have heard.

    .- -. -.. / -- --- .-. . / .--. --- ... - ... / .-- .. - .... / -- --- .-. ... . .----. ... / -.-. --- -.. . -.-.--

    • Like 1
  8. Hi,

     

    A simple question, as advertised in the title: does the Unraid Connect IPv6 link actually work?

    I can access my server through its IPv6 address and dedicated port, but no DNS record exists in the myunraid.net domain zone to use a dedicated IPv6 link.

    Is this an error in my config, or is the team actually handling this issue on its side?

     

    Thanks,

     

    PS: I already restarted the Unraid API and the server, and disconnected and reconnected Unraid Connect; nothing seems to work.

  9. 18 hours ago, pras1011 said:

    One day I want to get 8 x 8tb once the price drops far enough.

     

    Maybe I should ask the question that should have been written in your previous post, to keep a link with the thread's title.

    Do SSDs work with any Unraid configuration?

    1 - If you're using a ZFS pool, no trouble: ZFS manages any parity/cache operation. Leave a "dead" storage device in the array to be able to start the containers.

    2 - If you're using the SSD within the array, TRIM would break parity, so it's not functional.
     

    Quote

    Trim doesn't work with SSDs assigned to the array, so there might be some performance degradation over time, but parity doesn't break.


    AFAIK, no Unraid 6.12 release notes mentioned SSD TRIM and parity operation for disks dedicated to the array.
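    For reference, on a ZFS pool TRIM is handled by ZFS itself; a quick sketch, with "tank" as a placeholder pool name:

    ```shell
    zpool set autotrim=on tank   # enable automatic TRIM on the pool
    zpool trim tank              # or run a manual, on-demand TRIM
    zpool status -t tank         # show per-device TRIM status
    ```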

  10. Just now, SimonF said:

    Unlikely for kernel to be bumped to 6.2 during rc phase.

    Absolutely. It's not worth installing the new kernel just so ZFS users can run the latest revision.
    Unraid's stability comes above all.

  11. 2 hours ago, jonathanselye said:

    Kernel 6.2 is already stable even Ubuntu's next version is going to release with it as its kernel, and it has full support for intel arc, can we possibly see this on the stable release Soon™ or possibly next RC?

    Thanks!

    Only a fortune teller can answer this question. Whatever happens, it will come sooner or later.
    I think Limetech's team has enough work for now, and finishing the new Unraid release takes priority over the new kernel, which might be unstable with their solution.

    • Like 1
  12. 12 minutes ago, Kilrah said:

    You didn't read the notes that mention the list of incompatible plugins that need to be removed/installed beta versions of.

    I did at the time and removed all of them. The white screen appeared suddenly after importing my ZFS pool, and I've never been able to access the GUI since.

    Anyway, I've abandoned the idea of testing the RC release.

     

  13. 10 hours ago, xaositek said:

    For anyone curious - test/beta builds are still happening - 6.12.0-rc2.11 just came out

     

    Waiting for the official release... I'm stuck with a white screen after logging into the WebGUI. Restarting Nginx and PHP doesn't change a thing (error 500). 🥲

  14. Hi everyone,

     

    As I added a picture of my new UPS, you'll notice that the picture is, for now, worse than it should be, as I had to take it with my mobile phone. I hope to take a better shot with my camera soon.

     

    Anyway, I'd be glad to answer some questions, but they'd mostly be answered within the 2nd and 3rd posts. I simply want to fully explain each point, so that neophytes can understand what a server implies, in hardware as in software, detailed with pleasant Visio files.

    But it will take time, and I'm going through a really bad time in my life... I'm no longer located where my server sits, and I'm doing what I need to heal mentally...
    I'd be glad to come back soon to fill in and finish this humble but passionate description of my server.

  15. 1 minute ago, JorgeB said:

    Yes, you can have as many stripped mirrors as you want, and each mirror can be up to 4 way mirror.

    Thanks, I know what to do tonight! Improve my English skills. 😉

    • Haha 5
  16. 6 minutes ago, JorgeB said:

    Do you mean you are using a 6 way mirror? i.e., just using the capacity of one disk?

     

     

    My English might be broken then. As specified in my signature, I'm using the capacity of 6 disks divided by two, i.e. "3 MIRRORS STRIPED".

    Is it usable with the release?

  17. Sniff! I can't taste this RC release, due to having 6 disks in mirror vdevs instead of 4.
     

    Looking forward to the release delivering support for more disks, @limetech. The release notes about it are nevertheless pretty solid; I didn't expect ZFS to be supported this way. Sounds promising! 😁

  18. 9 hours ago, 0edge said:

    Got home to a false degraded pool error and lots of "ereport.fs.zfs.deadman" in the logs. Restarted and everything is fine. Also been having a lot of the ACPI error. This is a brand new 12700k system, any input?

     

    Mar 14 15:06:22 GUNRAID kernel: ACPI BIOS Error (bug): Failure creating named object [\_SB.PC00.PEG1.PEGP._DSM.USRG], AE_ALREADY_EXISTS (20220331/dsfield-184)

    Mar 14 15:06:22 GUNRAID kernel: ACPI Error: AE_ALREADY_EXISTS, CreateBufferField failure (20220331/dswload2-477)

    Mar 14 15:06:22 GUNRAID kernel: ACPI Error: Aborting method \_SB.PC00.PEG1.PEGP.\_DSM due to previous error (AE_ALREADY_EXISTS) (20220331/psparse-529)

    Mar 14 15:06:47 GUNRAID zed[21232]: Diagnosis Engine: error event 'ereport.fs.zfs.deadman'

    Mar 14 15:06:47 GUNRAID zed[21232]: Diagnosis Engine: error event 'ereport.fs.zfs.deadman'

    Did you try putting the server in some rice? 😁

    Sorry, that's weird to me. All I can say, quoting a Google search, is that "it sounds like an I/O error".

    Maybe your disks spun down and ZFS didn't expect that?

  19. 17 hours ago, professeurx said:

    hi thank you for your answer, I have made several disconnections connection via "my server" and always the same error, I have also restarted my unraid several times but nada

    You're welcome.

    Sadly, I ran into the trouble several times throughout the day, so there may be many operations going on on Limetech's servers' side, or a new update within the plugin causes the failure we're witnessing.