Leaderboard

Popular Content

Showing content with the highest reputation on 07/15/21 in all areas

  1. I don't see what started it. Your recovery is to reboot and go from there.
    1 point
  2. I used skylake-server with my setup but did not try anything with nested virtualization. I went bare metal hackintosh so I cannot test anything right now.
    1 point
  11. By the way... is anyone interested in an AMD Opteron OS6128WKT8EGO 6128 Socket G34 octa-core 2.0 GHz 598732-001?
    1 point
  13. It's alive! The problem was the ancient BIOS. I bought an older CPU, in this case an AMD Opteron 6128, plugged it in, and then I got a picture, could go into the BIOS at my leisure, and update the BIOS to the latest version (using FAT32 — it did not recognize exFAT). Then I swapped the CPUs back, plugged in all the RAM I have (4x8GB), and it runs. My sincere thanks to everyone who helped me here!
    1 point
  5. Definitely, I didn't notice you had dual parity, that's the way to go.
    1 point
  6. I'd take the newer processor. In addition to the much faster RAM and newer GPU, it's also cooler running and more efficient (14 nm vs the older 22 nm lithography)
    1 point
  7. Yes, you can mount it without starting the array using the UD plugin, or in any other Linux distro; in either case you just mount one of the devices and the other one will be mounted along with it.
    1 point
  8. For anyone using the x86_validate_topology patch, there's a new patch that covers a wider range of kernels (I remember I had to change this patch from Mojave to Catalina and to Big Sur). This is the new patch; it should be valid for macOS Monterey too:

     <dict>
         <key>Arch</key>
         <string>x86_64</string>
         <key>Base</key>
         <string>_x86_validate_topology</string>
         <key>Comment</key>
         <string>XLNC - Disable _x86_validate_topology - 10.13/10.14/10.15/11.0/12.0</string>
         <key>Count</key>
         <integer>0</integer>
         <key>Enabled</key>
         <true/>
         <key>Find</key>
         <data></data>
         <key>Identifier</key>
         <string>kernel</string>
         <key>Limit</key>
         <integer>0</integer>
         <key>Mask</key>
         <data></data>
         <key>MaxKernel</key>
         <string>21.99.99</string>
         <key>MinKernel</key>
         <string>17.0.0</string>
         <key>Replace</key>
         <data>ww==</data>
         <key>ReplaceMask</key>
         <data></data>
         <key>Skip</key>
         <integer>0</integer>
     </dict>

     Also, the force-Penryn patch was recently modified to be compatible from Big Sur 11.3+ through Monterey. For Big Sur 11.3+/Monterey 12:

     <dict>
         <key>Arch</key>
         <string>x86_64</string>
         <key>Base</key>
         <string>_cpuid_set_info</string>
         <key>Comment</key>
         <string>algrey - _cpuid_set_cpufamily - Force CPUFAMILY_INTEL_PENRYN - 11.3 + / 12.0</string>
         <key>Count</key>
         <integer>1</integer>
         <key>Enabled</key>
         <true/>
         <key>Find</key>
         <data>gD0AAAAABnU=</data>
         <key>Identifier</key>
         <string>kernel</string>
         <key>Limit</key>
         <integer>0</integer>
         <key>Mask</key>
         <data>//8AAAAA//8=</data>
         <key>MaxKernel</key>
         <string>21.99.99</string>
         <key>MinKernel</key>
         <string>20.4.0</string>
         <key>Replace</key>
         <data>urxP6ngx2+s=</data>
         <key>ReplaceMask</key>
         <data></data>
         <key>Skip</key>
         <integer>0</integer>
     </dict>
    1 point
  9. I think this could even help when using nextcloud and its "preview generation" for videos/photos (which I am already using). The longer I think about it the more I tend towards the i5
    1 point
  10. Depends... Both CPUs are very old - but I would take the "more cores" CPU 👍 RAM speed is not that relevant on Intel CPUs and it doesn't matter for Unraid... But that's just my opinion
    1 point
  11. Yup, now that I've RE-read the definitions of the Share methods, that makes sense, and the goal of Highwater (spinning up the disks as little as possible) makes sense too. Thanks for that. Looks like I sent myself on a wild goose chase.
    1 point
  12. I think so: I don't have any AMD hardware to test with, but for nested virtualization in macOS you would need VT-x, and AMD has AMD-V. If anyone has an AMD CPU, they can try to emulate a "Skylake-Client" CPU, but I wouldn't bet it will work (I think nested virtualization inside a VM only works with CPU passthrough, not with an emulated CPU). I know @david279 had a similar setup; if he's around, maybe he tried and can confirm this with his AMD CPU.
    1 point
  13. That is not what should show up. It should be the "Document Server is running" page. I don't use SWAG, so I can't help you there; you should ask in the proxy's support thread.
    1 point
  14. Hi JorgeB, thank you so much for taking the time to review my logs, I really appreciate it. This is 100% a classic case of dumb user error! I can confirm it's all working well now, once I gave the SATA controller and the disks back to Unraid..... thank you again!
    1 point
  15. Moved back to latest today, latest build seems to be all good!
    1 point
  16. Don't forget to replace all the cables, even if they seem compatible with the new PSU, they may be wired differently... or simply one of them might have been a contributing factor to the PSU failure
    1 point
  17. Thanks for the suggestion! I will have a look at how we can link this in.
    1 point
  18. Loving System AutoFan, but have one small issue when rebooting. I've tried searching to see if this has been answered, so apologies if it has.

     When I reboot the server, the drives typically get different assignments. Example: Drive1 in the array may have been '/dev/sdd', then after a reboot it's '/dev/sde'. Unfortunately, since AutoFan uses these assignments in the exclude list, that exclude list will inevitably be wrong after a reboot. Case in point: I have 4 fans mapped to exclude all but the array drives, but after a reboot there's at least one excluded drive that swaps its assignment with an array drive. The last time I rebooted, the exclusion list swapped the cache drive with Disk1, which caused the fans to stop even while Disk1 was still spun up.

     At first I thought maybe it would be possible to keep drive assignments the same after a boot, but apparently it's not possible - and then I wondered whether the exclude list in AutoFan could use either the drive name or serial number instead, so that it was less affected by a reboot. If I've made any sense, feedback would be appreciated. Is there something I'm not doing right, or is this just the way it is? Thanks!
    1 point
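The shuffle described above happens because /dev/sdX names are handed out at boot in probe order, while the symlinks under /dev/disk/by-id/ embed the model and serial number and so stay stable across reboots. A minimal sketch of resolving such a stable name to the current kernel device — the helper name and the BYID_DIR override are my own, added only so the logic can be tried outside a real server:

```shell
# Resolve a stable disk identifier (model+serial) to its current /dev/sdX
# name. On a live system the symlinks live in /dev/disk/by-id/; BYID_DIR
# exists only so the logic can be exercised elsewhere.
resolve_disk() {
    # $1 = stable id, e.g. ata-WDC_WD80EFAX-68KNBN0_VAG12345 (hypothetical)
    local byid_dir="${BYID_DIR:-/dev/disk/by-id}"
    basename "$(readlink -f "$byid_dir/$1")"
}
```

Before a reboot this might print `sdd` and afterwards `sde`, while the by-id name it was given never changes — which is why keying an exclude list on serial numbers would survive reboots.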
  19. Thanks so much for this! Yes, it appears to fix the issue with Cloudflare proxy... no more kicking to the login screen with proxy enabled. Much appreciated!
    1 point
  20. Well, I tried to reboot my server to see if that would fix it. Now the Calibre-Web docker is completely missing and I have to re-install it. *le sigh*
    1 point
  21. I've been using Toshiba Enterprise Capacity drives since around Nov. 2020 and have no complaints so far. Everything is fine. Before putting a drive into service, I write it full once and then check it with a verify: all of them have been error-free and performant so far. And the fact that they also have power-loss protection (persistent write cache) makes them a bit more attractive in my eyes. I'm currently running several MG07ACA14TE, one MG07ACA12TE, and two MG08ACA16TE.
    1 point
  22. 1 point
  23. Nested virtualization in macOS is only possible with an Intel CPU (macOS doesn't support nested virtualization on AMD). You need to enable it in two steps:

     1. Add kvm_intel.nested=1 to the append line in your syslinux configuration, so that it looks like:

        append kvm_intel.nested=1 initrd=/bzroot

        Then reboot Unraid.

     2. Set up your macOS virtual machine for CPU passthrough: if you have the custom qemu args at the end of the VM XML with Penryn emulation, nested virtualization will not work (it may work with other, newer emulated Intel CPUs; I never tested). For CPU host passthrough you need:

        <qemu:commandline>
        ...
        ...
        ...
        <qemu:arg value='-cpu'/>
        <qemu:arg value='host, ....
        </qemu:commandline>
        </domain>

     Note that macinabox emulates a Penryn CPU, so if you didn't change the last lines of the XML to set CPU passthrough, nested virtualization won't work. The Intel Core 2 Duo lacks EPT (extended page tables) and UG (unrestricted guest) support for virtualization, which is why Penryn doesn't work.
    1 point
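After the steps above, a quick way to confirm that kvm_intel actually picked up the nested flag is to read its sysfs parameter. A small sketch — the helper name and the path override are mine; the default path is the stock one for the kvm_intel module, which prints Y on newer kernels and 1 on older ones:

```shell
# Report whether nested virtualization is enabled for kvm_intel.
# $1 optionally overrides the parameter file so the check can be
# exercised outside a real virtualization host.
nested_enabled() {
    local p="${1:-/sys/module/kvm_intel/parameters/nested}"
    case "$(cat "$p" 2>/dev/null)" in
        Y|1) echo enabled ;;
        *)   echo disabled ;;
    esac
}
```

Run it with no argument on the Unraid host after rebooting with kvm_intel.nested=1 in place; if it reports disabled, the append line didn't take effect.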
  24. You save about 4 W, i.e. roughly €8 per year, so it would only pay for itself after 7 years. It only really gets interesting once the price drops; it's still very new on the market. At under €90 I'd say it would be a must-have. It was already available for €100 at one point. Maybe wait?! The cache is fully usable as a single drive; nothing gets partitioned off there. You see the VM files and also the files you upload to the server. After x hours the Mover kicks in and moves the files from the cache to the HDD array, while the VM files stay on the cache drive. For individual shares you can decide yourself whether the files should stay on the cache or be moved. For example, I keep my MP3 collection permanently on the cache so there's no delay when playing via Sonos and it's as power-efficient as possible.
    1 point
  25. I'm having trouble with autofan as well: everything seems to be working fine, but even though I exclude the SSD, it's still being reported as the device with the highest temperature, and thus the fans stay at 100% at all times. The device in question is in a pool with other SSDs.
    1 point
  26. What benefit does this have over using Unraid's built in implementation? https://forums.unraid.net/topic/61413-ntpd-server-windows-client/
    1 point
  27. try occ files:scan --all from within the nextcloud docker's console
    1 point
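The same command can also be run from the unRAID host without opening the container console, via docker exec. A hedged sketch — the container name "nextcloud" and the container user "abc" (used by the linuxserver image) are assumptions; the official Nextcloud image runs occ as www-data instead:

```shell
# Run Nextcloud's file scanner from the host. Container name and user
# are assumptions for the linuxserver image; adjust for your setup.
cmd='docker exec -u abc nextcloud occ files:scan --all'
if command -v docker >/dev/null 2>&1 &&
   docker ps --format '{{.Names}}' 2>/dev/null | grep -qx nextcloud; then
    $cmd
else
    echo "nextcloud container not reachable; run occ from its console instead"
fi
```

The scan can take a while on a large library, since it walks every user's files and refreshes the file cache table.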
  28. With that attitude, you ain't getting any help. Not on here, not in real life.
    1 point
  29. The instructions do not cover shrinking an array to remove a failed disk; they are focused on removing a functional drive from an array. The failed drive is simulated by parity plus all of the other disks in the array working together. Parity itself is not a backup. You may have already gone too far with the steps to be able to recover the data on the failed disk. But I'm not sure exactly how far you went, so there may yet be a way to recover. And even if you went farther, there still may be a way to put the array back as it was and have it simulate the failed disk. What you should have done (and anyone who is finding this thread for instructions) is to copy the data from the failed disk (which unRAID would be simulating, and it would have appeared as if it were present) to other non-failed disks in your array that had available space. Once the data was copied, you would have been able to do the new config, redefine the array omitting the failed disk, and rebuild parity. The net effect is you would have kept all of your data while using fewer physical drives; the amount of available space would have dropped by the size of the failed disk. Give some more details on the current state and someone may be able to assist.
    1 point
  30. You will need to start the NTP daemon manually; it isn't running by default: /etc/rc.d/ntpd start
    1 point
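Since a manual start won't survive a reboot, the command can be appended to the flash drive's go script, which unRAID runs at every boot. A sketch under that assumption — the helper name and the path override are mine; /boot/config/go is the stock location:

```shell
# Make ntpd start on every boot by appending the start command to the
# go script, skipping the append if the line is already there.
enable_ntpd_at_boot() {
    # $1 = path to the go script (stock unRAID: /boot/config/go)
    local go_file="${1:-/boot/config/go}"
    grep -qxF '/etc/rc.d/ntpd start' "$go_file" 2>/dev/null ||
        echo '/etc/rc.d/ntpd start' >> "$go_file"
}
```

The grep guard makes the helper idempotent, so running it twice doesn't leave duplicate lines in the go script.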
  31. Hi guys, this is a tutorial on how to set up and use Duplicati for encrypted backups to common cloud storage providers. You will also see how to back up from your unRAID server to another NAS/unRAID server on your network, some of the common settings in Duplicati, and how to restore backups. Hope it's useful: How to easily make encrypted cloud or network backups using Duplicati
    1 point