Leaderboard

Popular Content

Showing content with the highest reputation on 01/28/23 in all areas

  1. That's the joke. As with Duke Nukem Forever, "soon" has become the lighthearted way of dealing with the seemingly interminable delays between releases. Soon™️ has no time scale attached; it's some date in the future with no way to make a prediction. Even the developers don't have a hard timeline: "when it's done" is the official answer. One month is NOTHING on the historical scale of Soon™️. That said, it could happen any time now. I'm beginning to think the Unraid community is unwittingly participating in a variable-interval reinforcement schedule study: https://open.lib.umn.edu/intropsyc/chapter/7-2-changing-behavior-through-reinforcement-and-punishment-operant-conditioning/#stangor-ch07_s02_s02_t01
    4 points
  2. I was running into the common backup error related to Docker containers and came to this thread for some research. I found a lot of people who weren't stopping all their containers; that's not my case, as I am in fact stopping all of them. So I poked around a bit more and did some testing. For the record, here are the errors I ran into:

[27.01.2023 05:04:31] Verifying Backup
./binhex-qbittorrentvpn/qBittorrent/config/qBittorrent.conf: Mod time differs
./binhex-qbittorrentvpn/qBittorrent/data/logs/qbittorrent.log: Mod time differs
./binhex-qbittorrentvpn/qBittorrent/data/logs/qbittorrent.log: Size differs
[27.01.2023 05:06:36] tar verify failed!

The last successful backup (scheduled weekly, Fridays, 5 AM) was Dec 9th. While manually running a backup, confirming via the web UI that all containers remained down (they did), it still failed. So I did it again and ran ps -ef | grep -i qb while the backup was running, and found this guy:

root 19499 1 0 2022 ? 00:00:02 /usr/bin/ttyd -d0 -t disableLeaveAlert true -t theme {'background':'black'} -t fontSize 15 -t fontFamily monospace -i /var/tmp/binhex-qbittorrentvpn.sock docker exec -it binhex-qbittorrentvpn bash

Bingpot! I killed that process, re-ran the backup, and I've got successful backups again. It looks like this process comes from opening the console to that container from the UI. I'm able to reproduce the hanging process: typing "exit" in the console window just re-opens a new shell, and closing the browser window seems to be the only way to "close" it, but the process remains. The following command will kill all running docker exec commands; it may be too greedy depending on your specific needs, but it's good enough for what I do:

ps -ef | grep -i "docker exec -it" | grep -v grep | awk '{print $2}' | xargs kill

I will likely be putting this command in a pre-start script. Thought I'd post here in case anyone is having similar problems.
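A sketch of that pre-start script idea (untested here, adjust the pattern to your needs; the [d] bracket trick just keeps grep from matching its own command line):

#!/bin/bash
# Kill any lingering web-UI console sessions (ttyd-spawned
# "docker exec -it" shells) before the backup starts.
pids=$(ps -ef | grep '[d]ocker exec -it' | awk '{print $2}')
if [ -n "$pids" ]; then
    echo "Killing lingering docker exec sessions: $pids"
    kill $pids
fi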
    2 points
  3. Wow! I was only having problems accessing Nextcloud, so it never occurred to me to look at Swag. I followed these instructions, and also updated my customized Nextcloud conf files, and now everything is working. Thank you @sonofdbn for pointing this out to me. I'll try to remember to update the appropriate config files going forward.
    2 points
  4. Thanks so much! Got Nextcloud working again. What I did after reading the Swag support thread and after stopping Swag (a shell version follows below):
1. Went to my Swag folder in /mnt/appdata
2. Went to the nginx sub-folder
3. Renamed ssl.conf to ssl.conf.old and nginx.conf to nginx.conf.old (in case something went wrong)
4. Made a copy of ssl.conf.sample and named the new file ssl.conf
5. Made a copy of nginx.conf.sample and named the new file nginx.conf
6. Restarted Swag.
NOTE: I didn't have any customisations in the ssl.conf and nginx.conf files. (I can't claim any credit for this - all taken from the Swag support thread.)
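The same steps as a shell sketch (assuming the usual /mnt/user/appdata path; yours may differ, and back up first):

cd /mnt/user/appdata/swag/nginx
# keep the old files around in case something goes wrong
mv ssl.conf ssl.conf.old
mv nginx.conf nginx.conf.old
# start fresh from the shipped samples
cp ssl.conf.sample ssl.conf
cp nginx.conf.sample nginx.conf
# then restart the Swag container from the Docker tab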
    2 points
  5. There are several things you need to check in your Unraid setup to help prevent the dreaded unclean shutdown, starting with several timers that you need to adjust for your specific needs.

There is a timer in Settings->VM Manager->VM Shutdown time-out that needs to be set high enough to allow your VMs time to completely shut down (switch to the Advanced View to see it). Windows 10 VMs will sometimes have an update that requires a shutdown to install; these can take quite a while, and the default setting of 60 seconds in the VM Manager is not long enough. If the VM Manager timer is exceeded on a shutdown, your VMs will be forced off, which is just like pulling the plug on a PC. I recommend setting this value to 300 seconds (5 minutes) to ensure your Windows 10 VMs have time to completely shut down.

The other shutdown timer is in Settings->Disk Settings->Shutdown time-out. This is the overall shutdown timer, and when it is exceeded, an unclean shutdown can occur. It has to be longer than the VM shutdown timer; I recommend 420 seconds (7 minutes) to give the system time to completely shut down all VMs, Dockers, and plugins. These timer settings do not extend the normal overall shutdown time; they just allow Unraid the time needed to do a graceful shutdown and prevent an unclean one.

One of the most common reasons for an unclean shutdown is having a terminal session open. Unraid will not force terminal sessions to shut down, but instead waits for them to be terminated while the shutdown timer is running; after the overall timer runs out, the server is forced to shut down. If you have the Tips and Tweaks plugin installed, you can specify that any bash or ssh sessions be terminated so Unraid can shut down gracefully and won't hang waiting for them (which they won't do without human intervention); a sketch of that idea follows at the end of this post.

If your server seems hung and nothing responds, try a quick press of the power button. This will initiate a shutdown that will attempt a graceful shutdown of the server. If you have to hold the power button to do a hard power off, you will get an unclean shutdown.

If an unclean shutdown does occur because the overall "Shutdown time-out" was exceeded, Unraid will attempt to write diagnostics to the /log/ folder on the flash drive. When you ask for help with an unclean shutdown, post the /log/diagnostics.zip file; there is information in the log that shows why the unclean shutdown occurred.
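What that Tips and Tweaks option amounts to is roughly this (a minimal sketch of the idea, not the plugin's actual code; the sshd pattern is meant to match per-session handlers rather than the listener):

#!/bin/bash
# Terminate interactive shells so shutdown doesn't hang on them.
pkill -x bash                 # local console / web terminal shells
pkill -f 'sshd: .*@pts'       # interactive ssh session handlers
                              # (note: this also cuts your own ssh session)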
    1 point
  6. Hello, I came across a small issue regarding the version status of an image that apparently was in OCI format. Unraid wasn't able to get the manifest information because of wrong headers; as a result, checking for updates showed "Not available" instead. The Docker image is the linuxGSM container, and the fix is really simple. This is for Unraid version 6.11.5, but it will work even on older versions if you find the corresponding line in that file. SSH into the Unraid server and, in the file /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php, change line 448 to this:

$header = ['Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json,application/vnd.oci.image.index.v1+json'];

The version check worked after that. I suppose this change will be reverted upon server restart, but it would be nice if you could include it in the next Unraid update 😊 Thanks
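If you'd rather not edit by hand, a one-shot sed along these lines should apply it regardless of the exact line number (a sketch; run it once only, and it won't survive a reboot since /usr/local/emhttp lives in RAM):

F=/usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php
# append the OCI index media type to the existing Accept header
sed -i 's#distribution.manifest.v2+json#&,application/vnd.oci.image.index.v1+json#' "$F"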
    1 point
  7. Installing DSM 7.1.0-42661 on Unraid 6.10.3. I had been trying to install DSM 7.1 on my Unraid server since last night; after some problems and testing, I have created this guide to install DSM 7.1 on Unraid, working perfectly. This is my VM config:

STEP 1 - Virtual machine creation:
Download TinyCore RedPill from https://github.com/pocopico/tinycore-redpill
Select the CentOS VM template and apply the correct options from the attached image.
Select Q35-6.2.
Select USB 3.0 (qemu XHCI).
Load the TinyCore RedPill image as vdisk1, USB, selecting it manually (you can create the VM's folder inside /domains beforehand and upload the tinycore img there).
Create a secondary disk of 50G or whatever you want (this is your data storage for Synology) as vdisk2, SATA.
Select network model e1000. Save, and uncheck "Start VM after creation".
Edit vdisk2 again in the advanced XML template (top right corner) to controller='1'. If we don't do this, TinyCore RedPill will not detect the disk properly when doing the satamap, DSM will not install correctly, and it will ask you to reinstall the *.pat infinitely. (A sketch of the resulting XML is at the end of this post.)

STEP 2 - Start the VM and connect via SSH:
Start the VM and load TinyCore RedPill. Once the OS is loaded, open the terminal and enter ifconfig to find out the IP of the machine. Connect via SSH (with PuTTY) to the obtained IP address. User: tc, password: P@ssw0rd

STEP 3 - Run the following commands:
To update TinyCore RedPill with the latest data:
./rploader.sh update now
./rploader.sh fullupgrade now
To generate random MAC and serial numbers (copy the generated MAC address to set on the Unraid VM template later):
./rploader.sh serialgen DS918+
(or whatever version you want; you can see all available versions with the ./rploader.sh info command)
To map the connected disks:
./rploader.sh satamap now
To record the vid/pid of the USB:
./rploader.sh identifyusb now
To install NIC drivers (sometimes it loads the e1000e module instead of the e1000 and it doesn't work; adding this command makes sure the e1000 module for the NIC is loaded correctly):
For an e1000 network card run:
./rploader.sh ext apollolake-7.1.0-42661 add https://raw.githubusercontent.com/pocopico/rp-ext/master/e1000/rpext-index.json
For a virtio-net network card run:
./rploader.sh ext apollolake-7.1.0-42661 add https://raw.githubusercontent.com/pocopico/rp-ext/master/v9fs/rpext-index.json
NOTE: change the version according to your selection of CPU and DSM version.
To build the image:
./rploader.sh build apollolake-7.1.0-42661

STEP 4 - Download the *.pat for your correct CPU architecture from the official repo or the Mega download for this apollolake version; we will need it to install DSM later.
https://global.download.synology.com/download/DSM/release/7.1/42661-1/DSM_DS918%2B_42661.pat - Official repo
https://mega.nz/file/5YI2yCYL#oNR6Cq5FmIdySL1cdUm_8vm2A-BSuj2NqbP_7ywTe_A - DSM 7.1-42661 apollolake

STEP 5 - Edit VM settings in advanced XML mode and install the *.pat:
After doing all the above, shut down the machine. Edit the virtual machine in XML advanced mode: if you change the MAC address in normal editing mode, you will lose the controller='1' option we set for vdisk2 previously, so you would have to set it again every time you change and/or save a setting from normal mode; always edit in XML advanced mode. Look for the MAC address, set the MAC generated by TinyCore RedPill, and save.
Start the VM, select the first option (USB), and leave it a few minutes, as in the image below, until we find it with Synology Assistant. Open the web GUI and install the *.pat.
We now have DSM 7.1 working on Unraid. There is another method with SATA boot, but I have to test it more. Hope this guide helps someone. Regards.
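For reference, the controller='1' edit from STEP 1 leaves the vdisk2 entry looking roughly like this (a sketch; the disk path and target dev are examples, yours will differ):

<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/user/domains/DSM/vdisk2.img'/>
  <target dev='hdd' bus='sata'/>
  <address type='drive' controller='1' bus='0' target='0' unit='0'/>
</disk>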
    1 point
  8. OK, so I was looking into LSI HBA cards, and they all seemed overpriced/over-hyped compared to other options. After looking around a lot I was debating between the HP H240 and Adaptec, as they both offered more features for less money. I ended up going with the Adaptec 71605 as I got a really good deal on it, and it checked all the boxes except TRIM support for consumer SSDs (like basically all other HBAs):
16 SATA ports
PCIe Gen3 x8
No need to flash the card for HBA functionality; just switch to HBA mode in the card's BIOS
Built-in driver support in Unraid and every other OS I have tried it with
Can be found for less than $50 fairly easily on eBay (I got mine for a bit over half that during a sale)

I have been using it for a bit over a month now and have to say I am quite happy with my choice. I can hit over 2GB/sec speeds on my 5x 128GB cache pool, but I am CPU-bottlenecked at that point (all cores pegged to 100%); others have reported 4500MB/s combined speeds while still not reaching a bottleneck. The heatsink does need some airflow, like all HBA cards. I built a funnel out of poster board that covers half a 120mm side-panel fan, and it keeps the card nice and cool with the fan at its lowest speed, ~500rpm. The BIOS is also pretty simple to figure out: reset all settings to default and don't set up any RAID settings and it will work, although switching it to HBA mode and enabling drive listing on boot is better for our use case IMHO. Overall I am very happy with this card and recommend it.

The only issue I have had is that the default write-cache setting is "Drive dependent". Sounds fine on paper, but I had several drives just suddenly stop supporting write caching after a reboot, and hdparm could not re-enable it, nor would it work on another computer. This setting should always be set to "enabled always" and never changed.

EDIT: I figured out how to fix this from within Unraid after some more playing around; I will leave the old Windows fix in a zip archive below in case some of that helped get this working. hdparm does not work to fix this, nor does the normal smartctl command. There is a more advanced smartctl command that does work, though:

smartctl -s wcache-sct,on,p /dev/sdX

This got it working on a drive I added to the server after fixing the others, without having to revert to Windows. No need to reboot or anything; it takes effect immediately. The ",p" is supposed to make the command persistent. I archived the rest of this thread for future reference in case someone needs it in the attached zip files, including the dskcache Windows program.

dskcache - Tool for fixing write cache on hard drives.zip
Archived instructions for fixing write-cache disabled on Adaptec HBA card.zip
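If you have several affected drives, a small loop saves retyping (a sketch; check which /dev/sdX nodes actually sit behind the Adaptec before running it):

#!/bin/bash
# Persistently re-enable write cache on each SATA drive (",p" = persist).
for dev in /dev/sd?; do
    echo "Enabling write cache on $dev"
    smartctl -s wcache-sct,on,p "$dev"
done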
    1 point
  9. Uncast Episode XIV: Return of the Uncast with Bob from RetroRGB. Season 2 of the Uncast pod is back and better than ever, with new host Ed Rawlings, aka @SpaceInvaderOne 👾 On this episode, Bob from RetroRGB joins the Uncast to talk about all things retro gaming, his discovery and use cases for Unraid, a deep dive into RetroNAS, and much more! Check out the show links below to connect with Bob or learn more about specific projects discussed.

Show Topics with ~Timestamps:
Intro from Ed, aka Spaceinvader One 👾
~1:20: Listener participation on the pod with speakpipe.com/uncast. Speakpipe will allow you to ask questions to Ed about Unraid, ask questions directly to guests, and more.
~2:50: Upcoming guests
~3:30: Bob from RetroRGB joins to talk about Unraid vs. prebuilt NAS solutions, use cases, and RetroNAS VMs.
~6:30: Unraid on a laptop?
~9:30: Array protection, data recovery, New Configs, new hardware, and client swapping.
~11:50: Discovering Unraid, VMs, capture cards, user error.
~17:30: VMs, Thunderbolt passthrough issues, Thunderbolt controllers, Intel vs. AMD, motherboard hardware, and BIOS issues/tips.
~21:30: All about Bob and RetroRGB.
~23:00: Retro games on modern TVs, hardware, and platforms.
~24:34: MiSTer FPGA project
~27:15: RetroNAS
~30:30: RetroNAS security: creating VLANs, best practices, and networking tips.
~37:15: Using Virtiofs with RetroNAS on Unraid, VMs vs. Docker, and streamlining the RetroNAS install process.
~43:13: Everdrive console cartridges and optical drive emulators.
~46:50: Realistic expectations and advice for new retro gaming enthusiasts.
~51:05: MiSTer setup how-tos and retro gaming community demographics.
~55:45: Retro gaming, CRTs, emulation scaling, wheeled retro gaming setups, and how to test components and avoid hardware scams.
~1:05: Console switches, scalers, and other setup equipment. In the end, it all comes down to personal choice.

Show Links:
Connect and support Bob: https://retrorgb.link/bob
Send in your Uncast questions, comments, and good vibes: https://www.speakpipe.com/uncast
Spaceinvader One interview on RetroRGB
MiSTer FPGA hardware: https://www.retrorgb.com/mister.html
RetroNAS info: https://www.retrorgb.com/introducing-retronas.html
Other ways to support and connect with the Uncast: subscribe/support Spaceinvader One on YouTube, https://www.youtube.com/@uncastpod
    1 point
  10. I'll try it with the motion sensor and report back on how the WAF and KAF (kids ^^) turn out.
    1 point
  11. The best way to have a specific IP is to reserve it in the router. Let all devices use DHCP and reserve IPs by MAC address in the one place that really matters. No chance of collisions that way.
    1 point
  12. Thankfully I have an 8GB kit coming on Monday just to keep me online. Thank you very much for your help!
    1 point
  13. Since January 1st has come and gone, I just wanted to post an update saying that the Apache Guacamole team has not yet made any 1.5.0 releases. As soon as they release something, I will start working on an update.
    1 point
  14. Yes, please head over to GitHub and read this issue: Click. You have to extract the start command and put it in the variable; then it will work. Hope that helps.
    1 point
  15. It's Java. I had mentioned replacing the .jar in my post, so I went that route, and it worked fine. I probably should've gotten on here before you replied so you didn't have to take your time with this; still, I appreciate your speedy response. BUT, now that it's the weekend, I'm trying to make it a BetterMC server. Since it is so heavily modded, I used CurseForge on the client side to get BetterMC up and going, and I am able to play single player. I also knew that CurseForge will put together a server package to keep all the mods synced up. I put the files in, but it wouldn't work when connecting to the server. The readme file that came with it states this: "ONLY USE THESE FILES TO RUN THE SERVER: Windows = start.ps1 Linux = start.sh (DO NOT USE THE .jar TO RUN THE SERVER!)" Unfortunately, your container automatically adds the .jar to start the container. Is there a way to edit your container to use the start.sh shell script to run it, or should I just try another container solution? I don't mind messing around and changing this if it's not mindboggling to do so. Thank you for the work you do. I really appreciate it. I use at least 6 of your game containers and they are always reliable.
    1 point
  16. I believe it's a Dell PERC H310; you should be able to flash it to IT mode if it isn't already: https://www.sanderh.dev/flashing-your-Dell-Perc-to-IT-firmware/
    1 point
  17. Just pushed an update. This should fix that.
    1 point
  18. Local LAN addresses can be safely posted as they give away nothing that is private about your system.
    1 point
  19. That's what I did; it's now just like it was before the whole operation.
    1 point
  20. Yes, that works too. Check again with the command mentioned above whether the vdisks are still sparse.
    1 point
  22. Exactly; take a look at the last 20 posts in the Swag support thread, it's been handled there already ... That is not always true, as you can see above; from time to time you should update those config files ... Here there was about a year until it really broke. Manual changes in the conf won't produce this note; it's the header (date versioning) that triggers the update notification ... So sometimes it makes sense to compare and update them, or wait until it crashes and then read the logs from everything in the chain (in the case above, Swag) and fix it. A quick diff against the samples (sketch below) shows what actually changed.
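Comparing your live configs against the shipped samples is one way to do that comparison (paths as in the posts above; adjust to your appdata location):

cd /mnt/user/appdata/swag/nginx
# unified diffs: lines starting with - are yours, + are from the new sample
diff -u nginx.conf nginx.conf.sample
diff -u ssl.conf ssl.conf.sample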
    1 point
  23. Can you get it to work if you disable Docker in Settings and boot in SAFE mode?
    1 point
  24. Well, if you don't want to get to the bottom of the cause ... For background, Unraid runs fresh ONLY in RAM on every boot; in other words, nothing is installed there in the sense of "junk" that needs cleaning up ... but never mind. Here is how I would do it, blunt and hard (please make sure you know what you are doing):

Take a picture of my disks so I can reassign them exactly the same way later, Parity included!!! For example, take a photo or save it on the device from which you will later set up Unraid "externally".

Check my data: is EVERYTHING I want to keep really already in the array?!! For example, check the shares individually and tidy up (move to array disks); here I would still have some work to do myself.

Back up the license file from the USB stick, e.g. the Plus key ... copy it so you can copy it back later (onto the freshly wiped stick).

Preparations done; now it gets "blunt": stop the VM service and stop the Docker service (i.e. the services in the respective settings, not just switching off the individual Dockers ...). I don't use tools like MC or Krusader here ... it all takes too long for me. For example, on the command line, I list all my existing shares and then quickly wipe the unwanted ones individually, to be sure that all remnants of those shares also disappear from the array disks; I guess you have a lot "scattered wildly" in /appdata and co. Here are the commands to wipe them (including contents); the same then goes for domains, system, ... everything that should go. Of course this also works with mc if you prefer (a console file manager tool). PLEASE DON'T MISTYPE: /mnt/user/Daten would wipe my data share, Dokumente, Nextcloud ... so look BEFORE hitting Return, take a breath, ...

Last login: Fri Jan 27 09:57:54 2023 from 192.168.1.200
Linux 6.0.15-Unraid.
root@AlsServer:~# ls -la /mnt/user
total 12
drwxrwxrwx 1 nobody users 19 Jan 14 14:43 ./
drwxr-xr-x 14 root root 280 Jan 28 05:00 ../
drwxrwxrwx 1 nobody users 34 Jan 22 16:00 .Recycle.Bin/
drwxrwxrwx 1 nobody users 199 Jan 14 10:34 Daten/
drwxrwxrwx 1 alturismo 1000 41 Jan 27 05:50 Dokumente/
drwxrwxrwx 1 nobody users 123 May 11 2022 Media/
drwxrwx--- 1 nobody users 181 Dec 20 03:10 Nextcloud/
drwxrwxrwx 1 nobody users 4096 Jan 21 18:51 appdata/
drwxrwxrwx 1 nobody users 86 Jan 14 07:47 domains/
drwxrwxrwx 1 nobody users 4096 Jan 18 05:16 isos/
drwxrwxrwx 1 nobody users 42 Aug 30 12:21 lxc/
drwxrwxrwx 1 nobody users 239 Jan 14 15:35 system/
root@AlsServer:~# rm -R /mnt/user/appdata/
root@AlsServer:~# rm -R /mnt/user/domains/
root@AlsServer:~# rm -R /mnt/user/system/

Now you are basically done, since everything is gone. If there are shares you no longer want later, you can now delete them in the Shares tab (once they are empty); for appdata, domains, system, and isos this isn't necessary, as they are recreated by default.

Now: shut the box down, wipe the USB stick, recreate it with the Unraid USB Creator, put the key file into the config folder, boot, assign the array disks (photo!!!), format and assign the caches, start the array, start the Docker service, start the VM service. Everything is fresh, and the existing data shares including their data are there again exactly as before; Unraid recognizes the existing data and automatically recreates the shares for your own shares ... Please do not format anything!!!
If you do a new config, the existing parity can be reused, which also saves you the parity build (provided it currently exists and isn't in the middle of being rebuilt, which takes a long time initially).

That's how I would do it, if ... and now back to my first sentence: I would first look for the cause, because ... I might end up in exactly the same place, since I want my setup back there again, and I can do, check, ... all of that on the running system anyway (without wiping). Finally: think about what you are doing, and always keep a separate copy of your important data BEFORE you do anything drastic!!!
    1 point
  25. Quite excited about the continuation of the Uncast after a seemingly never-ending hiatus since episode 13 concluded.
    1 point
  26. I had the same issue. I downgraded to 1.30.2.6563. In linuxserver's container, look for the variable labeled "Version:" and put in the version number. I had a tough time finding a good changelog, so I used https://www.videohelp.com/software/Plex/version-history to find the version I wanted.
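For anyone running the container outside the Unraid template, the same pin looks roughly like this (a sketch; VERSION is the linuxserver.io convention, PUID/PGID are the usual Unraid defaults, and some releases may need the full build suffix from the changelog):

docker run -d --name=plex \
  --net=host \
  -e PUID=99 -e PGID=100 \
  -e VERSION=1.30.2.6563 \
  -v /mnt/user/appdata/plex:/config \
  lscr.io/linuxserver/plex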
    1 point
  27. Since I can't quite tell what you've already tried, here is the approach that normally works: if you back up the contained data externally and then take both drives out of the array (new config), you should be able to reformat the smaller SSD as a data disk and use it without problems. I would then add the larger SSD as parity afterwards, in a second step. With SSDs, all of this together should only take a few hours. Perhaps you could post the actual sizes of the SSDs here; that might also help others who need/search for this information. Just as an aside: you are presumably aware that SSDs in the array get no TRIM support.
    1 point
  28. After installing the Intel I225-V NIC, I see the same issue. I have now also tested the connection between my Wi-Fi MacBook and a wired PC: same issue. I'm quite sure now that the issue is caused by my router.
    1 point
  29. Thank you. I ran the check and got a CRC warning. Repair fixed that so I'll keep an eye on things and report back.
    1 point
  30. Here is the video where he confirms the investment is being developed by two former employees. I thought it was Framework, but that was a misunderstanding. It sounds like a new company.
    1 point
  31. I updated my docker and it didn't come back up properly, so I restarted it. The log now just shows:

[migrations] started
[migrations] no migrations found
-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/

Brought to you by linuxserver.io
-------------------------------------
To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------
**** Server already claimed ****
No update required
[custom-init] No custom files found, skipping...
Starting Plex Media Server. . . (you can ignore the libusb_init error)
[ls.io-init] done.

When I go to https://ipadress:32400/web/index.html I just get:

This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Response code="503" title="Maintenance" status="Plex Media Server is currently running database migrations."/>

Any ideas where to go from here?
    1 point
  32. This is what I did to fix it:
Stop the Swag docker
Go to the \\<server>\appdata\swag\nginx folder
Rename the original nginx.conf to nginx.conf.old
Copy nginx.conf.sample to nginx.conf
Rename ssl.conf to ssl.conf.old
Copy ssl.conf.sample to ssl.conf
Restart the Swag docker
This worked for me.
    1 point
  33. Since this thread doesn't completely explain it, let me fill in a few details. If you have single parity, you can rearrange the disk order and parity will remain valid. But you must not make any other changes to assignments. Disks cannot be removed or added or parity will not be valid. If you have dual parity, parity2 must be rebuilt since it depends on disk order. New Config/Trust parity without parity2, then add parity2 and it will rebuild.
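The reason order doesn't matter for single parity is that it's a plain XOR across the data disks, and XOR is commutative; parity2 (Q) weights each disk by its slot, so shuffling slots changes Q. A toy illustration in shell (made-up byte values, nothing Unraid-specific):

# single parity = XOR of all data bytes; any disk order gives the same P
d1=0x3a; d2=0x5c; d3=0xf1
echo $(( d1 ^ d2 ^ d3 ))   # prints 151
echo $(( d3 ^ d1 ^ d2 ))   # prints 151 again: same parity, different order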
    1 point
  34. Since you have a failing disk, you could do the parity swap procedure with disk2. That will copy parity to the 8TB drive, then rebuild disk2 onto the former parity disk. Then you can proceed to replace/rebuild disk1 with an 8TB drive, and/or upsize/rebuild the new disk2 (former parity) to 8TB if you want. Stop the array, unassign disk2, and start the array with disk2 unassigned. Then disk2 will be a missing disk and you can follow the documented parity swap procedure.
    1 point
  35. I have included your update for the next Unraid version. Thanks
    1 point
  36. Confirming this worked for me too. Not sure I needed to replace both, but I did anyway, and Swag and Nextcloud are both back up and running. For noobs like me, here's what I did:
1. Stop the Swag container
2. Go to the /mnt/user/appdata/swag/nginx folder
3. Rename your ssl.conf to ssl.conf.old and nginx.conf to nginx.conf.old (just in case we need to restore them)
4. Copy ssl.conf.sample to ssl.conf and nginx.conf.sample to nginx.conf
5. Start the container and you should be good.
    1 point
  37. There are multiple apps segfaulting in the log; it would be a good idea to run memtest.
    1 point
  38. So I also had this problem; for the time being I've reverted all the way back to Unraid 6.9.2, which exhibits none of these issues. I have both ESXi running as a guest on Unraid and a Windows 10 VM that runs VMware Workstation (where my vCenter is installed). I went through the trouble of spinning up a Windows 11 VM and testing compatibility in there as well. The primary behavior noticed is that the error message when running ESXi 7 and VMware Workstation 16.5 is along the lines of "vcpu0:invalid VMCB". I tested VMware Workstation 17 as well and got "AMD-V is supported by the platform, but is implemented in a way that is incompatible." After some searching, it turns out that pre-2011 AMD's version of AMD-V botched the VMCB flags and didn't include the proper virtualization parameters. My best guess at the moment is that the QEMU version in Unraid 6.11.x is for some reason implementing an extremely outdated version of AMD-V that is getting passed through... when it was doing it properly before. No amount of XML flags seems to fix the issue. Can anyone chime in on QEMU regression changes?
    1 point
  39. For people using GUS with the 'latest' tag: after updating, that tag made the services crash on my install, so use this tag instead: testdasi/grafana-unraid-stack:s230122
    1 point
  40. I've been getting similar issues since a few versions back: the server starts to max the CPU, then stuff like this gets spammed in the log. After some time it calms down and everything is back to normal. It seems to happen at least once every day.

kernel: traps: lsof[14343] general protection fault ip:1504971c54ee sp:1fa708f06a2f1a3 error:0 in libc-2.36.so[1504971ad000+16b000]
kernel: traps: lsof[26932] general protection fault ip:14df495714ee sp:cd45860846c31f97 error:0 in libc-2.36.so[14df49559000+16b000]
kernel: traps: lsof[32411] general protection fault ip:1527d92984ee sp:6526393b4abc9f47 error:0 in libc-2.36.so[1527d9280000+16b000]

I've upgraded most of the hardware in the server, thinking it was a RAM issue, but the problem remains with the new hardware as well.
    1 point
  41. I saw we got one for the Node 304, but how about one for its bigger brother, the Fractal Node 804!
    1 point
  42. @juan11perez Thank you!! How do we get this into the CA store? It would be great for more people to be able to tinker with it.
    1 point
  43. I only explained to him how he can solve it; he has to implement it in the plugin himself. Until then, only "pkill -xc rsync" really works. As a precaution, I simply created the following script for myself and selected "At Stopping of Array" as the execution option:

#!/bin/bash
# x = exact match
# c = return process count
pkill -xc rsync
    1 point
  44. I was running into the same issue: Docker version "not available". In my case, the Fix Common Problems plugin was reporting "Unable to communicate with GitHub.com". This issue surfaced after setting unbound Pi-hole as the DNS on my home network. My workaround was as follows:
1. Disable VMs and Docker in the Unraid settings
2. Change the DNS in the Unraid IP settings to 1.1.1.1 (use whatever DNS you'd like) rather than my home router IP / Pi-hole
3. Re-enable VMs and Docker in the Unraid settings
4. Toggle advanced view and force-update the containers in the Docker tab
After following the above steps, all containers show as up to date and are auto-updating again.
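To confirm the resolver is the culprit before changing anything, you can compare lookups through the Pi-hole and a public resolver (a sketch; the Pi-hole IP here is an example, and nslookup works the same way if dig isn't on your box):

dig +short github.com @192.168.1.2   # via the Pi-hole (example IP)
dig +short github.com @1.1.1.1       # via a public resolver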
    1 point
  45. Open Device Manager and you will see your unknown devices. Right-click an unknown device and select "Update driver", then "Browse my computer for driver software". Click Browse, select the CD-ROM drive virtio-win-x.x, and click Next. Windows will scan the entire device for the location of the best-suited driver; it should find a Red Hat network adapter driver. Follow the prompts and you're in business. ** I never bothered to locate the actual subfolder of the driver on the virtio-win-x.x image; I just let Windows do it for me. ** Hope this helps.
    1 point