All Activity


  1. Past hour
  2. Changed Status to Closed; Changed Priority to Other
  3. Array didn't auto-start because disk4 was initially missing:

Nov 28 23:37:25 quasar-ultima kernel: mdcmd (5): import 4
Nov 28 23:37:25 quasar-ultima kernel: md: import_slot: 4 missing

It came back online a few seconds later, which is why you could then start the array manually:

Nov 28 23:37:45 quasar-ultima kernel: mdcmd (5): import 4 sdm 64 3907018532 0 WDC_WD40EFAX-68JH4N1_WD-WX42D410YUKV
Nov 28 23:37:45 quasar-ultima kernel: md: import disk4: (sdm) WDC_WD40EFAX-68JH4N1_WD-WX42D410YUKV size: 3907018532

So it's not an Unraid problem; check/replace the cables on that disk.
  4. Thanks for the answer. I already did what you said but same result.
  5. Pretesting for the InvokeAI v2.2 update, including the unified canvas for outpainting/inpainting (only use this if you want to play around with it).

Dockerfile:

FROM ubuntu:22.04
RUN apt-get update \
    && DEBIAN_FRONTEND="noninteractive" \
    apt-get install -y \
    git \
    dos2unix \
    python3-pip \
    python3-venv \
    python3-opencv \
    libopencv-dev \
    lshw \
    && apt-get clean
WORKDIR /usr/lib/x86_64-linux-gnu/pkgconfig/
RUN ln -sf opencv4.pc opencv.pc
WORKDIR /
ADD / .
RUN dos2unix /
RUN chmod +x
RUN apt-get remove dos2unix -y
ENTRYPOINT ["./"]

Start script:

#!/bin/bash
if [ -f "/venv/pyvenv.cfg" ] ; then
    source /venv/bin/activate
fi
if [ ! -f "/InvokeAI/" ] ; then
    echo "Cloning Git Repo in to Local Folder..."
    git config --global --add /InvokeAI
    git clone -b development
    cd InvokeAI
    cp configs/models.yaml.example configs/models.yaml
else
    git config --global --add /InvokeAI
    cd InvokeAI
fi
if [[ $(lshw -C display | grep -i vendor) = *AMD* ]] && [[ $(lshw -C display | grep -i vendor) != *NVIDIA* ]] && [ ! -f "requirements.txt" ] ; then
    echo "AMD GPU Found"
    cp environments-and-requirements/requirements-lin-amd.txt requirements.txt
elif [[ $(lshw -C display | grep -i vendor) = *NVIDIA* ]] && [ ! -f "requirements.txt" ] ; then
    echo "Nvidia GPU Found"
    cp environments-and-requirements/requirements-lin-cuda.txt requirements.txt
fi
echo "Checking if The Git Repo Has Changed...."
git fetch
UPSTREAM=${1:-'@{u}'}
LOCAL=$(git rev-parse @)
REMOTE=$(git rev-parse "$UPSTREAM")
BASE=$(git merge-base @ "$UPSTREAM")
if [ $LOCAL = $REMOTE ]; then
    echo "Local Files Are Up to Date"
elif [ $LOCAL = $BASE ]; then
    echo "Updates Found, Updating the local Files...."
    git config pull.rebase false
    git pull
fi
current=$(date +%s)
last_modified_envcuda=$(stat -c "%Y" "environments-and-requirements/requirements-lin-cuda.txt")
last_modified_envamd=$(stat -c "%Y" "environments-and-requirements/requirements-lin-amd.txt")
last_modified_pre=$(stat -c "%Y" "scripts/")
if [ -f "/venv/pyvenv.cfg" ] && [ $(($current-$last_modified_envamd)) -lt 60 ] && [[ $(lshw -C display | grep -i vendor) = *AMD* ]] && [[ $(lshw -C display | grep -i vendor) != *NVIDIA* ]] ; then
    cp environments-and-requirements/requirements-lin-amd.txt requirements.txt
    echo "Updates Found, Updating python Environment...."
    pip install -r requirements.txt --upgrade --no-cache-dir
    pip install -e .
elif [ -f "/venv/pyvenv.cfg" ] && [ $(($current-$last_modified_envcuda)) -lt 60 ] && [[ $(lshw -C display | grep -i vendor) = *NVIDIA* ]] ; then
    cp environments-and-requirements/requirements-lin-cuda.txt requirements.txt
    echo "Updates Found, Updating python Environment...."
    pip install -r requirements.txt --upgrade --no-cache-dir
    pip install -e .
fi
if [ -f "/venv/pyvenv.cfg" ] && [ $(($current-$last_modified_pre)) -lt 60 ] ; then
    echo "Updates Found, Updating Model Preload...."
    python scripts/ --root /userfiles/ --no-interactive
fi
if [ ! -f "/venv/pyvenv.cfg" ] ; then
    echo "Creating Python Environment...."
    python3 -m venv /venv/
    source /venv/bin/activate
    pip install -r requirements.txt --no-cache-dir
    pip install -e git+
    pip install -e .
    echo "Preloading Important Models/Weights...."
    python scripts/ --root /userfiles/ --yes
fi
if [ ! -f "~/.invokeai" ] ; then
    echo '--web --host= --root="/userfiles/"' > ~/.invokeai
    echo "Loading InvokeAI WebUi...."
    python scripts/
else
    echo "Loading InvokeAI WebUi...."
    python scripts/
fi

my-invokeai.xml:

<?xml version="1.0"?>
<Container version="2">
  <Name>InvokeAI</Name>
  <Repository>invokeai_docker</Repository>
  <Registry/>
  <Network>bridge</Network>
  <MyIP/>
  <Shell>bash</Shell>
  <Privileged>false</Privileged>
  <Support/>
  <Project/>
  <Overview>InvokeAI Docker For Unraid</Overview>
  <Category>Other: Status:Beta</Category>
  <WebUI>http://[IP]:[PORT:9090]/</WebUI>
  <TemplateURL/>
  <Icon></Icon>
  <ExtraParams>--gpus all</ExtraParams>
  <PostArgs/>
  <CPUset/>
  <DateInstalled/>
  <DonateText/>
  <DonateLink/>
  <Requires/>
  <Config Name="Webui Port" Target="9090" Default="9090" Mode="tcp" Description="" Type="Port" Display="always" Required="true" Mask="false">9090</Config>
  <Config Name="InvokeAI" Target="/InvokeAI/" Default="/mnt/cache/appdata/invokeai_dev/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/invokeai/</Config>
  <Config Name="userfiles" Target="/userfiles/" Default="/mnt/cache/appdata/invokeai_dev/userfiles/" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/cache/appdata/invokeai/userfiles/</Config>
  <Config Name="venv" Target="/venv" Default="/mnt/user/appdata/invokeai_test/venv" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/user/appdata/invokeai/venv</Config>
  <Config Name="Huggingface Token" Target="HUGGING_FACE_HUB_TOKEN" Default="" Mode="" Description="If you wish to auto-download recommended models and have agreed to their t&amp;c, please enter your Huggingface token here" Type="Variable" Display="always" Required="false" Mask="true"></Config>
</Container>
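The GPU-detection branch in the start script above keys entirely off the vendor line from `lshw -C display`. A minimal sketch of that selection logic, with the `lshw` output mocked as a plain string so it runs anywhere (the `gpu_info` variable and its sample value are assumptions for illustration; on a live box you would substitute the real command):

```shell
#!/bin/bash
# Mocked output of `lshw -C display | grep -i vendor`.
# Assumption for illustration: a real NVIDIA card reports a vendor
# line containing the word "NVIDIA".
gpu_info="vendor: NVIDIA Corporation"

# Same pattern-match logic as the container's start script:
# prefer the AMD requirements only when no NVIDIA GPU is present.
if [[ $gpu_info == *AMD* ]] && [[ $gpu_info != *NVIDIA* ]]; then
    req="environments-and-requirements/requirements-lin-amd.txt"   # ROCm build
elif [[ $gpu_info == *NVIDIA* ]]; then
    req="environments-and-requirements/requirements-lin-cuda.txt"  # CUDA build
fi

echo "$req"
```

Note the AMD branch deliberately excludes machines that also have an NVIDIA GPU, so a mixed-GPU box falls through to the CUDA requirements.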
  6. An increasing reallocated sector count never looks good. Don't use it in the array.
  7. You forgot to start the array after unassigning the old data disk. Starting the array commits the change to Unraid so that the drive will be emulated. Then stop the array and assign the new data disk. I'm doing the same thing with my server next week. Here is the link to the Unraid manual you should follow, and the steps I'm taking in my dual parity setup: unRAID Storage Manual

Parity Upgrade Section
1. Preclear the new disks.
2. Make backups of the flash drive, dockers, VMs, and appdata.
3. Take screenshots of Disk Assignments and Shares.
4. Run a Parity Check, verifying 0 sync errors.
5. Disable Dockers and VMs.
6. Disable Mover and any other scheduled tasks.
7. Turn off auto-start for the array.
8. Stop the array and unassign the parity disks.
9. Power down the server.
10. Remove the parity disks and store them in a safe place until parity is built on the new disks (in case you lose a data drive in the process).
11. Install the new parity disks.
12. Start the server. Verify the new disks are the correct ones by looking at the serial numbers.
13. Assign the new disk/s to the Parity slot/s. (Unraid suggests replacing one parity disk at a time in dual parity, which keeps the array in a protected state during the parity build.) I will be replacing both parity disks at the same time to save time.
14. Start the array in maintenance mode (so nothing can be written to the array while building parity) or normal mode (to access data while building parity).
15. Wait for the parity build to finish. Then reboot the server to verify Unraid boots correctly, with all the correct disk assignments.

Data Upgrade Section
16. Start the array. Verify all your data is still there.
17. Take another set of backups and screenshots (same as in the Parity section).
18. Stop the array. Unassign the data disk to be replaced.
19. Start the array to commit the change to Unraid. You may have to check a box at the bottom of the page stating that the disk will be emulated. (Starting the array is necessary for the missing disk to be emulated.)
20. Stop the array and power down the server.
21. Remove the data disk you just unassigned and keep it safe until the data is rebuilt onto the new disk.
22. Install the new data disk. (You can now use the old parity disk, since your server has built parity on the new parity disks.)
23. Start the server and verify the new data disk is shown by looking at the serial number.
24. Assign the new data disk to the empty slot you created when you removed the old data disk.
25. Start the array. (You may need to check a box at the bottom of the page stating that Unraid will rebuild the data on the new disk using parity.)
26. Once the data disk is rebuilt, verify all your data is there.
27. Reboot your server to verify there are no issues and all data is still there after starting the array.
28. Take new backups and screenshots of everything for future use.
29. Change settings back to normal: array auto-start, re-enabling dockers, VMs, Mover, etc.

Enjoy your larger array!
  8. Can you confirm your memoryBacking options? I was running with just memfd and shared, but can you try including nosharepages also, if you are not already?

<memoryBacking>
  <nosharepages/>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>
  9. Hey @JonathanM is this still the case? I just tested it with my Mac client in the same subnet, but got a timeout. Anything else to be aware of?
  10. There is an error in the docker setup screen. The folder paths must end in a /, or instead of folder/path/to/img you get folder/path/toimg. You can correct this manually in the XML, or delete the appdata files, edit the container settings to add the trailing slash, and rerun it. That said, I have not been able to get it to boot even after partitioning and running a Big Sur install. Monterey just hangs on the Apple logo altogether.
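The missing-slash failure mode above is plain string concatenation: the container joins the configured path and the file name with no separator. A quick sketch (the `base` and `file` values are hypothetical, just to show the join):

```shell
#!/bin/bash
base="folder/path/to"    # path as saved without the trailing slash
file="img"

# Naive join, as the broken template does it:
echo "${base}${file}"    # folder/path/toimg  (wrong - path and name fused)

# With the trailing slash supplied:
echo "${base}/${file}"   # folder/path/to/img (correct)
```

This is why adding the trailing / in the container settings (or fixing it in the XML) is enough to repair the paths.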
  11. Today
  12. Went to log in to the GUI and noticed my boot USB had unmounted. The Docker and VM tabs are missing, and when I access the Apps tab I'm hit with the fatal flash error. After a quick Google I'm assuming my flash drive with Unraid on it just went bad? I do have a backup of the flash drive, but I'll need to wait until the morning to purchase a replacement. How long will Unraid continue to function in this state? The containers and VMs that were running at the time do still appear to be functional and accessible. What is the best method for swapping Unraid boot drives? I've never had to do this before. I am licensed, but it now says the key file is missing. How can I make sure this is copied over correctly from my backup? I appreciate any advice, and thank you in advance.
  13. For me that only has an effect on the 1060, which then drops from 8 to 4 watts... why, good question... I haven't figured that out yet either. Even activating and deactivating the card doesn't do it, only switching the VM on/off. My 3080 Ti isn't affected; it's always at ~6 W in P8 mode, regardless of whether the VM was on or off beforehand. Nothing special... it was the same with the previous 3070. Do your card's fans run in P8 mode? Thanks. Small addendum: this is how it looks now at idle with the desktop VM active.
  14. This thread is targeted more at Moderators or Admins. For a week or two now, I've been unable to lock threads. The popup window is blank and even clicking 'Save' does nothing. I've tried several browsers: Chrome and Firefox with all my plugins, Chrome without the most aggressive plugins, and Edge with no plugins at all; same thing. I am not running any kind of pfSense or Pi-hole that could interfere with my network traffic. It seems to work OK for JorgeB; am I the only one having issues here? I'll try from work to see if it might be some kind of router/ISP issue.
  15. Aha! I did not know that. But now I'm confused, because I no longer see the Dynamix TRIM plugin installed. Did it get uninstalled in an upgrade? I do, however, still see SSD TRIM Settings in the Scheduler; I thought that was part of the plugin. So do I now simply disable the TRIM schedule?
  16. The negatives of the small case are: restrictive airflow that will inevitably affect HD temps; very limited expansion possibilities; and the requirement for the usually more expensive mini-ITX motherboards, which normally come with no more than 4 SATA ports (the boards that do come with 6 are impossible to find). A better choice would be a spacious, NAS-friendly case, e.g. the Fractal Node 804, which features ample cooling, accommodates cheaper mATX motherboards (many of which have 6 or even more SATA ports), and is designed to hold at least 8 HDs. It would also be better to start with just 2 HDs (parity and data) and get the biggest ones you can afford. Multiple smallish HDs are not desirable: they consume extra electricity, use up limited SATA ports, and increase the number of potential points of failure. An HBA card would add another device drawing a non-trivial amount of electricity. If from the get-go you structure your system with bigger drives and at least 6 SATA ports, you probably won't need to consider an HBA for a long while. Any modern Intel i3 chip with an iGPU will serve your needs just fine.
  17. Phew. Thanks for all the info. So I still have quite a bit of work ahead of me. How do you get the GPUs down that low? I'm running the NVIDIA Docker and set persistence mode = 1 to get into P8. That gets me to 28 W for a 3090. If I start my Windows VM, stop it, and then reload the driver, I get to 18 W. Reproducible. What does the VM do with the card that lowers consumption by another 10 W afterwards? And what do you do to get below 10 W? Your 3080 Ti numbers should be achievable with a 3090. Can I somehow display all the relevant settings of the GPU so I can compare them meaningfully? And nice consumption overview; I'm still tinkering with one of those myself. Best regards, skies
  18. As already stated several times elsewhere, mounting network shares in Unraid is handled by the plugin Unassigned Devices. Please post in the plugin's support thread with your diagnostics.
  19. I don't know of any tutorial (specific to Unraid). You might want to understand that Usenet predates "the Web" by a decade. There is no... You search and you will find. Be humble and you will be accepted. Be a torrent whore and... well... to each their own. MrGrey.
  20. It was a lovely 11 days... Updated to 6.9.0 this morning and was immediately met with Connection Refused. The logs show the same as before; it doesn't get to the "Listening" stage. Tried with and without the --security option. Back to 6.8.0 and it's fine. My Unraid is up to date at 6.11.5, if that helps. Would anyone have a solution?
  21. Backup. That's why you Backup If you have a backup, everything is easy and can take as much time as you need. Your plan looks sound, well thought out, even smart, but if you don't have backup... don't blame me if it goes sideways. MrGrey.
  22. That was the solution. Thank you SlrG!
  23. It is many files, but I was looking at the statistics while it was backing up influxdb, where my buckets are all over 1 GB (one is 40 GB). It took 45 minutes to back up influxdb. Then it came to another file to back up in another folder, 1 GB, and that took way less time. Might be something with my influxdb. But I never saw read throughput over 50 mbps during the 1h30 backup, which is weird.
  24. No need to trim any btrfs pool since it uses the discard=async mount option
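If you want to confirm that a pool really is mounted with async discard, you can inspect its mount options; `findmnt -no OPTIONS <mountpoint>` prints them as a comma-separated string (the `/mnt/cache` mountpoint is an assumption; substitute your pool). A sketch of the check, run here against a sample options string so it works without a live btrfs mount:

```shell
#!/bin/bash
# On a live system you would use:
#   opts=$(findmnt -no OPTIONS /mnt/cache)
# Sample options string for illustration (assumed typical of a btrfs pool):
opts="rw,noatime,ssd,discard=async,space_cache=v2"

# Wrap in commas so we match the whole option, not a substring of another one.
if [[ ",$opts," == *",discard=async,"* ]]; then
    echo "async discard enabled - no scheduled TRIM needed"
else
    echo "async discard NOT enabled - a scheduled fstrim may still be useful"
fi
```

If the option is present, the filesystem issues discards on its own as extents are freed, which is why a separate TRIM schedule is redundant for that pool.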
  25. I just got two Samsung 870 QVO 2.5" SSDs and set them up in a new pool. I'm curious whether trim will run automatically on the new pool just like on the original cache pool, or do I have to do something special to trigger it? Also, they are set up in BTRFS RAID1. Will trim be performed on both drives?
  26. Thanks, worked for me! @advplyr, you might want to update your readme with this conf, it's much more Swag conformant