Leaderboard

Popular Content

Showing content with the highest reputation on 01/25/23 in all areas

  1. Hello, I came across a small issue regarding the version status of an image that apparently was in OCI format. Unraid wasn't able to fetch the manifest information because of wrong Accept headers, so checking for updates showed "Not available" instead. The Docker image is the LinuxGSM container and the fix is really simple. This is for Unraid version 6.11.5, but it will work on older versions too if you find the corresponding line in that file. SSH into the Unraid server and, in the file /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php, change line 448 to this: $header = ['Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json,application/vnd.oci.image.index.v1+json']; The version check worked after that. I suppose this change will be reverted upon server restart, but it would be nice if you could include it in the next Unraid update 😊 Thanks
    4 points
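For anyone who wants to see that header outside of PHP, here is a small shell sketch of the same idea; the registry URL and repo in the trailing comment are placeholders, not details from the post:

```shell
#!/bin/sh
# Build the Accept header the post adds to DockerClient.php. The extra
# application/vnd.oci.image.index.v1+json entry is what lets registries
# answer with an OCI image index instead of failing the manifest request.
ACCEPT='application/vnd.docker.distribution.manifest.list.v2+json'
ACCEPT="${ACCEPT},application/vnd.docker.distribution.manifest.v2+json"
ACCEPT="${ACCEPT},application/vnd.oci.image.index.v1+json"
echo "Accept: ${ACCEPT}"
# A manual check against a registry would then look something like (placeholders):
#   curl -H "Accept: ${ACCEPT}" https://<registry>/v2/<repo>/manifests/latest
```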
  2. Ok- I *THINK* this should fix things. For some reason, rich text pasting was enabled (maybe a system change from a recent forum update?) but I changed it to "Paste as Plain text" which I believe was the default before. Please let me know if this issue persists.
    2 points
  3. I have included your update for the next Unraid version. Thanks
    2 points
  4. Dear Unraiders and Lime Tech, As a storage professional I work with storage on a daily basis. I was therefore looking for hot spare functionality in Unraid and did not find it. (Feel free to correct me if I am wrong.) Definitions: Hot Spare: in case of a disk failure, a hot spare disk is automatically added to the array and triggers the normal data/parity rebuild. Global Hot Spare: a hot spare disk available to the complete array. Local Hot Spare: a hot spare intended for one single disk pool. The question is: does Unraid need hot spare functionality? Just remember why we use NAS storage: we want some level of hardware redundancy for our data. Hot spares can be an addition to the overall package of hardware redundancy and, when used, raise the redundancy level. There are many different (hardware) redundancy solutions you can add to your Unraid system. It is just a matter of how far you want to take it and how important your data is to you. The slider always moves between cost on one side and the highest possible data redundancy on the other. The benefit a hot spare feature would bring is that it is fully automatic from the moment a hot spare has been made available. Another benefit is that the time your array runs in degraded mode is reduced. Some posts expressed worry that, if Unraid encounters a bad SATA connection, a hot spare will kick in (when available). Exactly what I would want! First priority is the health of the array. Other posts have mentioned the problem of keeping a hot spare available in arrays with different disk capacities. That is true: with mixed capacities it takes a bit more work to arrive at the correct disk size you need, and it is easier to use disks of the same capacity in every drive pool. If the hot spare feature were optional, you could choose whether to use it, and a choice between a global or local hot spare would complete the whole.
Would it not be a very peaceful thought that, when I am at work, I do not have to worry about my Unraid system at home because it has a hot spare available? Proactivity is the definition of being in control. Cheers, Marcel
    1 point
  5. We are seeking an established DevOps Engineer to support, improve, and maintain mission-critical infrastructure at Lime Technology, creators of Unraid OS. This position is best suited for someone who can work independently on a wide variety of given tasks with minimal direction. Strong communication, collaboration, and documentation skills are a must. In this role, you will focus on configuring and maintaining cloud infrastructure, proactively monitoring systems for performance and security issues, and collaborating with development teams to troubleshoot and resolve issues. You'll have a broad array of responsibilities that cut across both development and DevOps. This is a fully remote position. The right candidate is located in North America or has the ability to collaborate with the team for at least 4 hours each day between 8am - 5pm PST. You must have on-call availability to troubleshoot during system downtime. We would greatly prefer to hire from the Unraid community. Please see the full job announcement here: https://unraid.net/blog/devops-engineer Want to apply? Please do so via this link.
    1 point
  6. And another 3 weeks later, still not so much as a reply, nothing. Could somebody at least give us an update? @limetech is this issue on the radar for future updates? Thanks in advance
    1 point
  7. The templates for your previous apps are on the flash drive in config/plugins/dockerMan/templates-user
    1 point
  8. Thank you for your help 🙂 I really appreciate your knowledge and suggestions. I'm going to go ahead and replace disk 8 now and rebuild it hopefully without any more issues 🤞
    1 point
  9. Since the disk was rebuilt according to the current parity, a parity check now should not find any errors, but the data is OK so I wouldn't worry much about it for now.
    1 point
  10. Interesting, you may just be right: one of my torrent containers was running libtorrent >2. I'm downgrading and will mark this if it resolves the issue.
    1 point
  11. That's nothing unusual for a parity-protected array. That corresponds to at least 120-180 MB/s. At the start the RAM cache is doing the work. You won't get more than 100 MB/s with a standard configuration using hard disks in a parity-protected array. I should put together a few macros for the same questions that come up again and again. Or better yet, a few YouTube videos 😁
    1 point
  12. Updated, good to go. The version coming now includes support for SD users to get program posters back. It needs some manual hands-on: rename cronjob.sh (the docker will generate a new one) and redo your settings, either proxying pictures only or caching them locally. To start, I would recommend setting one poster size, depending on your wish; SD has a 5k daily (24h) limit. Here is a sample for Plex users, 2x3, and here is the final result with posters again (well, almost all).
    1 point
  13. Try downgrading the BIOS to v1.6, it's a known issue with those boards:
    1 point
  14. Perfect, all done, now it's a matter of waiting. Thank you so much! I've followed the steps, but when I run:
      lsmod | grep amdgpu
      I still have this as output:
      amdgpu              6705152  0
      gpu_sched             40960  1 amdgpu
      i2c_algo_bit          16384  1 amdgpu
      drm_ttm_helper        16384  1 amdgpu
      ttm                   73728  2 amdgpu,drm_ttm_helper
      drm_display_helper   135168  1 amdgpu
      drm_kms_helper       159744  4 drm_display_helper,amdgpu
      drm                  475136  7 gpu_sched,drm_kms_helper,drm_display_helper,amdgpu,drm_ttm_helper,ttm
      i2c_core              86016  6 drm_kms_helper,i2c_algo_bit,drm_display_helper,amdgpu,i2c_piix4,drm
      backlight             20480  4 video,drm_display_helper,amdgpu,drm
      With:
      root@Tower:~# ls /boot/config/modprobe.d/
      amdgpu.conf
      root@Tower:~# cat /boot/config/modprobe.d/amdgpu.conf
      blacklist amdgpu
      In the meantime I have disabled hardware acceleration on Frigate and Plex (the only two containers that had it). Forgot to add: yes, I have rebooted. Also, I have the same file "amdgpu.conf" in /etc/modprobe.d/ with the exact same content.
    1 point
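One detail worth knowing here (general modprobe behavior, not specific to this poster's system): a blacklist entry only stops automatic loading of the module by alias; it does not unload an already-loaded module and does not block explicit modprobe calls or dependencies pulling it in. A quick check of both facts looks like:

```shell
#!/bin/sh
# Check whether a module has a blacklist entry and, independently, whether it
# is currently loaded. A "blacklist" line in modprobe.d does not unload
# anything and does not block an explicit "modprobe <module>".
MOD=amdgpu
grep -rhs "^blacklist $MOD" /etc/modprobe.d/ || echo "no blacklist entry for $MOD"
if grep -qs "^$MOD " /proc/modules; then
    echo "$MOD is loaded"
else
    echo "$MOD is not loaded"
fi
```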
  15. Thanks a lot for all the replies it’s been very helpful and educational. I’ve got a memtest going now.
    1 point
  16. Try it without all the powersave settings. Just test it.
    1 point
  17. Have a look at the list of hardware on Hardwareluxx: https://docs.google.com/spreadsheets/d/1LHvT2fRp7I6Hf18LcSzsNnjp10VI-odvwZpQZKv_NCI/edit#gid=0 you can see very efficient systems there and see what SSDs they have used. Maybe that helps. No change in Ubuntu. I now think it could be my PSU. It's quite old: bought in 2014, and the model has existed since 2012. I also read a couple of posts on other forums that cheap and old PSUs prevent lower C-states. I'll see if I can find an affordable PSU that has low power usage at very low load. I know the Corsair 550 2021 version would be a nice bet.
    1 point
  18. It's logged as a disk problem, run an extended SMART test on disk2.
    1 point
  19. @TexasDave Since you have just added some new drives, are you sure your PSU is up to driving them? Also, are you using power splitters?
    1 point
  20. Disk is failing the SMART test, it should be replaced.
    1 point
  21. Unless something has changed very recently, your scenario cannot happen! I do not believe that Mover Tuning overrides the Use Cache setting, so it only decides whether to move a file in the direction designated by the Use Cache setting and never moves files back and forth.
    1 point
  22. This is tough, since AV1 isn't even working properly on Windows (stutters, artifacts, and even crashing the whole system), and on Linux some serious work still needs to be done to get HW transcoding working. The next thing is that on Linux you also need the Intel Media Driver package, which needs to be integrated into the containers themselves to even support encoding/decoding in the container. I've seen some serious progress recently in the Intel Media Driver GitHub, but it's a long way until those things work as reliably (on Windows too) as they do on the already existing Intel iGPUs with QuickSync for h264 (AVC) or h265 (HEVC). EDIT: I'm also not sure how many devices can decode AV1 currently; at least Apple devices, especially iOS, lack support for AV1 at the moment, but that might change in the near future.
    1 point
  23. It's probably not the same issue this thread was started with, even though the symptoms may look the same. See this recent thread -
    1 point
  24. This is fine. It simply means the permissions of newly created files will be exactly what is being requested. umask is used to mask out permissions on newly created files; 007 means new files will not get any permissions for "others". See https://en.wikipedia.org/wiki/Umask 99 is user nobody, 100 is group users, 0 is root. I think this is the main issue.
    1 point
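A quick terminal demo of the umask point above; the temp directory is just scratch space and nothing here is Unraid-specific:

```shell
#!/bin/sh
# Show how umask 007 strips the "other" permission bits from new files:
# the default file mode 666 masked by 007 gives 660 (rw-rw----).
tmpdir=$(mktemp -d)
cd "$tmpdir"
umask 007
touch newfile
stat -c '%a' newfile    # prints: 660
cd /
rm -rf "$tmpdir"
```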
  25. 1 point
  26. Oh, I had no idea they would do things differently, I am familiar with "mc", so I can use that from now on. Is there somewhere I can read about how moving with "mc" differs from moving with "mv"? And should there be additional caveats on this wiki article? It suggests Midnight Commander as the first option but then says you can do the same thing with the usual "mv" without much mention of any danger aside from having your session end mid-move (and suggests nohup to mitigate this; in my case, I used tmux). https://wiki.unraid.net/Transferring_Files_Within_the_unRAID_Server
    1 point
  27. With default container settings, dupeGuru works with data under /mnt/user. To my knowledge this is safe and you cannot screw things up.
    1 point
  28. How did you create the parent folder? Worth checking permissions on it via the terminal. Shares should be owned by nobody:users and should have drwxrwxrwx permissions for directories and -rw-rw-rw- for files. If you are too new to the Linux terminal, try New Permissions under the Tools tab in the UI; I just prefer the terminal because it can confirm the issue before you do anything. If it's permissions, you should also try to figure out why you are ending up in this situation in the first place.
    1 point
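The expected modes mentioned above can be checked and applied from the terminal. This sketch uses a scratch directory in place of a real share path such as /mnt/user/&lt;share&gt;, and skips the chown step since that needs root:

```shell
#!/bin/sh
# Normalize a directory tree to the permissions Unraid's New Permissions tool
# sets: 777 (drwxrwxrwx) for directories, 666 (-rw-rw-rw-) for files.
# On a real server you would also run: chown -R nobody:users <share>
share=$(mktemp -d)               # stand-in for /mnt/user/<share>
mkdir -p "$share/parent"
touch "$share/parent/file"
find "$share" -type d -exec chmod 777 {} +
find "$share" -type f -exec chmod 666 {} +
stat -c '%a %n' "$share/parent" "$share/parent/file"
rm -rf "$share"
```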
  29. So I took the switch out of the equation and hooked up to Unraid from the Windows PC directly and hit 800 MB/s, so there is either a setting in the switch or the switch is just not up to it. I may just order a MikroTik switch, see how it works, and return the TP-Link switch.
    1 point
  30. Excellent! Thank you very much. This solved it for me: sg_format --format --fmtpinfo=0 /dev/sdX Not sure how this format was different from the wipe I did in TrueNAS, but I'm very glad it works now. Thanks again! Best regards
    1 point
  31. You don't need virtiofs. I don't know about Server 2022, but normally it won't find a disk on install; there is an option to load drivers, which you will find on the virtio ISO. Look for the viostor directory.
    1 point
  32. A possible script would be this (it backs up my USB stick); you just need to adapt it to your needs. I got it from another thread; it is not mine either.
      #!/bin/bash
      #### SECTION 1 ####------------------------------------------------------------------------------------------------------
      # dir = WHATEVER FOLDER PATH YOU WANT TO SAVE TO
      dir="/mnt/disk3/flash_backup/"
      echo 'Executing native unraid backup script'
      /usr/local/emhttp/webGui/scripts/flash_backup
      #### SECTION 2 ####------------------------------------------------------------------------------------------------------
      echo 'Remove symlink from emhttp'
      find /usr/local/emhttp/ -maxdepth 1 -name '*flash-backup-*.zip' -delete
      sleep 5
      #### SECTION 3 ####------------------------------------------------------------------------------------------------------
      if [ ! -d "$dir" ] ; then
          # make the directory as it doesn't exist yet
          echo "making directory as it does not yet exist"
          mkdir -vp "$dir"
      else
          echo "As $dir exists continuing."
      fi
      #### SECTION 4 ####------------------------------------------------------------------------------------------------------
      echo 'Move Flash Zip Backup from Root to Backup Destination'
      mv /*-flash-backup-*.zip "$dir"
      sleep 5
      #### SECTION 5 ####------------------------------------------------------------------------------------------------------
      echo 'Deleting Old Backups'
      # ENTER NUMERIC VALUE OF DAYS AFTER "-MTIME +"
      find "$dir"* -mtime +15 -exec rm -rfv {} \;
      echo 'All Done'
      #### SECTION 6 ####------------------------------------------------------------------------------------------------------
      # UNCOMMENT THE NEXT LINE TO ENABLE GUI NOTIFICATION UPON COMPLETION
      #/usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "Flash Zip Backup" -d "A copy of the heimnas-server unraid flash disk has been backed up" -i "normal"
      exit
    1 point
  33. oh yes ! https://github.com/limetech/webgui/blob/master/plugins/dynamix.docker.manager/include/DockerClient.php good to know!
    1 point
  34. The Unraid webgui is actually open source; since you found the solution, if you are interested you can submit a PR here: https://github.com/limetech/webgui
    1 point
  35. In the docker section, toggle the "advanced view," then go down to steam-headless and "force update."
    1 point
  36. This has already been confirmed by LimeTech as a future enhancement, but not for v6.12
    1 point
  37. Shift-F10, then OOBE\BYPASSNRO. You can install drivers later.
    1 point
  38. +1 Wow, just to add to the previous posters, this really is a needed feature, and it's a really missing component in this day and age. I tried tinkering with the unofficial API; it suddenly stopped working and is a disaster to get back running. I just want to control USB attach/detach with one button from my phone/Home Assistant. I daily unplug my mouse and keyboard from the Unraid VM, and I need my phone to log in to Unraid, then switch to the VM tab to detach the mouse and keyboard and reattach them. This is a nightmare. PLEASE LIME TECH, listen to your customers and put an OFFICIAL UNRAID API on the roadmap ASAP!!!
    1 point
  39. Same problem. It used to max out the link speed; about 50% of that now, both ways. Only Unraid and SMB are the common factors.
    1 point
  40. Tbh, in this day and age, no API support on a network-operation-centric system is unheard of. I switched from TrueNAS without even thinking that the availability of APIs could be an issue (this is on me). Then I found the unofficial Unraid-API, only to realize that it hasn't been updated for 2 years, with the repo owner understandably not interested in maintaining it, presumably because of the sheer amount of work needed to build and maintain a web scraper for an ever-growing number of webpage versions (multiple versions of Unraid, infinite combinations of plugins). This feature has been waiting a long time to be addressed. @willnnotdan mentioned in the Discord that an API was promised for release with version 6.8, and that was almost three years ago. Oh, and this feature request was posted more than 6 years ago... Please Lime Tech, show us some love; give us an API.
    1 point
  41. For those who are still struggling to get CompreFace running with the internal database, here is a step-by-step on how to fix the issue. If you are starting fresh, skip the first three steps. If you installed this docker already, start from the beginning.
      1. On the Docker tab in the unRaid webgui, stop CompreFace if running.
      2. On the Docker tab in the unRaid webgui, delete the CompreFace docker installation.
      3. THIS IS VERY IMPORTANT: delete the default compreface folder that was created in your appdata folder. IF YOU DON'T DO THIS, YOU WILL CONTINUE TO GET ERRORS NO MATTER WHAT YOU DO.
      4. Go back to the Apps tab and install CompreFace (I've only tested the CPU version; I don't know about the GPU version). DO NOT JUST BLINDLY HIT APPLY AND START THE INSTALL WITH THE DEFAULT PARAMETERS.
      5. Instead, toggle on Advanced View (top right of the CompreFace docker installation page), then go to "SHOW MORE SETTINGS". DELETE ALL FOUR OF THE CUSTOM SETTINGS THERE BY USING THE BUTTONS TO THE RIGHT OF THE TEXT BOX. Those settings are for using an external database; if you are trying to use the internal database, they will create a series of errors.
      6. Hit Apply on the docker install page and let docker/unraid do its thing. Go get a cup of coffee and let CompreFace go through its installation.
      After a successful installation, you should be able to access the webgui for CompreFace without issue. These fixes do appear in this thread already, but I had to read through the thread several times and do a bunch of trial and error to figure out that I needed to delete the docker and the appdata before trying to reinstall sans database parameters.
    1 point
  42. Below I include my Unraid (Version: 6.10.0-rc1) "Samba extra configuration". This configuration is working well for me accessing Unraid shares from macOS Monterey 12.0.1, and I expect these configuration parameters will work okay for Unraid 6.9.2. The "veto" commands speed up performance on macOS by disabling Finder features (labels/tags, folder/directory views, custom icons etc.), so you might like to include or exclude these lines per your requirements. Note, there are problems with samba version 4.15.0 in Unraid 6.10.0-rc2 causing unexpected dropped SMB connections (behavior like this should be anticipated in pre-release), but fixes are expected in future releases. This configuration is based on a Samba configuration recommended for macOS users from 45Drives here: KB450114 – MacOS Samba Optimization.
      #unassigned_devices_start
      #Unassigned devices share includes
      include = /tmp/unassigned.devices/smb-settings.conf
      #unassigned_devices_end
      [global]
      vfs objects = catia fruit streams_xattr
      fruit:nfs_aces = no
      fruit:zero_file_id = yes
      fruit:metadata = stream
      fruit:encoding = native
      spotlight backend = tracker
      [data01]
      path = /mnt/user/data01
      veto files = /._*/.DS_Store/
      delete veto files = yes
      spotlight = yes
      My Unraid share is "data01". Give attention to modifying the configuration for your particular shares (and other requirements). I hope providing this might help others to troubleshoot and optimize SMB for macOS.
    1 point
  43. I tested Docspell on a VM under Unraid via docker-compose. As I want to run the DMS as a docker directly in Unraid, I took a closer look at the docker-compose file. After a few tests, Docspell was up and running. I built templates and wrote instructions on how to install them. Sorry, the instructions are in German, but I may translate them later. See here: https://github.com/vakilando/unraid-docker-templates You can try them. They are for my personal use; they are not part of the Unraid community apps, and I do not have a support thread for them (yet).
    1 point
  44. If there is no parity 2, then all data disks can be freely reordered and nothing needs to be rebuilt.
      - power off
      - reorder all disks into the physical slots you want (you can do this as the first or the last step)
      - power on
      - stop the array
      - execute "New Config" with "retain all"
      - re-order all data disks; don't touch parity 1
      - start the array with "parity is valid" checked
    1 point
  45. Ah, I think I found it. There is a file in the root folder of the backup zip file named changes.txt, and that file starts with the Unraid version number.
    1 point
  46. Actually, I was surprised Unraid did not have this time-tested feature. I personally think a hot spare capability would still be of benefit in Unraid. Even though you only lose one disk of data in Unraid, it is also about the risk of losing another disk. Once one disk is gone, you can rebuild it, but not a second. Likelihood of more than one disk dying simultaneously? Well, it gets more likely the more disks you have, and yes, it does happen. One advantage of a hot spare, particularly for smaller builds, is that you could do away with the negative performance impact of dual parity and still have cover against a second disk dying, a risk that is heightened once one disk has died, e.g. due to the extra heat from having to constantly reconstruct the failed disk's data from parity. It's a really great feature, and Unraid is the first redundant system I've ever seen that doesn't have it.
    1 point
  47. Of course it is possible to add additional functionality yourself by scripting it. On the other hand, when you add custom functionality yourself, at some point it is going to work against you: additional maintenance and testing at every new Unraid version. You might like custom modding and that is fine, but for now I would like to address high-level design, for example the KISS principle of implementing out-of-the-box solutions. Hot spare functionality might be a game changer in regard to Unraid's design. Have no doubt: an additional protection layer in securing data availability. I like to think that it is only a matter of time before LimeTech adds hot spare functionality, the logical next step in Unraid's evolution. Look around and see what happens in today's home with IoT: we see more and more automation and simplicity being added to our lives. Automation is the way forward. The difference between proactivity and reactivity.
    1 point
  48. For consideration: a competitor, Synology, has a feature I don't think has been requested here (didn't see it; didn't find it in search), but it would be a good fit: the hot-spare drive, a pre-cleared, spun-down drive. If/when a major hard drive failure occurs, the bad disk is dropped from the drive array, the hot spare is assigned its place, and the appropriate notifications are tripped. It would certainly be a handy option for any remote backup setups. The obvious requirement: the hot spare must equal the size (perhaps >=) of the parity drive.
    1 point
  49. Makes sense. I was thinking of the case "what if parity were to fail", in which case it would have to be equal, but then I thought, "Well, it could be bigger." I guess I thought pre-clear was a requirement for a replacement disk, since when I was adding a drive to my array, I saw unRAID clearing the disk, or using that wording. My mistake. That's true, I hadn't considered that point. My brain was more on the automation factor of the convenience, customer peace of mind, that sort of thread.
    1 point