Leaderboard

Popular Content

Showing content with the highest reputation since 03/28/23 in Report Comments

  1. If all you have is the array, and no cache or other pools, then your Primary is the array and Secondary is none. If you have a pool named 'cache':
     cache:no     = Primary:array, Secondary:none
     cache:only   = Primary:cache, Secondary:none
     cache:yes    = Primary:cache, Secondary:array, Mover action: cache->array
     cache:prefer = Primary:cache, Secondary:array, Mover action: array->cache
     Just substitute another pool name for 'cache' above for other pools. (See the sketch after this item.)
    8 points
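     Purely as an illustration of that mapping, a tiny bash helper (the function name and output format are invented, not anything Unraid ships):
     # Hypothetical helper: translate an old "Use cache" value for a given pool
     # into the new Primary/Secondary/Mover terminology.
     describe_share() {
       local use_cache=$1 pool=${2:-cache}
       case $use_cache in
         no)     echo "Primary:array, Secondary:none" ;;
         only)   echo "Primary:$pool, Secondary:none" ;;
         yes)    echo "Primary:$pool, Secondary:array, Mover: $pool->array" ;;
         prefer) echo "Primary:$pool, Secondary:array, Mover: array->$pool" ;;
         *)      echo "unknown 'Use cache' value: $use_cache" >&2; return 1 ;;
       esac
     }
     describe_share prefer          # Primary:cache, Secondary:array, Mover: array->cache
     describe_share yes fastpool    # Primary:fastpool, Secondary:array, Mover: fastpool->array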
  2. Let's see if this helps. Below is a typical Movies share, where new files come in to the cache drive and are then moved to the array. Hopefully the 6.12.0 screenshot is more clear about what is happening. Below is a typical appdata share, where files are normally on the cache but if files end up on the array they will be moved to the cache. And finally, here is a cache-only share in 6.11.5, automatically upgraded to exclusive mode in 6.12.0.
     Edit: consolidating other comments here.
     One interesting thing about this, and the reason we are calling it a "conceptual change", is that your share.cfg files are not modified during the upgrade. In 6.12 this is entirely a front-end change, a different way of displaying the same configuration information. This change should make the configuration easier to understand at a glance, and it sets us up nicely for the future. It would have been horribly confusing to try and extend the old UI to handle multiple arrays and cache pool to data pool transfers, etc. At least from a UI perspective, those features will be easy to add going forward.
     Regarding exclusive mode - my favorite thing about configuring the new exclusive mode is that there is nothing to configure. It is fully automatic at the time the array starts. Looking at my image above of "6.12.0 exclusive mode"... when you start the array, the system checks for "sharename" on all your disks. If it only exists on the specified pool, then "exclusive mode" is enabled and you will get the speed benefits of the bind mount to that pool. If, at the time the array starts, "sharename" exists on storage outside of the specified pool, then you automatically get the old "cache only" functionality, where files in "sharename" are included in the user share regardless of what disk/pool/etc they are on. (A sketch of that check follows this item.) From the release notes: "If the share directory is manually created on another volume, files are not visible in the share until after array restart, upon which the share is no longer exclusive."
    6 points
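     A rough bash sketch of that check (the share and pool names are examples; this is not Unraid's actual start-up code):
     # The share is exclusive only if its top-level directory exists solely on
     # the configured pool. Skips the pool itself and the fuse-backed user views.
     share="sharename"; pool="cache"
     exclusive=yes
     [[ -d "/mnt/$pool/$share" ]] || exclusive=no      # must exist on its pool
     for vol in /mnt/*/; do
       top=$(basename "$vol")
       [[ "$top" == "$pool" || "$top" == user || "$top" == user0 ]] && continue
       [[ -d "$vol$share" ]] && exclusive=no           # found a copy elsewhere
     done
     echo "$share exclusive: $exclusive"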
  4. https://forums.unraid.net/topic/92285-new-use-cache-option/?do=findComment&comment=854925
     https://forums.unraid.net/topic/94080-individual-mover-settings-for-different-cache-pools/?do=findComment&comment=869566
     I've been advocating for this change for a few years now.
    4 points
  5. This is definitely an improvement. I never could think of a good way to clear up the confusion of the "old" way of thinking. I think this is it.
     What we have with 6.11:
     - A user share setting labeled "Use cache pool", which isn't necessarily about a pool named 'cache' since we can name pools as we wish. The 4 possible values for this setting are not self-explanatory and have always confused new users.
     - Another user share setting labeled "Select cache pool", which is not necessarily about a pool named 'cache'. This setting is where you specify which pool that other setting is about.
     What we have with 6.12rc4:
     - A user share setting labeled Primary storage, which specifies where new files are written.
     - A user share setting labeled Secondary storage, which specifies where overflow gets written.
     - A user share setting labeled Mover action, which specifies the source and destination for mover.
     The old way and the new way of thinking are functionally the same, but the new way is much easier to explain and understand. And it provides a way forward for future functionality.
    4 points
  6. Everything functions the same; this is conceptually a different way of looking at things, to make it clear how cache pools interact with the array and what actions mover takes.
    4 points
  7. Sorry, can't share the diagnostics @JorgeB (there's Engineering Sample equipment on my unraid; I can't do that due to NDAs).
     model name : AMD EPYC 7B13 64-Core Processor
     root@Megumin:~# grep -c ^processor /proc/cpuinfo
     128
     The big difference from my look at both of them is the multiple GPUs vs my 1 (which is a converged accelerator, which is the ES part).
     root@Megumin:~# du -sh /run/udev/* | sort -h | tail -3
     0    /run/udev/tags
     0    /run/udev/watch
     17M  /run/udev/data
     (this is my udev size)
     I know there were changes in the kernel on udev for multiple GPUs, due to issues booting them "fast" enough on AI clusters (I just don't remember which 6.1 version got it, but it's one between 6.12.6 and 6.12.8; I'm still looking at patches to see where it was applied, as we upgraded to 6.6 at work to have the patch before it was on 6.1). So, one thing I would suggest is making the udev size 64MB for 6.13, which would help for weird and complex systems (see the sketch after this item). I know Rome vs Milan also do udev differently, as Milan was an updated architecture on the CPU side.
     Mobo: Asrock Rack RomeD8-2T
     CPU: Epyc 7B13
     RAM: 1TB
     PCIe Expansions: 1 x LSI 9400-8i (x8), 3 x ASUS Hyper M.2 cards filled with M.2 drives, 1 x Nvidia A100X ES, 1 x ConnectX-5 Pro, 2 x Oculink to U.2, 2 x M.2 (on-board)
     Those are all my specs, as I posted yesterday on Discord to confirm where the issue could be.
    3 points
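     If you want to experiment with a larger udev tmpfs before any official change, something like this might work (assuming the mount backing /run/udev is a tmpfs that permits remounting; check first, and treat it as an experiment, not a supported setting):
     findmnt --target /run/udev    # which mount backs /run/udev, and its size
     du -sh /run/udev/data         # how much the udev device records consume today
     # If the backing mount is a tmpfs, a remount can grow it in place:
     mount -o remount,size=64M /run/udev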
  8. Also on 6.12.4, can't access SSH over tailscale... Not interested in hacking up the go file.
    3 points
  9. Thanks for all of your help testing the rc series, Unraid 6.12.4 is now available! Changes from rc19 are pretty minor, but if you are running one of the RCs please do upgrade as it simplifies support to have everyone on a stable release.
    3 points
  10. I wanted to share that I too have mostly figured out the Samba macOS nuance. I'm on Unraid 6.12.3 with macOS Ventura 13.5.1.
      To start, setting fruit:metadata = stream in the SMB Extras in the Unraid UI was the single biggest contributor to getting things working. Here's exactly what I have, in its entirety:
      [Global]
      fruit:metadata = stream
      Note that I don't use Unassigned Devices, which I think would add to these lines. After adding this and stopping/starting the array, pre-existing Time Machine backups were NOT working reliably, so I also had to create new Time Machine backups from scratch. I kept the old sparsebundles around just in case. Once new initial backups were made successfully, one of my MacBooks was able to reliably back up on a daily cadence. It's been running this way for a couple of months.
      Meanwhile, one of my other MacBooks refused to work well with Time Machine, making one successful backup every few weeks, contingent on a recent Unraid reboot. I couldn't deal with this, so I factory reset it (reinstalling macOS) and created an additional new Time Machine backup on Unraid. Then it worked flawlessly.
      Then one of my MacBooks died, so I needed to restore data from Time Machine. I first tried to connect to Unraid and mount the sparsebundle through Finder, but it would time out, beachball, and overall never end up working. I was, however, able to get it mounted and accessible through the Terminal/CLI using the command `hdiutil attach /Volumes/path/to/sparsebundle`, and with that, access the list of Time Machine snapshots and the files I wanted to recover.
      Then I tried to use Apple's Migration Assistant to attempt to fully restore from a Time Machine backup. I was able to connect to the Unraid share and it was able to list the sparsebundles, but it would get stuck with "Loading Backups..." indefinitely. I moved some of the other computers' sparsebundles out of the share so it could focus on just the one sparsebundle I wanted, but even after waiting 24 hours, it would still say that it was loading backups. Looking at the Open Files plugin's tab in Unraid, I would see it reading one band file at a time. After enough of this, I tried to access a different sparsebundle that only had two backups, instead of months of backups, and "Loading Backups..." went away within 10 minutes and I was able to proceed with the Time Machine restoration, albeit slowly, and not with the data I wanted.
      This did clue me in to something, though. Using `find /path/to/sparsebundle/bands/ -type f | wc -l` to get the file count inside the sparsebundle, the one that made it through Migration Assistant was only 111 files, and the one that stalled for 24h was over 9000 files.
      I then went back to the Unraid SMB settings and tried to fiddle around a bit more. I found, as others did, that changing the following settings in smb-fruit.conf caused big improvements. The defaults for these settings are `yes`, so I changed them to `no`:
      readdir_attr:aapl_rsize = no
      readdir_attr:aapl_finder_info = no
      readdir_attr:aapl_max_access = no
      As the Samba vfs_fruit man page (https://www.samba.org/samba/docs/current/man-html/vfs_fruit.8.html) notes, `readdir_attr:aapl_max_access = no` is probably the most significant of these, as the setting is described: "Return the user's effective maximum permissions in SMB2 FIND responses. This is an expensive computation. Enabled by default." My suspicion is that the thousands of files that make up a sparsebundle end up getting bottlenecked when read through Samba, causing Migration Assistant to fail.
      After adding these lines to `/etc/samba/smb-fruit.conf`, copying that updated file over to `/boot/config/smb-fruit.conf`, and stopping and starting the array, I confirmed the settings were applied with `testparm -s` and looking at the output:
      [global]
      ~~~shortened~~~
      fruit:metadata = stream
      fruit:nfs_aces = No
      ~~~shortened~~~
      [TimeMachine]
      path = /mnt/user/TimeMachine
      valid users = backup
      vfs objects = catia fruit streams_xattr
      write list = backup
      fruit:time machine max size = 1250000M
      fruit:time machine = yes
      readdir_attr:aapl_max_access = no
      readdir_attr:aapl_finder_info = no
      readdir_attr:aapl_rsize = no
      fruit:encoding = native
      Now that the new settings were in place, Migration Assistant got through the "Loading Backups" stage within a minute or two, and I was able to successfully fully restore the old backup sparsebundle with thousands of files.
      I know there's some nuance around Apple/fruit settings depending on the first device to connect to Samba, so this entire experiment took place with only Macs connecting to Unraid. I did not yet repeat the experiment with Windows connecting first or in parallel, but I hope the behavior is the same, as I cannot guarantee Macs will always connect before Windows computers in my network.
      Anyway, I wanted to share, as I avoided updating Unraid 6.9.2 for literal years to keep a working Time Machine backup. I then jumped for joy at the macOS improvements forum post a year ago just to find it didn't help in any way, and was again excited to update to 6.12, just to find it STILL didn't work reliably with default settings. Very disappointing, LimeTech. And a huge thanks to the folks in these threads who have shared their updates and what has or has not worked for them. Let's keep that tradition going, as it's clear we are on our own here. Some Time Machine related posts from over the years: I'll make update posts in each directing here.
      TLDR: Working Time Machine integration. Adding fruit:metadata = stream to the global settings, and then readdir_attr:aapl_max_access = no, readdir_attr:aapl_finder_info = no, and readdir_attr:aapl_rsize = no to the smb-fruit settings, allowed me to run Time Machine backups AND restore from or mount them using Finder and Migration Assistant. (A consolidated sketch of the steps follows this item.)
    3 points
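      A consolidated sketch of the steps above (the share path and sparsebundle name are placeholders; stop/start the array between editing and verifying, as described):
      nano /etc/samba/smb-fruit.conf      # add the three readdir_attr lines
      cp /etc/samba/smb-fruit.conf /boot/config/smb-fruit.conf   # persist across reboots
      testparm -s 2>/dev/null | grep -Ei 'fruit:|readdir_attr'   # verify effective settings
      # Gauge how many band files a backup has before attempting a restore:
      find "/mnt/user/TimeMachine/<your-mac>.sparsebundle/bands" -type f | wc -l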
  11. I made a fix for displaying the IPv6 address in network settings. Should be included in the official release.
    3 points
  12. Generally speaking, sure, we are free to use our own hardware. And now for one culprit: Fritz is more or less the only one who makes decent cable modem/routers, and as there are meanwhile many cable users here, the only option would be a dual setup (2 routers...), which really doesn't fit the times anymore considering energy consumption and so on. Even so, I thought about it, to finally get rid of this issue; I spent 2 excessive months figuring it out. Also, Fritz has a wide distribution of DECT smart home accessories (like here too...). So basically, yes, either I'm locked to the Fritz "appliances" or I drop everything, just keep a Fritz as modem, and get another router, hoping it can handle ipvlan properly; though I was surprised that Unifi also has firewall issues in some combinations. And when I think about it, a lot of hardware somehow ties networking to MAC addresses and not just IPs, so personally I wouldn't even know what to buy now.
      The 2nd NIC solution has been posted a few times already, sadly with no diags AFAIK. If it weren't such a mess to reconfigure everything I could test it again, but for now, after relying on 6.11 for a long time, I have switched almost all Dockers to bridge networking, and for the ones where that is impossible I set up separate LXC containers for my usage. When I find some spare time I'll report the 2nd NIC results here; sadly I don't have a 2-NIC setup in my small test server, otherwise it would already be posted.
    3 points
  13. Nothing changed regarding that; the value just shows up as a percentage instead of an absolute value, but as posted above it will go back to an absolute value once v6.12.4 is released.
    3 points
  14. I agree, I asked for this to be changed to an absolute value to avoid user confusion and for next release it should be back to that.
    3 points
  15. So, you're saying you will not go into the go file and add the short script, which would take you a couple of seconds? If you can't do that, then okay. Not to be rude or anything, but we can't help you if you don't add it in. All of us were able to get it working, and it's great! I myself suggest adding it in, but again, it's up to you whether you want to or not.
    3 points
  16. So I stumbled over this forum. I want to point out that I DON'T HAVE UNRAID, but my kernel panic errors directed me to this site. I'm running OMV / Debian, and my server started these CPU errors, and I found out (at least I think) that this was related to network / docker / bridge. Right now I'm in a testing phase for a possible solution. Here's what I did: I simply added a forward/accept rule for the br0 connection:
      iptables -A FORWARD -p all -i br0 -j ACCEPT
      I'm not sure if this will help you guys, or if you have already tried this.
      EDIT: It seems this didn't work. I'm at work right now and cannot remote in to my PC, so it's probably in "kernel panic".
    3 points
  17. Indeed, NTP is broken in Unraid 6.12.0; it will be fixed in the next release. Thanks
    3 points
  18. NFS cannot deal with a symlink. You can use the actual storage location instead. For example, if you have Syslog as a cache-only share, use the /mnt/cache/Syslog reference rather than /mnt/user/Syslog. This also avoids the shfs overhead that comes with /mnt/user/Syslog. (See the example after this item.)
    3 points
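     For instance, a client-side mount against the direct pool path (the server name 'tower' and the mount point are placeholders):
     mkdir -p /mnt/syslog
     mount -t nfs tower:/mnt/cache/Syslog /mnt/syslog   # bypasses the /mnt/user shfs layer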
  19. The current internal release (rc6.1) allows a /mnt -> /mnt mapping to work and have access to exclusive shares via that, no problem. (Note that any share that isn't exclusive has no problem via /mnt or /mnt/user.) /mnt/user as a host path does not allow access to exclusive shares.
      Work is underway to add a setting in Global Share Settings to enable/disable exclusive shares. Assuming that setting gets implemented, FCP would at the same time gain a new test that looks at the paths of installed containers and then either warns people if a host path isn't compatible with exclusive shares, or shows a message saying that it's OK to enable them.
    3 points
  20. Hello, I have an i5-11500 and have been experiencing the same issue since 6.12-rc3. Yesterday I added the flag i915.enable_dc=0 and now it's OK (see the sketch after this item for where the flag goes). Thank you
    3 points
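     For anyone wondering where such flags go: on Unraid, kernel parameters live on the append line of /boot/syslinux/syslinux.cfg (also editable from the Flash device page in the webGUI). A sketch that patches every boot entry; edit by hand instead if you only want the default entry changed:
     cp /boot/syslinux/syslinux.cfg /boot/syslinux/syslinux.cfg.bak
     sed -i 's/^\([[:space:]]*append \)/\1i915.enable_dc=0 /' /boot/syslinux/syslinux.cfg
     grep -n 'append' /boot/syslinux/syslinux.cfg    # verify, then reboot to apply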
  21. Here is a link to the libtorrent bug tracker for this issue: https://github.com/arvidn/libtorrent/issues/6952
    3 points
  22. Right, that is why we are calling this a "conceptual change". Your share.cfg files are not modified during the upgrade. This is entirely a front-end change, a different way of displaying the same configuration information. But your ideas are spot-on with regards to what becomes possible in the future. It would have been horribly confusing to try and extend the old UI to handle multiple arrays and cache pool to data pool transfers, etc.
    3 points
  23. Published 6.12.0-rc4.1, which fixes a dumb coding error in checking for bind-mounts when a share name contains a space.
    3 points
  24. Let's not get ahead of ourselves : ) Today the unRAID array is "The Array". In the future it will be another type of pool along with zfs, btrfs, etc. But let's not confuse this thread with future talk. I'd suggest keeping an eye on the Uncast for discussions of what is coming: https://forums.unraid.net/forum/38-unraid-blog-and-uncast-show-discussion/
    3 points
  25. What impact does this change have on existing unraid arrays? Now, "i am" confused...
    3 points
  26. I can do nothing but 100% agree with this. In this world of ubiquitous free/open source software, it's easy to forget that we are actually paying customers for this product. As paying customers, it only makes sense to make certain demands and set some expectations on what we're purchasing. Make no mistake, I like UnRAID a lot, but this whole thing with letting the customers figure out solutions to bugs in the product they paid for... It's just not right. A NAS software product which doesn't support Time Machine backups out of the box? Again, it's just not right.
    3 points
  27. Actually, the GitHub repo linked (https://github.com/memtest86plus/memtest86plus) is GPL-2.0 and could be included (@limetech). It would solve at least one major issue with the current version, where it won't work with UEFI booting...
    3 points
  28. I can't get TM to fail under 6.12.7. Been trying hard for the last month with no success
    2 points
  29. I tried time machine like a year ago and couldn't get it to work - excited to see if this allows it to function again!
    2 points
  30. Have you tried removing the docker folder plugin that is not compatible with this release of Unraid?
    2 points
  31. This is a UD mount that doesn't specify the NFS version. Unraid is now set up to scan for the best protocol supported by the remote server.
      Oct 13 10:58:18 BackupServer unassigned.devices: Mounting Remote Share 'MEDIASERVER:/mnt/user/Public'...
      Oct 13 10:58:18 BackupServer unassigned.devices: Mount NFS command: /sbin/mount -t 'nfs' -o rw,soft,relatime,retrans=4,timeo=300 'MEDIASERVER:/mnt/user/Public' '/mnt/remotes/MEDIASERVER_Public'
      And the resultant mount (see the check after this item):
      MEDIASERVER:/mnt/user/Public on /mnt/remotes/MEDIASERVER_Public type nfs4 (rw,noatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=300,retrans=4,sec=sys,clientaddr=192.168.1.4,local_lock=none,addr=192.168.1.3)
    2 points
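      To see what actually got negotiated on any NFS mount (a standard util-linux command; the path is from the example above):
      findmnt /mnt/remotes/MEDIASERVER_Public    # shows fstype nfs4 and the effective vers= option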
  32. We've got a new RC with some nice fixes for the weekend
    2 points
  33. Met the same problem. Thanks to bonienl for pointing out that it works on 6.12.2. Temporary workaround:
      1. Stop the Docker service in Settings/Docker Settings.
      2. Replace /usr/local/etc/rc.d/rc.docker line 282 (from Unraid 6.12.3):
         IPV6=$(ip -br -6 addr show $NETWORK scope global|awk '{print $3}')
         with (from Unraid 6.12.2):
         IPV6=$(ip -6 addr show $NETWORK mngtmpaddr|awk '/^ +inet6 /{print $2;exit}')
         [[ -z $IPV6 ]] && IPV6=$(ip -6 addr show $NETWORK scope global permanent|awk '/^ +inet6 /{print $2;exit}')
      3. Start the Docker service.
      Not sure, but the edit might not persist after a reboot. (See the sketch after this item.)
    2 points
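      One way to apply that swap safely from the console (a sketch; the file lives in RAM, so as noted the edit will not survive a reboot):
      f=/usr/local/etc/rc.d/rc.docker
      cp "$f" "$f.bak"                    # keep the original 6.12.3 script
      grep -n 'scope global' "$f"         # confirm which line to change (282 here)
      nano "$f"                           # swap in the two 6.12.2 lines quoted above
      bash -n "$f" && echo "syntax OK"    # sanity-check before restarting Docker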
  34. @ljm42 has been trying to help you so that last statement of yours is ridiculous. If you've rolled back, there's not much we can do to help you without diagnostics after the issues occur. Furthermore, when you bounce around on multiple problems in one thread, it is difficult to understand what we are trying to solve. It seems like you are frustrated and we are sorry you are having issues but also please review our Community Guidelines as your general tone and some of your comments are a bit rude. I am updating this report to retest as there is nothing we can do since you rolled back and we do not have diagnostics to inspect.
    2 points
  35. Here are my diagnostics: tower-diagnostics-20230622-1751.zip
    2 points
  36. Unraid 6.12.1 is out. I'll update; maybe it solves this issue (fingers crossed).
    2 points
  38. Try to find the PID of nginx with:
      netstat -nlp | grep PORT_WEBGUI
      You'll get this:
      tcp   0  0 127.0.0.1:8080  0.0.0.0:*  LISTEN  3832/nginx: master
      tcp   0  0 IP_VPN:8080     0.0.0.0:*  LISTEN  3832/nginx: master
      tcp   0  0 IP_SERVER:8080  0.0.0.0:*  LISTEN  3832/nginx: master
      tcp6  0  0 IPV6:8080       :::*       LISTEN  3832/nginx: master
      tcp6  0  0 ::1:8080        :::*       LISTEN  3832/nginx: master
      In my case the PID is 3832. Then kill it with:
      kill -9 3832
      and restart nginx by using:
      /etc/rc.d/rc.nginx start
      With this I can get the WebGUI back for a moment. (A condensed version follows this item.)
    2 points
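      The same steps condensed into a sketch (assumes a single nginx master and uses 8080 as the example port):
      # Extract the master PID from the netstat output, kill it, restart nginx
      pid=$(netstat -nlp | awk '/:8080 / && /nginx: master/ {split($7,a,"/"); print a[1]; exit}')
      kill -9 "$pid" && /etc/rc.d/rc.nginx start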
  39. You should not do that. It gives access to everything on the system, including unwanted shares and content. Instead, use a more specific path, like /mnt/user/myfolder --> /unraid. (See the example after this item.)
    2 points
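      As an illustration, a container mapping only the needed folder (the image name 'some/image' and container name are placeholders):
      docker run -d --name example -v /mnt/user/myfolder:/unraid some/image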
  40. I created the plugin "Dynamix Factory Reset" to start the system as new; it preserves the license key and, optionally, the array and pools. Back up your configuration first if you want to restore it afterwards.
    2 points
  41. I had no problems. Everything worked as before. No config changes, only a change in how the GUI presents the information.
    2 points
  42. RC4 blog is live with screenshots of the new Shares changes.
    2 points
  43. I went looking for rc3 in the change log as well... It was appreciated last cycle
    2 points