Popular Content

Showing content with the highest reputation since 12/19/19 in all areas

  1. 3 points
    Overview: Support for Docker image unraid-api in the electricbrain repo. Docker Hub: https://hub.docker.com/r/electricbrainuk/unraidapi Github/ Docs: https://github.com/ElectricBrainUK/UnraidAPI Discord: https://discord.gg/Qa3Bjr9 If you feel like supporting our work: https://www.paypal.me/electricbrain Thanks a lot guys, I hope this is useful!
  2. 2 points
    Just my 2 cents on the controller front. If using iOS, iPeng is definitely worth it. For Android I think the options are pretty miserable, TBH, and nowhere near as polished as iPeng. I used Squeezer for a while and was reasonably impressed. Then I bought Orange Squeeze and IMO it is not that great... maybe a little better than Squeezer, but not worth the asking price IMO. However, my tip is to install the Material theme/skin for LMS and then point your Chrome browser at that. It has a wonderful responsive design, so it works equally well on a desktop as on a mobile device. You can get Chrome on Android to create a nice little icon and place it on your homescreen, which will then open a chrome-less (app-like) experience. Very nice indeed 🙂
  3. 2 points
    One option - if you can get to the command line, you could type something like this: /usr/local/emhttp/webGui/scripts/notify -e "Your Unraid server is not secured" -s "I found your Unraid on the Internet without a password" -d "You need to secure this before someone hacks you" -i "alert" That will give them a notification on the webgui and send them an email (if they have that configured)
  4. 2 points
  5. 2 points
    There are tons of posts pointing at Windows 10 and SMB as the root cause of the inability to connect to unRaid, all of which were fruitless, so I'm recording this easy fix for my future self. If you cannot access your unRaid shares via DNS name ( \\tower ) and/or via IP address ( \\192.168.x.y ), then try this. These steps do NOT require you to enable SMB 1.0, which is insecure. Directions:
    1. Press the Windows key + R shortcut to open the Run command window.
    2. Type in gpedit.msc and press OK.
    3. Select Computer Configuration -> Administrative Templates -> Network -> Lanman Workstation, double-click Enable insecure guest logons, and set it to Enabled.
    4. Now attempt to access \\tower.
    Related Errors: Windows cannot access \\tower; Windows cannot access \\; You can't access this shared folder because your organization's security policies block unauthenticated guest access. These policies help protect your PC from unsafe or malicious devices on the network.
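    For what it's worth, the same policy can also be set directly in the registry — useful on Windows editions that lack gpedit.msc. A sketch, assuming AllowInsecureGuestAuth is the value backing that Group Policy setting (it is on current Windows 10 builds, but verify for your build):

```shell
REM Run from an elevated Command Prompt; this is the registry value behind
REM the "Enable insecure guest logons" Group Policy setting
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v AllowInsecureGuestAuth /t REG_DWORD /d 1 /f
```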
  6. 1 point
    Schedules Direct is very nice. I have found zap2xml just as easy, and free!
  7. 1 point
    Tools -> Diagnostics -> attach zip file.
  8. 1 point
    Upgrade to v6.8.1 and the errors filling the log will go away, it's a known issue with some earlier releases and VMs/dockers with custom IPs.
  9. 1 point
    tldr: If you require hardware support offered by the Linux 5.x kernel, then I suggest you remain on 6.8.0-rc7 and wait until 6.9.0-rc1 is published before upgrading. The "unexpected GSO type" bug is looking to be a show stopper for Unraid 6.8 using the Linux 5.3 or 5.4 kernel. We can get it to happen easily and quickly simply by having any VM running and then also starting a docker App where the Network Type has been set to "Custom : br0" (in my case) with a static IP set for the container, or by toggling between setting a static IP and letting docker dhcp assign one. There are probably a lot of users waiting for a stable release who will see this issue, and therefore I don't think we can publish with this bug. The bug does not occur with any 4.19.x or 4.20.x Linux kernel, but does occur with all kernels starting with 5.0. This implies the bug was introduced with some code change in the initial 5.0 kernel. The problem is that we are not certain where to report the bug; it could be a kernel issue or a docker issue. Of course, it could also be something we are doing wrong, since this issue is not reported in any other distro AFAIK. We are continuing the investigation and putting together a report to submit either to the kernel mailing list or as a docker issue. In any case, an actual fix will probably take quite a bit more time, especially since we are heading into the holidays. Therefore this is what we plan to do: For 6.8: revert the kernel to 4.19.87 and publish 6.8.0-rc8. Those currently running stable (6.7.2) will see no loss of functionality because that release is also on the 4.19 kernel. Hopefully this will be the last or next-to-last -rc, and then we can publish 6.8 stable. Note: we cannot revert to the 4.20 kernel because that kernel is EOL and has not had any updates in months. For 6.9: as soon as 6.8 stable is published, we'll release 6.9.0-rc1 on the next release branch. This will be exactly the same as 6.8 except that we'll update to the latest 5.4 kernel (and the "unexpected GSO type" bug will be back). We will use the next branch to try to solve this bug. New features, such as multiple pools, will be integrated into the 6.10 release, which is current work-in-progress. We'll wait a day or two to publish 6.8-rc8 with the reverted kernel in hopes that those affected will see this post first.
  10. 1 point
    It's also worth noting that FreeBSD bug reports really haven't gotten any official response. The users that made the bug reports are the ones making the progress. Would anyone on your team have connections to get some official eyes on the bug?
  11. 1 point
    TLSv1 is being obsoleted this spring, and TLSv1 and TLSv1.1 should be removed from nginx.conf: ssl_protocols TLSv1 TLSv1.1 TLSv1.2; Major browsers are committed to supporting TLSv1.2, so there should be minimal issues.
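    The suggested change would leave the directive reading like this (a sketch; the location of nginx.conf varies by install):

```nginx
# Drop TLSv1 and TLSv1.1; keep only TLSv1.2
# (add TLSv1.3 as well if your nginx/OpenSSL build supports it)
ssl_protocols TLSv1.2;
```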
  12. 1 point
    Not yet, we'll need to wait. Taking the opportunity to move this to stable reports since it's still an issue.
  13. 1 point
    Just weird that /mnt/user0 was missing; if all is working well now, it doesn't matter.
  14. 1 point
    "vers=v1.0" in the mount command (on Linux, i don't know how it handled in windows)
  15. 1 point
    If you have a good and current backup of your flash drive, you can always get everything back by preparing a new install to the same or different flash drive, and copy the config folder from your backup. Everything about your configuration is in that config folder.
  16. 1 point
    Others have mentioned how Unraid parity is superior to RAID-1 (to be exact, RAID-1E, but that's academic). What I'd like to point out is that you are probably confusing backup, RAID mirror, and Unraid parity. A backup is a complete, independent and redundant copy of your original data. Complete = you don't need anything else to recover the data other than the copy itself. Independent = operations not performed directly on the copy do not affect the copy. Redundant = the copy is only accessed if (a) data needs to be recovered from the copy or (b) the copy needs to be updated. A RAID-1 (i.e. mirror) is NOT a backup, since it fails the independence and redundancy tests. When something changes the data, the mirror copy is automatically and immediately changed too, and both mirrors are equally likely to be accessed in regular use. (Just think, for example: if you accidentally delete your wedding photo, can you recover it with a RAID-1 mirror? The answer is no, because the mirrored copy also deletes the photo as soon as the original is deleted.) A (Unraid) parity is NOT a backup because it fails the completeness test, i.e. recovering data requires data from other disks to reconstruct the missing piece.
  17. 1 point
    Did you try to start it as mentioned in the github repo? docker exec -u nobody -t binhex-minecraftserver /usr/bin/minecraftd console For me that works, but only one time. I'm somehow not able to detach from the screen session and have to close the window, which leaves me with no way to attach again. I have to restart the container to regain that ability. So I switched to not using the console command; instead I'm using the command command, which works way better for me. docker exec -u nobody -t binhex-minecraftserver /usr/bin/minecraftd command <some minecraft command here without leading slash> For all commands you can visit the Arch Wiki.
  18. 1 point
    Most likely a controller crash or power issue caused errors on multiple disks. When this happens, Unraid disables as many disks as there are parity devices; which disks get disabled is the luck of the draw. Parity1 is likely still valid; parity2 will be a little different because of the emulated disk repair. If it were me, I would re-enable parity and the other disk, then rebuild disk11 to a new disk, but this assumes all other disks are OK and parity really is in sync.
  19. 1 point
    Thank you both. I've destroyed all snapshots prior to setting it up properly. After the creation I did a reboot. It still misses some of the 3-hourlies... I'd really like that. Anyway, I'll start from scratch again. Thanks for the input.
  20. 1 point
    Container path should be `/music/` Host path should be `/mnt/disk2/Music` In jellyfin, you'd then select `/music`
  21. 1 point
  22. 1 point
    Those messages mean that the first 3 trim operations worked, and whatever is mounted as /tmp/overlay (which is not a location that exists on a default Unraid install) could not be trimmed. Having said that, the figures for /etc/libvirt look a bit strange, as the libvirt.img file mounted at that location is normally only 1GB in size. Is yours different for some reason? Even the figure for the docker.img file is more than I would normally expect, unless you increased the size above the default of 20GB for some reason.
  23. 1 point
    What makes you think it should be going much faster? That is not an atypical speed when copying to a drive in the parity-protected array. You might find this section of the Unraid online documentation to be relevant.
  24. 1 point
    Did you modify the GPU Bios with a HEX Editor?
  25. 1 point
    Wow. I seemed to also be missing --vfs-cache-mode writes from my mount. I have just seen a drastic improvement in playback with that one change. Also, after I fixed the docker not running in the script, everything is working perfectly. So happy this finally got done!
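    For context, the flag mentioned above goes on the rclone mount command line. A minimal sketch — the remote name and mount point here are hypothetical:

```shell
# Cache file writes locally before uploading; helps playback and apps
# that need seekable writes (remote name and mount point are placeholders)
rclone mount gdrive: /mnt/disks/gdrive --vfs-cache-mode writes --allow-other
```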
  26. 1 point
    There is indeed a network conflict, because eth1 is configured to be a member of both bond0 and bond1. Easiest, delete the file "network.cfg" in the /config folder on your USB device and reboot your system. This will let it start with default network settings.
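    If you have SSH or local console access, that comes down to something like the following (assuming the Unraid USB device is mounted at /boot, as is standard):

```shell
# Remove the saved network config so Unraid regenerates defaults on boot
rm /boot/config/network.cfg
reboot
```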
  27. 1 point
    I added options to change text and link color.
  28. 1 point
    It seems to be a problem with this motherboard. See http://forum.asrock.com/forum_posts.asp?TID=10788&title=z390-taichi-ultimate-nics-same-mac-address
  29. 1 point
    That sounds right. As your USB 3.1 controller is isolated in its own IOMMU group, that should work for you.
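    To confirm how your controllers are grouped, a commonly used sketch that lists each IOMMU group and its devices (assumes IOMMU is enabled in the BIOS and kernel; prints nothing if it isn't):

```shell
# Print every IOMMU group with the PCI devices it contains
for g in /sys/kernel/iommu_groups/*; do
  [ -e "$g" ] || continue   # skip silently if IOMMU is not enabled
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo "    $(lspci -nns "${d##*/}")"
  done
done
```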
  30. 1 point
    This has been updated to a plugin so it survives reboots. Please see the new thread here:
  31. 1 point
    There is no 'free' version of Unraid anymore, so you will not be able to upgrade unless you are prepared to buy a key. Are you prepared for that? You should always have a backup of any critical data, preferably with at least one off-site copy. Your comments imply you have no backups, which is very risky, as you can potentially lose data for all sorts of reasons.
  32. 1 point
    You can disable it, but then you may not see the server shares show up under 'Network' in Windows 10 clients; you can still browse them manually by server name or IP.
  33. 1 point
    My bad... Serves me right for giving advice off the top of my head...
  34. 1 point
    Limetech have at least committed to making the source for the part of Unraid that is currently closed source freely available if they ever stop supporting Unraid.
  35. 1 point
    Depending on the current formatting of the drives you can probably mount them using the Unassigned Devices plug-in and transfer the data one drive at a time.
  36. 1 point
    Hello! Thanks for this great plugin. I just moved away from FreeNAS to unRAID, and I really like ZFS. I did run into some problems. I've set up an array (just because I needed one) with 2x 32GB SSDs, one of which is for parity. Then I followed the guide and created the following:
    NAME STATE READ WRITE CKSUM
    HDD ONLINE 0 0 0
      raidz2-0 ONLINE 0 0 0
        sdj ONLINE 0 0 0
        sdp ONLINE 0 0 0
        sdn ONLINE 0 0 0
        sdl ONLINE 0 0 0
        sdk ONLINE 0 0 0
        sdi ONLINE 0 0 0
    logs
      sdg ONLINE 0 0 0
    pool: SSD
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    SSD ONLINE 0 0 0
      mirror-0 ONLINE 0 0 0
        sdc ONLINE 0 0 0
        sdm ONLINE 0 0 0
      mirror-1 ONLINE 0 0 0
        sdb ONLINE 0 0 0
        sdd ONLINE 0 0 0
    With these datasets:
    root@unRAID:~# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    HDD 4.39M 10.6T 224K /mnt/HDD
    HDD/Backup 1.36M 10.6T 208K /mnt/HDD/Backup
    HDD/Backup/Desktop 192K 10.6T 192K /mnt/HDD/Backup/Desktop
    HDD/Backup/RPI 991K 10.6T 224K /mnt/HDD/Backup/RPI
    HDD/Backup/RPI/AlarmPanel 192K 10.6T 192K /mnt/HDD/Backup/RPI/AlarmPanel
    HDD/Backup/RPI/Garden 192K 10.6T 192K /mnt/HDD/Backup/RPI/Garden
    HDD/Backup/RPI/Kitchen 192K 10.6T 192K /mnt/HDD/Backup/RPI/Kitchen
    HDD/Backup/RPI/OctoPrint 192K 10.6T 192K /mnt/HDD/Backup/RPI/OctoPrint
    HDD/Film 192K 10.6T 192K /mnt/HDD/Film
    HDD/Foto 192K 10.6T 192K /mnt/HDD/Foto
    HDD/Nextcloud 192K 10.6T 192K /mnt/HDD/Nextcloud
    HDD/Samba 192K 10.6T 192K /mnt/HDD/Samba
    HDD/Serie 192K 10.6T 192K /mnt/HDD/Serie
    HDD/Software 192K 10.6T 192K /mnt/HDD/Software
    SSD 642K 430G 25K /mnt/SSD
    SSD/Docker 221K 430G 29K /mnt/SSD/Docker
    SSD/Docker/Jackett 24K 430G 24K /mnt/SSD/Docker/Jackett
    SSD/Docker/Nextcloud 24K 430G 24K /mnt/SSD/Docker/Nextcloud
    SSD/Docker/Organizr 24K 430G 24K /mnt/SSD/Docker/Organizr
    SSD/Docker/Plex 24K 430G 24K /mnt/SSD/Docker/Plex
    SSD/Docker/Radarr 24K 430G 24K /mnt/SSD/Docker/Radarr
    SSD/Docker/Sabnzbd 24K 430G 24K /mnt/SSD/Docker/Sabnzbd
    SSD/Docker/Sonarr 24K 430G 24K /mnt/SSD/Docker/Sonarr
    SSD/Docker/appdata 24K 430G 24K /mnt/SSD/Docker/appdata
    SSD/VMs 123K 430G 27K /mnt/SSD/VMs
    SSD/VMs/HomeAssistant 24K 430G 24K /mnt/SSD/VMs/HomeAssistant
    SSD/VMs/Libvert 24K 430G 24K /mnt/SSD/VMs/Libvert
    SSD/VMs/Ubuntu 24K 430G 24K /mnt/SSD/VMs/Ubuntu
    SSD/VMs/Windows 24K 430G 24K /mnt/SSD/VMs/Windows
    Now when I disable Docker and try to set the corresponding paths, I get this: How to solve this? Kind regards. Edit: it just needed a trailing slash after /appdata/. Now I can't disable the VM service from the VM settings tab, and the default location can't be edited to the ZFS mount point /mnt/SSD/VMs (even with a trailing slash); I just can't press Apply (same for disabling the VM service). Please advise. Second edit: I needed to stop the array first; then everything is editable. Works as advertised so far. Thanks again. Solved!
  37. 1 point
    In their infinite wisdom, the creators of this app made the password field the MD5 hash of your actual password. See my earlier post here for details on how to set it.
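    On Linux/unRAID you can generate that hash in the shell; a quick sketch (the example password is obviously a placeholder):

```shell
# MD5-hash a password string for the app's password field
# (use printf, not echo, so no trailing newline gets hashed)
printf '%s' 'password' | md5sum | awk '{print $1}'
# prints 5f4dcc3b5aa765d61d8327deb882cf99
```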
  38. 1 point
  39. 1 point
    I spent a lot of time trying to find similar situations on Google as well, but wasn't able to find any. The problem isn't really btrfs on top of dm-crypt, as having docker writing to the encrypted cache only produces 27 GB of (daily) writes, in comparison to having docker mounted on the loop device, which resulted in 400 GB of daily writes with a lot less activity compared to now (I've increased the number of docker containers in the meantime). @hotio's problem matches my case exactly though. Unfortunately I'm lacking the knowledge at the moment to dive in at the level of looking into/fixing the amount of page flushes etc. (like @limetech mentioned). I was able to pinpoint it to the combination of docker, the loop device and (most likely) dm-crypt. Hence this bug report. Appreciate the efforts! 🍻
  40. 1 point
    Try the solution from here: https://forums.unraid.net/topic/34889-dynamix-v6-plugins/?do=findComment&comment=522886 Unraid doesn't officially support S3 sleep, so it seems it won't put disks into standby again after wake-up because it thinks they are already in standby.
  41. 1 point
    Instructions For Pi-Hole with WireGuard: For those of you who don't have a homelab exotic enough to have VLANs and who also don't have a spare NIC lying around, I have come up with a solution to make the Docker Pi-Hole container continue to function if you are using WireGuard. Here are the steps I used to get a functional Pi-hole DNS on my unRAID VM with WireGuard:
    1) Since we're going to change our Pi-hole to a host network, we first need to change the unRAID server's management ports under Settings > Management Access so there isn't a conflict:
    2) Take your Pi-hole container and edit it. Change the network type to "Host". This allows us to avoid the problems inherent in trying to have two bridge networks talk to each other in Docker (thus removing our need to use a VLAN or set up a separate interface). You'll also want to make sure ServerIP is set to your unRAID server's IP address and that DNSMASQ_LISTENING is set to "single" (we don't want Pi-hole to take over dnsmasq):
    3) We'll need to do some minor container surgery. Unfortunately the Docker container lacks sufficient control to handle this through parameters. For this step, I will assume you have the following volume mapping; modify the following steps as needed:
    4) Launch a terminal in unRAID and run the following command to cd into the above directory: cd /mnt/cache/appdata/pihole/dnsmasq.d/
    5) We're going to create an additional dnsmasq config in this directory: nano 99-my-settings.conf
    6) Inside this dnsmasq configuration, add the following: bind-interfaces listen-address=<your unRAID server IP> where the listen-address is the IP address of your unRAID server. The reason this is necessary is that without it, we end up with a race condition depending on whether the Docker container or libvirt starts first. If the Docker container starts first (as happens when you set the container to autostart), libvirt seems to be unable to create a dnsmasq, which could cause problems for those of you with VMs. 
If libvirt starts first, you run into a situation where you get the dreaded: "dnsmasq: failed to create listening socket for port 53: Address already in use". This is because without the above configuration, the dnsmasq created by Pi-hole attempts to listen on all addresses. By the way, this should also fix that error for those of you running Pi-hole normally (I've seen this error a few times in the forum and I can't help but wonder if this was the reason we went with the ipvlan set up in the first place). Now, just restart the container. I tested this and it should not cause any interference with the dnsmasq triggered by libvirt.
  42. 1 point
    Any LSI HBA with a SAS2008/2308/3008/3408 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc., and clones like the Dell H200/H310 and IBM M1015, though these latter ones need to be crossflashed.
  43. 1 point
    Good job dgw! I knew I ran across some blurb long ago about how to set up Radarr and Sonarr so that the IP did not have to be respecified for the download client(s) each and every time the (host) system is rebooted... gets REAL tiresome after a while. (AND, I don't quite understand why so many people didn't understand what you were talking about.) Yes, I saw all of SpaceInvaderOne's videos (when I was new to unRaid) – he does a great job of providing pertinent info in a 'condensed' format in order to shortcut the 'learning curve' (to get up and running ASAP); but no, I don't recall that he went over this one. (Or, believe me, I would have done it. Maybe an updated video is in order.) So, in less than a minute, I applied your fix to see. Yep, it works! Just change the Network Type from "Bridge" (default) to "Host" in the docker's template, and then in Settings for the Download Client(s) replace the hard-coded IP (that usually changes with every reboot of the system) with "localhost". Combine that with the correct port number (in the next field) and VOILA! Hit "Test" and communication is "successful" where it didn't work before (unless the proper, current IP for the unRaid server was input). (Spelled it all out here so that everyone can follow what it takes to NEVER HAVE TO CHANGE THE IP ADDRESS AGAIN!) Thanks for providing the answer – this is what the 'community' is all about!
  44. 1 point
    People, we have a genius here, seriously. It works! Moving the built-in audio to bus 0x00 and slot 0x02 finally makes AppleALC work. So, summarizing:
    - AppleALC+Lilu kexts in the CLOVER kexts Other folder
    - built-in audio at bus 0x00 and slot 0x0y (where y is a number different from 0)
    - In the clover config.plist: Devices --> Audio: Inject=No
    - In the clover config.plist: Devices --> Audio: check ResetHDA
    - In the clover config.plist: Devices --> Properties --> Devices: fill in with the proper address (use gfxutil with the command gfxutil-1.79b-RELEASE/gfxutil -f HDEF)
    - In the clover config.plist: Devices --> Properties --> Add property key: layout-id, Property value: x (where x is a number reflecting the audio layout; see here for layouts for supported codecs: https://github.com/acidanthera/applealc/wiki/supported-codecs ) (my working layout is 7), Value type: NUMBER
    - (Optional) In the clover config.plist: Boot --> add boot arguments -liludbg and -alcdbg: this will work if you use the DEBUG kexts; when your system boots you can give this command in terminal to check for Lilu/AppleALC: log show --predicate 'process == "kernel" AND (eventMessage CONTAINS "AppleALC" OR eventMessage CONTAINS "Lilu")' --style syslog --source
    Unfortunately I cannot check the HDMI audio of my GPU as I don't have an HDMI cable, but from the attached log it seems OK (HDAU). 
Relevant parts of the xml for the vm: Passed through GPU and audio of the GPU: <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x83' slot='0x00' function='0x0'/> </source> <alias name='hostdev0'/> <rom file='/mnt/user/GTXTitanBlack.dump'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x83' slot='0x00' function='0x1'/> </source> <alias name='hostdev1'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/> </hostdev> As pointed out by Leoyzen, gpu and gpu audio must be in same bus (in this case 0x03) and different function (0x0 for gpu, 0x1 for gpu audio). Built-in audio (passed through): <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/> </source> <alias name='hostdev2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </hostdev> Built-in audio in bus 0x00. Siri is working ok. Clover settings for built-in audio I'm saving the VoodooHDA as a second option for the audio and use AppleALC. @Leoyzen May I ask you, what was the input to make me change the bus of the built-in audio? lilu-AppleALC-log.txt
  45. 1 point
    Can we have the option to join a second network? I have containers that rely on a mariadb container. I put that container on a network called "backend_services". I have a frontend network for the app that is also used by traefik called "pub_proxy". I need to be able to join both. My current workaround is to run a portainer container and join the networks from there, only it's not persistent.
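    For reference, Docker's own CLI can attach a running container to a second network. A sketch using the network names from the post (the container name is a placeholder, and — like the Portainer workaround — this does not persist if the container is re-created):

```shell
# Attach an already-running app container to the backend network as well
docker network connect backend_services my-app-container

# List the networks the container is now attached to
docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' my-app-container
```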
  46. 1 point
    Strictly speaking, step 5 is normally not necessary, as KVM can handle .vmdk files directly. To do so, you need to enter the path to the .vmdk file directly into the template, as the unRAID GUI does not offer such files automatically.
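    In libvirt XML terms, the disk definition would look something like this sketch (the path and target device are hypothetical; the key part is the driver type):

```xml
<disk type='file' device='disk'>
  <!-- qemu can read VMware's vmdk format directly -->
  <driver name='qemu' type='vmdk'/>
  <source file='/mnt/user/domains/MyVM/vdisk1.vmdk'/>
  <target dev='hdc' bus='sata'/>
</disk>
```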
  47. 1 point
    To turn on 'Turbo write': Settings >>> Disk Settings, and set 'Tunable (md_write_method):' to "reconstruct write". However, be aware that writing a lot of small files to the array is always much slower (because of file creation overhead and write-head movement) than for large files. If you want max write speed from a user perspective, then use an SSD cache drive.
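    For scripting, the same tunable can reportedly be flipped from the command line via Unraid's mdcmd — this is an assumption on my part (the setting name matches the GUI tunable above), so verify it on your release before relying on it:

```shell
# ASSUMPTION: mdcmd usage may differ between Unraid releases
# 1 = reconstruct write ("turbo write"), 0 = read/modify/write (default)
/usr/local/sbin/mdcmd set md_write_method 1
```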
  48. 1 point
    For anyone trying to set up zfs-auto-snapshot to use Previous Versions on Windows clients: I placed the zfs-auto-snapshot.sh file from https://github.com/zfsonlinux/zfs-auto-snapshot/blob/master/src/zfs-auto-snapshot.sh in /boot/scripts/zfs-auto-snapshot.sh and made executable with chmod +x zfs-auto-snapshot.sh I found that no matter which way I set the 'localtime' setting in smb.conf, the snapshots were not adjusting to local time and were shown in UTC time. To fix this, I removed the --utc parameter on line 537 of zfs-auto-snapshot.sh to read: DATE=$(date +%F-%H%M) I then created cron entries by creating /boot/config/plugins/custom_cron/zfs-auto-snapshot.cron with the following contents: # zfs-auto-snapshot.sh quarter hourly */15 * * * * /boot/scripts/zfs-auto-snapshot.sh -q -g --label=04 --keep=4 // # zfs-auto-snapshot.sh hourly @hourly ID=zfs-auto-snapshot-hourly /boot/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=00 --keep=24 // # zfs-auto-snapshot.sh daily @daily ID=zfs-auto-snapshot-daily /boot/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=01 --keep=31 // # zfs-auto-snapshot.sh weekly @weekly ID=zfs-auto-snapshot-weekly /boot/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=02 --keep=8 // # zfs-auto-snapshot.sh monthly @monthly ID=zfs-auto-snapshot-monthly /boot/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=03 --keep=12 // Edit: I switched the cron entries to use specific times of day, days of the week, etc. primarily due to the effect of reboots on unRAID's cron handling. I would get inconsistently spaced apart snapshots with the above cron configuration. 
# zfs-auto-snapshot.sh quarter hourly */15 * * * * /boot/config/scripts/zfs-auto-snapshot.sh -q -g --label=04 --keep=4 // # zfs-auto-snapshot.sh hourly 0 * * * * ID=zfs-auto-snapshot-hourly /boot/config/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=00 --keep=24 // # zfs-auto-snapshot.sh daily 0 0 * * * ID=zfs-auto-snapshot-daily /boot/config/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=01 --keep=31 // # zfs-auto-snapshot.sh weekly 0 0 * * 0 ID=zfs-auto-snapshot-weekly /boot/config/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=02 --keep=8 // # zfs-auto-snapshot.sh monthly 0 0 1 * * ID=zfs-auto-snapshot-monthly /boot/config/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=03 --keep=12 // Run 'update_cron' to immediately enable the custom cron entries. The labels differ from the zfs-auto-snapshot default labels for better compatibility with Samba. For the Samba shares, I placed the below in /boot/config/smb-extra.conf: [data] path = /mnt/zfs/data browseable = yes guest ok = yes writeable = yes read only = no create mask = 0775 directory mask = 0775 vfs objects = shadow_copy2 shadow: snapdir = .zfs/snapshot shadow: sort = desc shadow: format = zfs-auto-snap_%S-%Y-%m-%d-%H%M shadow: localtime = yes Run 'samba reload' to refresh your Samba config. After the first scheduled snapshot is taken, you should now be able to see the snapshots in the Previous Versions dialog on a connected Windows client. You'll need to modify this with your desired snapshot intervals, retention, and path names. This configuration is working well for me. I hope this helps anyone else out there trying to get ZFS snapshots working with shadow copy for Windows clients.
  49. 1 point
    I figured it out. I needed to specify the byte offset of where the partition begins. For anyone who might have the same question in the future, here is what I did. From the unRAID command console, display the partition information of the vdisk: fdisk -l /mnt/disks/Windows/vdisk1.img I was after the partition's Start sector and the sector size. The output will look something like this:
    Disk vdisk1.img: 20 GiB, 21474836480 bytes, 41943040 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xda00352d
    Device Boot Start End Sectors Size Id Type
    vdisk1.img1 * 2048 206847 204800 100M 7 HPFS/NTFS/exFAT
    vdisk1.img2 206848 41940991 41734144 19.9G 7 HPFS/NTFS/exFAT
    To find the offset in bytes, multiply the sector start value by the sector size. In this case, I wanted to mount vdisk1.img2: 206848 * 512 = 105906176. Final command to mount the vdisk NTFS partition as read-only: mount -r -t ntfs -o loop,offset=105906176 /mnt/disks/Windows/vdisk1.img /mnt/test
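    The offset arithmetic above can be done inline in the shell, so the number never has to be copied by hand (the sector values here are the ones from the post):

```shell
# Byte offset = partition start sector * sector size
start_sector=206848
sector_size=512
offset=$((start_sector * sector_size))
echo "$offset"   # prints 105906176

# The computed value could then be used directly, e.g.:
# mount -r -t ntfs -o loop,offset=$offset /mnt/disks/Windows/vdisk1.img /mnt/test
```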
  50. 1 point
    Update on the Adaptec 71605: I have model 71605Q, bought used from a recycle store. 16 drives on 4 SAS cables, running on unRaid 6.1.6. Flash the 71605 controller card BIOS to version 32033. I found it easiest to create a bootable USB drive and then unzip the download file onto the USB drive. Assuming only 1 (ONE) controller is installed and it is the one you want to flash, boot the PC to DOS from the USB drive, then type "afu update" (without the quotes). This will start the controller card BIOS update, then verify, then tell you when to reboot. To date, my experimenting has shown only the 32033 BIOS to work properly. One last tip: when writing the afu.exe and the BIOS file to your USB drive, make certain you use the two files from the same downloaded zip file. I found you can't simply overwrite the BIOS bin file and run it; the AFU.exe must be hard-coded to update only the bin file it was packaged with. Edit: After a successful flash and reboot, access the controller BIOS by keying Ctrl-A when the controller is booting. Edit the controller settings and change to "HBA" mode.