asopala

Everything posted by asopala

  1. That did it. Ethernet thought it was a public network and not a private one. Thanks!
  2. Hey guys, I'm having an issue with remote SMB shares. After moving and getting new IP addresses for my server and my Windows PC, I can't connect to the remote SMB shares I had set up. I originally mounted them using the PC name, and that hasn't changed, but they won't mount. Trying to make new SMB shares doesn't work either, since searching for servers only shows the Unraid server (itself). Not sure what's going on here. It's worth mentioning I can access my Unraid shares from my PC, but I can't seem to do it the other way around. Anyone have any insight? I've attached the diagnostics. Just to add: the drives in my Windows PC are set to sharing, and I had no problems with this before. alexnas-diagnostics-20230919-0958.zip
  3. Hey all, I'm in a weird situation with rsync where I can't get it to copy over the most recent version of a file and replace the outdated one. I use an audio program called Pro Tools, and all changes are saved incrementally into the .ptx file itself. The issue is that rsync just sees that test.ptx exists in both the synced folder and the original, ignoring the fact that the synced copy is an old version, the one that was copied over the first time. So the question is whether it's possible to have rsync copy over the most recent versions of files, removing the old versions and replacing them with the new file of the same name. As an example, here's a script that transfers from a local backup of my server to an offsite server (I haven't figured out how to get the two computers to communicate, so I've had to schlep a hard drive over every time I visit to sync the two):

      rsync -ah "/mnt/disks/WD_easystore_264D/AlexNAS_Backup/test/" "/mnt/user/test"
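For reference, rsync's default "quick check" compares file size and modification time, so with -a (which preserves times) a changed .ptx should get re-copied; --delete additionally removes destination files that no longer exist in the source, and --checksum forces a content comparison when timestamps can't be trusted. A throwaway sketch under those assumptions (demo paths, not the real share paths from the post):

```shell
# Build a source with a newer file and a destination holding a stale copy.
mkdir -p /tmp/rsync_demo/src /tmp/rsync_demo/dst
echo "version 2" > /tmp/rsync_demo/src/test.ptx
echo "version 1" > /tmp/rsync_demo/dst/test.ptx
touch -d "2020-01-01" /tmp/rsync_demo/dst/test.ptx  # make the stale copy older

# -a preserves mtimes, so the size+mtime quick check sees the difference and
# replaces the stale copy; --delete removes files gone from the source side.
rsync -ah --delete /tmp/rsync_demo/src/ /tmp/rsync_demo/dst/

cat /tmp/rsync_demo/dst/test.ptx
```

If both copies somehow carry identical timestamps and sizes, adding --checksum makes rsync compare contents instead.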
  4. Pretty much. That's what solved it for me. And then I manually redid my network settings in the GUI.
  5. Looks like it was the Nerdpack. For anyone else who might stumble onto this, I also had to reset the networking settings.
  6. It boots in safe mode, so my guess is it's a plugin or something. Did anything in the diagnostics say which one? Edit: I'm also seeing two errors after starting Samba: "failed to connect to the hypervisor" and "Operation not supported: Cannot use direct socket mode if no URI is set". Are either of those two relevant?
  7. Turns out it was already disabled in ident.cfg (USE_SSL="no"), so that didn't work.
  8. Ran into an issue. Suddenly I was unable to access the GUI for Unraid. I attempted to use GUI mode, but it told me it was unable to connect. Not sure what's going on. Luckily the Terminal was working, so I got diagnostics. alexnas-diagnostics-20221202-1233.zip
  9. What particular settings should I adjust? I tried looking for it, even with the search function, but there are 52 pages of stuff in this thread to sift through. Here are my settings under Settings > Docker.
  10. So the thing is, I have the docker container image pointing to a docker dataset, but it makes its own datasets that look like this. I'm trying to get rid of them.
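For anyone hitting the same thing: the stray datasets can at least be inspected and, once Docker is stopped and nothing else uses them, removed recursively. The pool/dataset name "tank/docker" is taken from the post; the child name is a hypothetical placeholder, and zfs destroy is destructive, so double-check the listing first. A sketch:

```shell
# List everything under the docker dataset, including any auto-created
# per-layer children (pool name "tank" from the post).
zfs list -r -t all tank/docker

# With the Docker service stopped, remove one auto-created child recursively.
# DESTRUCTIVE: this deletes the dataset and all data in it.
zfs destroy -r tank/docker/<child-dataset-name>
```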
  11. I redid everything and hit Write on this. Does hitting Quit discard the write? It doesn't seem like it. Does this look right?

      Disk: /dev/tank/docker
      Size: 20 GiB, 21474836480 bytes, 41943040 sectors
      Label: gpt, identifier: B6A566AE-0026-D647-9992-460B2508AA95

      Device                Start       End   Sectors Size Type
      >> /dev/tank/docker1   2048  41943006  41940959  20G Linux filesystem

      Partition UUID: 2B9DD3E0-DE51-4241-A035-ED4142AE7E0D
      Partition type: Linux filesystem (0FC63DAF-8483-4772-8E79-3D69D8477DE4)

      [ Delete ] [ Resize ] [ Quit ] [ Type ] [ Help ] [ Write ] [ Dump ]

      Write partition table to disk (this might destroy data)
  12. Hey all, I tried @gyto6's scripts for mounting the docker containers into a zvol, this one. The problem I ran into came after creating the partition, when the next line of the script failed:

      mkfs.btrfs -q /dev/tank/docker-part1
      probe of /dev/tank/docker-part1 failed, cannot detect existing filesystem.
      WARNING: cannot read superblock on /dev/tank/docker-part1, please check manually
      ERROR: use the -f option to force overwrite of /dev/tank/docker-part1

      I'm not sure what to do at this point. Anybody able to help?
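A note for anyone who finds this later: on a freshly partitioned zvol there is no existing filesystem for mkfs.btrfs to probe, so the warning is expected, and the error message's own suggestion applies. A sketch using the device path from the script above (verify the device node first, since mkfs is destructive):

```shell
# The probe failure just means no superblock was found on the new partition;
# -f tells mkfs.btrfs to create the filesystem anyway.
mkfs.btrfs -q -f /dev/tank/docker-part1
```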
  13. I think Level1Techs did a similar thing with Shadow Copy on Windows.
  14. That did it exactly, thanks! Looks like the last thing I need to do is set up rclone to my Google Drive (while I can still take advantage of unlimited storage). Anybody know the best place to mount the remote shares so that Docker containers have access to them and it syncs the contents of the entire pool? I didn't see anything along those lines in the thread, and following SpaceInvaderOne's tutorial, I'm not sure where to set the mount point for the remote shares to be mounted and unmounted on startup and shutdown. I can't use /mnt/disks/subdirectory when I'm not using the array for anything (currently just a dummy drive).
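A sketch of the mount step, with the caveat that the remote name "gdrive" and the mount point are assumptions, not from the post; /mnt/remotes is the path Unassigned Devices uses for remote shares, which sidesteps /mnt/disks entirely:

```shell
# Hypothetical rclone remote "gdrive" and mount point. --allow-other lets
# Docker containers (running as other users) see the FUSE mount; it may
# require user_allow_other to be enabled in /etc/fuse.conf.
mkdir -p /mnt/remotes/gdrive
rclone mount gdrive: /mnt/remotes/gdrive --allow-other --daemon
```

Container path mappings would then point at /mnt/remotes/gdrive, and the matching unmount belongs in a stop script so the mount is released cleanly at shutdown.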
  15. I seem to be having an issue where, when I click the Docker container I wish to view, it tries to connect me to a different tower I've never seen before, called saminthedark (apologies to anyone who has that name). Not sure what to do. It doesn't connect to any of my Docker containers either. This worked before, but the problem seems to have come up recently.
  16. Is there a guide on how to do that? I'm not sure where to start.
  17. By the way, anybody figure out how to make a successful Time Machine dataset in ZFS? I did the usual protocol of making a new dataset in the terminal and using SpaceInvaderOne's code in smb-extra.conf for Time Machine on an unassigned device (as linked a while back by @etsjessey), but I can't for the life of me get it to show up as a Time Machine destination on my Mac. Everything else shows up normally, but no dice. Here's the code I've got, right after the rootshare configuration:

      [Alex Time Machine]
      comment =
      ea support = Yes
      path = /mnt/tank/AlexTimeMachine
      browseable = yes
      guest ok = no
      valid users = asopala
      write list = asopala
      writeable = yes
      vfs objects = catia fruit streams_xattr
      fruit:time machine max size = 1000 G
      fruit:encoding = native
      fruit:locking = netatalk
      fruit:metadata = netatalk
      fruit:resource = file
      fruit:time machine = yes
      fruit:advertise_fullsync = true
      fruit:model = MacSamba
      fruit:posix_rename = yes
      fruit:zero_file_id = yes
      fruit:veto_appledouble = no
      fruit:wipe_intentionally_left_blank_rfork = yes
      fruit:delete_empty_adfiles = yes
      durable handles = yes
      kernel oplocks = no
      kernel share modes = no
      posix locking = no
      inherit acls = yes

      #unassigned_devices_start
      #Unassigned devices share includes
      include = /tmp/unassigned.devices/smb-settings.conf
      #unassigned_devices_end
  18. I think that's the way to go. I also didn't realize it's a GUI issue with the smb-extra.conf file being limited to 2kb. That makes everything easier.
  19. Would the recycle bin vfs object setting need to be applied to every SMB share in smb-extra.conf? I'm running into the 2048-character limit and have been dealing with that.
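For context, the recycle-bin behavior comes from Samba's vfs_recycle module, which is configured per share, so each share definition that should get delete protection needs its own lines. A minimal sketch (the share name and paths below are hypothetical, not from the post):

```
[MyShare]
path = /mnt/tank/myshare
vfs objects = recycle
recycle:repository = .Recycle.Bin
recycle:keeptree = yes
recycle:versions = yes
```

recycle:keeptree preserves the original directory structure inside the bin, and recycle:versions keeps renamed copies instead of overwriting when the same filename is deleted twice.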
  20. Hey all, I'm using the ZFS plugin, and I'm making my SMB shares through the Samba extra configuration, as recommended. I've noticed, though, that there's a point where I can't add any more characters for a Time Machine share I'm trying to set up without removing characters somewhere else, and I'm wondering why that is. Any particular reason it's limited to 2048 characters? How do I get around it?
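One workaround, sketched under the assumption that the character cap applies only to the GUI field and not to files Samba reads itself (the filename below is hypothetical): keep the long share definitions in a separate file on the flash drive and pull them in with a single include line, the same mechanism the Unassigned Devices block at the bottom of smb-extra.conf already uses.

```
# The only line needed in the GUI's Samba extra configuration field;
# /boot/config/smb-custom.conf holds the full share definitions.
include = /boot/config/smb-custom.conf
```

Placing the file under /boot keeps it on the flash drive, so it survives reboots like the rest of the Unraid configuration.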
  21. Hey all, anybody know how to use the Recycle Bin plugin with ZFS? I have the datasets shared via SMB in smb-extra.conf, and I was wondering if it's possible to have it work with the ZFS array for the sake of delete protection.
  22. Hey all, anyone able to help with this issue? I tried wiping the drive and starting from scratch, but the kernel panics have happened again. I couldn't get a proper syslog, but I could get this image from my computer:
  23. Hey all, I'm getting kernel panics when trying to transfer large volumes of data via rsync from my backup to my ZFS Unraid server. I'm amazed at the sheer speed of the transfer from spinning rust, but it crashes, and I don't know what's going on. I have diagnostic information if that helps; if there's something else you guys need, let me know. Since the issue is with ZFS, I figured this is the place to ask for help. Anybody know what to do? I saw that someone had a similar issue a year and a half back. I also had to rebuild everything from scratch before because of this issue, thank goodness for backups, seriously. I'm on 6.9.2, BTW. Edit: I can't get a good syslog of the issue because it wipes itself when I have to hard-reset the server, so I had to take a photo of what my screen was telling me. This was during an rsync of data from the backup server via FTP share to the tank. alexnas-diagnostics-20211031-1717.zip alexnas-syslog-20211031-2118.zip