wacko37

Members
  • Posts: 119
Everything posted by wacko37

  1. @dlandon Thanks for the update, mate; Device Scripts are working as per normal.
  2. As always, @dlandon, thank you for all that you do for the Unraid community. I never knew Tools->PHP existed until now... always learning. I look forward to the next release, thanks.
  3. Hi Community, is anyone having issues running "device scripts" within UD? I have a series of scripts that I run regularly and that have worked flawlessly until now. Now when I press the lightning symbol to run a script nothing happens; the script window comes up as per normal but it just goes straight to "done" with no other info. The logs are giving no info either (see attached screenshot). The script is also attached for review. Any ideas? Has something changed recently? 01_-_Multimedia_-_Films-Music-Books_-_SYNC--VERIFY.sh
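
     For reference, the script follows the usual UD device-script shape as I understand it (a case on $ACTION), so a stripped-down sketch looks something like this:

       #!/bin/bash
       case $ACTION in
         'ADD' )
           # runs when the device is mounted (as far as I know) - the real script does its rsync/verify work here
           ;;
         'UNMOUNT' )
           ;;
         'REMOVE' )
           ;;
       esac
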
  4. Ok - I manually edited both files in the below locations to 0 6 */4 * * but to no avail: "config/plugins/dynamix.file.integrity/integrity-check.cron" & "config/plugins/dynamix.file.integrity/integrity-check.cron". Is what I'm trying to achieve possible?
  5. Hi All, I searched high and low for an answer but to no avail. Basically I would like a more custom cron schedule for DFI; what I want is 0 6 */4 * *, which is: at the 6th hour of every 4th day of the month, run a check on a single disk. There is no setting to do this via the WebUI, so what I need to know is whether it is safe to manually edit "/UNRAID Flash Dir/config/plugins/dynamix.file.integrity/integrity-check.cron" to reflect the above cron schedule. Thanks
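
     For reference, that schedule in standard crontab field order (minute hour day-of-month month day-of-week); the command at the end is just a placeholder, not what the plugin actually writes:

       # min  hour  day-of-month  month  day-of-week  command
       0      6     */4           *      *            /path/to/integrity-check
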
  6. If anyone runs into this issue, one must run this command in the Unraid console to make the zpool mountable in ZFS Master: UNRAID:~# zpool upgrade (zpool name) The command will upgrade (downgrade really) the ZFS disk to the 6.12 ZFS features.
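
     For anyone following along, something like this (with "test" standing in for the actual pool name):

       zpool status test    # the status output notes when features can be enabled
       zpool upgrade        # lists imported pools that are not using every supported feature
       zpool upgrade test   # enables all features supported by the running ZFS version
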
  7. Out of curiosity, is there a way to downgrade UD to 2023-10.08, the version before the legacy update? This way I can continue until both plugins are compatible again.
  8. @Iker Firstly, as always, I would like to thank you for this great plugin and for all that you do for this great community. I have encountered an issue with a ZFS disk mounted via UD not showing up in ZFS Master. After a discussion with the UD developer @dlandon in the UD thread, it appears this has to do with ZFS compatibility in the upcoming Unraid 6.13 release: UD now accommodates 6.13 when formatting a disk to ZFS, rendering it unmountable in ZFS Master... see below. (P.S. I hope I have relayed this information correctly; I apologise in advance if that isn't the case, as it would be due to a lack of understanding on my part.)
  9. Thanks for the speedy reply and, as always, a thorough explanation. TBH I have just realised how much continued work goes into maintaining such plugins/dockers. I can see you are always way ahead of the game (6.13), making sure the transition is a smooth one. SO THANK YOU for all that you do... can't say that enough. The change to ZFS has been a steep learning curve for me, and now I'm somewhat worried about this new transition to 6.13. Will all existing Unraid ZFS pools be automatically upgraded with the new features when updating to 6.13, and would any UD ZFS disks then require a manual upgrade to be compatible? Will any data be lost during these upgrades?
  10. Hi All, @dlandon, as always thank you for all that you do mate! I've just formatted a new drive to ZFS via UD. The new ZFS device mounts fine with no errors in the logs, but for some unknown reason I cannot see it in ZFS Master. I have other devices that were also formatted to ZFS via UD some months ago, and they mount just fine and are visible in ZFS Master. To further test, I have just formatted a second new drive to ZFS via UD and again this mounts OK without errors in the logs but is not visible in ZFS Master... weird. It's as if UD is not formatting correctly or the pool name is not compliant with ZFS/Unraid; I tried multiple pool names, the simplest being "test".
  11. I guess the easiest way to explain is via screenshots. The mounted device below in UD is the offline ZFS backup SSD for my cache pool, which I mount periodically for ZFS replication. When I attempt to view the contents of ZFS--BACKUP--SSD as per the screen grab below, OR via UD "browse disk share", I get "invalid path" as per the screenshot below. If I run the "zfs mount -a" command, I can view all the contents in the datasets as per normal. Below is a screengrab of the syslog when ZFS--BACKUP--SSD is unmounted and then mounted.
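
     In other words, something like this (the dataset name below is just an example):

       zfs list -o name,mounted,mountpoint -r ZFS--BACKUP--SSD   # the datasets report mounted=no
       zfs mount -a                                              # mounts everything that is not yet mounted
       zfs mount ZFS--BACKUP--SSD/some_dataset                   # or mount just a single dataset
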
  12. Ok, I worked this one out and it's definitely my lack of knowledge... again 😒 Basically I was trying to clone to another ZFS pool on a separate drive; once I cloned to "pool_name/", the pool the snapshot is actually stored on, it cloned just fine.
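
     To illustrate (names made up): a clone has to live in the same pool as the snapshot it comes from, so the first line works, while getting the data onto a different pool needs send/receive instead:

       zfs clone pool_name/appdata@snap1 pool_name/test              # same pool: works
       zfs send pool_name/appdata@snap1 | zfs recv other_pool/test   # different pool: replicate instead
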
  13. Having an issue when trying to clone a snapshot: I get an error "operation not permitted". I suspect it's my lack of knowledge of how to clone successfully. When I select "clone" I input "pool_name/test" for the destination dataset. Do I need to create the "test" dataset first?
  14. Ok, maybe my use case is the issue here. I have an offline ZFS drive I use for replication of my ZFS cache pool, and up until now (i.e. before learning about "zfs mount -a") I have not been able to view the contents of the offline drive after mounting it via UD. I can see its space used and the snapshots, but not the contents; I get an "Invalid Path" error, which makes total sense now that I know the ZFS pools are mounted during the Unraid boot sequence. Should this not be the case?
  15. "zfs mount -a" wow this has answered so much confusion as to why I cant view any of my UD mounted zfs drive contents! Thankyou I look forward to the official solution.
  16. Sorry for the late reply, and thank you for the detailed response and your efforts in making the Mover auto-hashing possible... this community is so blessed by the hard work of others. My main reason for asking when auto-hashing happens is that (from my limited understanding) if there were to be a risk of corruption, it would be at its highest during that transfer - meaning any transfer that entails moving a file between two locations/disks and, most likely, filesystems, in my case ZFS to XFS (or does ECC RAM remove that risk altogether?). As all files being written to the main array go via the cache (setup depending, of course), it would be nice / peace of mind if the auto-hashing were done when the files are first written to the cache via the /mnt/user0/ folder structure. "user0", from my understanding, is the temporary location on the cache prior to the Mover transfer to the main array /mnt/user/; that way, when DFI runs a verification it would be against checksums made prior to the Mover transfer. Or maybe this is not at all possible, due to my lack of knowledge. Either way, I now know how it works at the moment and am very grateful nonetheless.
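
     To make what I mean concrete, a rough sketch (not how DFI actually does it, and the share name is made up): hash the files on the cache side before Mover runs, then verify them through the user share afterwards:

       cd /mnt/cache/Backups && find . -type f -print0 | xargs -0 sha256sum > /tmp/pre-mover.sha256
       # ...after Mover has moved the files to the array...
       cd /mnt/user/Backups && sha256sum -c /tmp/pre-mover.sha256
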
  17. Just wondering when DFI builds its checksums for new files. Meaning, are checksums generated when new files are created in the cache pool, or after Mover has transferred the new files to the main array? Thanks
  18. @Vr2Io Attached is the script; it rsyncs to an external drive with --xattrs enabled, then calls bunker to verify the files within the sync folder. I have successfully tested this on an NTFS drive. 0_-_TestUSBNTFS_-_SYNC--VERIFY.sh
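
     The core of it is something like this (paths made up, and the bunker call is my shorthand - see the attached script for the exact flags used):

       SRC="/mnt/user/Test/"
       DEST="/mnt/disks/USB_NTFS/Test/"
       rsync -aX "$SRC" "$DEST"   # -X/--xattrs carries the extended attributes DFI stores its hashes in
       bunker -v "$DEST"          # then verify the copied files against those stored hashes
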
  19. The script is now good enough for me to use, although it's definitely not perfect; it's doing what I require. Please use with caution, as I have copied and pasted (hacked) quite a bit and edited where required to make it work. I am not trained in this field at all and it's been a process of trial and error. BIG thanks to @cholzer who made the original script, which was then perfected by @dlandon to function correctly within UD some time ago. Also thanks @Vr2Io @JonathanM for a push in the right direction and good advice, and lastly @bonienl for the DFI plugin and bunker! Hope this helps someone in the future.
      Issues:
      1 - I would like the logs to be available/visible in both the generated log file and within the UD "Device Script Log"; I can't get both to work simultaneously.
      2 - When the generated log file is opened/viewed for the first time, an error is generated in the syslog:
        UNRAID nginx: 2023/09/01 15:53:15 [error] 24436#24436: *1512682 open() "/usr/local/emhttp/plugins/dynamix.file.manager/javascript/ace/mode-log.js" failed (2: No such file or directory) while sending to client, client: [Redacted], server: [Redacted], request: "GET /plugins/dynamix.file.manager/javascript/ace/mode-log.js HTTP/2.0", host: "[Redacted]", referrer: "[Redacted] /Main/Browse?dir=%2Fmnt%2Fdisks%2FUSERS%2Fbunker-logs"
        Sep 1 15:53:28 Wacko-UNRAID bunker: verified 425 files from /mnt/disks/USERS/Test. Found: 0 mismatches, 0 corruptions. Duration: 00:01:06. Average speed: 119 MB/s
      0_-_TestUSBNTFS_-_SYNC--VERIFY.sh
  20. It's OK, I got it - I was trying to write the log to 2 places at the same time. All sorted for now... I think.
  21. AAAHHHHH... now I understand your post. I worked out that the logs cannot be written to 2 places at once; once I removed ">> $LOGFILE" I indeed got content within my desired log file. Jesus... so much time wasted and the answer was right there in front of me!
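
     (For anyone who does want both at once, piping through tee -a rather than redirecting should do it - an untested sketch, with a made-up log path:)

       LOGFILE="/mnt/disks/USERS/bunker-logs/sync.log"
       {
           echo "Sync started: $(date)"
           # ... rsync / bunker commands go here ...
           echo "Sync finished: $(date)"
       } 2>&1 | tee -a "$LOGFILE"   # the same lines go to the log file and to stdout (the UD Device Script Log)
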