wacko37

Members
  • Posts: 119
  • Joined
  • Last visited

wacko37's Achievements

Apprentice (3/14)

Reputation: 17
Community Answers: 1

  1. @dlandon Thanks for the update, mate. Device Scripts are working as per normal.
  2. As always @dlandon, thank you for all that you do for the Unraid community. I never knew Tools->PHP existed until now... always learning. I look forward to the next release, thanks.
  3. Hi Community, is anyone having issues running "device scripts" within UD? I have a series of scripts that I run regularly and that have worked flawlessly until now. Now when I press the lightning symbol to run a script, nothing happens: the script window comes up as per normal but goes straight to done with no other info. The logs are giving no info either (see attached screenshot). I've also attached the script for review. Any ideas? Has something changed recently? 01_-_Multimedia_-_Films-Music-Books_-_SYNC--VERIFY.sh
  4. Ok - I manually edited both files in the below locations to 0 6 */4 * *, but to no avail. "config/plugins/dynamix.file.integrity/integrity-check.cron" & "config/plugins/dynamix.file.integrity/integrity-check.cron" Is what I'm trying to achieve possible?
  5. Hi All, I've searched high and low for an answer but to no avail. Basically I would like a more custom cron schedule for DFI; what I want is 0 6 */4 * *, which runs a check on a single disk at 06:00 on every fourth day of the month. There is no setting to do this via the WebUI, so what I need to know is: is it safe to manually edit "/UNRAID Flash Dir/config/plugins/dynamix.file.integrity/integrity-check.cron" to reflect the above cron schedule (see the cron sketch after this list)? Thanks
  6. If anyone runs into this issue, one must run this command in the Unraid console to make the zpool mountable in ZFS Master: UNRAID:~# zpool upgrade (zpool name) The command will upgrade (downgrade really) the ZFS disk to the 6.12 ZFS features (see the zpool sketch after this list).
  7. Out of curiosity, is there a way to downgrade UD to 2023-10.08, the version before the legacy update? That way I can continue until both plugins are compatible again.
  8. @Iker Firstly, as always, I would like to thank you for this great plugin and for all that you do for this great community. I have encountered an issue with a ZFS disk mounted via UD not showing up in ZFS Master. After a discussion with the UD developer @dlandon in the UD thread, it appears this is to do with ZFS compatibility in the upcoming Unraid 6.13 release: UD now accommodates 6.13 when formatting a disk to ZFS, rendering it unmountable in ZFS Master... see below, and the feature-check sketch after this list. (P.S. I hope I have relayed this information correctly; I apologise in advance if that isn't the case, which would be due to a lack of understanding on my part.)
  9. Thanks for the speedy reply and, as always, a thorough explanation. TBH I have just realised how much continued work goes into maintaining such plugins/dockers. I can see you are always way ahead of the game (6.13), making sure the transition is a smooth one. SO THANK YOU for all that you do... can't say that enough. The change to ZFS has been a steep learning curve for me, and now I'm somewhat worried about this new transition to 6.13. Will all existing Unraid ZFS pools be automatically upgraded with the new features when updating to 6.13, and would any UD ZFS disks then require a manual upgrade to be compatible? Will any data be lost during these upgrades?
  10. Hi All, @dlandon as always, thank you for all that you do mate! I've just formatted a new drive to ZFS via UD. The new ZFS device mounts fine with no errors in the logs, but for some unknown reason I cannot see it in ZFS Master. I have other devices that were also formatted to ZFS via UD some months ago, and they mount just fine and are visible in ZFS Master. To test further, I have just formatted a 2nd new drive to ZFS via UD, and again it mounts OK without errors in the logs but is not visible in ZFS Master... weird. It's as if UD is not formatting correctly, or the pool name is not compliant with ZFS/Unraid; I tried multiple pool names, the simplest being "test".
  11. I guess the easiest way to explain is via screenshots. The mounted device below in UD is the offline ZFS backup SSD for my cache pool, which I mount periodically for ZFS replication. When I attempt to view the contents of ZFS--BACKUP--SSD as per the screen grab below, or via UD's "browse disk share", I get "invalid path", as per the screenshot below. If I run the "zfs mount -a" command, I can view all the contents of the datasets as per normal (see the mount-check sketch after this list). Below is a screengrab of the syslog when ZFS--BACKUP--SSD is unmounted and then mounted.
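
A minimal sketch of the cron entry discussed in posts 4 and 5, assuming the integrity-check.cron file uses standard crontab syntax; the flash-drive path and the placeholder command are assumptions, not verified against what the Dynamix File Integrity plugin actually writes:

  # Assumed location on the Unraid flash drive:
  #   /boot/config/plugins/dynamix.file.integrity/integrity-check.cron
  #
  # Field order: minute hour day-of-month month day-of-week command
  # "0 6 */4 * *" fires at 06:00 on days 1, 5, 9, 13, ... of each month.
  0 6 */4 * * /path/to/original/integrity-check-command   # keep whatever command the plugin originally wrote here

Note that the plugin's settings page may regenerate this file when settings are saved, so a manual edit could be overwritten; that behaviour is an assumption worth testing, not something confirmed in the posts above.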
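
For the fix in post 6, a hedged sketch of the console commands involved; zpool status, zpool get, and zpool upgrade are standard OpenZFS commands, and "test" stands in for the actual pool name:

  # List pools and note any "some supported features are not enabled" messages
  zpool status

  # Inspect the pool's feature flags before changing anything
  zpool get all test | grep feature@

  # Enable all features supported by the running ZFS version on this pool.
  # This is a one-way operation: an older ZFS release may refuse to import the pool afterwards.
  zpool upgrade test

Whether this makes the pool visible again in ZFS Master on a given Unraid release is exactly the compatibility question raised in posts 8 and 9.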
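
For the compatibility issue described in post 8 (and the symptom in post 10), a hedged diagnostic sketch; zpool version and zpool get compatibility are standard OpenZFS commands on recent releases, and "test" is again a stand-in pool name:

  # Show the ZFS userland and kernel-module versions Unraid is running
  zpool version

  # Show whether the pool was created with a feature "compatibility" set
  # (for example one aimed at a newer release than the running system supports)
  zpool get compatibility test

  # List the individual feature flags and whether they are enabled or active
  zpool get all test | grep feature@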
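
For post 11, a hedged sketch of checking dataset mount state around the "invalid path" symptom; the mounted property of zfs list and zfs mount -a are standard OpenZFS commands, and the pool name ZFS--BACKUP--SSD is taken from the post:

  # Show every dataset in the backup pool, its mountpoint, and whether it is actually mounted
  zfs list -r -o name,mountpoint,mounted ZFS--BACKUP--SSD

  # Mount all ZFS filesystems that are mountable but not yet mounted
  zfs mount -a

  # Re-check: the datasets should now report "yes" under MOUNTED
  zfs list -r -o name,mountpoint,mounted ZFS--BACKUP--SSD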