foo_fighter

Everything posted by foo_fighter

  1. Unraid can run on Terramaster and Asustor (x64) without too much hackery, and it comes by default on the Lincstation. For the Lincstation I was going to add 3.5" drives via a USB 10Gb/s JBOD enclosure; 10Gb/s should be enough for 4 drives. Either that, or use SATA extension cables to a 4- or 5-bay SATA hot-swap backplane (that's admittedly a bit of a hack). Speaking of AOOSTAR, they announced 2 new NAS devices, available soon: https://aoostar.com/blogs/news/aoostar-pro-4-bay-nas-with-n100-n305-5700u-cpu The TPU can be installed in an E-key slot, right? Some of these systems come with Wi-Fi/BT as an E-key module.
  2. If you have Mover Tuning installed: I had to change the setting from 0% to 5% to get it to work. I'm not sure if it was the act of saving the settings or the change from 0 to 5 that fixed it.
  3. You could run File Integrity / bunker on top of that and check the xattrs, or use something like SnapRAID. rsync has its own --checksum switch, but it slows down the process dramatically. If backing up ZFS to ZFS, you should investigate syncoid (part of sanoid) or znapzend; a quick sketch of both approaches is below. Here is a similar thread:
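     A minimal sketch of the two approaches mentioned above. The paths, pool names, and the host "backup" are illustrative, not from the original posts:

     # Verify an existing rsync backup by checksum instead of size/mtime.
     # Much slower, since every file on both ends gets read and hashed:
     rsync -a --checksum /mnt/user/photos/ backup:/mnt/user/photos/

     # ZFS to ZFS: let syncoid (ships with the sanoid package) handle the
     # snapshots and incremental zfs send/receive to the remote pool:
     syncoid --recursive tank/data root@backup:tank/data

     The upside of the syncoid route is that ZFS verifies block checksums as part of send/receive, so you don't pay the rsync --checksum penalty of re-reading everything on both sides.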
  4. Both PhotoPrism and PhotoStructure have options to leave the photos in place. I tried both and stuck with PhotoPrism (a sketch of a leave-in-place setup is below). I still have PhotoStructure, but I've disabled its docker.
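     A minimal sketch of a leave-in-place PhotoPrism run, assuming originals live under /mnt/user/photos; the paths and container name are illustrative. PHOTOPRISM_READONLY is the documented flag that stops the app from modifying the originals folder:

     docker run -d --name photoprism \
       -e PHOTOPRISM_READONLY="true" \               # never write to originals
       -v /mnt/user/photos:/photoprism/originals:ro \ # mount originals read-only
       -v /mnt/user/appdata/photoprism:/photoprism/storage \
       -p 2342:2342 \
       photoprism/photoprism:latest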
  5. The Lincstation N1 is currently €256 on Indiegogo, €322 if you wait for the release. I'm very tempted to get one and do some mods/hacks, like running 3.5" drives from the 2 SATA ports, possibly more with an M.2-to-SATA converter. I know this defeats the purpose of a "silent" NAS, but I'd rather have more storage. Cool 3D print, BTW.
  6. OP: have you looked at any of the pre-built N5105 systems? Asustor, Terramaster, or even the Lincstation N1 with the included Unraid license?
  7. Let's say I only have 2 identical disks. Both Unraid parity with 1 ZFS disk and a zpool mirror would be "mirrors" with the same information stored identically on both drives. But it seems like there are some subtle differences:
     - Bitrot: a zpool has bitrot protection (a scrub would auto-fix it), but Unraid parity does not (you'd need to restore from backup if the File Integrity plugin indicates mismatches)?
     - Snapshots: both can provide ZFS snapshots.
     - Spin-up: with Unraid parity, reads only require 1 drive to spin up. Would a zpool mirror require both drives to spin up?
     - Expanding: Unraid parity is easier to expand with extra drives without touching existing data. I'm not an expert, but it seems like you cannot change a ZFS mirror to RAIDZ1 (parity) easily? You have to destroy and rebuild the pool?
     - A build with zpools only needs a dummy parity device assigned (could be USB)?
     - Mover can move from a cache pool to the array. Can it also now be configured to move from one pool to another pool?
     Any other pros/cons/differences between the two? (A quick sketch of the zpool side is below.)
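     For reference, a sketch of the zpool side of that comparison; device names are illustrative:

     # Create a 2-disk mirror; every block on both copies carries a checksum:
     zpool create tank mirror /dev/sdb /dev/sdc

     # A scrub reads every block, verifies checksums, and silently repairs a
     # bad copy from the good one -- this is the bitrot auto-fix:
     zpool scrub tank
     zpool status tank   # shows scrub progress and any repaired errors

     # You can attach a third disk to grow the mirror, but there is no
     # command to convert a mirror vdev into a raidz1 vdev in place:
     zpool attach tank /dev/sdb /dev/sdd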
  8. I've had this same issue and it turned out to be bad memory. Running Memtest found it, and after replacing the RAM the USB-drive corruption went away, along with other issues.
  9. @bonienl This is the same issue that happened with the 6.10 -> 6.11 transition: the s3_sleep command that finds the array disks doesn't match the disks.ini file. Could you make the -PB command-line argument more future-proof as more lines are added to disks.ini? From my previous post about the fix for 6.11 (modified to grep more "before" lines):
  10. This plugin was updated on 2/5/2023 but still doesn't seem to fix the s3_sleep issue: mine still shows 3.0.8, and I still had to manually change PB12 to PB13 for 6.11.5 to recognize the array disks properly.

     # Version 3.0.8
     # Support for Unraid 6.9
     # s/PB12/PB13/ for new .ini format
     #
     # Bergware International
     ####################################################################################
     version=3.0.8
     program=$(basename $0)
     ini=/var/local/emhttp/disks.ini

     # Get flash device
     getFlash() {
       flash=($(grep -PB13 '^type="Flash' $ini|grep -Po '^device="\K[^"]+'))
     }

     # Get list of cache devices (if present)
     getCache() {
       cache=($(grep -PB13 '^type="Cache' $ini|grep -Po '^device="\K[^"]+'))
     }

     # Get list of array devices
     getArray() {
       array=($(grep -PB13 '^type="(Parity|Data)' $ini|grep -Po '^device="\K[^"]+'))
     }
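     For anyone making the same manual change, a one-liner sketch; the script path matches the stock location shown in the diff in the next post:

     # Bump every -PB12 context count to -PB13 in the live script:
     sed -i 's/PB12/PB13/g' /usr/local/emhttp/plugins/dynamix.s3.sleep/scripts/s3_sleep

     Note this only patches the live copy; as discussed below, it won't survive a reboot unless the copy under /boot/config/plugins is patched as well.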
  11. In my case, it caused silent file corruption and a corrupted USB stick on reboots or shutdowns. Please don't keep running on a faulty memory module.
  12. disks.ini seems to have changed after 6.9.x, which causes the Dynamix s3_sleep plugin to not be able to detect array disks. I'm not sure if this plugin is actively updated. The fix is here:
  13. Found this after I made the same fix:

     root@Tower:/usr/local/emhttp/plugins/dynamix.s3.sleep/scripts# diff s3_sleep s3_sleep.orig
     41d40
     < # Change -PB12 to -PB13 for new .ini file format
     51c50
     < flash=($(grep -PB13 '^type="Flash' $ini|grep -Po '^device="\K[^"]+'))
     ---
     > flash=($(grep -PB12 '^type="Flash' $ini|grep -Po '^device="\K[^"]+'))
     56c55
     < cache=($(grep -PB13 '^type="Cache' $ini|grep -Po '^device="\K[^"]+'))
     ---
     > cache=($(grep -PB12 '^type="Cache' $ini|grep -Po '^device="\K[^"]+'))
     61c60
     < array=($(grep -PB13 '^type="(Parity|Data)' $ini|grep -Po '^device="\K[^"]+'))
     ---
     > array=($(grep -PB12 '^type="(Parity|Data)' $ini|grep -Po '^device="\K[^"]+'))

     To make it persistent, you'd have to patch the file under /boot/config/plugins/dynamix.s3.sleep, I believe, or better yet ask the author to update. I'm not sure if this patch is stable; there's got to be a more robust way to get the array devices, otherwise next time we'll be hacking it to -PB14.
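     One possible more-robust approach: parse disks.ini by key rather than by counting context lines, so extra lines in a section can't break it. A sketch only, assuming each section of disks.ini contains device="..." and type="..." lines with device= appearing first (the same ordering the stock grep relies on); awk field-splits on the quote character:

     getArray() {
       # Remember the last device="..." seen; emit it when a matching
       # type="Parity..." or type="Data..." line follows in the same section.
       array=($(awk -F'"' '
         /^\[/      { dev="" }      # a new [section] resets state
         /^device=/ { dev=$2 }
         /^type=/   { if (dev != "" && $2 ~ /^(Parity|Data)/) print dev }
       ' /var/local/emhttp/disks.ini))
     }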
  14. The repos have been updated to patch the log4j exploit. Everyone should pull the update.
  15. I haven't changed any BIOS settings. It turns out that if I wait long enough, it eventually does go to sleep, so something must be preventing sleep that eventually finishes up.
  16. s3_sleep seems to no longer work for me; it just comes right back out of sleep:

     Apr 14 19:46:45 Tower s3_sleep: Enter sleep mode
     Apr 14 19:46:45 Tower s3_sleep: Execute custom commands before sleep
     Apr 14 19:46:45 Tower s3_sleep: Enter sleep state now
     Apr 14 19:46:45 Tower kernel: PM: suspend entry (deep)
     Apr 14 19:46:46 Tower kernel: PM: Syncing filesystems ... done.
     Apr 14 19:47:06 Tower kernel: Freezing user space processes ...
     Apr 14 19:47:06 Tower kernel: Freezing of tasks failed after 20.002 seconds (1 tasks refusing to freeze, wq_busy=0):
     Apr 14 19:47:06 Tower kernel: find D 0 29092 29091 0x00000004
     Apr 14 19:47:06 Tower kernel: Call Trace:
     Apr 14 19:47:06 Tower kernel: ? __schedule+0x4c6/0x503
     Apr 14 19:47:06 Tower kernel: schedule+0x76/0x8e
     Apr 14 19:47:06 Tower kernel: request_wait_answer+0xd2/0x19c
     Apr 14 19:47:06 Tower kernel: ? wait_woken+0x68/0x68
     Apr 14 19:47:06 Tower kernel: __fuse_request_send+0x70/0x75
     Apr 14 19:47:06 Tower kernel: fuse_simple_request+0xfc/0x132
     Apr 14 19:47:06 Tower kernel: fuse_send_open.isra.1+0x7f/0x84
     Apr 14 19:47:06 Tower kernel: fuse_do_open+0x72/0xd0
     Apr 14 19:47:06 Tower kernel: ? __schedule+0x4ce/0x503
     Apr 14 19:47:06 Tower kernel: fuse_open_common+0x6f/0xa9
     Apr 14 19:47:06 Tower kernel: ? fuse_dir_release+0x10/0x10
     Apr 14 19:47:06 Tower kernel: do_dentry_open.isra.1+0x18e/0x27d
     Apr 14 19:47:06 Tower kernel: path_openat+0xa64/0xbec
     Apr 14 19:47:06 Tower kernel: do_filp_open+0x48/0x9e
     Apr 14 19:47:06 Tower kernel: ? do_sys_open+0x129/0x1b0
     Apr 14 19:47:06 Tower kernel: do_sys_open+0x129/0x1b0
     Apr 14 19:47:06 Tower kernel: do_syscall_64+0xfe/0x107
     Apr 14 19:47:06 Tower kernel: entry_SYSCALL_64_after_hwframe+0x3d/0xa2
     Apr 14 19:47:06 Tower kernel: RIP: 0033:0x14d4a14bfff0
     Apr 14 19:47:06 Tower kernel: RSP: 002b:00007fff7f167190 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
     Apr 14 19:47:06 Tower kernel: RAX: ffffffffffffffda RBX: 00007fff7f1673b0 RCX: 000014d4a14bfff0
     Apr 14 19:47:06 Tower kernel: RDX: 0000000000020000 RSI: 000000000042ae05 RDI: 00000000ffffff9c
     Apr 14 19:47:06 Tower kernel: RBP: 0000000000020000 R08: 00007fff7f1674df R09: 0000000000000000
     Apr 14 19:47:06 Tower kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
     Apr 14 19:47:06 Tower kernel: R13: 000000000042ae05 R14: 000014d4a1f56b00 R15: 00000000006422c0
     Apr 14 19:47:06 Tower kernel: OOM killer enabled.
     Apr 14 19:47:06 Tower kernel: Restarting tasks ... done.
     Apr 14 19:47:06 Tower kernel: PM: suspend exit
     Apr 14 19:47:06 Tower s3_sleep: Wake-up now
     Apr 14 19:47:06 Tower s3_sleep: Execute custom commands after wake-up
     Apr 14 19:47:06 Tower s3_sleep: Wake-up from sleep mode
     Apr 14 19:47:32 Tower s3_sleep: Disk activity on going: sdb
     Apr 14 19:47:32 Tower s3_sleep: Disk activity detected. Reset timers.
  17. Use the housing from a shucked drive? I have one of these but I haven't tested it with a big drive: http://www.cavalrystorage.com/dock.aspx
  18. Are these preferred over the Seagate Archive/Compute drives in the Backup Plus Hub externals? These are probably easier to shuck.
  19. Parity check speed is quite a bit slower: down from 74.4 MB/s to 58.6 MB/s.
  20. It was one of my plugins. I deleted a few and I'm back to:

     root@Tower:/tmp# df -h .
     Filesystem      Size  Used Avail Use% Mounted on
     -               1.6G  297M  1.3G  20% /

     Now to narrow down which one, if I can (it wasn't System Stats).
  21. I can't seem to figure out what is eating up my RAM. It takes about 24 hours to fill up, then the webgui starts dying. Any tips?

     root@Tower:/# du -sch *
     9.5M   bin
     838M   boot
     0      dev
     5.0M   etc
     0      home
     0      init
     15M    lib
     20M    lib64
     23T    mnt
     0      proc
     8.0K   root
     148K   run
     15M    sbin
     0      sys
     992K   tmp
     230M   usr
     38G    var
     23T    total

     root@Tower:/var# du -sch *
     0      adm
     208K   cache
     0      empty
     38G    lib
     156K   local
     12K    lock
     2.5M   log
     0      mail
     132K   run
     4.5M   sa
     36K    spool
     0      state
     12K    tmp
     38G    total

     root@Tower:/var/lib# du -sch *
     0      arpd
     0      btrfs
     4.0K   dbus
     4.0K   dhcpcd
     38G    docker
     0      libvirt
     4.0K   logrotate.status
     8.0K   netatalk
     4.0K   nfs
     0      php
     0      reiserfs
     1.4M   samba
     0      xfs
     38G    total

     root@Tower:/var/lib/docker# du -sch *
     38G    btrfs
     516K   containers
     776K   graph
     5.5M   init
     8.0K   linkgraph.db
     4.0K   repositories-btrfs
     0      tmp
     0      trust
     1.5M   unraid
     4.0K   unraid-autostart
     4.0K   unraid-update-status.json
     0      volumes
     38G    total

     root@Tower:/var# df -h .
     Filesystem      Size  Used Avail Use% Mounted on
     -               1.6G  1.6G     0 100% /
  22. You can try adding this to your Preferences.xml: disableRemoteSecurity="1"
  23. You could get a 910e for about $62 on eBay. Passmark: 3381. Not sure if that's enough power for you, but those run pretty cool.
  24. I got one of these recently: http://www.digital-loggers.com/lpc.html There are a couple of other versions, but this one has the most outlets and seems like the most configurable. Basically, its auto-ping function can cycle a port if it can't ping a specific IP address.