
Posts posted by foo_fighter

  1. 1 hour ago, TRusselo said:

    1. I have been running Unraid for 7 years and I'm on my 8th USB flash drive. I have tried different brands and types, with and without LEDs...

    I lose one flash drive per year using unraid.

    I have had zero other hardware failures in the same amount of time. So no, it's not the same. 

    2. A system can have more than one hot spare

    Have you tried a microSD card reader with a high-endurance SD card? Also, only use USB 2.0 ports.

  2. I meant "it exists" as in the hardware (the 4BayPlus has been at CES and in the hands of dozens of reviewers), not that you could buy it on Amazon yet. The older 4-bay version has existed for years in the Chinese domestic market and was also sent out to beta testers to test the built-in software. I think the hardware is close to its final revision, but the software is still baking, which is why a lot of people are asking to run 3rd-party OSes like unRaid, TrueNAS, or OMV.

  3. 2 hours ago, JonathanM said:

    How is this different than storaxa?

     

    Umm, for one thing it exists?!  (sorry to those who backed storaxa) 

     

    Kickstarter was a strange approach for an established company, but it did generate a lot of publicity. 

     

    I see it as a nice off-the-shelf hardware solution that can hopefully run unRaid well, similar to the LincStation but in more varieties. I think it would be difficult to build something yourself at the price point it was initially offered at. It ships with its own OS, but that OS is not fully mature and not as flexible as unRaid.

     

  4. Anyone get in on the Kickstarter? Which model?

     

    Hopefully there will be enough of a critical mass of UnRaid users to get everything working: fan control, S3 sleep, WoL, LED control, 10GbE Ethernet drivers, the lowest-power C-states (it uses ASM116x SATA controllers), and even the watchdog timer.

  5. Edit: I'm seeing some instability (frozen machine) after these edits. I'm not sure if it's related to Immich/Postgres 16, but there are some other reports of hung machines.

     

    It wasn't working for me either, but I just updated the containers and switched to SIO's postgres container and it seems like it may be working now.

     

    I used to just see tons of ffmpeg errors in the log, so I turned off HW accel.

     

    The Quick Sync setting seemed to generate ffmpeg errors. I'm trying VAAPI:

    [settings screenshot]
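    In case it's useful to anyone else, here's a rough sketch of the container side of HW acceleration. The device path is the usual Intel iGPU default and may differ on your system; everything below is illustrative rather than my exact setup:

    ls -l /dev/dri     # on the host: card0 / renderD128 should exist if the iGPU driver is loaded
    # then pass the device into the Immich server container, e.g. via Extra Parameters on the Docker template:
    --device=/dev/dri
    # and pick a VAAPI preset in Immich's video transcoding settings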

  6. I have about 100k photos/videos. I haven't figured out how to get the iGPU/Quick Sync to work yet, so it's using software transcoding. ML is turned on for facial recognition, but I believe that just uses the CPU (I don't have any HW accelerators).

    Have you checked your paths? All generated data should live outside the Docker image, right? (A quick way to check is sketched below.)
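    One hypothetical way to check (the container name here is a placeholder) is to list which container paths are actually mapped to host storage:

    docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' immich_server
    # anything the app writes outside these mappings ends up inside the image layer / docker.img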

     

  7. I see this endless cycle in the syslog. But you're saying that the SMART read is caused by the spin-up, not the other way around?

    Feb 18 06:23:32 Tower s3_sleep: All monitored HDDs are spun down

    Feb 18 06:23:32 Tower s3_sleep: Extra delay period running: 18 minute(s)

    Feb 18 06:24:32 Tower s3_sleep: All monitored HDDs are spun down

    Feb 18 06:24:32 Tower s3_sleep: Extra delay period running: 17 minute(s)

    Feb 18 06:25:32 Tower s3_sleep: All monitored HDDs are spun down

    Feb 18 06:25:32 Tower s3_sleep: Extra delay period running: 16 minute(s)

    Feb 18 06:25:34 Tower emhttpd: read SMART /dev/sdf

    Feb 18 06:26:32 Tower s3_sleep: Disk activity on going: sdf

    Feb 18 06:26:32 Tower s3_sleep: Disk activity detected. Reset timers.

    Feb 18 06:27:33 Tower s3_sleep: Disk activity on going: sdf

    Feb 18 06:27:33 Tower s3_sleep: Disk activity detected. Reset timers.

    Feb 18 06:28:33 Tower s3_sleep: Disk activity on going: sdf

    Feb 18 06:28:33 Tower s3_sleep: Disk activity detected. Reset timers.

    Feb 18 06:29:33 Tower s3_sleep: All monitored HDDs are spun down

    Feb 18 06:29:33 Tower s3_sleep: Extra delay period running: 25 minute(s)

    Feb 18 06:30:14 Tower emhttpd: spinning down /dev/sde

    Feb 18 06:30:33 Tower s3_sleep: All monitored HDDs are spun down

    Feb 18 06:30:33 Tower s3_sleep: Extra delay period running: 24 minute(s)

    Feb 18 06:31:33 Tower s3_sleep: All monitored HDDs are spun down

    Feb 18 06:31:33 Tower s3_sleep: Extra delay period running: 23 minute(s)

    Feb 18 06:31:41 Tower emhttpd: read SMART /dev/sde

    Feb 18 06:32:34 Tower s3_sleep: Disk activity on going: sde

    Feb 18 06:32:34 Tower s3_sleep: Disk activity detected. Reset timers.

    Feb 18 06:32:34 Tower emhttpd: read SMART /dev/sdh

    Feb 18 06:33:34 Tower s3_sleep: Disk activity on going: sdh

  8. 14 minutes ago, Iker said:

    are the default ones.

    • Copy the data: Data is copied using the command "rsync rsync -ra --stats --info=progress2 <source_directory> <dataset_mountpoint>"; the GUI displays a dialog with a progress bar and some relevant information about the process.

     

    Is "rsync rsync" a typo? Would you consider adding -X to preserve extended attributes, for things like the Dynamix File Integrity plugin?
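    For reference, a sketch of what that might look like, reusing the placeholders from the quoted command:

    rsync -raX --stats --info=progress2 <source_directory> <dataset_mountpoint>
    # -X copies user.* extended attributes, which is where Dynamix File Integrity stores its checksums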

  9. I have 6 motherboard SATA ports on an old board with no M.2 slots, plus an LSI HBA, but I'm thinking of consolidating down to only the motherboard ports to save power. The HBA prevents the lower C-states.

    I only have 4TB-8TB drives with one 14TB parity drive, so I could easily consolidate to 2 or 3 18TB drives.

     

    18TB drives were $200 during the BF sales and ~$150 for pre-owned server pulls. They were the sweet spot at the end of last year.

     

    Some M.2 slots are only PCIe 3.0 x1, so six SATA drives would saturate the bandwidth (rough math below).

     

    I've also been looking at some pre-built systems with 4-6 HDDs and 2 M.2s. There I'd really need the high capacity HDDs.
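    Rough math behind the x1 comment above, assuming roughly 985 MB/s usable on a PCIe 3.0 x1 link and about 200-250 MB/s sequential per modern HDD: six drives reading at once (during a parity check, say) want on the order of 1.2-1.5 GB/s, so the single lane becomes the bottleneck.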

     

  10. If one external drive can hold all of the data, that would be a convenient way to go; rsync should work fine.

    You'd want to make sure everything under /mnt/disk*/* gets synced over to the backup drive (there's a sketch at the end of this post).

     

    Another way would be to use Unbalanced to empty out and convert one drive at a time by moving one disk's contents to the other disks. That's assuming you have the spare capacity.

     

    Curious why you want to convert the entire array over. You'll get some, but not all, of the benefits of ZFS (single ZFS disks in the array can detect corrupted files but not repair them, for example).
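    A minimal sketch of the rsync pass I had in mind, assuming the external drive is mounted via Unassigned Devices (the /mnt/disks/backup path is just a placeholder):

    # one pass per array data disk; same-named share folders merge into one tree on the backup drive
    for d in /mnt/disk[0-9]*; do
      rsync -aX --stats --info=progress2 "$d"/ /mnt/disks/backup/
    done
    # add --dry-run to the rsync line first to sanity-check what would be copied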

  11. 3 hours ago, Vetteman said:

    For my array disks I use XFS. For cache and unassigned I use BTRFS and NTFS (NTFS - drives removed from Windows external drives which already have a file system and data).

     

    I thought somewhere here it was not recommended to use BTRFS on array disks, only XFS?

     

    Perhaps there is a PAR2 plugin for NTFS drives on Unraid? But searching I could not find any...

     

    I was hoping SpaceInvader One or someone like the "Bit my Bytes" bloke would have a video on  Dynamix File Integrity.

     

    Cheers & many thanks....

    You can find the par2 utility in NerdPack/NerdTools.

  12. On 1/18/2024 at 5:33 AM, Vetteman said:

    I was very excited when I found this plugin as I am an avid photographer and have taken many landscape and car show photos over the last 20 years. I also do digital illustrations and 3D modeling.

     

    Previously I was backing up all my edited and raw photos, digital illustrations, and 3D assets to external WD and Seagate USB NTFS hard drives. I started to shuck the hard drives, leaving them as NTFS and placing them as unassigned devices in my Unraid server with all the files still intact, and I continue to back up to them.

     

    This has worked quite well.

     

    File corruption and bitrot are a concern for any photographer or illustrator.

     

    It is mentioned XFS is the preferred file system? Will this work with NTFS or BTRFS or ONLY XFS?

     

    Also, is there a video anywhere on setting this up, or even a demo of this plugin?

    Thanks kindly...

    Both NTFS and exFAT can store extended attributes, so yes, DFI (or bunker from the command line) can run on those drives and files.
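    If you want to double-check a particular drive, a quick hypothetical test (the mount point and file name are placeholders):

    setfattr -n user.test -v hello /mnt/disks/ntfs_backup/some_photo.jpg   # write a user xattr
    getfattr -n user.test /mnt/disks/ntfs_backup/some_photo.jpg            # read it back
    # if both succeed, DFI/bunker can store its checksums on that filesystem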

  13. 4 hours ago, hernandito said:

    Thank you @foo_fighter.

     

    Posting link for SpaceInvader on ZFS Snapshots. I have to watch it again, to refresh my memory.

     

    In reading the above, can we surmise:

    Sounds interesting, especially for cache pools... Question: what happens with files that are stored on the cache drive and are then, via Mover, moved to the array? With snapshots, does the entire drive get snapshotted, or can you pick individual folders? I really need to watch the video linked above...

    With Mover, they are treated as normal files: only the primary (current, non-snapshot) versions are moved over into the array.

    With Syncoid/Sanoid you can choose individual datasets and sub-datasets.

     

    4 hours ago, hernandito said:

    Sounds like ZFS or snapshots are not really for array drives (filled with media files).

    It can be used in the array. For example, I have my cache drive as ZFS and converted one array drive to ZFS so I could use it as a replication target (ZFS -> ZFS); the other drives in my array are XFS. I have my appdata (dockers and Docker data living on the cache) replicated over to the ZFS drive for backups (there's a sketch at the end of this post).

     

    4 hours ago, hernandito said:

    I have had to use XFS Repair dozens of times to fix issues w/ HDDs. Again sounds like ZFS is not really for array drives.

    It can be. RAID is not backup, so for catastrophic cases (with any filesystem) your backup would be the recovery mechanism.

     

    4 hours ago, hernandito said:

     

    Does it spin the array drives if XFS is only used on cache pools?

    I was only referring to the ZFS Master plugin spinning up ZFS drives; it didn't touch any of my XFS drives. I set the plugin to manual refresh, and that seems to have stopped the spin-ups and writes.

     

    4 hours ago, hernandito said:

     

    Thanks again,

     

    H.
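    For anyone wanting to copy that replication setup, a minimal sketch using Sanoid's syncoid (the pool/dataset names are placeholders, not my actual ones):

    # replicate the cache appdata dataset to a ZFS-formatted array disk;
    # syncoid creates and sends the incremental snapshots for you
    syncoid cache/appdata disk3/backups/appdata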

     

  14. ZFS snapshots take less space because they only store the deltas.

    ZFS replication is faster because it only sends the deltas.

    The corollary is that deleting a file won't free any space until all snapshots referencing it are also deleted.

    ZFS replication of the appdata/cache drive doesn't require shutting down and restarting the Docker containers.

    ZFS doesn't have a built-in filesystem repair tool (an xfs_repair equivalent).

    ZFS can detect bit-rot, but it can only repair it in a pool with redundancy (mirror/raidz, etc.).

    The ZFS Master plugin seems to spin up and write to drives often.
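    A few illustrative commands for the points above (dataset and snapshot names are made up):

    # near-instant snapshot; only blocks that change afterwards consume extra space
    zfs snapshot cache/appdata@daily-2024-02-18
    # incremental replication sends only the blocks changed since the previous snapshot
    zfs send -i @daily-2024-02-17 cache/appdata@daily-2024-02-18 | zfs receive disk3/backups/appdata
    # per-snapshot 'used' shows why deleting files frees nothing until the old snapshots are destroyed
    zfs list -rt snapshot -o name,used,refer cache/appdata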