tjb_altf4

Everything posted by tjb_altf4

  1. New disks are set up into pools now and ready for use, which takes total storage in the server to just over 200TB
  2. Your local syslog server is still enabled
  3. This works and I do this for chia currently, but there is a massive difference in time to expand and balance a pool (days) vs a non-parity-protected array (minutes). I've also noticed sneakernets becoming highly popular for chia plotting (and the same would apply for media ingestion), and Unraid arrays facilitate these very nicely. Another point is you don't want pools getting too big, borking themselves and nuking your time and effort, so you end up running a tonne of smaller pools... it's a PITA.
  4. The current default settings of the mover tuning plugin disable the mover's schedule. Uninstall it, or seek assistance with the configuration in the mover tuning plugin thread (as Squid suggests).
  5. I've not done this myself, but I'm pretty sure you need to have the template in a subfolder; the name of that subfolder becomes the header in the dropdown (like default, user, etc.). Also be aware the feature is being removed in 6.10.
  6. In good news, the wipefs command logs what it wipes, so I was able to restore the erased filesystem; I have that drive up and running and will be migrating that data to its new pool later today. Note: the reboot in between committing the pool changes (via array start) seems to have also caused this issue.
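The restore described above can be sketched roughly as follows. This is a safe stand-in using a scratch file rather than a real partition (the file path is made up, and the wiped signature is assumed to be the XFS magic at offset 0, matching the bytes `wipefs` logged in the original report):

```shell
# Sketch of restoring a wiped filesystem signature from what wipefs logged.
# Uses a scratch file instead of a real partition so it is safe to run.
DEV=./fake_partition.img

# Stand-in "partition" with the XFS magic 'XFSB' at offset 0.
dd if=/dev/zero of="$DEV" bs=1024 count=1024 status=none
printf 'XFSB' | dd of="$DEV" bs=1 conv=notrunc status=none

# Simulate what `wipefs -a` did: erase the 4 signature bytes at offset 0x0.
dd if=/dev/zero of="$DEV" bs=1 count=4 conv=notrunc status=none

# Restore the bytes wipefs logged ("4 bytes were erased at offset 0x00000000
# (xfs): 58 46 53 42"); \130\106\123\102 is octal for hex 58 46 53 42.
printf '\130\106\123\102' | dd of="$DEV" bs=1 conv=notrunc status=none

# Confirm the signature is back (on a real disk, blkid would now detect xfs).
head -c 4 "$DEV"   # prints: XFSB
```

On a real drive you would write to the actual partition device instead of a file, and verify afterwards with `blkid` or a read-only `xfs_repair -n` before mounting.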
  7. Recycle bin (for SMB shares). A great fail-safe for accidental deletions on network shares.
  8. iSCSI plugin for Unraid Server, which allows you to connect to storage on your NAS as if it were a local hard drive. Still, a lot of people don't seem to know this plugin exists, and I see iSCSI asked for in feature requests frequently.
  9. Just noticed the GUI is already updated with correct capacity (balance will take a while still), happy days! Thanks @JorgeB
  10. They were added on the previous boot, but the array wasn't started (related to the other bug report, I guess). Still, this log entry seems to know it should have been 4 devices; maybe some weirdness due to the reboot and possible drive allocation changes?
      Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool TotDevices: 2
      Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumDevices: 4
      Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumFound: 2
      Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumMissing: 0
      Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumMisplaced: 0
      Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumExtra: 2
      Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool LuksState: 0
      Thanks for the suggestion, I'll do that as soon as I have the chance.
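The mismatch those emhttpd lines describe (4 configured slots, only 2 found) can be spotted mechanically. A rough sketch against a sample of the log (the file name is made up; the field layout is taken from the entries above):

```shell
# Sample of the emhttpd lines from the post (normally read from /var/log/syslog).
cat > pool.log <<'EOF'
Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool TotDevices: 2
Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumDevices: 4
Oct 9 17:12:24 fortytwo emhttpd: /mnt/chiapool NumFound: 2
EOF

# Flag a pool whose configured slot count differs from devices actually found.
awk '/NumDevices/ {want=$NF} /NumFound/ {got=$NF}
     END { if (want != got) print "mismatch: expected " want ", found " got }' pool.log
# prints: mismatch: expected 4, found 2
```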
  11. Attached. Note: I went to balance again after the original balance finished, but then cancelled that operation when I realised it was going to be a long one, so there will be a cancelled balance operation in there. fortytwo-diagnostics-20211012-1615.zip
  12. The GUI shows incorrect capacity also. I would expect this to read 72TB, 35.1TB, ~36TB.
  13. I started with a BTRFS pool of 2x 18TB in raid0, which has been running for a few months. I added another 2x 18TB to the pool a few days ago, and this operation completed this afternoon. Now that it has finished, I note that it does not show the expected capacity (72TB):
      Data, RAID0: total=31.88TiB, used=31.87TiB
      System, RAID1: total=32.00MiB, used=2.20MiB
      Metadata, RAID1: total=33.00GiB, used=32.57GiB
      GlobalReserve, single: total=512.00MiB, used=0.00B
      No balance found on '/mnt/chiapool'
      Normally I would run balance a second time before posting, but this looks like it will take another few days... I don't want to waste time and energy if this is a known bug with a possible workaround. Tagging @JorgeB because you seem to be across these BTRFS issues. Thanks in advance.
  14. If you just need the database up before other services, you can use the native Unraid docker start ordering and start delay for this. In advanced mode on the Docker page, use the arrow hooks on the far right to change the ordering; just to the left you can change the wait time of that container before starting. So in the example above, my Jira database starts before the Jira application. Once the Jira database starts, the Jira application waits 15 seconds, then starts.
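For anyone doing the same ordering outside the Unraid UI, the equivalent can be expressed in docker compose. A minimal sketch, with hypothetical service and image names and assuming a Postgres-backed Jira:

```yaml
services:
  jira-db:
    image: postgres:15                 # hypothetical database image
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10
  jira:
    image: atlassian/jira-software     # hypothetical application image
    depends_on:
      jira-db:
        condition: service_healthy     # start only once the DB answers
```

Using a healthcheck with `condition: service_healthy` is generally more robust than a fixed delay, since it waits for the database to actually accept connections rather than an arbitrary number of seconds.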
  15. Take a look at the docker folder plugin; it can group the icons under a "folder", for which you can select an icon and even add context menu entries for custom commands such as docker compose (i.e. compose up/down). It's a little more work than setting up an Unraid template, but it beats reinventing the wheel with complex compose deployments.
  16. Caveat: I still need to confirm this 100%, as a pool rebuild is taking time so I can't examine the disk. In the process of adding more storage to one of my pools, I accidentally added an unassigned disk ('storage') I normally use as a separate pool (not converted to a pool yet). I didn't notice my mistake initially, but needed to restart to reconfigure some drives... no array start had occurred yet. On the next reboot, still with no array start since the reconfig, I noticed my mistake and unallocated that drive from the pool. At this point the 'storage' disk is not part of the pool, all allocations are correct, and the array has just started. Currently one of my pools is rebuilding (taking a while as it's a large disk), however the unassigned disk ('storage') is not mounting, and logs show:
      Oct 9 17:13:01 fortytwo unassigned.devices: Adding disk '/dev/sdi1'...
      Oct 9 17:13:01 fortytwo unassigned.devices: Mounting partition '/dev/sdi1' at mountpoint '/mnt/disks/storage'...
      Oct 9 17:13:01 fortytwo unassigned.devices: No filesystem detected on '/dev/sdi1'.
      Oct 9 17:13:01 fortytwo unassigned.devices: Partition 'WDC_WD40EZRZ-xxxxCB0_WD-WCCxxxxxxxx' cannot be mounted.
      following this one:
      Oct 9 17:12:17 fortytwo root: Device /dev/sdi1 is not a valid LUKS device.
      Oct 9 17:12:17 fortytwo emhttpd: shcmd (369): exit status: 1
      Oct 9 17:12:17 fortytwo emhttpd: shcmd (370): /sbin/wipefs -a /dev/sdi1
      Oct 9 17:12:17 fortytwo root: /dev/sdi1: 4 bytes were erased at offset 0x00000000 (xfs): 58 46 53 42
      At this point I don't know if it's being held back from mounting for the btrfs operation, or whether it's actually gone, but the wipe command does not look promising. Note: the BTRFS operation looks like it has another 1.5-2 days before it completes. fortytwo-diagnostics-20211010-0952.zip
  17. Added another H310 HBA via an M.2 adaptor, so now I have another 8 drives hooked up... thinking about moving all this to another case (JBOD build), as the wiring is getting out of control and hard to manage nicely. Also concerned about temp management as we're going into summer and I do like to lean on the 1950x from time to time... something for me to monitor over time. Anyway, this is good enough for now; I've got the expansion I wanted, and the next stop is making some custom power cables to help fix up cable management.
  18. Click on the VM name next to icon (blue text)
  19. You should know if you are running a syslog server; as you didn't know, it has probably been turned on accidentally, so you should disable it in settings. If you did mean to have syslog running, then enable log rotation, as the log file has reached the maximum supported size.
  20. Settings > Syslog Server. You should know if you are running a syslog server, but it sounds like you may have accidentally enabled syslog, as @trurl was suggesting. I can see in your logs that rsyslog is starting:
      Oct 6 02:22:43 Tower rsyslogd: [origin software="rsyslogd" swVersion="8.2002.0" x-pid="1899" x-info="https://www.rsyslog.com"] start
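A quick way to check for that rsyslogd start line yourself, sketched against a sample excerpt (the sample file name is made up; on the server you would grep /var/log/syslog):

```shell
# Sample syslog excerpt matching the entry quoted above.
cat > syslog.sample <<'EOF'
Oct 6 02:22:43 Tower rsyslogd: [origin software="rsyslogd" swVersion="8.2002.0" x-pid="1899" x-info="https://www.rsyslog.com"] start
EOF

# A non-zero count means rsyslogd has been started on this boot.
grep -c 'rsyslogd: \[origin' syslog.sample   # prints: 1
```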
  21. The licence is tied to the USB GUID, and this is also enforced for the trial, hence no bootable ISO. During the trial it will also need internet access to validate the trial period. Also note that virtualized Unraid is not officially supported, but this forum is here for those who choose to do it anyway. VirtualBox can certainly support USB passthrough, so there is no physical blocker to running a trial in VirtualBox, as long as you have a physical USB key with Unraid installed.
  22. USB Manager might fit your needs, instead of passing an entire controller through.
  23. Going to take a guess that you have a limited number of PCIe lanes, and one of your M.2s is using the lanes from the PCIe slot you want to use. I'd recommend consulting the motherboard manual and seeing if you can move things around between PCIe slots and/or M.2 slots so all are active.
  24. If you use the docker folder plugin you can already add your compose commands to the parent folder (i.e. compose up, compose down) as context menu items, like the native Unraid UI. With the label support coming in 6.10, it should start to integrate nicely with the UI.