All Activity


  1. Past hour
  2. Apr 26 14:24:41 Jonsbo kernel: zfs: `' invalid for parameter `zfs_arc_max'
     Apr 26 14:24:41 Jonsbo kernel: zfs: unknown parameter '6302227456' ignored

     Edit zfs.conf and remove the space before the value. It should be:
     options zfs zfs_arc_max=6302227456
     not:
     options zfs zfs_arc_max= 6302227456
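     A quick way to confirm the corrected value took effect (a minimal sketch; the sysfs path is standard for ZFS on Linux, not specific to this post):

     # Show the ARC maximum currently in effect; should print 6302227456
     # once the module loads with the fixed line
     cat /sys/module/zfs/parameters/zfs_arc_max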
  3. Schicksal

    Plex

    cache drive perhaps completely full?
  4. Hi! When I built my server, I had two goals in mind: being able to store all my files and access them, and being able (in the future) to use it as something universal for all my PCs (i.e. a VM). My current Unraid server is simple: not enough RAM, no GPU, not even an amazing CPU, with 12TB of HDD. (The case is cool though, just saying, haha.) I had plans to upgrade everything, but at a later stage. Today I did my first VM setup. It works flawlessly; I linked it to my shares and tried mstsc.exe, and the potential is amazing. But now, before going further, I'd like to upgrade my server. So I have a few questions:
     - I'd like to run my VM(s) on an SSD, either NVMe or SATA. Could I just plug one in and ignore all the rules based on my array (size, parity, etc.), since it would only store my VM data? Is it that easy?
     - Is it possible/easy to migrate my VM's boot file (the .img, right?) from user/domains to the future SSD, so that I can just move it and have no reinstallation to do? (A rough sketch of this follows below.)
     - I'd like to throw more RAM and a GPU at it to really play and do everything on that VM. Can I just add them and configure them without setting up a new VM? Seems to be the case, just want to be sure.
     Thanks a lot!
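     A minimal sketch of the image migration, assuming a new pool named "vmssd" and a VM named "Win11" (both hypothetical example names, not from the post):

     # Stop the VM first, then copy its vdisk to the new SSD pool
     mkdir -p /mnt/vmssd/domains/Win11
     cp /mnt/user/domains/Win11/vdisk1.img /mnt/vmssd/domains/Win11/vdisk1.img
     # Point the VM at the new path by updating <source file='...'/> in its XML
     virsh edit Win11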
  5. Apr 26 11:01:46 streamengine kernel: mdcmd (30): import 29
     Apr 26 11:01:46 streamengine kernel: md: import_slot: 29 empty
     Apr 26 11:01:46 streamengine emhttpd: import 30 cache device: (sdd) Samsung_SSD_870_EVO_2TB_S6PNNS0T603387M
     Apr 26 11:01:46 streamengine emhttpd: import 31 cache device: no device
     Apr 26 11:01:46 streamengine emhttpd: import flash device: sda
     Apr 26 11:01:46 streamengine root: Submitting SysDrivers Build
     Apr 26 11:01:46 streamengine SysDrivers: SysDrivers Build Starting
     Apr 26 11:01:46 streamengine emhttpd: read SMART /dev/sdj
     Apr 26 11:01:46 streamengine emhttpd: read SMART /dev/sdk
     Apr 26 11:01:46 streamengine emhttpd: read SMART /dev/sdh
     Apr 26 11:01:46 streamengine emhttpd: read SMART /dev/sdg
     Apr 26 11:01:46 streamengine emhttpd: read SMART /dev/sdd
     Apr 26 11:01:46 streamengine emhttpd: read SMART /dev/sde
     Apr 26 11:01:46 streamengine emhttpd: read SMART /dev/sdb
     Apr 26 11:01:46 streamengine emhttpd: read SMART /dev/sdf
     Apr 26 11:01:46 streamengine emhttpd: read SMART /dev/sdc
     Apr 26 11:01:46 streamengine emhttpd: read SMART /dev/sdl
     Apr 26 11:01:46 streamengine emhttpd: read SMART /dev/sdi
     Apr 26 11:01:46 streamengine emhttpd: read SMART /dev/sda
     Apr 26 11:01:56 streamengine kernel: mdcmd (31): set md_num_stripes 1280
     Apr 26 11:01:56 streamengine kernel: mdcmd (32): set md_queue_limit 80
     Apr 26 11:01:56 streamengine kernel: mdcmd (33): set md_sync_limit 5
     Apr 26 11:01:56 streamengine kernel: mdcmd (34): set md_write_method
     Apr 26 11:02:13 streamengine SysDrivers: SysDrivers Build Complete
     Apr 26 14:54:13 streamengine kernel: mdcmd (35): set md_num_stripes 1280
     Apr 26 14:54:13 streamengine kernel: mdcmd (36): set md_queue_limit 80
     Apr 26 14:54:13 streamengine kernel: mdcmd (37): set md_sync_limit 5
     Apr 26 14:54:13 streamengine kernel: mdcmd (38): set md_write_method
     Apr 26 14:54:13 streamengine kernel: mdcmd (39): start STOPPED
     Apr 26 14:54:13 streamengine kernel: unraid: allocating 51590K for 1280 stripes (10 disks)
     Apr 26 14:54:13 streamengine kernel: md1p1: running, size: 19531825100 blocks
     Apr 26 14:54:13 streamengine kernel: md2p1: running, size: 7814026532 blocks
     Apr 26 14:54:13 streamengine kernel: md3p1: running, size: 7814026532 blocks
     Apr 26 14:54:13 streamengine kernel: md4p1: running, size: 7814026532 blocks
     Apr 26 14:54:13 streamengine kernel: md5p1: running, size: 7814026532 blocks
     Apr 26 14:54:13 streamengine kernel: md6p1: running, size: 11718885324 blocks
     Apr 26 14:54:13 streamengine kernel: md7p1: running, size: 11718885324 blocks
     Apr 26 14:54:14 streamengine kernel: md8p1: running, size: 11718885324 blocks
     Apr 26 14:54:14 streamengine emhttpd: shcmd (25494): udevadm settle
     Apr 26 14:54:14 streamengine emhttpd: Opening encrypted volumes...
     Apr 26 14:54:14 streamengine emhttpd: shcmd (25495): touch /boot/config/forcesync
     Apr 26 14:54:14 streamengine emhttpd: Mounting disks...
     Apr 26 14:54:14 streamengine emhttpd: mounting /mnt/disk1
     Apr 26 14:54:14 streamengine emhttpd: shcmd (25496): mkdir -p /mnt/disk1
     Apr 26 14:54:14 streamengine emhttpd: /usr/sbin/zpool import -f -d /dev/md1p1 2>&1
     Apr 26 14:54:17 streamengine emhttpd: pool: disk1
     Apr 26 14:54:17 streamengine emhttpd: id: 1522930789915103990
     Apr 26 14:54:17 streamengine emhttpd: shcmd (25497): /usr/sbin/zpool import -f -N -o autoexpand=on -d /dev/md1p1 1522930789915103990 disk1
     Apr 26 14:54:22 streamengine kernel: VERIFY3(range_tree_space(smla->smla_rt) + sme->sme_run <= smla->smla_sm->sm_size) failed (281460079722496 <= 17179869184)
     Apr 26 14:54:22 streamengine kernel: PANIC at space_map.c:405:space_map_load_callback()
     Apr 26 14:54:22 streamengine kernel: Showing stack for process 25399
     Apr 26 14:54:22 streamengine kernel: CPU: 0 PID: 25399 Comm: z_wr_iss Tainted: P O 6.1.79-Unraid #1
     Apr 26 14:54:22 streamengine kernel: Hardware name: System manufacturer System Product Name/ROG STRIX B450-F GAMING, BIOS 4901 07/25/2022
     Apr 26 14:54:22 streamengine kernel: Call Trace:
     Apr 26 14:54:22 streamengine kernel: <TASK>
     Apr 26 14:54:22 streamengine kernel: dump_stack_lvl+0x44/0x5c
     Apr 26 14:54:22 streamengine kernel: spl_panic+0xd0/0xe8 [spl]
     Apr 26 14:54:22 streamengine kernel: ? rs_get_start+0xc/0x1d [zfs]
     Apr 26 14:54:22 streamengine kernel: ? range_tree_stat_incr+0x28/0x43 [zfs]
     Apr 26 14:54:22 streamengine kernel: ? range_tree_remove_impl+0x3b7/0x406 [zfs]
     Apr 26 14:54:22 streamengine kernel: ? zio_wait+0x1ee/0x1fd [zfs]
     Apr 26 14:54:22 streamengine kernel: space_map_load_callback+0x50/0x79 [zfs]
     Apr 26 14:54:22 streamengine kernel: space_map_iterate+0x2d6/0x324 [zfs]
     Apr 26 14:54:22 streamengine kernel: ? spa_stats_destroy+0x16c/0x16c [zfs]
     Apr 26 14:54:22 streamengine kernel: space_map_load_length+0x93/0xcb [zfs]
     Apr 26 14:54:22 streamengine kernel: metaslab_load+0x33b/0x6e3 [zfs]
     Apr 26 14:54:22 streamengine kernel: ? slab_post_alloc_hook+0x4d/0x15e
     Apr 26 14:54:22 streamengine kernel: ? spl_kmem_alloc_impl+0xc1/0xf2 [spl]
     Apr 26 14:54:22 streamengine kernel: ? __kmem_cache_alloc_node+0x118/0x147
     Apr 26 14:54:22 streamengine kernel: metaslab_activate+0x36/0x1f1 [zfs]
     Apr 26 14:54:22 streamengine kernel: metaslab_alloc_dva+0x8bc/0xfce [zfs]
     Apr 26 14:54:22 streamengine kernel: ? preempt_latency_start+0x2b/0x46
     Apr 26 14:54:22 streamengine kernel: metaslab_alloc+0x107/0x1fd [zfs]
     Apr 26 14:54:22 streamengine kernel: zio_dva_allocate+0xee/0x73f [zfs]
     Apr 26 14:54:22 streamengine kernel: ? kmem_cache_free+0xc9/0x154
     Apr 26 14:54:22 streamengine kernel: ? spl_kmem_cache_free+0x3a/0x1a5 [spl]
     Apr 26 14:54:22 streamengine kernel: ? preempt_latency_start+0x2b/0x46
     Apr 26 14:54:22 streamengine kernel: ? _raw_spin_lock+0x13/0x1c
     Apr 26 14:54:22 streamengine kernel: ? _raw_spin_unlock+0x14/0x29
     Apr 26 14:54:22 streamengine kernel: ? tsd_hash_search+0x70/0x7d [spl]
     Apr 26 14:54:22 streamengine kernel: zio_execute+0xb4/0xdf [zfs]
     Apr 26 14:54:22 streamengine kernel: taskq_thread+0x269/0x38a [spl]
     Apr 26 14:54:22 streamengine kernel: ? wake_up_q+0x44/0x44
     Apr 26 14:54:22 streamengine kernel: ? zio_subblock+0x22/0x22 [zfs]
     Apr 26 14:54:22 streamengine kernel: ? taskq_dispatch_delay+0x106/0x106 [spl]
     Apr 26 14:54:22 streamengine kernel: kthread+0xe7/0xef
     Apr 26 14:54:22 streamengine kernel: ? kthread_complete_and_exit+0x1b/0x1b
     Apr 26 14:54:22 streamengine kernel: ret_from_fork+0x22/0x30
     Apr 26 14:54:22 streamengine kernel: </TASK>
     Apr 26 14:54:43 streamengine network: reload service: nginx
  6. Today
  7. First of all, thank you for the awesome docker containers! How would I best add these variables, so that I can install the public test of Ashlands? I tried adding the variables like in the attached image, but no luck. @ich777
  8. If electricity costs are important to you, then go with the 5900X for your Unraid server. The Intel 13900K is generally known to be much more power hungry. https://beebom.com/intel-core-i9-13900k-review/
  9. There is no free version of Unraid v6, and you always need a valid licence. The closest is a trial licence, valid for 30 days, which is free and intended to allow new users to evaluate whether Unraid meets their needs, but it has the time limitation built in.
  10. Hello. So I've tried to change the RAM allocation of the ZFS cache drive, and it seems that it broke everything: my single ZFS cache drive is unmountable. I have managed to return the values of /etc/modprobe.d/zfs.conf to the previous ones (through editing /config/modprobe.d/zfs.conf), but it is still unmountable. In the terminal, the response to zpool list is:
      The ZFS modules are not loaded.
      Try running '/sbin/modprobe zfs' as root to load them.
      Any idea how to load these modules and get my ZFS cache drive mounted again in the pool? Thank you very much!
      Jonsbo Diagnostics 20240427.zip
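      A rough recovery sketch, assuming the malformed zfs.conf line was the only problem (on Unraid the GUI normally imports pools when the array starts, so the manual import step is illustrative only):

      /sbin/modprobe zfs      # should succeed once the bad parameter line is corrected
      zpool import            # with no arguments, lists pools available for import
      zpool import <poolname> # hypothetical pool name; import it manually if needed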
  11. ok thank you! it will take me a few days, I'm not back to my home until after this coming Thursday
  12. There is no user data on the parity drive, just enough information to rebuild a failed disk (in conjunction with all the other non-failed data drives). There is a good write-up on how Unraid parity works in the online documentation; a toy illustration of the idea follows below. As long as you do not have any disabled disks (marked with a red ‘x’), rebuilding parity is zero risk to your data and is in fact necessary to protect it against future disk failures.
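      A toy illustration of the XOR idea behind single parity (not Unraid's actual implementation), runnable in bash:

      # One byte from each of three data drives
      d1=$((0xA5)); d2=$((0x3C)); d3=$((0xF0))
      # The byte that would be stored on the parity drive
      p=$(( d1 ^ d2 ^ d3 ))
      # "Rebuild" a failed d2 from parity plus the surviving drives
      rebuilt=$(( p ^ d1 ^ d3 ))
      printf 'parity=%#x rebuilt_d2=%#x\n' "$p" "$rebuilt"   # rebuilt_d2 == 0x3c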
  13. I think it worked - after rebooting, I can see the "Airflow" widget on the dashboard, and the Dynamix Fan Auto Control app sees the qnap_ec pwm fan controllers right away, I didn't need to fiddle with the terminal to get them to show up. Thank you! If I may ask, what changes did you make to fix this? And is this a global fix for all TS-464s in the qnap-ec plugin?
  14. Thanks @JorgeB. I've ruled out all the add-on cards now. I've got another Ryzen system I can cannibalise and will report back.
  15. Sorry, I should have known to include that:
      Firefox 124.0.2 & 125.0.2 (updated & tested again)
      Brave v1.65.114 (Chromium 124.0.6367.60)
      It does NOT happen on Brave v1.65.122 (Chromium 124.0.6367.82).
      One thing I just realized that I'd never noticed before: this is barely noticeable if the graph is set to a 2 minute history, and quite obvious at 5 min, but doesn't happen at 1 minute or shorter. On the Firefox GIF, I started at a 2 min interval, then changed to 30s, 1m, then 5m. The Brave GIF is from 1.65.114.
  16. Not universally. In Unraid, the root folders on each disk or pool can be exported as user shares. I recommend watching SpaceInvader One's Unraid videos on YouTube; he has a load of very informative content.
  17. I let my system sit a bit longer and took:
      <source type='memfd'/>
      <access mode='shared'/>
      out of the XML. At this point the non-Virtiofs VMs don't boot at all. As soon as I add the shared memory backing back in, they boot right away. It seems that when memory backing is in use on one VM, it needs to be enabled on all VMs; otherwise, after the memory usage settles in, the other VMs will not start.
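      A quick way to see which defined VMs carry a memory-backing section (a sketch using standard virsh commands; the grep pattern is an assumption about how the element appears in each VM's XML):

      # List every VM and show any <memoryBacking> block in its definition
      for vm in $(virsh list --all --name); do
        echo "== $vm =="
        virsh dumpxml "$vm" | grep -A2 '<memoryBacking>'
      done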
  18. Easier and more secure to just use TOTP. Just add it to your 2-factor app of choice and you are done. This of course only protects the login page from remote users. Anyone with physical access has the keys to the kingdom, and if there is a vulnerability in the web UI, that could bypass any security implemented. A simple version of this could probably be implemented in an hour or two: just a screen for the setup, and some setting on the USB for the time seed.
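      For anyone curious, generating a TOTP code from a base32 seed is a one-liner with oathtool (from oath-toolkit, not part of Unraid; the seed below is a made-up example):

      # Print the current 30-second TOTP code for the example seed
      oathtool --totp -b 'JBSWY3DPEHPK3PXP'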
  19. Alright, I messed around with that a lot, and it either didn't work or I am doing something wrong.
  20. Thanks. So, in conclusion: I can copy everything to the single 8TB drive and then point Plex to the corresponding folders. Folders are called shares in Linux? No partitioning supported/needed. Thanks for the link!
  21. Yea, got this from @DanL via the support ticket I have:

      Hello,
      It turns out it's pretty simple. Here are the steps:
      Unzip and put the attached executable in your '/flash/custom/' folder.
      Add the following commands to a User Script that runs on first array start, or to your go file:

      # Install the docker buildx component
      cp /boot/custom/docker-buildx /usr/bin/
      chmod +x /usr/bin/docker-buildx

      Now the docker build command will be:
      docker-buildx build...

      Let me know if it works for you.
      Works fine for me. I'll work towards getting this built into Unraid.
      Dan L

      docker-buildx.zip
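      A hypothetical invocation once installed (the image tag and build context are examples, not from the ticket):

      # Build from a Dockerfile in the current directory using the buildx component
      docker-buildx build -t example/image:dev .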
  22. @alturismo that was it; I hadn't even considered that you should look at that... Thank you very much for your watchful eye 😊😊😊
  23. Short answer: no, each array drive is a single volume using all available space. Long answer: Unraid is Linux based, and the paths are going to be completely different between a Windows and a Linux install of Plex. Disclaimer: I am an Emby user, so I don't have first-hand experience with this, but here is supposed to be a guide of sorts: https://support.plex.tv/articles/201370363-move-an-install-to-another-system/
  24. I ended up getting read-only filesystem alerts for the USB drive, which was causing all these issues above. Not sure how I missed these errors the first time around. Now it makes sense why I couldn't adjust share configurations. I do wish a descriptive error message appeared in the UI, like "Couldn't write updated configuration", rather than nothing. IMHO it is annoying that Unraid is susceptible to a single point of failure with a USB drive, even if recovery is always possible. Guessing there are opinionated topics around that design. I'm also unsure how this occurred in the first place, as I'm using a somewhat decent flash drive for my setup.