
pyrater

Members

  • Content Count: 508
  • Joined
  • Last visited

Community Reputation: 7 Neutral

About pyrater

  • Rank: Advanced Member

Converted

  • Gender: Undisclosed


  1. I have various sized HDDs, with a few 1 and 2 TB drives. Since they are the oldest and the fullest, I want to consolidate about six drives onto two 8 TB drives. Removing them one at a time to do a parity rebuild seems excessive... Would a better/faster way be to simply add the 8 TB drives, mv/rsync all the files off the old drives onto the new ones, then remove the old six drives, create a new config, and finally rebuild parity?
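     A rough sketch of the copy step, assuming /mnt/disk3 is one of the old drives and /mnt/disk9 is one of the new 8 TB drives (the disk numbers are placeholders, not my actual layout):

     # Dry run first to see what would be copied (adjust disk numbers to the real old/new drives)
     rsync -avn /mnt/disk3/ /mnt/disk9/
     # Real copy, preserving ownership, permissions and timestamps
     rsync -av --progress /mnt/disk3/ /mnt/disk9/
     # Optional checksum verification pass before pulling the old drive
     rsync -avc /mnt/disk3/ /mnt/disk9/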
  2. I have built a U-NAS 800 and the new 810A. I love the U-NAS 810A; I even put a Zotac mini 1050 Ti GFX card in it, which is a feat for such a small case. Just note that all small cases are going to have issues with space and heat. And the 1U flex PSU I have is silent! See here:
  3. Not sure if you guys are aware, but Bitdefender hates it when you reboot your server... Not concerned, just more of an FYI...
  4. This should be folded into the main dev branch and included by default, I think...
  5. I have been getting that for weeks; it's driving me insane!
  6. top - 18:34:28 up 6 days, 11:52, 0 users, load average: 7.49, 6.33, 6.69
     Tasks: 6 total, 1 running, 5 sleeping, 0 stopped, 0 zombie
     %Cpu(s): 27.6 us, 31.5 sy, 0.0 ni, 36.7 id, 2.2 wa, 0.0 hi, 2.0 si, 0.0 st
     KiB Mem : 16109520 total, 285872 free, 3157732 used, 12665916 buff/cache
     KiB Swap: 0 total, 0 free, 0 used. 10459472 avail Mem

       PID USER  PR  NI    VIRT    RES   SHR S %CPU %MEM    TIME+ COMMAND
       206 abc   20   0 1932576 635384 37080 S 91.0  3.9 10:45.07 mono
         1 root  20   0     204      4     0 S  0.0  0.0  0:00.01 s6-svscan
        31 root  20   0     204      4     0 S  0.0  0.0  0:00.00 s6-supervise
       204 root  20   0     204      4     0 S  0.0  0.0  0:00.00 s6-supervise
       252 root  20   0    4504    704   636 S  0.0  0.0  0:00.01 sh
       261 root  20   0   38296   3588  3128 R  0.0  0.0  0:00.00 top
  7. Does anyone else get really high CPU usage from Radarr? I am getting 40-50% CPU usage from Radarr at idle.
  8. So I noticed Plex buffering a lot and logged into the server. The dashboard is showing 99-100% CPU usage, but htop is barely showing any... Not sure what is going on here?
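     One quick way to narrow down where the load is actually coming from (assuming the Docker CLI is available on the host; container names and numbers will differ per system):

     # Per-container CPU and memory snapshot
     docker stats --no-stream
     # Host processes sorted by CPU usage
     ps aux --sort=-%cpu | head -n 15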
  9. It looks like this feature has been added in RC3.

     Version 6.7.0-rc3 2019-02-09

     Base distro:
     - jq: version 1.6
     - oniguruma: version 5.9.6_p1
     - php: version 7.2.14

     Linux kernel:
     - version: 4.19.20
     - md/unraid: version 2.9.6 (support sync pause/resume)
     - patch: PCI: Quirk Silicon Motion SM2262/SM2263 NVMe controller reset: device 0x126f/0x2263

     Management:
     - emhttp: use mkfs.btrfs defaults for metadata and SSD support
     - emhttp: properly dismiss "Restarting services" message
     - firmware: added BCM20702A0-0a5c-21e8.hcd, added BCM20702A1-0a5c-21e8.hcd
     - vfio-pci script: bug fixes
     - webgui: telegram notification agent bug fixes
     - webgui: VM page: allow long VM names
     - webgui: Dashboard: create more space for Docker/VM names (3 columns)
     - webgui: Dashboard: include links to settings
     - webgui: Dashboard: fix color consistency
     - webgui: Syslinux config: replace checkbox with radio button
     - webgui: Docker page: single column for CPU/Memory load
     - webgui: Docker: memory usage in advanced view
     - webgui: Dashboard: fixed wrong display of memory size
     - webgui: Dashboard: fix incorrect memory type
     - webgui: Plugin manager: align icon size with rest of the GUI
     - webgui: Plugin manager: enlarge readmore height
     - webgui: Plugin manager: add .png option to Icon tag
     - webgui: Plugin manager: table style update
     - webgui: Added syslog server functionality
     - webgui: syslog icon update
     - webgui: Main: make disk identification mono-spaced font
     - webgui: Added parity pause/resume button
     - webgui: Permit configuration of parity device(s) spinup group.
  10. Is there a way to acknowledge or disable the utilization warning banner? I do not care that individual drives are full; I only care when my entire array is full. See snippet for an example.
  11. This is what I use for my Icarus server.
  12. Yeah, you're right, I didn't see that. Oh well, with your new settings the primary issue is fixed, so thank you again!