JorgeB

Moderators
  • Posts: 63,674
  • Joined
  • Last visited
  • Days Won: 674

Everything posted by JorgeB

  1. ^this, but since it's ReiserFS make sure you update to v6.3.3 first.
  2. There can be a small difference when changing filesystems, but never one this large; the only reason I can think of is that there are sparse files (like vdisks) on the source disk. ETA: on the next run add --sparse to the rsync command; it won't hurt, and if there really are sparse files they will use the same space on the destination.
  3. If you mean cancel the rsync, yes, you can re-format and start again, but if the destination was really empty the result should be the same.
  4. One thing has nothing to do with the other, but I'm not sure I understand what you're doing; you'll need to explain in more detail what you did and what you're seeing.
  5. No, and it wouldn't matter if all 24 were in use with HDDs.
  6. It can be, since each 10GbE port can push 1GB/s and the max usable bandwidth of a PCIe 2.0 x4 slot is around 1.6GB/s, but only if you use both ports simultaneously and your hardware is capable of reaching those speeds.
  7. Max speed in an x4 slot with 8 disks is about 190MB/s per disk, so enough for most disks. https://forums.lime-technology.com/topic/41340-satasas-controllers-tested-real-world-max-throughput-during-parity-check/#comment-406521
  8. The OOM errors started when the mover ran. I had a similar one recently, and from what I've read they are quite common with kernels 4.8 and 4.9, apparently fixed in kernel 4.10, but unRAID is still on 4.9. Since then I decreased my RAM cache and so far no more OOM errors; you could try the same, manually or using the Tips and Tweaks plugin. The values to change are vm.dirty_background_ratio and vm.dirty_ratio; the defaults are 10 and 20, I set mine to 1 and 2 just to see if it would help, and for now they are still like that.
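The two sysctls in question can be read and changed at runtime; a sketch, assuming root (the 1 and 2 below are the values I settled on, adjust to taste):

```shell
# Current values (defaults are 10 and 20):
sysctl vm.dirty_background_ratio vm.dirty_ratio

# Lower them at runtime; needs root, and does not persist across reboots:
sysctl -w vm.dirty_background_ratio=1
sysctl -w vm.dirty_ratio=2
```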
  9. Agree, memtest errors always mean a hardware issue, usually a bad stick, but definitely a problem, even if the crashing has been limited to unRAID so far.
  10. You can close it and keep the operation running by using screen, part of the Nerdpack.
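A minimal screen workflow; the session name and the command are placeholders for your own:

```shell
# Start the long-running job in a detached, named session
# ("movedata" and "sleep 60" are placeholders):
screen -dmS movedata sleep 60

# List sessions; movedata should appear here:
screen -ls

# Reattach with: screen -r movedata   (detach again with Ctrl-A d)
# Kill the session once the job is done:
screen -S movedata -X quit
```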
  11. Depends on what you want to do: Add a device (use method 2), replace/upgrade or remove.
  12. I need to update that; the procedures were added later (if you use the GUI it will still change back to raid1). Note however that you can't remove a disk past the minimum number that the profile in use requires, e.g., if you have an 8-device raid10 pool you can remove 4 devices without changing the profile; to remove any more you'll need to convert.
  13. You can add, remove and replace cache pool devices without changing the profile in use, see the FAQ on the general support forum.
  14. I tested on 2 of my servers. One has a cache pool and 2 unassigned SSDs; those were the only ones trimmed (plus loop devices), the disks were spun down and stayed that way, so no issues. The other server is my SSD-only server; only the cache was trimmed since array SSDs don't support it, and no errors there either. Looks like it only attempts to trim devices that support it.
  15. You can't browse to the ISO location; it will list all the ISOs on your ISO share. The share location is set in Settings -> VM Manager -> Default ISO storage path:
  16. Yes, but like I said I have a bunch of extra stuff connected, I was not trying to see how low it goes but how it compared to a Kabylake in equal circumstances.
  17. Seems like a good option to me, I don't see any downsides. It didn't spin up my HDDs so it didn't try to trim them, and this will take care of trimming any btrfs unassigned devices (XFS unassigned devices don't need it since they are mounted with the discard option, although from what I've read that's not recommended for NVMe devices; fstrim should be used instead).
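For a one-off manual trim (what the plugin does on a schedule), something like this should work; needs root, and the mount point is illustrative:

```shell
# Trim all mounted filesystems that support discard; -a skips the
# ones that don't, -v prints how much was trimmed:
fstrim -av

# Or just one mount point, e.g. an unassigned btrfs SSD (path illustrative):
# fstrim -v /mnt/disks/my_ssd
```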
  18. cat /dev/urandom > /dev/null
      You need 1 session per core/thread. I also use this to see each core's frequency:
      watch -n 1 grep MHz /proc/cpuinfo
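Rather than opening one session per core by hand, all the loops can be started from a single shell; a sketch:

```shell
# Spawn one busy loop per CPU thread (nproc reports the thread count):
for i in $(seq "$(nproc)"); do
    cat /dev/urandom > /dev/null &
done

# In another terminal, watch per-core frequency while loaded:
#   watch -n 1 'grep MHz /proc/cpuinfo'

# Stop all the loops when done:
kill $(jobs -p)
```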
  19. I see a 1w difference between those settings, still seems very little; frequency on the 1700 is 1550MHz for power save and 3000MHz for performance. A few more readings (+/- 1w), # of cores/threads @ 100% utilization - power consumption:
      0 (idle) - 62w
      1 - 75w
      2 - 83w
      4 - 88w
      6 - 102w
      8 - 114w
      16 - 115w
  20. That looks much better; I'd be happy with a similar idle consumption to Kabylake. Maybe Linux is not yet using all the power saving features; I didn't check if the CPU was throttling down.
  21. Got my hands on a Ryzen just for today and, while doing some other bandwidth tests, also took some power readings. This is not trying to be as low as possible, since I have a bunch of SSDs connected, but is mostly for comparison to a Kabylake i5. Both systems have the exact same things connected; the only difference is the board and CPU. unRAID v6.3.2, idle:
      Asus PRIME B350M-A + Ryzen 1700 + GPU: 62w
      Asus PRIME B250M-K + i5-7400 + GPU: 50w
      Asus PRIME B250M-K + i5-7400 (iGPU): 38w
      GPU used: AMD 3450