Leaderboard

Popular Content

Showing content with the highest reputation on 06/07/19 in all areas

  1. It would be nice to have 'cache groups' for different purposes. For example: "cache group": storage -> used strictly for storage; "cache group": docker -> used strictly for Docker images; "cache group": VMs -> used strictly for VMs. That way they won't have to fight for disk I/O. One can only dream...
    2 points
  2. I stumbled on an Unraid server that is fully exposed and running with no security on the internet. It looks like someone in the US in Bloomfield, Indiana. What can I do to alert the user of the issue? It has been up for 47 days and is running 6.5.3. It's running the Pro version, so maybe I can give LT the reg key and they can contact the user? Looks like it is running Serviio and not much else, just a bunch of movies on the drives.
    1 point
  3. Top right of that screen, select advanced view instead of basic.
    1 point
  4. I can assure you it's not that bad. Most of the issues have already been resolved, and the only problems I had disappeared with BIOS updates months ago.
    1 point
  5. Automated port scans. A VPN server hosted on your router or similar is a better option.
    1 point
  6. Basically, I am sending the rig to sleep 30 minutes after everything is "silent", meaning all disks spun down, no network traffic, etc. It's the large storage box, with media and such, only used when there is a need, which is 99% of the time at weekends only. The day-to-day workload is covered by a smaller unRAID server, idling at 10 W, which does not go to sleep as it also powers the main router VM.
    1 point
  7. Write cache is disabled for that drive: https://forums.unraid.net/topic/72862-drive-write-speeds-really-slow-solved/?do=findComment&comment=670028
    1 point
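     For the drive discussed above, a quick way to check and re-enable the volatile write cache from a terminal is hdparm; this is only a sketch, and /dev/sdX is a placeholder for the affected drive:
       # Query the current setting (look for "write-caching = 0 (off)" or "= 1 (on)")
       hdparm -W /dev/sdX
       # Enable the drive's volatile write cache
       hdparm -W1 /dev/sdX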
  8. In your 3rd screenshot, try changing Tunable (md_write_method) to reconstruct write (used to be called turbo write); a terminal sketch follows below. Writing a lot of small files will always be slower, but the impact of parity will be more severe because it requires repeated seeking and switching between reading and writing. Reconstruct write helps at the cost of needing the rest of your array spun up. As long as the rest of your array is decent at random reads, it generally will help. Also, remember your network adds latency, and latency is a lot more important with small files. Having to wait 0.1 s/file due to network latency is nothing for a 1 GB file, but for 1000 1 MB files, it's 100 s!
    1 point
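     The GUI setting lives under Settings -> Disk Settings; it can also be toggled from a terminal with unRAID's mdcmd helper. A minimal sketch, assuming the commonly quoted values (1 = reconstruct write, 0 = read/modify/write); verify against your version's Disk Settings help text:
       # Switch to reconstruct ("turbo") write; needs all array disks spinning
       mdcmd set md_write_method 1
       # Revert to the default read/modify/write behaviour
       mdcmd set md_write_method 0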
  9. Go into your Access Server settings and change the hostname from the docker IP to your public IP/hostname.
    1 point
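     If the container wraps OpenVPN Access Server, the same change can also be made with its sacli tool from a console inside the container; this is a sketch under that assumption, with vpn.example.com standing in for your public IP/hostname:
       # Set the hostname/IP that client profiles will point at (placeholder value)
       /usr/local/openvpn_as/scripts/sacli --key "host.name" --value "vpn.example.com" ConfigPut
       # Restart the Access Server so new client profiles pick up the change
       /usr/local/openvpn_as/scripts/sacli start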
  10. Unraid is even worse than the standard effect on any filesystem, as in addition to normal filesystem overhead it has to do parity calculations/writes for each small file. I gave up struggling with regular incremental backups of huge audio sample libraries (millions of files) because it would take ages. Different backup solutions react differently, but all suffer, apparently regardless of whether you have a fast SSD cache. I now do them only once a month to Unraid, where the backup tool has a local database so it does not have to recheck every file at every backup run, but it is still insanely slow; daily backups happen outside Unraid. Normal large files, which make up most of my data, rocket over my 10G network at 500-700 MB/s without issues. It's the only downside of an otherwise stellar Unraid experience.
    1 point
  11. No concern, those files change from time to time; best to ignore them in FIP. Updating/checking their hashes just wastes resources.
    1 point
  12. I'm glad you've found a workaround, but I can't see that stopping the CRC errors, since they are a hardware issue. The error message looks like file system corruption, likely caused by the inability of the SATA controller to communicate reliably with the SSD. Your syslog will show you if the CRC errors are still happening (see the quick checks below). You haven't fixed the problem, just hidden it for a while. Well, j.b has pointed out the likely cause and his advice is the best you'll get on the subject. I've suggested trying a different brand, with a couple of examples of what works for me. I don't have any other suggestions, I'm afraid.
    1 point
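     Two quick ways to confirm whether the CRC errors are still occurring, assuming a standard smartctl install and the default unRAID syslog location; /dev/sdX is a placeholder for the SSD:
       # SMART attribute 199 (UDMA_CRC_Error_Count); a rising raw value means the link errors are ongoing
       smartctl -A /dev/sdX | grep -i crc
       # Look for recent CRC / link reset messages in the syslog
       grep -iE "crc|link reset" /var/log/syslog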
  13. Filesystem overhead. All systems have the same flaw: copying a ton of small files is significantly slower than copying large ones. Sent via telekinesis
    1 point
  14. Only a fraction of Unraid users read this forum, and only a fraction of those post. There is no guarantee that someone clueless enough to leave the server open is clueful enough to come here for help.
    1 point
  15. Power off, check cables, power back on.
    1 point
  16. @limetech are you still looking for testers for SAS spindown?
    1 point
  17. Ok np. Thanks for giving me a hand.
    1 point
  18. Not necessarily. Try running 'sensors' from a terminal prompt and see what it reports. My board has two sensors: 1 - coretemp, 2 - nct6776. If sensors are detected on your board, try entering the name(s) manually in the plugin's available drivers field and click Save. With some boards, a manual entry may work where Detect does not. (A terminal sketch follows below.)
    1 point
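     A rough terminal sketch of the same idea; the module names below (coretemp for the CPU, nct6775 which also covers NCT6776 chips) are examples and may not match your board:
       # Load the CPU and Super I/O sensor modules manually (names are examples, not guaranteed matches)
       modprobe coretemp
       modprobe nct6775
       # Re-run sensors to see whether readings now appear
       sensors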
  19. Well, root can access everything and modify everything, and that's OK. At the moment, I have 3 VMs running on my server (macOS, Linux & Windows) and they are used by more than one user. We are programmers, and sometimes we crash the OS, which then needs to be rebooted from the webGUI. I would like to give certain users access to only VM management instead of giving them root access. This could be extended to about every tab in the webGUI.
    1 point
  20. Just thought I'd chime in here with an even easier method that doesn't require using btrfs subvolumes or btrfs snapshots: cp --reflink /path/to/vdisk.img /path/to/snapshot.img The --reflink option instructs cp to use the CoW features of btrfs to create a "reflink" copy of the file, which is essentially a file-level snapshot. You can even delete the underlying base image and the snapshot will continue to work, since btrfs keeps the shared block-level data around for as long as any reflink still references it. (A short usage sketch follows below.)
    1 point
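     A short usage sketch of the reflink approach; the paths are placeholders, and the VM should be shut down first so the copy is consistent:
       # Instant file-level snapshot: only metadata is copied, data blocks are shared via btrfs CoW
       cp --reflink=always /mnt/cache/domains/win10/vdisk1.img /mnt/cache/domains/win10/vdisk1.snap.img
       # Roll back later by copying the snapshot back over the live image
       cp --reflink=always /mnt/cache/domains/win10/vdisk1.snap.img /mnt/cache/domains/win10/vdisk1.img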