Leaderboard

Popular Content

Showing content with the highest reputation on 03/20/19 in all areas

  1. I just changed the share to use the cache and the problem is resolved. I will have to move a little at a time, I guess.
    1 point
  2. Parity is no substitute for backups, regardless of RAID or Unraid. Plenty of ways to lose data that don't involve a bad disk, including simple user error. You must have another copy of anything important and irreplaceable. You get to decide what qualifies. If none of this data qualifies, that's fine. You can't replace a disk with a smaller disk, so you would have to shrink the array by removing the disk and rebuilding parity, then add the smaller disk. If that is what you want to do, then shut down, check connections, and proceed to copy the data from that disk to the others (see the sketch after this item).
    1 point
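    A minimal sketch of that copy step, assuming the disk being removed is disk3 and disk1 has enough free space (both disk numbers are hypothetical):

      #!/bin/bash
      # Copy everything from the outgoing disk to another data disk,
      # preserving permissions, timestamps, and extended attributes.
      rsync -avX --progress /mnt/disk3/ /mnt/disk1/

    Verify the data arrived intact before removing the disk and rebuilding parity.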
  3. The 9211-8i connects directly to 8 devices (the "8i" means exactly that; there are 4i and 16i models, also models with external ports, e.g. 9200-8e). To connect more than 8 devices to that HBA you'd need one or more SAS expanders.
    1 point
  4. Understanding volume mapping will go a long way toward avoiding those snags in the first place (see the sketch after this item). See the Docker FAQ. https://forums.unraid.net/topic/57181-real-docker-faq/
    1 point
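    A minimal sketch of the host-to-container mapping the FAQ covers; the share name, container path, and image name here are hypothetical:

      # Map the host share /mnt/user/appdata/myapp to /config inside
      # the container; files written to /config land in the host share.
      docker run -d \
        --name=myapp \
        -v /mnt/user/appdata/myapp:/config \
        myimage:latest

    Most "missing data" snags come down to the container writing to a path that was never mapped to the host.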
  5. Yes, likely not that many, though most users don't mention where they are from in normal posting. Since the hardware isn't country specific, it's not really a problem for the OP in this case.
    1 point
  6. The good thing to come out of all this is that you know how to set up scripts, and what they can and can't do on startup. It's pretty simple really: a script can be just a list of commands you could type at the command line but don't particularly want to type over and over again manually. You can start Dockers, VMs, do pretty much anything with a script (see the sketch after this item).
    1 point
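    A minimal sketch of that idea; the container and VM names are hypothetical:

      #!/bin/bash
      # Just a list of commands you would otherwise type by hand.
      docker start plex              # start a Docker container
      virsh start "Windows10"        # start a VM by its libvirt name
      echo "Startup tasks finished."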
  7. I know @johnnie.black is Portuguese. I'm Brazilian. Not so many of us "lusófonos" (Portuguese speakers) around here.
    1 point
  8. The latest version of Plex (1.14.1.5488) is now live. Sorry for the delay; ran into a couple of snags with the new version.
    1 point
  9. Two-tone mug and a gray polo on March 20, 2019
    1 point
  10. It means parity was out of sync with the array. A correcting check should be run if one wasn't already, and subsequent checks should result in 0 errors.
    1 point
  11. Got me a Two-Tone Coffee Mug (March 20, 2019); looks like I will be buying a lot of stuff in the future.
    1 point
  12. Got myself some stickers for my servers and a mouse pad to replace the ratty HP one I've had for years.
    1 point
  13. I bought a coffee cup and a mouse pad... March 19, 2019. Thanks, Russ
    1 point
  14. Hey! Got myself a tote bag with a black strap today, March 19, 2019. Looks pretty nifty with the orange Unraid logo!
    1 point
  15. You may (or may not) want to get in contact with Broadcom (LSI) and check whether it's a counterfeit. My data is too important to me; I'll stick with used server pulls, since very few servers would be built with the lowest-budget parts.
    1 point
  16. Based on another post on this forum discussing the exact requirements and issues with TRIM on HBAs and the latest firmwares/Unraid releases (can't find it, but do a search on TRIM), I moved from a flashed H200 up to an LSI 9300-8i (thanks eBay for a cheap Chinese card, if you are not in a hurry) and also replaced all my EVO 950s with 960s, and finally got btrfs TRIM fully working. You need the proper card and the proper drive now to get it working on btrfs, otherwise you are out of luck. Nice little speed boost as well with the faster HBA. Before that I had to temporarily connect the drives to motherboard SATA, run a TRIM, and connect them back to the HBA (see the sketch after this item).
    1 point
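    A minimal sketch of that manual TRIM step, assuming the SSD is mounted at /mnt/cache (the path is hypothetical):

      # Trim unused blocks on the mounted filesystem; -v reports how
      # many bytes were discarded. This fails if the controller/drive
      # combination doesn't pass TRIM through.
      fstrim -v /mnt/cache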
  17. Yeah, I think the reason is that when people copy/paste from a code block on the forum, it adds another character that isn't visible (see the sketch after this item for one way to spot it). @SpaceInvaderOne has also noticed it, and I had encountered the issue previously while looking at the Nextcloud upgrade CLI instructions I wrote.
    1 point
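    A minimal sketch of one way to reveal such characters, assuming the pasted text was saved to a file named pasted.txt (hypothetical name):

      # Show non-printing characters explicitly; a stray non-breaking
      # space (UTF-8 bytes C2 A0) shows up as "M-BM- " instead of a blank.
      cat -A pasted.txt

      # Or dump the raw bytes one by one.
      od -c pasted.txt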
  18. Use the script posted here and it will show up in Netdata. https://forums.unraid.net/topic/77813-plugin-linuxserverio-unraid-nvidia/?do=findComment&comment=728822
    1 point
  19. There's a sidebar entry titled "nvidia smi"; click on that. Note that the command is "nvidia-smi" but the configuration option is "nvidia_smi: yes" (see the snippet after this item). If you have entered everything correctly and restarted the Docker container per the instructions, it should show. Alternatively, you could add my script from a couple posts ago to User Scripts and run it.
    1 point
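    For reference, the configuration line in question; the file path matches the one used by the script in the next item:

      # /etc/netdata/python.d.conf -- enable the nvidia_smi collector
      nvidia_smi: yes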
  20. Probably still best to ask if they can add support to their docker, since it should just "not work" if nvidia-smi isn't available. But if they are opposed, here's a user script you can run on a schedule:

      #!/bin/bash
      # Find the Netdata container by name.
      con="$(docker ps --format "{{.Names}}" | grep -i netdata)"

      # grep -q exits 1 when the option isn't in the config yet.
      exists=$(docker exec -i "$con" grep -iqe "nvidia_smi: yes" /etc/netdata/python.d.conf >/dev/null 2>&1; echo $?)

      if [ "$exists" -eq 1 ]; then
        # Append the option and restart the container so it takes effect.
        docker exec -i "$con" /bin/sh -c 'echo "nvidia_smi: yes" >> /etc/netdata/python.d.conf'
        docker restart "$con" >/dev/null 2>&1
        echo '<font color="green"><b>Done.</b></font>'
      else
        echo '<font color="red"><b>Already Applied!</b></font>'
      fi
    1 point
  21. Hi guys, so I've just built my new Ryzen Unraid server (see signature for details). I moved from an i3 Unraid server + an i5 6600K gaming machine to this 😃 Everything is working great; I've successfully passed my 2 graphics cards through to VMs, and the gaming VM has quite good performance 😁 Now it's time for optimisation, so are there mandatory BIOS settings to apply? And does a Windows VM use the Turbo capability of the CPU, or is it better to overclock? Thanks, best regards
    1 point
  22. Just because it is the latest version available in the Alpine Linux repo: https://pkgs.alpinelinux.org/packages?name=firefox&branch=edge
    1 point
  23. Thanks. In case anyone else searches for the same thing: you have to go to Docker, find Krusader, click its icon to edit it, find the Unassigned Devices mapping (mounted at /mnt/disks/), and change that path's access mode from R/W or R to R/W Slave or R Slave (see the note after this item). Did the trick, even though I don't know the difference between these two options.
    1 point
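    For context, the "Slave" variants set Docker's mount propagation to slave, so filesystems that Unassigned Devices mounts or unmounts on the host after the container starts stay visible inside it. A minimal sketch of the equivalent docker run mapping; the container path and image name are hypothetical:

      # With :slave, host mounts made under /mnt/disks after startup
      # propagate into the container instead of appearing empty.
      docker run -d \
        --name=krusader \
        -v /mnt/disks:/media:rw,slave \
        somerepo/krusader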
  24. Unionfs works "ok" but it's a bit clunky, as per the scripts above. The rclone devs are working on their own union remote, which will hopefully include hardlink support, unlike unionfs. It may also remove the need for a separate rclone move script by automating transfers from the local drive to the cloud. https://forum.rclone.org/t/advantage-of-new-union-remote/7049/1
    1 point
  25. Key elements of my rclone mount command:

      rclone mount \
        --allow-other \
        --buffer-size 256M \
        --dir-cache-time 720h \
        --drive-chunk-size 512M \
        --log-level INFO \
        --vfs-read-chunk-size 128M \
        --vfs-read-chunk-size-limit off \
        --vfs-cache-mode writes \
        --bind=$RCloneMountIP $RcloneRemoteName: \
        $RcloneMountLocation &

      --buffer-size: determines the amount of memory that will be used to buffer data in advance. I think this is per stream.

      --dir-cache-time: sets how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache, so if you upload via rclone you can set this to a very high number. If you make changes directly on the remote, they won't be picked up until the cache expires.

      --drive-chunk-size: applies to files uploaded via the mount, NOT via the upload script, i.e. if you add files directly to /mnt/user/mount_rclone/yourremote. I rarely do this and it's not a great idea.

      --vfs-read-chunk-size: this is the key variable. It controls how much data is requested in the first chunk of playback; too big and your start times will be too slow, too small and you might get stuttering at the start of playback. 128M seems to work for most, but try 64M and 32M.

      --vfs-read-chunk-size-limit: each successive vfs-read-chunk-size doubles in size until this limit is hit, e.g. for me 128M, 256M, 512M, 1G, etc. I've set the limit to off so that rclone downloads the biggest chunks it can for my connection.

      Read more on vfs-read-chunk-size: https://forum.rclone.org/t/new-feature-vfs-read-chunk-size/5683
    1 point
  26. Note to self: info here: https://packaging.python.org/tutorials/installing-packages/#ensure-you-can-run-pip-from-the-command-line

      pip does not work from scratch; first run:

      python -m ensurepip --default-pip

      Then optionally upgrade pip using:

      pip install --upgrade pip

      And then just install the package from https://pypi.org/project/requests/ using:

      pip install requests
    1 point
  27. On the Settings tab, go to Disk Settings and enable Auto-Start.
    1 point