Leaderboard

Popular Content

Showing content with the highest reputation on 07/30/19 in all areas

  1. We have this implemented for the 6.8 release.
    4 points
  2. Gremlins :-), I'm just beating them with a hammer right now; I'll have a gremlin-free version shortly.
    2 points
  3. I'm taking this application out of BETA status. Version 2.0 has been released.
     Release 2.0:
     - Added progress bars to the drive benchmarking
     - Rewrote the Controller Benchmark to better test multi-drive performance
     - Disabled the drive activity monitor until cosmetic issues with rotated drive images are resolved
     - Redesigned how the application aborts disk scans if the page is refreshed or otherwise navigated away from during a scan
     For the Controller Benchmark, the application reads each drive attached to the controller in sequence for 15 seconds each, then reads all drives simultaneously for 15 seconds. If the sum of the percentage differences between all of the drives exceeds 5%, your controller bandwidth is likely being saturated and you're not getting the full possible performance (such as during a Parity check). This can help you plan your drive-to-controller assignment.
     Example of a controller's bandwidth not being saturated:
     Here's my main rig with 8 WD Red Pro 6TBs attached, saturating the controller:
    2 points
  4. Same problem here. The temporary solution I came up with is to edit the docker settings and change the repository to an older version (sketched below): "linuxserver/duplicati:v2.0.4.23-2.0.4.23_beta_2019-07-14-ls27"
    2 points
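     A minimal sketch of the same pin done with a plain docker run, in case it helps to see it outside the Unraid template; the container name, port, and volume path are illustrative placeholders, not taken from the post above:
[pre]
# Pull and run the older linuxserver/duplicati tag instead of :latest.
# Name, port, and volume path below are placeholders.
docker pull linuxserver/duplicati:v2.0.4.23-2.0.4.23_beta_2019-07-14-ls27
docker run -d \
  --name=duplicati \
  -p 8200:8200 \
  -v /mnt/user/appdata/duplicati:/config \
  linuxserver/duplicati:v2.0.4.23-2.0.4.23_beta_2019-07-14-ls27
[/pre]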
  5. Is anybody using docker compose? Are there any plans to integrate it with unRAID?
    1 point
  6. This is a continuation of the original issue report. There is some conflicting information in that topic and, therefore, with your help, we want to try and get to the bottom of this. We have several test servers running trying to reproduce this problem; so far it hasn't happened.
     The first thing we want to definitively establish is whether the problem is isolated to only those cases where /config inside the container is mapped to /mnt/user/appdata vs. /mnt/cache/appdata or /mnt/diskN/appdata. Note: it does not matter what the "Default appdata storage location" is set to on the Settings/Docker page. Of interest is the actual path used by an individual container.
     What I'd like to ask is this: for those of you who have seen this DB corruption occur after updating from Unraid 6.6.x to 6.7.x, please note the placement of your 'appdata' folder and how it's mapped to your problematic container. Then update to 6.7.2 and run normally until you see corruption. At that point, please capture diagnostics and report how appdata is being mapped. If this is a huge PITA for you, then bow out of this testing - we are not trying to cause big disruptions for you.
     Finally, I want to make clear: we are not "blaming" this issue on any other component, h/w or s/w, and also not "admitting" this is our problem. We have not changed any code we're responsible for that would account for this, which doesn't mean we don't have a bug somewhere - probably we do - and a change in the kernel or some other component might simply be exposing some long-hidden issue. We are only interested in identifying and fixing the problem.
    1 point
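     Not from the post above, but one quick way to see how /config is actually mapped for a given container (the container name is a placeholder):
[pre]
# List host-path -> container-path mappings for a container and check whether
# /config comes from /mnt/user/appdata, /mnt/cache/appdata, or /mnt/diskN/appdata.
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' my-container
[/pre]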
  7. Just want to say thanks! Working great. The new controller benchmark is much easier to understand and get an overview of. My rig gives me an average difference of 0.1%.
    1 point
  8. I love how Linus cringed when he casually dropped the drives on the carpet. If the footage was that valuable, maybe another duplicate server offsite that replicates?
    1 point
  9. That's pretty cool, but at the end the only thing going through my head was: "Man, I really hope he is also going to set up an offsite backup" I'm also a bit curious how much data he actually ended up with. I'd be pretty surprised if those drives in the bins were actually all full.
    1 point
  10. It won't work on array devices, even if there's no parity. The only option would be to mount them manually with the array stopped and run fstrim; since there's no parity, it won't break anything.
    1 point
  11. If you view the App Store entry for the TRIM plugin, it gives the commands it uses to trim the SSDs; you can manually run those on your non-parity-protected SSDs (sketched below). Optionally, you can put in a single spinner with no parity as your array drive, used only for backup purposes and otherwise spun down with a short spin-down delay, and put the SSDs in as unassigned drives managed by the "Unassigned Devices" plugin for hosting your VMs. I'm currently using this setup.
    1 point
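     Roughly what that manual approach looks like - a sketch only, not the plugin's exact commands; device names and mount points are placeholders:
[pre]
# Trim an SSD that is already mounted outside the array
# (e.g. an Unassigned Devices mount; path is a placeholder).
fstrim -v /mnt/disks/my_ssd

# For an array SSD: with the array stopped, mount the partition
# somewhere temporary, trim it, then unmount (sdX1 is a placeholder).
mkdir -p /mnt/tmp_ssd
mount /dev/sdX1 /mnt/tmp_ssd
fstrim -v /mnt/tmp_ssd
umount /mnt/tmp_ssd
[/pre]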
  12. Yes, it might not be in IT mode, but flashing is relatively easy.
    1 point
  13. Ahh, sorry, it looks like the views have changed. Try the 'Tags' tab instead to view all available tagged images, then follow the rest of the instructions in the linked FAQ.
    1 point
  14. Alright man, thanks. I'll try this once I'm ready for my new PROXMOX installation.
    1 point
  15. Thanks both! I will give it a try, update the BIOS, and see how it performs. I'll need to figure out the least disruptive way to trial it, as if it's borderline the VM may not be best as a daily Windows 10 platform.
    1 point
  16. I have an Asus H77 motherboard in my main rig that supports VT-d. IIRC there are a lot of forum posts complaining about it, but I think support eventually came to everything with BIOS updates.
    1 point
  17. Can someone explain the functionality of the 'streams' function? I want to know how it works and what it does.
    1 point
  18. It may not be official or fully VT-d capable, but some users report positive results for the Z77 chipset. https://forums.mydigitallife.net/threads/vt-d-enabled-motherboards-and-cpus-for-paravirtualization.33730/page-2#post-636451 GPU passthrough is not as easy as other device types.
    1 point
  19. Follow the directions in the first post of the thread.
    1 point
  20. Depending on how valuable the data is, you may want to send the drive to a professional recovery service. The chances of recovering data with normal user tools after a significant portion of the drive has been zeroed out are pretty slim. Your first avenue should be recovering the lost data from your backups.
    1 point
  21. 6.8? RELEASE DATE? 😁 Just kidding. Nice to see that feature will be in soon 😍
    1 point
  22. You may run into slowdowns over time with the main array SSDs, as TRIM is disabled on array devices to keep from breaking parity. I know you said there's no parity drive, but TRIM has been disabled programmatically on those devices regardless of the presence of valid parity. For now, if you experience this you will need to unassign those drives and trim them manually. Some drives work fine without this, but it's something to keep in mind.
    1 point
  23. I concur... coming over from FlexRaid, I'm thoroughly enjoying my brand new Unraid build that I just finished. All weekend I played "towers of Hanoi" with my data and drives as I got them added into the array one by one, syncing/validating/validating-again my data before letting the drives get cleared.
     FlexRaid was a fantastic idea for my work-from-home and home server setup... but the lack of support and the somewhat schizophrenic disappearances of my array had me seriously concerned. As the OP did, I then re-discovered Unraid and realized it offered the same benefits of a parity approach but with all the power of being its own OS.
     Luckily, I also upgraded my HTPC about a year ago, so I happened to have an AMD Athlon II and an Asus M4A785-M motherboard (yes, it's working perfectly for me) looking for a job to do. I tossed in a new Node 804 case, an LSI HBA 9207-8i (and a few fans), and an Intel PRO/1000 PCI Express NIC... and voila, I've got a new, lean, slick-looking Unraid server up and running!
     I've been extremely impressed with the utter ease of use and the confidence the software instills (once it's booting consistently - my M4A785-M board was initially shaky until the LSI card was put in; a known issue in the forums). I love the headless nature of it and the Web UI (and look forward to a mobile app option in the future). My Gigabit network and MoCA 2.0 adapters are working wonderfully, getting great speeds to/from the server at nearly 2-3x the speed that FlexRaid used to run at (~70-100 MB/s vs FlexRaid's 30-40 MB/s).
     Needless to say, I'm very impressed also and very happy to have a solution up and running again. Now the next priority to work out is exactly how to get my Backblaze Home account backing the array up to the cloud like it did with my FlexRaid (running on the same machine)!
    1 point
  24. Just want to throw this out there: one of the main reasons some of us choose to mess with compose is to get around some of the limitations of the unRAID template system, in particular when it comes to complex multi-container applications, which often use several frontend and backend networks (a sketch of that pattern follows below).
    1 point
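     Purely illustrative of that multi-network pattern - the services and networks below are an assumption for the example, not taken from any post here:
[pre]
# Write a minimal compose file with separate frontend and backend
# networks, then bring the stack up.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    networks: [frontend]
  adminer:
    image: adminer
    ports: ["8080:8080"]
    networks: [frontend, backend]
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: example
    networks: [backend]
networks:
  frontend:
  backend:
EOF
docker-compose up -d
[/pre]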
  25. Turn on Help on the cache settings. Cache: No means new files written to the share will go directly to the array, and the mover won't touch files already on the cache for that share. What you want is Cache: Yes; run the mover, then make sure no new files are written to the cache by turning it back to Cache: No.
    1 point
  26. How do I limit the memory usage of a docker application? Personally, on my system I limit the memory of most of my docker applications so that there is always (hopefully) memory available for other applications / unRaid if the need arises. For example, if you watch CA's resource monitor / cAdvisor carefully when an application like nzbGet is unpacking / par-checking, you will see that its memory use skyrockets, but the same operation can take place in far less memory (albeit at a slightly slower speed). That memory will not be available to another application such as Plex until after the unpack / par check is completed. To limit the memory usage of a particular app, add this to the Extra Parameters section of the app when you edit / add it: --memory=4G This will limit the memory of the application to a maximum of 4G (see the sketch below).
    1 point
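     A rough sketch of what that flag amounts to outside the template - the container name and image are placeholders, and a real container would also need its usual ports and volume mappings:
[pre]
# Run a container with a 4 GB hard memory cap; Unraid's Extra Parameters
# field effectively adds flags like --memory to a docker run of this form.
docker run -d --name=nzbget --memory=4G linuxserver/nzbget

# Check the configured limit and current usage afterwards:
docker stats --no-stream nzbget
[/pre]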
  27. I figured it out. I needed to specify the byte offset of where the partition begins. For anyone who might have the same question in the future, here is what I did.
     From the unRAID command console, display partition information of the vdisk:
     fdisk -l /mnt/disks/Windows/vdisk1.img
     I was after the sector size and the partition's Start sector. The output will look something like this:
     [pre]Disk vdisk1.img: 20 GiB, 21474836480 bytes, 41943040 sectors
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disklabel type: dos
     Disk identifier: 0xda00352d

     Device      Boot   Start      End  Sectors  Size Id Type
     vdisk1.img1 *       2048   206847   204800  100M  7 HPFS/NTFS/exFAT
     vdisk1.img2       206848 41940991 41734144 19.9G  7 HPFS/NTFS/exFAT[/pre]
     To find the offset in bytes, multiply the partition's start sector by the sector size. In this case, I wanted to mount vdisk1.img2: 206848 * 512 = 105906176
     Final command to mount the vdisk NTFS partition as read-only:
     mount -r -t ntfs -o loop,offset=105906176 /mnt/disks/Windows/vdisk1.img /mnt/test
    1 point
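     An alternative sketch of the same mount, not from the post above: let losetup scan the image's partition table so no manual offset arithmetic is needed (paths match the example above; adjust as needed).
[pre]
# Attach the image with partition scanning; losetup prints the loop device it used.
LOOP=$(losetup -f --show -P /mnt/disks/Windows/vdisk1.img)

# Mount the second (NTFS) partition read-only.
mount -r -t ntfs ${LOOP}p2 /mnt/test

# Clean up afterwards.
umount /mnt/test
losetup -d $LOOP
[/pre]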