lionelhutz
Everything posted by lionelhutz

  1. I'm not seeing any big changes in the font size causing issues with stuff fitting the screen, except for the following: on the main Docker tab, the Advanced view no longer fits my screen width due to the added load columns. As for the Basic view, I hate those drop-down arrows - all they achieve is cluttering up the screen to hide a few lines of text. Then, when I click them, I have to click both the port and volume mapping because some goofy bottom-justify thing is happening. I vote to just get rid of those arrows altogether. See the screenshot of what happened when clicking the volume mapping down arrow. The other thing is the hard boxes around all the settings - example places would be the VM edit screen or the disk settings screen. The hard-lined boxes around each setting look very odd compared to how pretty much everyone else does it these days. I think they also add more space between items than is necessary, which just causes more scrolling. If you want to call it a modern layout, hard outlined setting boxes like that are very old school and not modern at all. It looks good otherwise. The other main tabs seem to fit the screen about the same as before.
  2. I've tried both themes. They both suck. Overall, this is now the worst forum layout of any forum I regularly visit. This forum at 100% in the browser is still larger than most forums at 125%. There are fewer responses on the screen at once compared to other forums, which means more scrolling to read anything. The comments about more air are true - there is less content and more blank screen on this forum compared to others. Somehow, even with the large font sizes, I still find it harder to read. By this, I mean when zoomed in the browser to about the same text size as I use on other forums. It likely has something to do with the exact fonts and colors, but could also be due to the monochromatic color scheme - just too much white or black background. Other forums have softer color variations for borders, post dividers and such. Overall, I do find it rather sad to read that you paid for this. There are probably better default forum themes.
  3. No, there were no indications. It was a process of elimination. I started installing Windows using VNC and it worked, but installing with the GPU and USB cards passed through would not. So, I narrowed it down to the USB card. I doubt it's the core system, or else you'd be seeing issues with unRAID itself being unstable. At one time I had hard lock-ups happening too, without any indication of why, and had to buy a new power supply and then a new motherboard, taking shots at what could be the issue. Luckily, it was the motherboard. But even before that, I had upgraded to an AMD system to have enough power to use a simple VM and then found I couldn't install the drivers for either the iGPU or dGPU in the Windows VM. No way, no how, trying every recommended step to make either work. I ended up buying an Intel platform and got that bad motherboard. Isn't this server stuff fun? I easily went through $500 in parts before I got a system that was stable. Luckily, I did re-purpose some of it.
  4. Should have worked unless you were trying to use the cache drive. The minimum free space should be set larger than the largest file you usually copy to avoid errors when disks are getting full.
  5. I had my VM's start to do stupid things. It was a bad USB card I was passing through. It did work, but was still causing issues.
  6. Is one of the disks full? What did you set for allocation method, split level and minimum free space on the Media share? I wouldn't use a share called Media - I would use a share for each type of media. For example, use a Movies share, and if you have TV shows, a TV share.
  7. Well, that's not exactly true. An Emby docker container can use the integrated GPU to help with transcoding.
  8. Usenetserver has been good so far for what I need. I have been seeing errors but it seems I've been trying them too soon before all parts are posted.
  9. I had similar lockups. I finally dumped the ASRock H170 Pro 4 motherboard for a Gigabyte Z270 board and have had no issues since. Probably not what you want to hear, and it's not likely you'll find anything to narrow it down, but my money would be on the ASRock motherboard.
  10. With a single disk, the checksum can only tell you the data is bad - it's not capable of fixing it. It can fix the file system metadata. The only time I tried BTRFS on a spinner, the checksum was just a handy way to confirm the disk had corrupted data, even though I already knew it had. If you do choose to use it, then I'd recommend you stress test the file system before trusting it.
  11. What really dictates the usability of unRAID is the working storage you need vs the finished file storage. You could skip using the mover with unRAID and basically use it as an SSD cache storage area for working files and a spinner disk storage area for archiving files. You could build a system with a RAID 10 cache array. Say you use 4 x good 1T SSD's - you'd get 2T of very fast storage in RAID 10. This array should be fast enough for what you want. As for the main array, you can use something like 28 disks total (I can't be bothered to search for the number right now, but I think it's 28). So, after using 4 SSD's for the cache you can still use 24 disks in the main array. 2 disks would be used for parity, leaving 22 disks for data. At 8T+ each, that is 176T+ of storage. This storage would be the speed of the single spinner disks you're using. You'd probably want to come up with some type of customized automation to assist in moving the finished files from the SSD's to this storage. So, the question becomes - does something like 2T of fast storage and 176T+ of disk storage meet your speed needs?
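  The capacity math above can be sketched out quickly (the disk counts and sizes are the post's assumptions, not hard unRAID limits):

  ```shell
  #!/bin/sh
  # Hypothetical sizing from the post: 4 x 1T SSDs in RAID 10 for cache,
  # 24 remaining array slots with 2 parity disks and 8T data disks.
  SSD_COUNT=4; SSD_SIZE_T=1
  ARRAY_DISKS=24; PARITY=2; DATA_SIZE_T=8

  CACHE_T=$(( SSD_COUNT * SSD_SIZE_T / 2 ))   # RAID 10 halves raw capacity
  DATA_DISKS=$(( ARRAY_DISKS - PARITY ))
  ARRAY_T=$(( DATA_DISKS * DATA_SIZE_T ))

  echo "cache: ${CACHE_T}T fast storage"
  echo "array: ${DATA_DISKS} data disks -> ${ARRAY_T}T"
  ```

  Which works out to 2T of fast cache and 22 data disks for 176T of array storage.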
  12. The mapping is /data on the container side and /mnt/user/Downloads/ on the unRAID side. So, when working inside the container, ie SAB settings, you use /data. When working in unRAID you use /mnt/user/Downloads/ or the Downloads share over the network. In SAB, you'd use something like /data/complete and /data/incomplete for the completed and temporary download folders. Then, in categories you can simply use a name like tv or movies with no slashes required. If you're using Sonarr, you MUST put the exact same container path and host path in the container settings for it. Some containers have /data for the container path and others have /download which will not work together when the programs exchange information over an API like SAB and Sonarr do.
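  The key point above - both containers seeing downloads at the identical container path backed by the same host path - can be sketched as plain docker commands (the image names here are the common linuxserver.io ones and are just illustrative; on unRAID you'd set the same volume mapping in each container's template):

  ```shell
  # Both containers must map the SAME container path to the SAME host
  # path, or the download paths they exchange over the API won't resolve.
  docker run -d --name sabnzbd \
    -v /mnt/user/Downloads/:/data \
    linuxserver/sabnzbd        # illustrative image name

  docker run -d --name sonarr \
    -v /mnt/user/Downloads/:/data \
    linuxserver/sonarr         # illustrative image name
  ```

  If one container used /data and the other /download, SAB would report a finished download at /data/... and Sonarr would find nothing at that path inside its own filesystem.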
  13. If you have a single cache drive then you have to re-format it and start over with it. If you have a cache pool then try the scrub on it.
  14. I've been having a lot of issues with my VM's lately. I just don't get it. I can't get a W10 install to run - it typically hangs on the TianoCore splash, uses 100% CPU and nothing happens. I tried OpenELEC and it hung up and did the same thing. I've been using my Nvidia GT710 because the VNC window won't connect to a VM using VNC. VNC quit a while back and hasn't worked on any VM since that day. I always seemed to have trouble with VM's being stable, because I'd randomly get one that would not re-start once it was shut down. It usually took months before a VM would give issues, but in the last few weeks it started getting worse and now nothing VM related will work. The various logs all look fine. I scanned the flash, balanced and scrubbed my cache array and scrubbed the libvirt image, but there was nothing questionable and nothing improved. Is it possible to delete all of the VM stuff and completely start over without affecting the array setup or the Docker images? Get rid of all VM images, settings and previously stored stuff and start from scratch? I tried disabling VM's and then deleted the libvirt.img file. When I re-enabled VM's, I then had 5 images that had existed maybe 2 years ago. So, I did it again and rebooted the server before re-enabling it, and the same thing happened. Where the hell did unRAID get this data, because I can't find a spot on the flash or drives where it was stored? I thought that was only kept in the libvirt.img file? Also, if unRAID kept a backup somewhere, why would unRAID pull in those old images instead of the latest ones? If anyone has any suggestions, I'm all ears. Could there be something messed up with the BTRFS SSD array that doesn't show up using scrub?
  15. Sonarr uses the SAB API to control SAB. SAB tells Sonarr what the directory is where the file was downloaded. This means the mapping in Sonarr for the download location must be exactly the same as in SAB, on both sides. For example, SAB has /downloads on the docker container side mapped to /mnt/user/Downloads/ on the host (unRAID) side. You must map both in the Sonarr docker exactly the same. Using /data would be useless, so you'd edit or delete it and create your own mapping using /downloads instead.
  16. A share is the TOP level directory on a disk. Nothing more. So, manually create the share directories on each disk where you want the share. Once again, share directories are just the top level directory on a disk. If you want a share called TV to use disk 1, disk 2 and disk 3, then create a directory called TV on disk 1, disk 2 and disk 3. Once the share directories are there, the manual split level will only allow a sub-directory to exist on a single disk.
  17. Just don't create the directories manually... as I already explained.
  18. Make a directory called "Share" on every disk you want to have "Share". Then, every sub-directory you create in "Share" will only be created on one disk.
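  The manual split-level setup described above boils down to a few mkdir calls. A minimal sketch, assuming unRAID's usual /mnt/diskN mount points (the demo below uses a temp directory as a stand-in root so it can run anywhere; the share name "Share" is just an example):

  ```shell
  #!/bin/sh
  # Stand-in for /mnt on a real unRAID box.
  ROOT=$(mktemp -d)

  # Create the top-level "Share" directory on each disk that should
  # participate in the share. With split level set to manual, any
  # sub-directory created inside it later lands on one disk only.
  for disk in disk1 disk2 disk3; do
      mkdir -p "$ROOT/$disk/Share"
  done
  ```

  On the actual server you'd run the same loop against /mnt/disk1, /mnt/disk2, etc., for whichever disks you want the share to span.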
  19. No, that is not what is being asked for. #1 = when creating a new directory, only use the allocation method and include/exclude to determine which device gets the directory. #2 = when creating a file where the directory already exists, only put the file onto the device where files exist inside the directory - use only that device even in cases where the directory exists on more than one device. I would also add about #2 that there MUST be a rule that gets followed if the file can't be created on the same device as the existing directory that contains files. Otherwise, what is supposed to be a directory of files on one disk could instead end up spread out over 2 or more disks. The rules limetech proposed are the same as using the manual method, except for the top level share directory automatically being created on every disk in the share. What I posted is NOT doing everything manually. You manually create the top level share directory on each disk ONLY. Once they exist, all other directories created inside the share AUTOMATICALLY get created on one disk and one disk only.
  20. That already kind of exists. Set split level to manual and create the top level share directory on every disk you want in the share. Create sub-directories in the share on each disk if there are other directories you want split across multiple disks. Then, each new sub-directory will only be created one time, and anything put into that sub-directory or below it will remain on a single disk. It's not quite the same, because it doesn't allow a sub-sub-directory to go to a different disk. Your idea would require more work and reading each directory, because once /some/share/films/idc1 is created on 2 disks, the mover has to go to each disk and see if the directory specifically contains a file, but not directories, before knowing where the file should go. The current setup more or less works by directories only, not caring whether there are files in the directory. So, it writes to the directory and the file goes to the disk dictated by the split level and allocation method, without reading the current disk contents.
  21. I don't really know if that works - I just thought it might. It might only work on incoming requests to that PC. If you do DNS on the server, you should be able to set something up. I just use bookmarks for the various apps.
  22. A lot of times, the gear isn't even obsolete - it's just reached a certain service age and the company has a policy to replace hardware of that age. At work, every PC gets replaced within 3 years, even though most people here would agree that for light duty use a PC has a decent service life well over 3 years. The corporate IT won't even let an older PC stay when the OS is up to date and it's only used for the simple, menial task of running label making software.
  23. You can map as many paths as you want. Just use the "Add another Path, Port or Variable" link near the bottom of the container setup page and enter the info for the path into the box that pops up. Just don't use the common Linux paths like /bin or /var on the container side when mapping. Then use the new path you mapped in the category setting.
  24. Did you get hardware decoding working with Emby? If so, how? I tried the steps in that linked thread and couldn't get it to work. Even tried variations on the Docker container setup like mapping /dev/dri as a path instead of that device string.
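  For what it's worth, passing the iGPU into a container is normally done with Docker's --device flag rather than a path mapping - on unRAID this flag usually goes in the template's "Extra Parameters" field. A sketch, with the image name being the official Emby one but used here only as an illustration:

  ```shell
  # Pass the render device through so the transcoder inside the
  # container can reach the iGPU for hardware-accelerated transcoding.
  docker run -d --name emby \
    --device /dev/dri:/dev/dri \
    emby/embyserver        # illustrative image/tag
  ```

  Mapping /dev/dri as a volume path can appear to work, but --device is the documented way to grant a container access to a device node.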
  25. I can say that 2 of the standard bottom mounting holes on the HGST were removed. 2 are still there, so you should still be able to screw it down to the tray with 2 screws. I did, and the tray still went in no problem.