Darkguy

Everything posted by Darkguy

  1. Currently running a Dell PERC H310 and a Dell H200 (both PCIe 2.0 x8) in IT mode on my current setup. Both support 8 drives each; I have a total of 7 SATA drives connected to each right now, which is the maximum possible in the case I use. I'm thinking about switching over to a case with 24 bays (plus 2x 2.5" internal drives), so I would either have to get another HBA (the board I'm looking at has 3x PCIe 4.0 x16 slots, so it would fit) or use a SAS expander. The question I'm asking myself is what the best option is for me here:
1.) H310 AND H200 AND another HBA; connect 8 drives to each?
2.) H310 AND H200 AND a SAS expander; connect 16 drives to one HBA and 8 to the other?
3.) H310 OR H200 AND a SAS expander; connect all 24 drives to one HBA and ditch the other one? (or do I just needlessly sacrifice bandwidth here, if I have a second HBA on hand anyway? A rough estimate is sketched below.)
What SAS expander should I be looking into, especially if I were to run all 24 drives off a single Dell H200/H310? Drives currently are a mix of old and new 2.5" and 3.5" (all SATA3), ranging from 500 GB to 8 TB. The CPU would be a Ryzen APU with integrated graphics, so no need for a video card to use up a slot.
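For a rough sense of the bandwidth trade-off, here is a back-of-the-envelope sketch. The ~4,000 MB/s of usable throughput per PCIe 2.0 x8 / SAS2008 HBA and the ~2,200 MB/s for a single x4 6 Gb/s SAS uplink to an expander are assumptions; real-world figures will be lower:
    # ballpark per-drive throughput if all drives stream simultaneously
    HBA_MBPS=4000   # assumed usable bandwidth per PCIe 2.0 x8 HBA
    EXP_MBPS=2200   # assumed usable bandwidth of one x4 6 Gb/s link to an expander
    echo "3 HBAs, 8 drives each:        ~$((HBA_MBPS / 8)) MB/s per drive"
    echo "2 HBAs, 16 (expander) + 8:    ~$((EXP_MBPS / 16)) and ~$((HBA_MBPS / 8)) MB/s per drive"
    echo "1 HBA + expander, 24 drives:  ~$((EXP_MBPS / 24)) MB/s per drive"
In practice spinning disks rarely all stream at full speed at once (parity checks and rebuilds being the main exception), so option 3 mostly costs parity-check/rebuild time rather than everyday performance.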
  2. Restarting this thread: I had to replace my flash drive again this weekend (the previous one showed read/write errors). I opted for a new, factory-sealed SanDisk USB 2.0 drive this time around. There still seems to be some problem with the super.dat file/storage of disk slots: if I merely stop and restart the array, everything is fine; disks keep their assignments and I can start the array without problems. As soon as I change anything in the list of disks (right now, I'd like to replace disk5 as described here and set disk5 to "no device"), all disk assignments are lost. There is a corresponding entry in the log:
    Sep 26 08:49:53 Tower kernel: read_file: error 2 opening /boot/config/super.dat
    Sep 26 08:49:53 Tower kernel: md: could not read superblock from /boot/config/super.dat
The super.dat file is fine up to this point, and can also be read and restored from backup (rough check/restore sketch below). What I am trying to achieve is:
- replace disk5 by stopping the array and setting disk5 to be emulated by parity
- insert a new disk
- preclear the new disk using the Unassigned Devices Preclear script
- then use the new disk in the disk5 slot
- repeat the whole procedure for disk3
Log attached, anything else I can try? Slowly running out of space, and I have disks lying around that would increase storage space by a total of 4.5 TB which I currently cannot use. darkrack-diagnostics-20220926-1006.zip
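What I check when this error shows up, a minimal sketch assuming the backup of /boot/config is a plain copy on another share (the backup path is a placeholder, and I only touch the file with the array stopped):
    ls -l /boot/config/super.dat                                       # does the file exist and have a sane size?
    md5sum /boot/config/super.dat /mnt/user/backup/config/super.dat    # compare against the last known-good backup
    cp /mnt/user/backup/config/super.dat /boot/config/super.dat        # restore the backup copy if the live one is unreadable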
  3. That's basically what I did to create the current flash drive a few days ago.
  4. Seems I fixed the error: I unplugged all SATA cables to the bay in question and reattached them. The error seems to be gone now; I copied the whole contents of the ddrescue image of the original disk back over to the new disk and got no more UDMA CRC errors. There still seems to be a problem with the super.dat file though: all drives lose their assignments on a reboot (but no longer on a stop/start array action). There also is no new super.dat being created in /boot. One was created the first time I booted off the new flash drive, but since the syslog error about it persisted, I deleted it, and it does not get created now. Current diagnostics attached. darkrack-diagnostics-20220622-1925.zip
  5. New diags attached, thanks darkrack-diagnostics-20220619-1146.zip
  6. Update, starting to think there is some underlying issue with the server, either the drive bay, a cable or the LSI controller. First of all, the flash drive was damaged to some extent, which explained the super.dat issue. I could not reboot off of it, but was able to copy the config folder onto another flash drive on my Windows workstation, boot off that, replace the license and set up the array. Parity is rebuilding now, BUT I get a ton of UDMA CRC errors (about 15,000 over the past 12 hours) for the new replacement drive in the old slot of disk6. I also popped the old disk6 (the one with all the read errors) into an external bay on my Windows workstation and cloned it in an Ubuntu VM (in Hyper-V) using ddrescue (rough command below). The drive was cloned in around 9 hours without a single error; I mounted the image (in the VM) and all data seems to be there. Any suggestions in which order I should check components? Cable, bay, controller?
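For reference, the clone was a plain ddrescue pass along these lines (device name and paths are placeholders; the map file lets an interrupted copy resume where it left off):
    ddrescue /dev/sdX /mnt/data/disk6.img /mnt/data/disk6.map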
  7. Presuming parity is invalid now anyway, could I just:
- remove disk 6 (and try to get data off it on another system)
- shut the system down
- fix the issue with the flash drive
- put in the new disk
- build a new config with the new disk in the slot of disk6
- rebuild parity
- then go on to replace the other two disks and rebuild them from parity?
  8. Makes sense, but I guess it's too late since the move is already underway. Sounds sensible, but probably too late as well. So probably no use in stopping the array and looking to emulate the faulty disk now?
  9. I've been aware of the SMART problems of course and have e-mail notifications turned on, but just got around to getting new disks today. I do know parity is not a substitute for backups, but since I opted for dual parity, I was under the assumption that I could wait a few weeks to replace the faulty drive. Nothing on that drive is irreplaceable, but it's still a hassle to replace some of it. I was not aware of the error with the super.dat until today, which seems to be the major issue here, since it basically forced a new config as I understand it and damaged parity. I believe there should be a notification if any of the files on /boot cannot be read/accessed. Neither the e-mail reports, the GUI nor Fix Common Problems have alerted me to this, or I would have tried fixing that problem back when all of my disks were OK. I even have regular backups of those files, but they only go back about a month. Sure, I should have tried to dig deeper when the problem with the array config first came up, but since it never led to any problems, I didn't look into it.
  10. I'm currently trying to move some data (pictures) off it which I have not yet backed up using unBALANCER. Read speeds are very slow, so this will probably take around 8-9 hours. From what was written above, changes to the emulated disk6 will have been lost (can confirm, some data that would have been written to that disk after it got disabled is missing now). So I presume parity is invalid now as well, especially as two tiny writes (config files for a syncing tool I am using and which had a share on that disk) to disk6 have happened in the past hour. Once I unassign it, do I click parity is valid or not?
  11. It also has tons of read errors sadly, but so far everything seems to work. I've enclosed the current diagnostics. The disk in question is disk 6. I'd like to fix and replace things ASAP. All in all, I have three disks to replace (disk 6 got disabled in May and I want to replace it right away; disk 5 also has read errors and I have a second disk here to replace it with). I would also replace another 500 GB disk with a larger one I have lying around. I'll patiently await further instructions. darkrack-diagnostics-20220617-1743.zip
  12. I used the old (previously disabled) disk, not the new one. Did I still somehow stick my foot in my mouth? All data on the disk seems to still be intact and accessible.
  13. No, I formatted it as an unassigned device. So what would my next steps be?
- stop the array
- shut down the server
- CHKDSK the flash drive on another machine; fix or potentially replace the flash drive and renew the license? (Linux equivalent sketched below)
- proceed with replacing the faulty drive once the super.dat issue is fixed?
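If the check ends up being done on a Linux box rather than with CHKDSK on Windows, the equivalent would be something like this against the unmounted FAT partition (the device node is a placeholder):
    fsck.vfat -a /dev/sdX1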
  14. Dual parity actually. Checking the logs, I saw that the super.dat file was bad:
    Jun 17 15:27:43 tower kernel: read_file: error 2 opening /boot/config/super.dat
    Jun 17 15:27:43 tower kernel: md: could not read superblock from /boot/config/super.dat
    Jun 17 15:27:43 tower kernel: md: initializing superblock
I re-inserted the old disk, checked "parity is valid" and started the array. Could I re-create a new super.dat after checking the flash drive for errors? The disk in question was disabled previously, but got re-enabled upon restarting the array; could this lead to problems with parity?
  15. Hi, for some time now, whenever I stop the array or reboot the server, the info on the array order gets lost (all slots empty; I have to manually select each drive in the correct position; just parity/data drives, my two cache drives are not affected). Usually I just manually put every drive in the correct position again, check "parity is valid" and start the array without issue. Since this only happens when I reboot after an upgrade or a power outage, it was fine in the past and I only had to do that a few times.
Today I went to replace a failed drive (red x mark), so I took out the old drive, put in the new one (the system supports hot-swap) and stopped the array. Sure enough, all the slots were empty again, so I manually reassigned them. I am not sure how to proceed at this point. I did replace drives over the years, but the last time was a few years ago. I did format the new drive as XFS, but there was no preclear/zero drive option available. From what I remember, I did this to drives in the past.
The manual for replacing disks states just to stop the array, assign the new drive in the slot of the old one and then click a checkbox that says "Yes I want to do this" (which I cannot see anywhere). I could check "parity is valid", but will this automatically rebuild the contents of the drive on the new disk? I am pretty sure I do not want parity to rebuild at this point, since the old disk is failing and this would remove any info on the failed drive that had been emulated, correct? I am running Unraid 6.9.2, with two parity drives, ten data drives (XFS) and two cache drives (BTRFS). Hope someone can help me out here!
  16. I can't seem to find this container within the Apps section in Unraid. It's still available from DockerHub. Has the template for Unraid been removed for some reason?
  17. The container should have a web port added (default: 5800), so you should be able to get into the GUI (via VNC) in your browser at http://[SERVER IP]:[WEB PORT] (e.g. http://192.168.0.1:5800). If you switch to advanced view in the container configuration, you can also add the URL to the Web UI in the form of http://[IP]:[PORT:5800]/ to get into the UI via the context menu of the list of Docker containers in Unraid. If you want to add the WebUI to a reverse proxy (to make it available through a public IP) such as the letsencrypt Docker, you can add this to your configuration:
    location ^~ /mediathekview/ {
        proxy_pass http://<server>:<port>/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 86400;
    }
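One assumption in the snippet above: the $connection_upgrade variable has to be defined somewhere in your nginx configuration (the letsencrypt image normally ships a map for it). If yours does not, a map block like this in the http context provides it:
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }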
  18. Yes, it's actually running flawlessly since the latest version of the Docker container. I've been in contact with the developer who provided it, and with some of his effort, my debugging notes and the latest MediathekView update, it started working. I've been meaning to provide an iteration for Unraid, but have been too busy to look into it. Here's how to get it working (a rough docker run equivalent is sketched after the list):
- Add the container from https://hub.docker.com/r/conrad784/mediathekview-webinterface/
- Choose a port, config and download location.
- Add the variables USER_ID=99, GROUP_ID=100 and UMASK=0000
That's it. The VNC web GUI doesn't work too well on mobile (searching with a soft keyboard is a pain) but works perfectly on desktop. If you are in Austria/use an Austrian VPN and want content from ORF, note the config changes to the ffmpeg/VLC parameters required in the MediathekView forum.
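In case someone wants to set it up from the command line instead of through an Unraid template, it would look roughly like this; the host paths and the container-side /config and /download mount points are assumptions, so check the Docker Hub page for the actual ones:
    docker run -d --name=mediathekview \
      -p 5800:5800 \
      -e USER_ID=99 -e GROUP_ID=100 -e UMASK=0000 \
      -v /mnt/user/appdata/mediathekview:/config \
      -v /mnt/user/Downloads:/download \
      conrad784/mediathekview-webinterface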
  19. Happy Birthday! Been using Unraid for more than 2.5 years now on a used server I upgraded a bit. Everything has been running smoothly, minor issues I had could be tracked down to faulty RAM and errors with specific Docker containers. Unraid made my life a lot easier, serving as centralized storage (upgrading a number of disks, including two parity drives earlier this year was a breeze), host for VMs and various Docker containers. It allows me to securely access media from a remote location through Kodi, have a VPN connection to my home network whenever I need it, sync files from various devices or download shows off public broadcast stations. Keep up the great work!
  20. Hi, I'm currently using an older system based around an AMD Phenom X4 on an 790GX/SB750 platform with integrated graphics in a server rack which I bought used a few years back. It still works fine and will hopefully last me a few more years, but the TDP is higher than it could be, and the DDR2 RAM it uses currently maxes out at 16 GB. I use two PCIe 2.0 8x disk controllers (one Dell PERC H310 and one Dell PERC H200, both using the SAS2008 chipset, flashed to HBA mode) to drive disks in 14 slots available in my rack at the moment. Ideally, I'd want to keep using the rack, disks (which I am currently updating to new and bigger ones), disk controllers and disk enclosures in the future. I also put in a new, modular ATX power supply, fans and cabling when I initially built my system about two years back, so I'd re-use all of that too. I'd probably want to go the route of a Ryzen 3-, Threadripper- or Xeon-based system (ideally 16+ threads, maybe ECC RAM support). No gaming, but maybe run a few more VMs (a mix of Windows and Linux) for a number of scenarios. As server-grade CPUs don't have included GPUs, I'd probably need at least one slot for a GPU and also at least two PCIe 8x slots to re-use the disk controllers. Any suggestions about a workstation/server CPU/chipset combination that offers at least three PCIe 8x slots to use the disk controllers plus some sort of GPU?
  21. Hi, I've found a container on Dockerhub which I would like to use on unRAID (an X11rdp-ready version of MediathekView, a Java application from Germany which allows downloading videos from the various VOD services of German-language public service broadcasters). It was no problem to install it from DockerHub and adapt ports and mounts to work with my unRAID set-up, and the basic functionality the application provides on any given desktop is there. The only problem is, the docker-compose.yml defines UID and GID as 1000 instead of the 99/100 combo that containers built with unRAID in mind use. Files created that way have the wrong ownership, which will probably lead to problems down the line once files are being transferred to their permanent homes in my folder structure. Trying to change PUID/PGID accordingly via the 'parameters' field in the Advanced View of the container leads to an error message about a service within the container not being able to start. Is there an easy way to adapt the UID/GID within unRAID, or would I have to fork the code, make the container itself work with 99/100 first and then add this forked version to unRAID? For those who want to take a look (or any German-speaking users who want to run MediathekView on their setup), the Dockerhub container is here and the corresponding GitHub repository is here. (I also tried the more popular version by tuxflo on Dockerhub, which uses the proper UID/GID but has a number of display issues over X11rdp and still uses an outdated version of the application; the version I tried actually forked that one and improved it.)
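In the meantime I can at least correct the ownership of files the container has already written before moving them, something like this (the path is just an example for my download share; 99:100 is nobody:users on Unraid):
    chown -R 99:100 /mnt/user/downloads/mediathekview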
  22. Hi, I've been running unRAID for about two years now on a server built mostly from used components, including mostly older hard drives I had lying around. The storage is mostly used for media files, backups and cloud sync/storage back-ends, with a number of Docker containers and one Ubuntu server VM for various purposes, so nothing too fancy. The server is capable of handling eight 3.5" and six 2.5" drives. I have seven of the eight 3.5" and all of the 2.5" slots populated, all hooked up to two eight-port SAS2008 storage controllers, and currently run a dual-parity (2x 3 TB 3.5" drives) set-up with a cache pool consisting of two SSDs (2x 525 GB, 2.5") and a total of 9 TB of storage (which is about 90% full right now, so another reason to replace some disks). Parity and cache drives are also each on a different controller, for (hopefully) performance reasons.
One very old 3.5" 1 TB drive has recently shown a big number of reallocated sectors, so it needs to be replaced ASAP. Also, one of the 2.5" disks is only capable of SATA I speed, and a few smaller (1-1.5 TB) drives are only capable of SATA II speeds. And with the 3 TB parity drives, I am limited to putting in replacement drives <= 3 TB. In short, a number of drives will have to be replaced soon, for storage, speed and age reasons.
I generally know to swap out one drive at a time and let the array rebuild data from parity, and I am pretty sure the same goes for replacing the actual parity drives. As the 3 TB parity drives are still in good shape (as far as their SMART values go at least; quick check sketched below), I'd like to re-purpose them as data drives down the line. The general idea from a cost/value point of view right now is to go with 8 TB parity drives and mostly 4 TB 3.5"/2 TB 2.5" drives as replacements over the next few months, bringing usable array capacity from 9 TB to 27-30 TB eventually. If I get a good deal on any 5-8 TB drives, I can also use those without a hassle. Here's my plan, just curious if this is alright or I am missing something:
Replace faulty drive
1. Replace the dying old drive with a new 4 TB drive
2. Null the 4 TB drive, add it to the array, let data rebuild on it from parity (1 TB --> 4 TB drive, 11 TB array as I still only have 3 TB parity drives; will unRAID at that point tell me I can only use 3 of the 4 TB due to the size of the parity drives?)
Replace parity drive 1
3. Replace one 3 TB parity drive with a new 8 TB drive
4. Null the 8 TB drive, set it up for parity, let parity rebuild (3 TB --> 8 TB parity drive, 8/3 TB parity drives)
Re-purpose old parity drive 1
5. Put the former 3 TB parity drive in the one empty slot I still have, null the 3 TB drive, add it to the array (<empty> --> 3 TB drive, 11 --> 14 TB array)
Replace parity drive 2
6. Replace the second 3 TB parity drive with a new 8 TB drive
7. Null the second 8 TB drive, set it up for parity, let parity rebuild; at that point, I'll have a usable 8 TB of dual parity (3 TB --> 8 TB parity drive, 8/8 TB parity drives, 14 --> 15 TB array, as I can now use the full 4 TB of the new disk instead of just 3 TB)
Re-purpose old parity drive 2
8. Replace some other old 1 TB drive with the second former 3 TB parity drive
9. Null the 3 TB drive, add it to the array, let data rebuild from parity (1 TB --> 3 TB drive, 15 --> 17 TB array)
Replace other old/slow/small disks
10. Replace an old drive with a new drive, null the new drive, add it to the array, let data rebuild from parity (depending on what gets replaced, 17 --> 17.5-30 TB array)
11. Wash, rinse, repeat
Anything I am missing or not thinking of?
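For the "still in good shape" check on the old parity drives before re-purposing them, the attributes I'm watching can be pulled with something like this (the device name is a placeholder):
    smartctl -a /dev/sdX | grep -iE 'reallocated|pending|uncorrect|crc'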
  23. I noticed two big issues with this container:
- Every time I stop and restart the container (and sometimes at random after a while), all my shares display "Database Error". I have to disconnect and reconnect them to make them sync again (for a while). I also tried deleting the local .sync directories before adding the shares back, to no avail.
- I've had docker.img use up to 100% of its space a few times and enlarged the file each time. As it began to fill up again today, I finally noticed what the matter was: the logfile within the container had grown to a whopping 27 GB!
Any idea what may cause this? I've been using Sync for many years on a number of Windows and Linux computers as well as Android devices and virtually never had these problems on any other platform (except on Ubuntu, when I ran out of disk space on the volume that housed the Sync folders).
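For reference, this is roughly how I tracked down and cleared the oversized log (the container name and the log path inside the container are placeholders; where the Sync log actually lives depends on how the container maps its config folder):
    docker exec resilio-sync du -ah /config | sort -h | tail -n 10   # find the biggest files in the config volume
    docker exec resilio-sync truncate -s 0 /config/sync.log          # empty the runaway log in place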
  24. If you used the mirror described above, here's how to get back on the main repo (git commands as described by the team in the News section of SickRage):
    docker exec -it sickrage /bin/bash
    cd /app/sickrage
    git remote set-url origin https://github.com/SickRage/SickRage.git
    git fetch origin
    git checkout master
    git branch -u origin/master
    git reset --hard origin/master
    git pull