TangoEchoAlpha

Members
  • Posts: 46
  • Joined
  • Last visited

TangoEchoAlpha's Achievements

Rookie (2/14)

Reputation: 6

  1. Thanks @JorgeB - removed the NVME drive and server is happy again. Ordered a replacement and hopefully WD will honour a warranty claim (it's out of warranty in three months). But then again, they might look at the number of powered on hours and say no 😇
  2. Thank you @JorgeB - I will give that a go 👍
  3. Restarted my Unraid server this morning; for some unknown reason it had shut down overnight. Now I am receiving a warning that my cache drive can't mount because of an unsupported file system. I have spent some time looking through old posts. When I look in /dev, nvme0n1 is there, but I can't mount it to a temporary mount point. Running blkid doesn't list a mount point, and lsblk also shows just the disk: I have booted from a Debian live image and can also see the NVMe disk listed, but it shows an unknown file system there too and won't mount. I have also tried a btrfs check and that doesn't work either (see the btrfs sketch after this list): I am not sure what else I can do here, aside from reformatting my cache drive and starting again. The only data I expect to have lost is my Docker images and configuration, which would be annoying but not the end of the world. Any ideas much appreciated, thanks in advance. invader-diagnostics-20240411-0941.zip
  4. @alturismo Thanks, that did indeed resolve the problem and I could reach tvhproxy from the Plex Docker container. Unfortunately TVHProxy fails to get the channel list from TVHeadend:

     Traceback (most recent call last):
       File "/usr/lib/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
         response = self.full_dispatch_request()
       File "/usr/lib/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_request
         rv = self.handle_user_exception(e)
       File "/usr/lib/python2.7/site-packages/flask/app.py", line 1517, in handle_user_exception
         reraise(exc_type, exc_value, tb)
       File "/usr/lib/python2.7/site-packages/flask/app.py", line 1612, in full_dispatch_request
         rv = self.dispatch_request()
       File "/usr/lib/python2.7/site-packages/flask/app.py", line 1598, in dispatch_request
         return self.view_functions[rule.endpoint](**req.view_args)
       File "/opt/tvhproxy/tvhProxy.py", line 54, in lineup
         for c in _get_channels():
     TypeError: 'NoneType' object is not iterable
     ::ffff:192.168.1.231 - - [2021-07-16 02:50:57] "GET /lineup.json HTTP/1.1" 500 412 0.010207
     [2021-07-16 02:50:59,315] ERROR in app: Exception on /lineup.json [GET]

     However, when I connect with a browser, copying the value from the TVH_URL field into the address bar, I reach TVHeadend and get a channel list back. I am reaching the point where I think I will soon just record from within TVHeadend, without the Flask proxy (see the curl sketch after this list).
  5. Hi Amardilo - did you ever get anywhere with this? I'm having the same problems. My Plex server is the binhex-plex Docker container, running on network type 'host'. I can connect to Plex via the web GUI from the Docker page, and stream from Plex on other devices on my network, so that's good. My TVHeadend server is running on a Raspberry Pi with a TV HAT. I can access the TVHeadend GUI via 192.168.1.131:9981 and then log in. I know there's a good connection between the Pi and Unraid, as TVHeadend records to a share on Unraid 🙂 I am a bit stumped as to what the problem might be, and the TVHProxy container doesn't show anything in its logs at all (see the Docker diagnostics sketch after this list).
  6. I'd like to request this as a new feature (a brief search hasn't found it as an existing one). Is it possible to modify the GUI so that the web interface shows your public/WAN IP address, clearly visible? This would be useful for seeing whether a tunnel at the default-gateway level had dropped (a shell sketch of the underlying lookup follows this list). Maybe it could be added to the banner, along with the server's local IP address, name, uptime etc., or to the footer along with the Unraid copyright notice, manual link and the Dynamix temperature readings? Thanks 🙏
  7. Is that a permanent shortage, or just an 'everything in the world is difficult to get hold of' shortage, or impossible to say? The retailer I buy a lot of my kit from in the UK has had the Define 7 XL 'overdue' on stock since the start of June.
  8. The loss of CaseLabs was a real shame. The only suggestion I can think of would be the Fractal Design Define 7 XL, although they seem to be like rocking horse poo at the moment. I keep on the lookout myself, but they are out of stock in the UK and have been for some time...
  9. So I had a go at this; for anyone else interested who finds it in a search, or @Tydell, I hope this helps somewhat. My efforts were foiled by my hot-swap cage/backplane being faulty, so I never actually got the drives installed and used it in 'anger' as it were. The cage/backplane I tried was the IcyBox IB-564SSK.
     To take apart the horizontal section at the front right of the case, there are several screws that need undoing. The ones I have labelled in green and red hold some metal plates in place, which in effect act as runners for any 5.25" devices you want to install. The screws labelled in blue hold the front I/O panel in place.
     This photo (sorry for the quality, it was taken one-handed and 'blind') shows the screws holding the front I/O panel in place.
     This picture shows the front I/O panel, and the vertical metal plate retained by the red screws can be seen as well. I undid the screws but couldn't get it out of the case, so it's possible I missed a fixing somewhere or needed to be more medieval with it. And as soon as you undo the blue screws, the front I/O panel succumbs to gravity.
     This next photo shows that the I/O panel will indeed fit vertically; however, there appears to be no way of fixing it permanently to the chassis using existing holes. Probably the best solution would be to attach the I/O panel to the edge of the chassis or the side of a drive cage using 3M sticky pads or tape.
     And this picture shows the mechanism used to retain the default drive cages and fan assemblies: the metal clips on top spring up and hold the cage in place.
     This picture shows the vertical metal plate that was held in place by the green screws; I was able to remove it quite easily once the three screws had been undone.
     And this photo shows how the drive cage fits inside the case using the retaining clips provided with the chassis. Note that they don't screw into the exterior of the drive cage; they simply clip in.
     Finally, here's a photo of the IcyBox cage/backplane with the I/O panel in a vertical setup, mirroring the Rosewill RSV-L4412.
     So with a little effort, I think it will be quite doable. Depending upon which drive cages are used, there may be some small gaps between each of the drive cages, but they could easily be blanked off. Indeed, if you look closely at an RSV-L4412, there appear to be small gaps with that anyway...
  10. Ah, sorry. Yes, I think they will wipe the entire disk. The only tool I have ever used to wipe free space is on Windows, a free tool called File Shredder - https://www.fileshredder.org/ Actually, shred can delete individual files too. Check out the -u option (see the shred sketch after this list):
  11. I don't know if either will be viewed as a 'full disk' wipe, but how about nwipe (a kind of replacement for the old Darik's Boot and Nuke, DBAN) or shred?
  12. @Cessquill Just in case you do pick up an IronWolf Pro from Scan, here's the info from the drive. I had hoped to add it to the server earlier, but an unexpected parity sync meant it had to wait. Just running preclear on it now to make sure it's a goodun.
  13. As per the title. I meant to unassign a disc from the array prior to removing it from service. However, I was distracted and unassigned the wrong disc. After realising my mistake, I immediately stopped the server, assigned the same disc back to the same position in the array, and started the array again. Now I am looking at a 20-hour parity resync/rebuild, for a reason I can't understand. The contents of the disc I unassigned haven't changed, so neither should the parity have. Do I have to let this parity operation finish? It seems unnecessary.
  14. Hopefully this helps you; it appears to resolve to the correct server for me when using the container.
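
For post 3 in the list above: a minimal sketch of the kind of checks worth trying on an unmountable btrfs cache drive before reformatting it. The partition name /dev/nvme0n1p1 and the mount point /mnt/recovery are assumptions (on some pools the filesystem sits directly on /dev/nvme0n1), and the rescue=all mount option needs a reasonably recent kernel.

    # Show what filesystem signature, if any, the device still carries
    lsblk -f /dev/nvme0n1
    blkid /dev/nvme0n1p1        # assumed partition name; may be /dev/nvme0n1 instead

    # Read-only btrfs check (does not write to the device)
    btrfs check --readonly /dev/nvme0n1p1

    # Last resort before reformatting: read-only rescue mount to copy data off
    mkdir -p /mnt/recovery      # hypothetical mount point
    mount -o ro,rescue=all /dev/nvme0n1p1 /mnt/recovery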
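
For post 4 above: _get_channels() returning None usually means the proxy's request to TVHeadend failed rather than the Flask app itself being broken, so the quickest test is to repeat that request by hand. A minimal sketch, assuming curl is available inside the container; the container name, the user:pass credentials and the IP/port are placeholders, and /api/channel/grid is the endpoint tvhProxy is believed to query, so compare it against your TVH_URL value.

    # Repeat the channel-grid request tvhProxy makes, from inside the container.
    # Placeholders: container name, credentials, TVHeadend IP/port.
    docker exec -it tvhproxy \
      curl -sv --digest -u user:pass \
      'http://192.168.1.131:9981/api/channel/grid?start=0&limit=999999'

A 401 or an empty reply here points at the TVH_URL, the credentials or TVHeadend's access rules rather than at the proxy code.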
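
For post 5 above: when a container logs nothing at all, it is worth confirming it is actually running, which network it sits on, and then watching its log live while Plex tries to add the tuner. A minimal sketch using standard Docker commands; the container name tvhproxy is a placeholder for whatever the container is called on your system.

    # Is the container running, and on which network / IP?
    docker ps --filter name=tvhproxy
    docker inspect -f '{{json .NetworkSettings.Networks}}' tvhproxy

    # Follow the log while Plex attempts to reach the proxy
    docker logs -f tvhproxy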
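
For post 6 above: the underlying information is simple to obtain from a shell, which is roughly what such a GUI element would have to do behind the scenes. A minimal sketch; ifconfig.me is just one example of a public IP echo service, and it assumes outbound HTTPS is permitted.

    # Public/WAN address as seen from the outside (follows the tunnel if one is up)
    curl -s https://ifconfig.me

    # Local address and default gateway, for comparison
    ip -4 addr show scope global
    ip route show default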
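
For posts 10 and 11 above: a minimal sketch of the shred usage referred to, plus nwipe for a whole disk. shred's -u option removes the file after overwriting it, and pointing shred or nwipe at a block device wipes the entire disk. /path/to/secret-file and /dev/sdX are placeholders; double-check the device name before running anything here, as these commands are destructive.

    # Overwrite a single file and then remove it (-u); -v shows progress,
    # -n 3 sets three overwrite passes, -z adds a final pass of zeros.
    shred -v -n 3 -z -u /path/to/secret-file

    # Overwrite an entire disk (destructive) - /dev/sdX is a placeholder
    shred -v -n 1 -z /dev/sdX

    # nwipe gives a DBAN-style interactive wipe of whole disks
    nwipe /dev/sdX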