Everything posted by JonathanM

  1. Try deleting the network.cfg file from the config folder on the flash drive.
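For reference, this can be done from the Unraid console; the flash drive is mounted at /boot on a running system, so the config folder mentioned above lives at /boot/config:

```shell
# The flash drive mounts at /boot, so the config folder is /boot/config.
rm -f /boot/config/network.cfg
# Reboot afterwards so Unraid regenerates default network settings.
```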
  2. Since the tower was physically moved, my first stop would be reseating all movable connections, cables and PCIe cards, and verifying every connection is back where it was. For example, if the machine has both a GPU and CPU-based graphics, connecting to the "wrong" video port could make a difference. Trying safe mode or GUI mode could also give hints. Also check the router to see if it assigned a DHCP address to the ethernet port's MAC; if it did, see whether the GUI is reachable on that IP even though the boot process appears to stop on the local console.
  3. Yes, assuming the target disk is still sde when you issue the command. CHECK IMMEDIATELY BEFORE YOU ISSUE THE COMMAND. sd? designations can and will change: Unraid assigns the letters from scratch at each boot, so hardware changes can change which drive gets which letter. Do not assume that because the drive was sde the last time the tower was started, it will be the same now. The consequences of getting it wrong are catastrophic. Double and triple check that you are sending the command to the correct sd? device. I can't stress this enough.
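One way to confirm which letter the target drive currently holds is to match it by serial number right before issuing the command (a sketch; column names assume the util-linux lsblk):

```shell
# Show each device's model and serial next to its current sdX letter;
# never trust a letter remembered from a previous boot.
lsblk -o NAME,MODEL,SERIAL,SIZE 2>/dev/null || true
# The by-id symlinks embed the serial, so they also confirm the mapping:
ls -l /dev/disk/by-id/ 2>/dev/null || true
```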
  4. Take a look through this thread.
  5. Do you have any hardware passed through to VMs? Does the new or old hardware have RAID controllers? If the answer to both of these questions is no, then moving all the storage drives and the flash drive to the new system should be seamless. Unraid "installs" itself into RAM on every boot, and identifies drives by serial number, so moving to a different motherboard isn't a big deal.
  6. That is container specific. The very popular LSIO container now DOES update the app. There is no such thing as a generic overall best practice, as the thread topic asks. You MUST take each container and look at the support thread to figure out the specific care and feeding instructions.
  7. Since virtualizing Unraid is not officially supported, I moved your post to the user section where people who have made it work collaborate.
  8. Apparently you can get a right angle converter to use a normal 8x PCIe card like an LSI 9300-8e, which could net you a couple of external SAS ports good for 4 drives each, or more if plugged into a multiplier backplane. Disclaimer: while it looks like it might work from what I saw in the manual, I have no clue if it would actually function properly. The manual shows a video card, and who knows if they neutered the PCIe slot so it only works with video.
  9. Not sure what you are asking. If the write destination is to the parity array drives, it's even worse if the share is spread out. Each write to the individual data drives in the parity array is also calculated and written to the parity drive. So if you write to disk1 and disk2 at the same time, the parity drive is thrashing back and forth to the 2 physical locations that map to the data written to the 2 different disks. Pools have no such limitation, the parity array parity disk is not touched for pool writes.
  10. Since you are looking at options, I recommend giving jellyfin a try as well. I have a premier Emby setup, but have jellyfin as a backup if Emby doesn't want to act right for some reason. I like the totally free model of jellyfin, even if it doesn't have all the options of the paid stuff.
  11. Disclaimer: I have NOT tried this, I have no clue whether the map file needs to be unzipped first, or how to recover other than restarting Unraid. IF this works (I doubt it), you will need to script it to run on every boot, since Unraid runs from RAM and unpacks itself fresh on each boot. loadkeys /usr/share/kbd/keymaps/i386/dvorak/ANSI-dvorak.map.gz
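If it does work, persisting it would look something like appending the command to /boot/config/go, Unraid's user startup script that runs on every boot (untested, same disclaimer as above):

```shell
# /boot/config/go runs at each boot, so an appended line survives Unraid
# unpacking itself fresh into RAM. mkdir -p is a no-op on a real server.
mkdir -p /boot/config
echo 'loadkeys /usr/share/kbd/keymaps/i386/dvorak/ANSI-dvorak.map.gz' >> /boot/config/go
```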
  12. Using dd to send zeros to the drive's device id is probably the best option, since it will start at the beginning and work up; you can always cancel the operation after it has been running long enough to overwrite the first parts of the drive. Be very careful: sending zeros to the wrong device will erase it with very little chance of good recovery. Something like this: dd bs=1M if=/dev/zero of=/dev/sdX status=progress
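A safe way to rehearse the command is against a scratch file first; the /dev/sdX above is a placeholder, and should only be substituted with the real device after triple-checking it. On a whole drive you would drop count= and let dd stop when it hits the end of the disk:

```shell
# Rehearsal: zero a 10 MiB scratch file instead of a real disk.
truncate -s 10M /tmp/scratch.img
dd bs=1M count=10 if=/dev/zero of=/tmp/scratch.img status=progress
# Real thing (DESTRUCTIVE, verify the device letter first):
#   dd bs=1M if=/dev/zero of=/dev/sdX status=progress
```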
  13. You can definitely use the drives you mentioned in Unraid. I was just pointing out that the cost of setting up the number of SATA ports, power supply connections, and an appropriate case for the 14 SATA drives you listed would probably exceed the cost of a pair of new drives; 16TB is currently the sweet spot, in my opinion, for $/GB of storage. You can probably put together something that will work for much less, but it's going to use much more electricity on an ongoing basis, and have many more points of failure than new hardware. By all means, throw some used parts together and try the trial of Unraid. I'm not being sarcastic, it will probably do exactly what you want for now; I was just trying to prepare you for the inevitable feature creep. If you put the three 320GB spinner drives in the parity array and set up two pools, one with the 1TB SSD and another with the 250GB SSD, that would be a nice start, and you should be able to find an old board with 6 SATA ports. That would give you an entry point to see how Unraid works.
  14. Use the official procedure, https://docs.unraid.net/legacy/FAQ/parity-swap-procedure/ If you do a new config you will lose data.
  15. Kinda. Not all shucked drives act this way, but a good percentage of the 8TB WD drives that were so popular a couple of years ago do. Thanks!
  16. Those drives may be the type that won't spin up if 3.3V is applied on the SATA power cable. Did you by any chance change how the power to the drives is routed? 4-pin to SATA power adapters will allow the drives to work; regular SATA power may not unless the 3.3V pins are masked off. This may NOT be the issue, just a possibility since they don't show in BIOS.
  17. Nope. Just be VERY careful not to populate either of the parity slots and you should be fine.
  18. Do the drives show up in BIOS? Attach diagnostics to your NEXT post in this thread.
  19. Maybe NPM is too restrictive in the options offered, perhaps you need to explore a full featured nginx reverse proxy solution like SWAG. That way you can utilize the nginx reverse proxy examples in the meshcentral docs.
  20. Make sure you have a good, uncorrupted backup of your config folder, then reformat the stick with https://rufus.ie/ , being sure to select FAT32 as the format type and UNRAID as the disk label. Extract the files from the appropriate zip downloaded from the bottom of this page: https://unraid.net/download, overwrite the config folder with your backup, run the make bootable script as admin, and see if it boots.
  21. Does it work if you break it out from behind NPM and expose it directly?