INSANE PetaByte Homelab! (TrueNAS Scale ZFS + 10Gb Networking + 40Gb SMB Fail)

Published 2023-04-10
Check out our INSANE 1PB network share powered by TrueNAS Scale!
FEATURED GEAR:
NetApp DE6600 geni.us/netapp_de6600_60bay
NetApp DE6600 digitalspaceport.com/netapp-de6600-dell-md3060e-60…
DE6600 SAS2 Modules geni.us/6600_SAS2_EMM
DE6600 PSU X-48564-00-R6 geni.us/DE6600_PSU
DE6600 Replacement Tray X-48566-00-R6 geni.us/DE6600_tray
DE6600 Fan Canister X-48565-00-R6 geni.us/DE6600_fan_canister

👇HOMELAB GEAR (#ad)👇
RACK - StarTech 42U Rack geni.us/42u_Rack

DISK SHELF (JBOD) + CABLE
NetApp DS4246 geni.us/netapp_4246_caddies
NetApp DS4243 geni.us/netapp-ds4243-wCaddy
QSFP to 8088 (SAS Cable needed for 4246 & 4243 JBODs) geni.us/mCZCP

HARD DRIVES shop.digitalspaceport.com

RAM
DDR4 RAM geni.us/DDR4_ECC_8x32GB

SERVER
Dell r720 geni.us/OAJ7Fl
Dell r720xd geni.us/5wG9n6
Dell t620 geni.us/dell_t620_256gb

SERVER RAILS + CABLE MANAGEMENT
APC Server Rails geni.us/APC-SERVER-RAILS
Cable Zip Ties geni.us/Cable_ZipTies
Monoprice 1U Cable Mgmt geni.us/Monoprice_1UCableMgmt
Cable Mgmt Tray geni.us/ServerRackCableMgmt
Dymo Label Maker geni.us/DYMO_LabelMaker

HBA
LSI 9207-8e geni.us/LSI-9207-8e

ENCLOSURE
Leviton 47605-42N geni.us/leviton_47605-42N

SWITCH
Dell 5548 Switch geni.us/Dell_5548
Mellanox sx6036 Switch geni.us/Mellanox_SX6036
Brocade icx6610 Switch geni.us/Brocade_ICX6610

UPS
Eaton 9PX6K geni.us/Eaton9PX6K
Eaton 9PX11K geni.us/Eaton9PX11K

Be sure to 👍✅Subscribe✅👍 for more content like this!

Join this channel to get Store discounts + more perks youtube.com/@digitalspaceport/join
Shop our Store (receive 3% or 5% off unlimited items w/channel membership) shop.digitalspaceport.com/

Please share this video to help spread the word and drop a comment below with your thoughts or questions. Thanks for watching!

☕Buy me a coffee www.buymeacoffee.com/gospaceport
🔴Patreon www.patreon.com/digitalspaceport

🛒Shop
Check out Shop.DigitalSpaceport.com for great deals on hardware.

DSP Website
🌐 digitalspaceport.com/

Chapters
0:00 TrueNAS Scale PetaByte Project
0:48 Unboxing a PetaByte
1:55 Putting drives in NetApp DE6600
4:22 JBOD Power Up
4:47 Wiring Up 40Gb Network
7:00 ZFS SSD Array Install
8:10 TrueNAS Scale Hardware Overview
9:24 Create ZFS Flash Array
10:00 Create PB ZFS Array
11:00 Setup SMB Share TrueNAS Scale
12:30 Map 1PB Network Share
13:05 Moving Files over 40Gb
14:30 40Gb network SMB Windows 11
16:20 Troubleshooting SMB Windows networking performance (test sketch below)
19:35 Could it be the EPYC CPU?
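
For the SMB troubleshooting chapters (13:05 through 19:35), one way to separate the raw 40Gb link from SMB and ZFS bottlenecks is a bare single-stream TCP test between the two boxes. The sketch below is a hedged example in Python, not anything used in the video; a single Python stream won't saturate 40GbE, but it gives a floor to compare Explorer copies against. The port number is arbitrary.

# Single-stream TCP throughput test: run "python3 tput.py server" on the
# TrueNAS box, then "python3 tput.py client <server-ip>" on the Windows client.
import socket
import sys
import time

PORT = 5201               # arbitrary; pick any free port on both ends
CHUNK = 4 * 1024 * 1024   # 4 MiB per send/recv call
DURATION = 10             # seconds the client transmits for

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        print(f"connection from {addr[0]}")
        with conn:
            total = 0
            start = time.perf_counter()
            while data := conn.recv(CHUNK):
                total += len(data)
            elapsed = time.perf_counter() - start
        print(f"received {total / 1e9:.1f} GB in {elapsed:.1f} s = "
              f"{total * 8 / elapsed / 1e9:.1f} Gbit/s")

def client(host: str) -> None:
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        start = time.perf_counter()
        while time.perf_counter() - start < DURATION:
            conn.sendall(payload)
            sent += len(payload)
        elapsed = time.perf_counter() - start
    print(f"sent {sent / 1e9:.1f} GB in {elapsed:.1f} s = "
          f"{sent * 8 / elapsed / 1e9:.1f} Gbit/s")

if __name__ == "__main__":
    if len(sys.argv) >= 2 and sys.argv[1] == "server":
        server()
    elif len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        sys.exit("usage: tput.py server | tput.py client <server-ip>")

If the raw TCP number is far above what the SMB copy shows, the bottleneck is in SMB, the filesystem, or CPU/NUMA placement rather than the 40Gb network itself.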

#homelab #datacenter #truenas #zfs #homedatacenter #homenetwork #networking



Disclaimers: This is not financial advice. Do your own research to make informed decisions about how you mine, farm, invest in and/or trade cryptocurrencies.

*****
As an Amazon Associate I earn from qualifying purchases.

When you click on links to various merchants on this site and make a purchase, this can result in this site earning a commission. Affiliate programs and affiliations include, but are not limited to, the eBay Partner Network.

Other Merchant Affiliate Partners for this site include, but are not limited to, Newegg, Best Buy, Lenovo, Samsung, and LG. I earn a commission if you click on links and make a purchase from the merchant.

All Comments (21)
  • @HomeSysAdmin
    2:36 Ooooh that perfect drive cube stack!! Wow, 1PB in a single array - you're making my 8x 18TB look tiny.
  • @CaleMcCollough
    He must be single. There is no way the wife would allow that much server hardware in the house.
  • @BigBenAdv
    You probably need to look into NUMA and QPI bus saturation being the issue on your TrueNAS box, since it's an older dual-socket Xeon setup. Odds are the QPI bus is saturated when performing this test.
    For some context: I've successfully run single-connection sustained transfers up to 93Gbit/s (excluding networking overheads on the link) between two Windows 2012 R2 boxes in a routed network as part of an unpaid POC back in the day (2017). The servers were dual-socket Xeon E5-2650 v4 (originally) with 128GB of RAM, running StarWind RAM disk (because we couldn't afford NVMe VROC for an unpaid POC). Out of the box, without any tuning on W2012R2, I could only sustain about 46-50Gbit/s. With tuning on the Windows stack (RSC, RSS, NUMA pinning & process affinity pinning), that went up to about 70Gbit/s (the QPI bus was the bottleneck here). Eventually, I took out the 2nd socket's processor in each server to eliminate QPI bus saturation and the pinning/affinity issues and obtained 93Gbit/s sustained (on the Arista switches running OSPF for routing, the actual utilization with the networking overheads was about 97Gbit/s). The single 12C/24T Xeon was only about 50% loaded with non-RDMA TCP transfers. The file transfer test was done with a Q1T1 test in CrystalDiskMark (other utilities like diskspd or Windows Explorer copies seem to have some other limitations/inefficiencies).
    For the best chance at testing such transfers, I'd say you should remove one processor from the Dell server running TrueNAS:
    1) Processes running on cores on socket 1 will need to traverse the QPI to reach memory attached to socket 2 (and vice versa).
    2) If your NIC and HBA are attached to PCIe lanes on different sockets, that's also traffic that will hit your QPI bus.
    3) Processes on socket 1 accessing either the NIC or HBA attached to PCIe on the 2nd socket will also hit your QPI bus.
    All of these can end up saturating the QPI and 'artificially' limit the performance you could get. By placing all memory, the NIC, and the HBA on only one socket, you can effectively eliminate QPI link saturation issues.
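
A quick way to check the NIC/HBA locality point above on TrueNAS SCALE (which is Debian-based): the sketch below reads each device's NUMA node straight from sysfs. It assumes a Linux host with PCI NICs and SCSI/SAS hosts present; a value of -1 just means the kernel reports no NUMA locality (e.g. a single-socket box).

# Print which NUMA node each NIC and SCSI/SAS host hangs off, read from sysfs.
from pathlib import Path

def numa_node(dev: Path) -> str:
    f = dev / "numa_node"
    return f.read_text().strip() if f.exists() else "unknown"

# PCI-backed network interfaces (loopback and virtual interfaces are skipped)
for iface in sorted(Path("/sys/class/net").iterdir()):
    pci = iface / "device"
    if pci.exists():
        print(f"NIC {iface.name}: NUMA node {numa_node(pci.resolve())}")

# Storage controllers show up as SCSI hosts; walk up to the parent PCI device
scsi = Path("/sys/class/scsi_host")
for host in sorted(scsi.iterdir()) if scsi.exists() else []:
    dev = (host / "device").resolve()
    while dev != dev.parent and not (dev / "numa_node").exists():
        dev = dev.parent
    print(f"{host.name}: NUMA node {numa_node(dev)}")

If the NIC and the HBA report different nodes, every byte of a file copy crosses the inter-socket link at least once.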
  • @punx4life85
    Awesome vid! Thanks g! Picked up another 66tb for my farm
  • @rodrimora
    I believe that the Windows Explorer copy/paste is limited to 1 core, so that would be the bottleneck. Also, I think at 14:40 you said "write cache", but the RAM in ZFS is not used as a write cache as far as I know, only as a read cache.
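
On the caching point: the ARC in RAM is indeed a read cache, though ZFS also buffers async writes in RAM per transaction group before flushing, which is why large copies can burst fast and then settle to disk speed. A small sketch for checking ARC size and hit rate on TrueNAS SCALE, assuming the Linux ZFS kstat interface is present:

# Read ZFS ARC statistics from the kstat interface (ZFS on Linux / TrueNAS SCALE).
from pathlib import Path

stats = {}
for line in Path("/proc/spl/kstat/zfs/arcstats").read_text().splitlines()[2:]:
    name, _type, value = line.split()   # rows are "name  type  value"
    stats[name] = int(value)

gib = 2**30
print(f"ARC size: {stats['size'] / gib:.1f} GiB "
      f"(target {stats['c'] / gib:.1f} GiB, max {stats['c_max'] / gib:.1f} GiB)")
hits, misses = stats["hits"], stats["misses"]
print(f"ARC hit rate: {100 * hits / (hits + misses):.1f}% over {hits + misses} reads")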
  • @thecryptoecho
    Love catching up on your build. You never stop building.
  • @chrisumali9841
    thanks for the demo and info, MegaUpload lol... Have a great day
  • Thanks for your video, can you tell me where you buy these disks (not available in your shop)?
  • @notmyname1486
    just found this channel, but what is your use case for all of this?
  • @TannerCDavis
    Aren't you limited to 6Gbps SAS cable connections? Do you have the multipath option on to get above 6? The speeds above 12Gbps are probably due to writing to RAM, then it slows down to write to disk through the wire connections.
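
Rough math behind that question, as a sketch only; it assumes SAS2 wide ports and says nothing about the exact cabling used in the video:

# One SAS2 connection on these shelves is a 4-lane wide port, not a single 6Gb/s link.
lanes = 4
lane_gbps = 6.0       # SAS2 raw line rate per lane
encoding = 8 / 10     # 8b/10b encoding overhead

raw = lanes * lane_gbps
usable = raw * encoding
print(f"SAS2 wide port: {raw:.0f} Gbit/s raw, "
      f"~{usable:.0f} Gbit/s (~{usable / 8:.1f} GB/s) after encoding")
# Multipathing to both EMMs roughly doubles that. Bursts faster than the wire
# are still expected at the start of a copy while ZFS buffers async writes in RAM.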
  • @xtlmeth
    What SAS card and cables did you use to connect the JBOD to the server?
  • @pfeilspitze
    19:38 "now we have this set up in a much more common-sense [...]" -- I'm a ZFS noob, but is 60 disks in a single Z2 really a good idea? Seems like the odds of losing 3/60 disks would be relatively high, particularly if they all come from one batch of returned drives. What if it was 6x (RaidZ2 | 10 wide) instead, say? Then it could stripe the reads and writes over all those vdevs too...
  • @mitchell1466
    Hi, loved your video. I noticed when you were in the iDRAC that you were on a Dell R720XD. I am looking at going to 10Gb for my setup and was wondering what 10Gb NIC you have installed?