You’ve got disks to wrangle, lanes to feed, and zero time for flaky cabling. This piece turns the messy “NAS chassis + HBA + expander” puzzle into a tight, field-ready checklist. It’s chatty, a bit scrappy, and focused on what actually breaks or flies in the rack.
TL;DR: match protocol gen end-to-end, know your backplane type, don’t multi-path SATA, size uplinks not just bays, and keep the HBA cool. When in doubt, spec for SAS3 and IT mode.
SAS3 HBA (LSI/Broadcom 9300/9305/9400) vs SAS2
Pick an HBA that won’t bottleneck day one. SAS3 (12 Gb/s) cards—think the 9300/9305/9400 family—play nice with modern backplanes and expanders. SAS2 works, sure, but it’s older, slower, and less forgiving when you start stacking expanders or mixing SATA SSDs and HDDs. If you run ZFS, flash the card to IT mode and let the filesystem see raw drives. Simple, stable, predictable.
Why care? Because protocol mismatches turn into weird link drops and mystery timeouts. Also, your future self will add more drives. It always happens.
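To put numbers on the SAS2-vs-SAS3 gap, here’s a back-of-envelope sketch in Python. The 8b/10b payload factor is the only assumption baked in; real throughput varies with workload and topology.

```python
# Rough payload math for a x4 wide link ("one uplink"). Both SAS2 (6 Gb/s)
# and SAS3 (12 Gb/s) use 8b/10b encoding, so usable bytes/s per lane is
# roughly line_rate_gbit * 0.8 / 8. Illustrative, not a vendor spec.

def x4_link_gbytes(line_rate_gbit: float, lanes: int = 4) -> float:
    """Approximate usable GB/s for a wide link after 8b/10b overhead."""
    return line_rate_gbit * 0.8 / 8 * lanes

print(f"SAS2 x4: ~{x4_link_gbytes(6.0):.1f} GB/s")   # ~2.4 GB/s
print(f"SAS3 x4: ~{x4_link_gbytes(12.0):.1f} GB/s")  # ~4.8 GB/s
```

Double the per-port ceiling for free just by going SAS3 end-to-end—that’s the headroom your future drive additions will eat.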
Backplane “EL1 vs EL2” and SATA vs SAS drives
EL1 = single-ported expander backplane. EL2 = dual-ported. SAS drives can use dual-path (for HA / MPIO). SATA can’t do dual-path in this context. So don’t run two uplinks to the same SATA group and expect magic bandwidth. You’ll get errors, not extra throughput. With SATA, wire one clean path and keep it boring. With SAS dual-port disks and EL2 backplanes, go wild with proper multi-pathing.

SAS expander (Intel-class, HP-class): when it helps, when it hurts
Expanders are great for density: many bays, fewer HBA ports. But every expander adds a hop, and your uplink lanes are the real ceiling. If your array bursts hard—scrubs, resilver, multi-client reads—make sure each expander has enough uplink bandwidth back to the HBA. Otherwise, you built a traffic jam with pretty LEDs.
Rule of thumb: scale uplinks with bays. More disks ⇒ more uplinks (where the backplane supports it), not just more caddies.
SFF-8087 / SFF-8643 / SFF-8654 (SlimSAS): match the cable plant
Connector alphabet soup matters.
- SFF-8087: classic internal mini-SAS (SAS2 era).
- SFF-8643: mini-SAS HD (SAS3).
- SFF-8654 (SlimSAS): newer and more compact, also seen on 24G (SAS4) gear.
Use the right cables or proper fan-out harnesses. Don’t force adapters with mystery brands; signal integrity ain’t a vibe, it’s physics. Keep runs short, lock connectors, test under load.
Bandwidth, uplinks, and oversubscription
A 24-bay shelf with one x4 uplink will crawl when you hit it with scrubs + backups + VM boots. Spread bays across multiple upstream ports if the backplane allows it. Watch your oversubscription ratio (how many downstream lanes share one upstream). If you can’t add uplinks, stagger workloads or cap rebuild concurrency.
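The oversubscription math fits in a few lines. The ~1200 MB/s per-SAS3-lane payload figure is an illustrative assumption (line rate minus encoding overhead), not a guarantee:

```python
# Oversubscription sketch: downstream lanes (one per bay) vs upstream
# lanes, and the fair-share MB/s each drive gets when everything pushes
# at once. Per-lane 1200 MB/s (SAS3 after encoding) is an assumption.

def oversubscription(bays: int, uplinks: int, lanes_per_uplink: int = 4) -> float:
    """How many downstream lanes share each upstream lane."""
    return bays / (uplinks * lanes_per_uplink)

def fair_share_mbps(bays: int, uplinks: int, lane_mbps: float = 1200.0,
                    lanes_per_uplink: int = 4) -> float:
    """Per-drive MB/s when the uplinks are fully saturated."""
    return uplinks * lanes_per_uplink * lane_mbps / bays

print(oversubscription(24, 1), fair_share_mbps(24, 1))  # 6.0 200.0
print(oversubscription(24, 2), fair_share_mbps(24, 2))  # 3.0 400.0
```

One x4 uplink on 24 bays is 6:1—fine for idle HDDs, painful mid-scrub. A second uplink halves the ratio and doubles the fair share.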
ZFS/TrueNAS, IT mode, and driver notes (mpt3sas)
Run HBAs in IT mode so ZFS/TrueNAS can manage disks directly (no RAID BIOS shenanigans). On Linux, the mpt3sas driver backs most SAS3 HBAs. Keep OS and HBA firmware current. Older SAS2 cards work, but driver support and quirks show up faster under mixed SATA/SAS loads.
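One quick sanity check on Linux: every SCSI host exposes its driver name via `proc_name` in sysfs, so you can confirm which hosts are actually bound to mpt3sas. A minimal sketch—the `root` argument is there so you can point it at any tree; on a live box the default path applies:

```python
# Scan a scsi_host tree (normally /sys/class/scsi_host) and report which
# hosts are driven by mpt3sas. Pure stdlib; pass a different root to test.
from pathlib import Path

def mpt3sas_hosts(root: str = "/sys/class/scsi_host") -> list[str]:
    """Return host names (e.g. 'host0') whose proc_name reads 'mpt3sas'."""
    hosts = []
    for host in sorted(Path(root).glob("host*")):
        proc = host / "proc_name"
        if proc.is_file() and proc.read_text().strip() == "mpt3sas":
            hosts.append(host.name)
    return hosts
```

Or skip the script entirely and `cat /sys/class/scsi_host/host*/proc_name` when you’re already in a shell.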
Cooling and power: the boring stuff that saves you
HBAs and expanders dump heat. Give them a front-to-back air tunnel, not a warm breeze. Add a small high-static-pressure fan wall if your chassis is stuffed. Secure expander power (Molex or slot power), and avoid daisy-chain splitters that sag under spin-up. Thermal budget first, pretty cabling second.

Compatibility checklist (print-me table)
| Item | What to check | Good sign | Notes |
|---|---|---|---|
| HBA generation & mode | Card model, firmware, IT vs IR | SAS3 (9300/9305/9400) in IT mode | Avoid mixing RAID BIOS with ZFS. |
| Backplane type | EL1 / EL2 marking or manual | EL1 for SATA single path; EL2 for SAS dual-path | Don’t dual-path SATA, you’ll chase ghosts. |
| Expander presence | Model on backplane or separate board | Known SAS2/3 expander with recent FW | More bays? Add uplinks, not just caddies. |
| Uplink count & speed | # of x4 links from backplane/expander to HBA | ≥2 uplinks for 16–24 bays under heavy workloads | Reduce oversubscription during scrubs. |
| Cables & connectors | SFF-8087 / 8643 / 8654 match | Correct spec cables, short runs | Label both ends; test under load. |
| SATA vs SAS mix | Disk types per group | Keep SATA groups single-path | Use SAS dual-port if you need MPIO. |
| OS & drivers | Kernel + mpt3sas / platform notes | Current OS/HBA FW, known-good pairings | Pin versions in prod. |
| Cooling & power | Airflow over HBA/expander, stable power | Front-to-back flow, no hot pockets | Tie-down cables; avoid sags at spin-up. |
Three quick scenarios (real-world vibe)
Small homelab, 8-bay NAS (quiet side office)
You pick an ITX board plus a compact chassis. One SAS3 HBA in IT mode. Single EL1 backplane, one uplink, SATA HDD mix. Keep it simple, keep it cool. If you want a tidy footprint, a compact Caixa ITX works great and still leaves room for a low-profile HBA.
SMB file server, 24 bays, mixed HDD + a few SATA SSD cache
Here, a rackmount build (montagem em bastidor) is the sane choice. Aim for an EL2 backplane even if you won’t dual-path yet; it’s future-proof with SAS drives. Add two uplinks from expander to HBA. Map your vdevs per backplane port to spread load. A solid caixa de pc para rack de servidor (4U) with a fan wall keeps the HBA and expander happy.
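The “map your vdevs per backplane port” step can be sketched as a round-robin plan: keep each vdev’s disks behind one uplink, alternate vdevs across uplinks. The 6-wide vdev layout and sequential bay numbering are assumptions—match them to your actual backplane map.

```python
# Sketch: spread bays across uplink groups so each vdev's disks share
# one uplink, and vdevs alternate between uplinks. Bay numbering and
# vdev width are assumptions, not a universal backplane layout.

def map_vdevs(bays: int, vdev_width: int, uplinks: int) -> dict[int, list[int]]:
    """Assign each vdev's bays to one uplink, round-robin across uplinks."""
    plan: dict[int, list[int]] = {u: [] for u in range(uplinks)}
    for vdev, start in enumerate(range(0, bays, vdev_width)):
        plan[vdev % uplinks].extend(range(start, min(start + vdev_width, bays)))
    return plan

# 24 bays, 6-wide RAIDZ2 vdevs, 2 uplinks -> vdevs 0 & 2 on uplink 0, 1 & 3 on uplink 1
print(map_vdevs(24, 6, 2))
```

A scrub then drives all vdevs at once, but each uplink only carries half the shelf instead of all of it.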
Media + VM lab, bursting reads, frequent scrubs
You want predictable throughput, not fireworks. Use a SAS3 HBA, verify the backplane lanes, and split pools across two uplinks. If you need airflow headroom and extra slots, a roomy Caixa de servidor 4U beats a cramped short-depth chassis every time.

Where IStoneCase fits (and why it saves you cycles)
You don’t only need boxes. You need boxes built for lanes, airflow, and real maintenance. IStoneCase is positioned exactly there—OEM/ODM work, fast tweaks, and high-volume builds for data centers, algorithm teams, MSPs, and builders who hate downtime. If you’re hunting a caixa para pc de servidor with proper fan placement, pre-routed backplane wiring, and room for HBAs and rails, start here:
- Caixa de servidor GPU — for AI/HPC nodes that still want tier-2/3 storage nearby.
- Caso do servidor — general purpose, ATX/E-ATX layouts, hot-swap fronts.
- Caixa para montagem em bastidor — your classic caixa de computador servidor in the rack, cable-friendly.
- Caso NAS — clean bays, sane airflow, HBA room.
- Caixa ITX — small, but not flimsy.
- Calha de guia do chassis — don’t skip rails; serviceability is uptime.
- Caixa de servidor OEM/ODM — you bring the spec, they bring drawings, proto, and runs.
SEO-heads: this also maps to the phrases folks actually search—caixa de pc para rack de servidor, caixa para pc de servidor, caixa de computador servidor, caixa do servidor atx—without stuffing or hype. We keep it natural.
Build notes you can steal
- Start SAS3 end-to-end. Backplanes and expanders behave better, and you won’t repaint later.
- Treat uplinks like gold. Add them early. Oversubscription isn’t free.
- SATA = single-path. SAS = dual-path. Mixing that up is how you get “why is my SSD vanishing, lol?” moments.
- Keep the HBA’s sink in the airflow. If the card is in a dead spot, add a directed fan. Tiny fix, huge win.
- Label cables. Future-you ain’t got time to tone-probe in a hot aisle.
- Test under stress. Scrub, resilver, rsync storms. If it lives through that, it’s probably fine.
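Before the real scrub/resilver soak, a crude sequential burst-write probe catches dead-slow paths early. The size and target path here are placeholders, not a benchmark methodology—use fio or an actual scrub for real numbers:

```python
# Tiny burst-write probe: how fast does a path sustain sequential writes?
# Not a substitute for a scrub/resilver soak test — just a smoke test.
import os
import tempfile
import time

def burst_write_mbps(path: str, mb: int = 64) -> float:
    """Write `mb` MiB sequentially with fsync, return MB/s."""
    chunk = b"\0" * (1024 * 1024)
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    return mb / (time.perf_counter() - t0)

with tempfile.NamedTemporaryFile() as tmp:
    print(f"{burst_write_mbps(tmp.name, 16):.0f} MB/s")
```

If a bay that should do ~200 MB/s sequential reports a fraction of that, start suspecting cables, link speed negotiation, or a saturated uplink.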
Mini FAQ
Can I bond two cables from one SATA backplane group to one HBA for more speed?
Nope. SATA targets are single-ported, so the expander can’t aggregate two links to one SATA group. You’ll likely see link errors, not bandwidth.
Do I need expanders?
Only if port count vs bays demands it. If you do, mind the uplinks.
Will an ATX board fit fine in a 4U?
Generally, yes. Just check standoff pattern and PCIe slot clearance in a proper caixa do servidor atx.
Can I just throw NVMe in there?
Different story (U.2/U.3, PCIe switch, JBOF). For this checklist, we’re in SAS/SATA land.



