2015-07-01

vSphere/vCloud home lab -- layer 1

Starting from the bottom and working my way up I decided on the following cabling scheme:


This is possible because the system board on my PC has two Ethernet NICs. One NIC is connected directly to my ADSL router/modem/switch for direct Internet access. The second NIC is connected directly to the SG300 switch for my private network. This is necessary as the default subnet for the ADSL router/modem/switch is 192.168.2.0/24 and the default subnet for the SG300 switch is 192.168.1.0/24. I'm in too much of a hurry to change the configuration so that both switches are on the same subnet, and I'm not sure I would want to.
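Since it's easier for me to reason about the addressing in code than in my head, here is a minimal sketch using Python's standard ipaddress module of why the two NICs end up on separate subnets (the .10 host addresses are made up for illustration, not my actual assignments):

    from ipaddress import ip_interface, ip_network

    adsl_lan  = ip_network("192.168.2.0/24")   # default subnet of the ADSL router/modem/switch
    sg300_lan = ip_network("192.168.1.0/24")   # default subnet of the Cisco SG300

    pc_nic1 = ip_interface("192.168.2.10/24")  # NIC 1: Internet access via the ADSL box
    pc_nic2 = ip_interface("192.168.1.10/24")  # NIC 2: private lab network on the SG300

    print(adsl_lan.overlaps(sg300_lan))        # False -- the two LANs are distinct
    print(pc_nic1.ip in sg300_lan)             # False -- NIC 1 can't reach the lab network directly
    print(pc_nic2.ip in sg300_lan)             # True  -- NIC 2 is the lab-facing interface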

The obvious question is what am I doing for routing? Even though the SG300 switch is capable of being configured for static routes, I'm opting to use virtual routers on NODE 1. I would have liked to have the version of the switch which includes RIP routing, but that more than doubled the price. My choices for virtual routers at this point are: (1) Zebra on Red Hat Linux or the current equivalent, or (2) GNS3 or something equivalent. I already have prior experience with Zebra on Linux from an earlier lab and it worked exactly the way I wanted. GNS3 is compelling due to the opportunity to get my hands dirty with Cisco routing to complement the Cisco switch work.

BTW, those aren't the real cable colours.

2015-06-13

vSphere/vCloud home lab -- Cisco SG300-20 switch

If I'm building a home lab then I'm going to need a network. The DSL router/modem/switch thingy that I have falls a bit short of what I need. My requirements were:

  • Must have enough ports for the Dell C6100. A fully populated C6100 will need a total of twelve ports: four nodes each with three NICs (two for OS, one for BMC).
  • Gigabit. Required for things like vMotion.
  • IEEE 802.1q VLAN tagging.
  • Remotely manageable.
  • Some sort of layer 3 routing would be good.


I settled on the Cisco SG300-20 (SRW2016-K9-NA) as it has a total of twenty ports: sixteen for the LAN plus four for connecting to other switches. The C6100 occupies twelve ports, and two more go to my PC and the DSL router/modem/switch thingy. This model includes a layer 3 mode for static routes. It would have been nice to have the model with RIP I/II, but that more than doubled the price. Besides, my plan is to use virtual routing, either in the form of a standard OS with routing added, or GNS3, or Cisco IOS XRv Software.

One thing I was a bit fuzzy on was the MGBT1 SFP ports: are they standard RJ45 copper or not? They look like standard RJ45 ports. Experimentation with these will have to wait for another time.

In contrast to the Dell C6100 this switch is fanless and therefore quiet. The web GUI is quite handy as it's been 10 years since I took my CCNA and I really haven't done anything with Cisco since. My immediate focus is on vSphere and vCloud; the networking will have to wait until later.

vSphere/vCloud home lab -- Dell C6100

I have been working with VMware ESX since version 2.5 and Virtual Center/vCenter since version 1 at work. I tried creating a lab at home a few years ago but my PC just didn't have enough juice. There is only so much you can do with 8 GB of RAM. Last year (2014), while on a VMware course, I was talking to one of the instructors about home labs and he clued me in to the Dell C-series cloud servers. He had a C6100 that he bought from eBay and was happy with it.

The main benefit of the C6100 is that there are four independent nodes within a single 2U chassis. Each node contains its own processors, RAM, NICs and one PCI slot. Each node is hot-pluggable at the back of the chassis. The HDDs are also hot-pluggable and inserted at the front of the chassis. Each slot is allocated to a specific node: HDD slots 1 to 6 to node 1, HDD slots 7 to 12 to node 2, and so on. It looks like this allocation is hardwired. There are a total of three NICs per node. Two of the NICs are usable by the OS and one is dedicated to the BMC (baseboard management controller) for OOBM (out of band management). The BMC is not the same as iDRAC (Integrated Dell Remote Access Controller), but it does provide the means to act as an IP KVM for remote console access without needing an OS, and to control the power (power up, graceful power off, forced power off, reset). Finally, the power requirement is a NEMA 5-15 grounded (Type B) plug, which is your normal everyday household plug in North America.
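Since the BMC speaks IPMI, power control can be scripted over the network with no OS installed on the node at all. Here is a minimal Python sketch of the kind of wrapper I have in mind, built around ipmitool (the BMC addresses and credentials below are placeholders, not my actual configuration):

    import subprocess

    # One BMC per node; the addresses and credentials are illustrative only.
    BMC_HOSTS = {1: "192.168.1.101", 2: "192.168.1.102",
                 3: "192.168.1.103", 4: "192.168.1.104"}
    BMC_USER = "root"
    BMC_PASS = "changeme"

    def ipmi(node, *args):
        """Run an ipmitool command against the BMC of the given node."""
        cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOSTS[node],
               "-U", BMC_USER, "-P", BMC_PASS, *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    print(ipmi(1, "chassis", "power", "status"))   # query node 1 without touching the OS
    # ipmi(1, "chassis", "power", "on")            # power up
    # ipmi(1, "chassis", "power", "soft")          # graceful (ACPI) power off
    # ipmi(1, "chassis", "power", "off")           # forced power off
    # ipmi(1, "chassis", "power", "reset")         # hard reset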

If there is one disadvantage to the C6100 it would be noise. This thing is freakishly loud. I'm used to 1U/2U server noise; this is louder than that. The fans, four in total, are rated at 70 dB. After doing research on what other people are doing about this I came to the conclusion not to replace the fans. With so much horsepower concentrated into such a small space I didn't want to run it hot. I ended up moving the server to my dining room and running a couple of Ethernet cables through the wall. Even then the noise is noticeable, but at least I no longer need to wear ear plugs. Like I said, freakishly loud.

My criteria in choosing this configuration were:
  • Something that was server class and was on the ESXi 5.x HCL. Research indicated that this hardware is on the ESXi HCL for version 5.1 Update 1 (need to double-check), and that other people have had success getting ESXi 5.5 working with some additional effort (more later when I get to that).
  • SAS HDDs, not SATA, as I wanted to reduce HCL compatibility problems with ESXi.
  • SSD HDDs, one per node, for two reasons: (1) there is no cache on the RAID controller, and (2) I wanted to experiment with VMware vSAN. SSD is only available in SAS, which fits nicely with the previous requirement.
  • A generous amount of memory, without being prohibitively expensive. I know from experience that this will be the first resource exhausted.
  • A selection of HDD speeds to experiment with SDRS. I can always add more storage internally (subject to the power envelope) or externally.
  • Low-ish power. Yes, this is a contradiction given that this is four server-class machines stuffed into a small space, which is why I picked low-power processors and a reasonable amount of RAM, and didn't go HDD crazy.

After doing lots of research I bought the following from NES INTERNATIONAL:
  • Quantity = 1
    SYSTEM : DELL POWEREDGE C6100 XS23-TY3 w/ 4 x HOT PLUG NODES 24 x 2.5" BAYS
    PROCESSOR : 8 x INTEL XEON QUAD CORE L5630 2.13GHz 12MB CACHE LOW POWER
    MEMORY : 192GB DDR3 ECC REGD MEMORY (48GB PER NODE)
    HARD DRIVE : 4 x 160GB 2.5" SOLID STATE SSD HARD DRIVE
    NETWORKING : DUAL GIGABIT ETHERNET NIC CONTROLLER
    REMOTE ACCESS : IPMI 2.0 REMOTE MANAGEMENT PORT
    RAID CONTROLLER : LSI 1068E MEZZANINE RAID CONTROLLER Y8Y69
    POWER SUPPLY : DUAL REDUNDANT HOT PLUG POWER SUPPLY
  • Quantity = 2
    SYSTEM : DELL POWEREDGE 2950 1950 2970 R900
    HARD DRIVE : 600GB 10K 2.5" 6Gb/s SAS HARD DRIVE
  • Quantity = 2
    SYSTEM : DELL POWEREDGE 2950 1950 2970 R900
    HARD DRIVE : 500GB 7.2K 2.5" 6Gb/s SAS HARD DRIVE
  • Quantity = 2
    SYSTEM : DELL POWEREDGE 2950 1950 2970 R900
    HARD DRIVE : 146GB 15K 2.5" 6Gb/s SAS HARD DRIVE