So at the .NEXT conference today, Nutanix released Community Edition. CE is a free, fully featured version of the Nutanix Virtual Computing Platform, including PRISM and Acropolis.

Community Edition is a 100% software solution enabling technology enthusiasts to easily evaluate the latest hyperconvergence technology at zero cost. Users can now experience the same Nutanix technology that powers the datacenters of thousands of leading enterprises around the world.

Y U SO PHYSICAL? (well, not anymore)

I wrote about Community Edition previously in Nutanix Community Edition: Y U SO physical?, where I discussed the lack of a virtual edition. Well, I’m happy to announce that the version released today supports nested installation! I guess Nutanix really does listen to its community, and I’m glad they did.

Yes, that’s right: it will run on at least VMware ESXi, Fusion and Workstation. During one of the sessions at the .NEXT conference, I demoed an installation on top of Fusion, and I’d like to share how to do it.

Installing CE on Fusion

So now we have a simple, easy way to deploy CE in your home lab. You don’t need to wipe your existing lab setup; you can simply deploy CE on top of whatever you’ve got running now.

Step zero: register and check requirements

So, to access the bits you’ll need to register and download the binaries. For now, you’ll need an invite code, but I’ve got you covered. Just RT this tweet and follow me on Twitter. I’ll announce five winners of an invite code on Wednesday, June 10th, noon EDT via Twitter.

Also, I recommend you check out the minimum requirements, even if you’re going to run it nested.

  1. You can run a 1-, 3- or 4-node nested cluster
  2. You’ll need an Intel processor with VT-x support (but AMD works, too, see below); there’s a quick check after this list
  3. You will really need at least 16GB of vRAM. The CVM needs 12GB, and 4GB is really the low mark for the nested KVM host and any VMs you might want to run inside of CE.
  4. You need a 200+GB SSD and a 500+GB HDD. But hey, we’re running nested, so who really cares? Just make sure the physical hardware is up to spec capacity- and performance-wise.
  5. Intel NICs. When running on a VMware hypervisor, just assign an e1000 NIC.
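
Not sure whether your Mac’s CPU exposes VT-x? A quick sanity check on OS X, which prints VMX when the feature is available:

sysctl -n machdep.cpu.features | grep -o VMX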

Step one: download and prepare the binaries

CE consists of a single .img.gz file. Unpack that gzip file and you’ll end up with a .img file. When installing to a physical server, you’d image that file onto a USB stick, but when running nested, you can simply rename it to ce-flat.vmdk. For the -flat.vmdk file to be recognized, you’ll need a disk descriptor file, which I’ve created for you here. Rename that file to ce.vmdk, and you’re all set.
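
If you’d rather script the whole preparation, here’s a rough sketch. The downloaded filename and the sector count are examples; use your actual file and its real size, or just grab the linked descriptor instead of building your own:

gunzip ce.img.gz                # unpack the download; leaves ce.img
mv ce.img ce-flat.vmdk          # nested install: no USB imaging required
# The descriptor below is roughly what such a file contains. The RW value
# must be the size of ce-flat.vmdk in bytes divided by 512.
cat > ce.vmdk <<'EOF'
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="monolithicFlat"

# Extent description
RW 14540800 FLAT "ce-flat.vmdk" 0

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.virtualHWVersion = "11"
EOF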

Step two: create and customize a VM

So now we need to create and customize a Fusion VM to run Community Edition.

  1. Create a new custom virtual machine
  2. Choose the ‘VMware vSphere 2015 beta’ Guest OS type to enable nested virtualization
  3. Choose the previously created disk and let Fusion move the disk into the VM folder
  4. Customize the VM
    1. Assign 4 vCPUs and 16GB of vRAM, and verify that nested virtualization is enabled.
    2. Assign the existing ce.vmdk to the SATA bus and set it as the boot disk.
    3. Add a 200+GB and a 500+GB VMDK as scsi0:0 and scsi0:1.
    4. Ensure the VM uses hardware version 11.
    5. Attach the VM to a virtual network. (A sketch of the resulting .vmx settings follows this list.)
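
For reference, the relevant lines in the resulting .vmx should look roughly like this. These are standard VMware settings, but treat the guestOS value and the disk file names as assumptions on my part; let Fusion’s UI generate the real file:

vhv.enable = "TRUE"               # nested virtualization: expose VT-x to the guest
virtualHW.version = "11"
numvcpus = "4"
memsize = "16384"
guestOS = "vmkernel6"             # what the vSphere guest OS choice writes
sata0.present = "TRUE"
sata0:0.present = "TRUE"
sata0:0.fileName = "ce.vmdk"      # the boot disk from step one
scsi0.present = "TRUE"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "ce-ssd.vmdk"  # the 200+GB 'SSD'
scsi0:1.present = "TRUE"
scsi0:1.fileName = "ce-hdd.vmdk"  # the 500+GB 'HDD'
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"    # CE wants an Intel NIC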

Step three: optionally change installer prerequisite checks

So we now basically have everything set up to launch the installer. But before we do, I want to show you two checks inside the installer that you could change, depending on your environment:

If you’re running an AMD system
You could disable the CheckVtx and CheckIsIntel checks in the /home/install/phx_iso/phoenix/minimum_reqs.py file
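
A sketch of that manual edit (the exact function bodies vary per build, so eyeball the file before changing it):

vi /home/install/phx_iso/phoenix/minimum_reqs.py
# make CheckVtx and CheckIsIntel return immediately (e.g. add 'return True'
# as their first statement) so the Intel/VT-x tests pass on AMD-V hardware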

If your SSD is just not fast enough
You could lower the IOPS thresholds (SSD_rdIOPS_thresh and SSD_wrIOPS_thresh) in /home/install/phx_iso/phoenix/sysUtil.py.
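To locate the two thresholds quickly (a hypothetical one-liner; the variable names are the ones mentioned above):

grep -n IOPS_thresh /home/install/phx_iso/phoenix/sysUtil.py
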
There be dragons ahead, though: NOS really does need adequate performance, so please run a manual benchmark before lowering the thresholds. Replace sdX with the SSD you want to check:

fio --name=random-read --rw=randread --size=1g --filename=/dev/sdX \
    --ioengine=libaio --iodepth=8 --direct=1 --invalidate=1 \
    --runtime=15 | grep iops=
fio --name=random-write --rw=randwrite --size=1g --filename=/dev/sdX \
    --ioengine=libaio --iodepth=8 --direct=1 --invalidate=1 \
    --runtime=15 | grep iops=

Step four: run the installer

That’s a pretty simple step. Run the installer, enter two IP addresses (one for the KVM host, one for the CVM), and off you go. In my experience, creating a single-node cluster from the installer is hit-and-miss, so I opt to create the cluster manually afterwards.

Step five: create the cluster

After the install succeeds, log in to the CVM (the inner VM) with the ‘nutanix’ user (password ‘nutanix/4u’). You can SSH to the CVM’s own IP address, or to the link-local 192.168.5.2 address from the nested KVM host, e.g.:
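
ssh nutanix@192.168.5.2

Then execute the cluster create command, substituting your CVM’s IP address: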

cluster -s $CVM_IP -f create

After cluster creation, first add a DNS server to the cluster. This is needed as the initial configuration in PRISM requires you to connect to the Nutanix Next Community.

ncli cluster add-to-name-servers servers=8.8.8.8
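
You can verify the setting took with the matching ncli getter (assuming the NOS 4.1-era verb set):

ncli cluster get-name-servers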

Finally, I always run diagnostics.py to validate cluster performance. Use --replication_factor 1 for single-node clusters.

./diagnostics.py --replication_factor 1 run

Step six: initial cluster configuration

Now, log in to PRISM (using the CVM IP-address) and execute these tasks:

  1. Change the admin credentials and attach the cluster to your .NEXT Credentials
  2. Rename Cluster to something useful
  3. Create a storage pool and storage container. Please name the container ‘default’, as Acropolis expects this name. Oh, and you could enable compression and deduplication if you want.
  4. Create VM Network in Acropolis for VM network connectivity
  5. Add NTP servers for reliable timestamps and logging. (A few ncli equivalents follow this list.)
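
PRISM’s gear menu covers all of these, but a few have ncli equivalents if you prefer the shell. A sketch with placeholder names, assuming the NOS 4.1-era ncli verbs, so double-check against ncli’s built-in help:

ncli cluster add-to-ntp-servers servers=0.pool.ntp.org,1.pool.ntp.org
ncli storagepool create name=sp01 add-all-free-disks=true
ncli container create name=default sp-name=sp01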

So, that’s it. You’ve now created a usable Community Edition install.

Step seven: create a VM

You can even create and run VMs via Acropolis!
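
For example, from the CVM you can script VM creation with the Acropolis CLI. A quick sketch (the VM name, sizes and network name are placeholders; verify the argument names against your acli build):

acli vm.create testvm num_vcpus=2 memory=2G
acli vm.disk_create testvm create_size=10G container=default
acli vm.nic_create testvm network=vlan0
acli vm.on testvm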

Concluding

All in all, installing Nutanix Community Edition is pretty simple, and I like that. In true Nutanix spirit, the developers have gone to considerable trouble to smooth out the install process, and it really shows. CE feels like a solid product, from both an install-process and a usability (PRISM & Acropolis) standpoint. I might even consider using CE in my klauwd.com community-based IaaS project.

It’s certainly a very welcome addition to my virtual toolbox. CE lets me quickly test and confirm features and scenarios without touching my production clusters, and it lets me dive into the deeper stuff so I can learn more about the tech. Most of all, it lets me give demos to coworkers and prospects, which really is the biggest win for me. Now I get to show all that cool Nutanix stuff to everyone!