So at the .NEXT conference today, Nutanix released Community Edition. CE is a free, fully featured version of the Nutanix Virtual Compute platform, with PRISM and Acropolis additions.
Community Edition is a 100% software solution enabling technology enthusiasts to easily evaluate the latest hyperconvergence technology at zero cost. Users can now experience the same Nutanix technology that powers the datacenters of thousands of leading enterprises around the world.
Y U SO PHYSICAL? (well, not anymore)
I wrote about Community Edition previously: Nutanix Community Edition: Y U SO physical? There I discussed the lack of a virtual edition, and the version released today has support for nested installation! I guess Nutanix really does listen to its community, and I’m glad they did.
Yes, that’s right. It will run on, at least, VMware ESXi, Fusion and Workstation. During one of the sessions at the .NEXT Conference, I demoed how to install on top of Fusion, and I’d like to share how to do so.
Installing CE on Fusion
So, now we have a simple, easy way to deploy CE in your home lab. You don’t need to wipe your existing lab setup; you can simply deploy CE on top of whatever you’ve got running now.
Step zero: register and check requirements
So, to access the bits you’ll need to register and download the binaries. For now, you’ll need an invite code, but I’ve got you covered. Just RT this tweet and follow me on Twitter. I’ll announce five winners of an invite code on Wednesday, June 10th, noon EDT via Twitter.
Also, I recommend you check out the minimum requirements, even if you’re going to run it nested.
- You can run a 1, 3 or 4 node nested cluster
- You’ll need Intel processors with VT-x support (but AMD works too; see below)
- You will really need at least 16GB of vRAM. The CVM needs 12GB, and 4GB is the low mark for the nested KVM host and any VMs you might want to run inside of CE.
- You need a 200+GB SSD and a 500+GB HDD. But hey, we’re running nested, so who really cares? Just make sure the physical hardware is up to spec capacity- and performance-wise.
- Intel NICs. When running on a VMware hypervisor, just assign an e1000 NIC.
Step one: register, download and prepare the binaries
CE consists of a single .img.gz file. Unpack that gzip file and you’ll end up with a .img file. When installing to a physical server, you’d image that file onto a USB stick, but when running nested, you can simply rename the file to ce-flat.vmdk. For the -flat.vmdk file to be recognized, you’ll need a disk descriptor file, which I’ve created for you here. Rename that file to ce.vmdk, and you’re all set.
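For reference, the unpack-and-rename steps look roughly like this on a Mac or Linux shell. The image file name below is just a placeholder, since it changes per CE release:

# unpack the downloaded image (the actual file name varies per release)
gunzip ce.img.gz
# the raw image becomes the flat extent of the virtual disk
mv ce.img ce-flat.vmdk
# place the descriptor file linked above next to it, named ce.vmdk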
Step two: create and customize a VM
So now we need to create and customize a Fusion VM to run Community Edition.
- Create a new custom virtual machine
- Choose the ‘VMware vSphere 2015 beta’ Guest OS type to enable nested virtualization
- Choose the previously created disk and let Fusion move the disk into the VM folder
- Customize the VM
- Assign 4 vCPUs and 16GB of vRAM. Check that nested virtualization is enabled (see the .vmx sketch after this list).
- Assign the existing VMDK to the SATA bus and set it to boot.
- Add a 200+GB and a 500+GB VMDK to scsi0:0 and scsi0:1
- Ensure VM Hardware version 11
- Attach the VM to a virtual network
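If you’d rather verify these settings in the VM’s .vmx file directly (with the VM powered off), the relevant entries look roughly like the sketch below: vhv.enable exposes Intel VT-x/EPT to the guest (nested virtualization), virtualHW.version pins the hardware version, memsize is the vRAM in MB and numvcpus is the vCPU count. Treat this as an illustration rather than a complete configuration; your Fusion version may set additional keys.

vhv.enable = "TRUE"
virtualHW.version = "11"
memsize = "16384"
numvcpus = "4"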
Step three: optionally change installer pre-requisite checks
So we now basically have everything set up to launch the installer. But before we do so, I want to show you two checks inside of the installer you could change, depending on your environment:
If you’re running an AMD system
You could disable the CheckVtx and CheckIsIntel checks in the /home/install/phx_iso/phoenix/minimum_reqs.py file
If your SSD is just not fast enough
You could lower the IOPS thresholds (SSD_rdIOPS_thresh and SSD_wrIOPS_thresh) in /home/install/phx_iso/phoenix/sysUtil.py.
There be dragons ahead, though, if you change these values. NOS really does need adequate performance, so please run a manual performance check before editing them. Replace sdX with the SSD device you want to check, and keep in mind that the random-write test writes directly to the device, so only run it against a disk you’re willing to overwrite.
fio --name=random-read --rw=randread --size=1g --filename=/dev/sdX --ioengine=libaio --iodepth=8 --direct=1 --invalidate=1 --runtime=15 | grep iops=
fio --name=random-write --rw=randwrite --size=1g --filename=/dev/sdX --ioengine=libaio --iodepth=8 --direct=1 --invalidate=1 --runtime=15 | grep iops=
Step four: run the installer
That’s a pretty simple step. Run the installer, enter two IP-addresses, and off you go. In my experience, creating a single node cluster from the installer is hit-and-miss, so I opt to create a cluster manually afterwards.
Step five: create the cluster
After the install succeeds, log in to the CVM (the inner VM). You can SSH into the IP of the CVM, or SSH into the local-only 192.168.5.2 address from the KVM host. Log in with the ‘nutanix’ user (password ‘nutanix/4u’) and execute the cluster create command:
cluster -s $CVM_IP -f create
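Cluster creation can take a few minutes. Once it finishes, you can check that all services came up:

# verify that all CVM services are running
cluster status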
After cluster creation, first add a DNS server to the cluster. This is needed as the initial configuration in PRISM requires you to connect to the Nutanix Next Community.
ncli cluster add-to-name-servers servers=8.8.8.8
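To confirm the DNS server was added, there should be a matching get verb (assuming your ncli build includes it):

ncli cluster get-name-servers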
Finally, I always run diagnostics.py to validate cluster performance. Use --replication_factor 1 for single-node clusters.
./diagnostics.py --replication_factor 1 run
Step six: initial cluster configuration
Now, log in to PRISM (using the CVM IP-address) and execute these tasks:
- Change the admin credentials and attach the cluster to your .NEXT Credentials
- Rename Cluster to something useful
- Create a storage pool and storage container. Please name the container ‘default’, as Acropolis expects this name. Oh, and you could enable compression and deduplication if you want.
- Create a VM network in Acropolis for VM network connectivity (see the acli sketch after this list)
- Add NTP servers for reliable timestamps and logging.
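If you prefer the command line over PRISM, the VM network can also be created from the CVM with the Acropolis CLI. A hedged sketch: the network name ‘vlan.0’ and the VLAN ID are just examples, and the exact acli syntax may differ between CE builds.

acli net.create vlan.0 vlan=0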
So, that’s it. You’ve now created a usable Community Edition install.
Step seven: create VM
You can even create and run VMs via Acropolis!
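For reference, a minimal sketch of doing the same from the CVM command line, assuming the acli vm.* verbs of this era; the VM name, sizes, the ‘default’ container and the ‘vlan.0’ network from the earlier steps are just examples:

# create a small VM, give it a disk and a NIC, then power it on
acli vm.create testvm num_vcpus=1 memory=1G
acli vm.disk_create testvm create_size=10G container=default
acli vm.nic_create testvm network=vlan.0
acli vm.on testvm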
Concluding
All in all, installing Nutanix Community Edition is pretty simple, and I like that. In true Nutanix spirit, the developers have gone to considerable trouble to smooth out the install process, and it really shows. CE feels like a decent product, from both an install process and a usability (PRISM & Acropolis) standpoint. I might even consider using CE in my klauwd.com community-based IaaS project.
It’s certainly a very welcome addition to my virtual toolbox. CE allows me to quickly test and confirm features and scenarios without touching my production clusters, it lets me dive into the deeper stuff so I can learn more about the tech, and it allows me to give demos to coworkers and prospects, which really is the biggest win for me. Now I get to show all that cool Nutanix stuff to everyone!
Comments

Hi Joep,
I am wondering if you managed to successfully setup a cluster of 3+ nodes running the nested solution?
On my workstation lab, I am encountering issues when adding additional nodes to the existing primary, due to the lack of an IPMI interface to connect to :)
Realistically I know it’s probably not possible and way beyond the realms of support ability, but if you have any thoughts I’d appreciate it!
Cheers,
Ryan
I have created 3-node all-virtual and 3-node hybrid (virtual+physical) clusters, all without issues. I’m not sure why you’d need to connect to any kind of IPMI? I just deployed three nested CE virtual machines.
Thanks for your reply. The only reason I ask is because I am seeing this: http://i.imgur.com/G8bzj7Q.png which is preventing me from adding a node to the cluster. I am unable to run ipmitool to investigate because it doesn’t exist.
Do you have any ideas? Thanks for your help!
I bet that’s a bug in the cluster expansion feature, as commercial Nutanix hardware always has IPMI. The CE version does not (as IPMI is a hardware feature), and I’m figuring that’s what you’re seeing now. I think you’d better open a topic on the CE forum (via next.nutanix.com).
Hi, I’ve written up the same steps in Spanish, thanks to your post – https://www.jorgedelacruz.es/2015/06/19/nutanix-instalar-nutanix-community-edition-en-vmware-fusion-y-primer-contacto-con-acropolis/ Really nice to be able to play with Nutanix Community Edition on our laptops! Inception hyperconvergence!
Got one running 2015.06.08 today… Dell M6800, 32GB memory, 500GB SSD, Windows 8.1, VMware Workstation 11.1. Still working on building nested VMs in there… Thanks for the write-up – couldn’t have done it without it.
-wes
FYI: the steps that require the “ncli …” commands aren’t needed if you are doing a single-node cluster. I was banging my head against the wall because I was getting an error saying the CVM was already part of a cluster. I watched this video and it confirms what I am seeing on a Mac running Fusion. https://www.youtube.com/watch?v=3GGdy2I4THU&feature=youtu.be
One addition: you don’t need the “cluster -s cvm_ip -f create” either; this was very confusing in the CE guides on the forum as well.
Hi Jason,
Both the ‘cluster create’ and ‘ncli’ commands are needed in my guide, especially if you’re doing a single-node cluster. The difference in approach is that you selected to create a single-node cluster from the installer, whereas I chose to do it manually afterwards:
“In my experience, creating a single node cluster from the installer is hit-and-miss, so I opt to create a cluster manually afterwards.”
Hi Joep, can I know how to edit this file? /home/install/phx_iso/phoenix/minimum_reqs.py.
I got it: log in as root and vi the file.
Can I lower the memory requirement to 3x12GB for a 3-node cluster?
As long as you don’t do stuff like dedupe, erasure coding, compression, etc., and you don’t run many VMs (or any, to be safe), 12GB might work.
Hi there, have you tried to run Nutanix nested on Hyper-V – if so, do you have any instructions? I can get the Nutanix VM to boot in Hyper-V (converted the vmdk to vhdx) and turned on virtualisation extensions, however it doesn’t appear to recognise the additional drives (210GB and 510GB). I’ve tried attaching these as SCSI and IDE but no joy. Any ideas?
I’m trying to do this in our vCenter environment through a normal VM. I’ve got it working and can ping the Nutanix Host IP but I can’t seem to be able to ping the Nutanix CVM IP. When the install completes successfully, it shows the IP I gave it, but there’s no network connectivity.
I’ve tried using a single NIC or dual NICs for the VM and it’s not helping.
Any ideas?
Hello Joep, if I try to do a ‘cluster -s cvm_ip -f create’ I get an unknown command line flag ‘f’.
I’m getting the same error about the unknown command line flag “f”… any ideas?
Hi, I am new to Nutanix and anything related to networking. I am trying to install Nutanix CE and I am stuck at what the password could possibly be for the …@192.168.5.2 login in Step five: create the cluster (Image 1). Please help me here, as I don’t know where I can find the password or how to get it.
Thanks,
Ronald
Hi Ronald, take a look at this post; maybe it will help you with the default credentials – https://www.jorgedelacruz.es/2016/11/22/nutanix-credenciales-por-defecto-de-un-cluster-nutanix-apagado-y-arranque-de-un-cluster-nutanix/ Sorry for the spam
Hi Jorge,
I have tried it, but I am still not able to log in to it. Please help.
Hi,
What version of Fusion was this completed on? When I try and attach the vmdk files they are both greyed out.
Thanks
Has anyone gotten CE to work on Fusion 11.0 or 11.5? I am having issues with networking. There is not much in the UI to configure for networking, so I am at a loss.
Just to say THANK YOU! Was looking for a way to create a lab on my new AMD machine, and you did the job, buddy!
I am unable to download ce.vmdk from the mentioned link; it opens as a text file in Notepad (format below):
# Disk DescriptorFile
version=4
encoding="UTF-8"
CID=a63adc2a
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"

# Extent description
RW 14540800 VMFS "ce-flat.vmdk"

# The Disk Data Base
#DDB

ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "905"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "2e046b033cecaa929776efb0a63adc2a"
ddb.uuid = "60 00 C2 9b 69 2f c9 76-74 c4 07 9e 10 87 3b f9"
ddb.virtualHWVersion = "10"
Hi Sonu! The linked file is a plain text file. You can copy/paste the contents into Notepad and save it to your computer. Make sure to save it as a .vmdk file, not as a .txt file.