NPP Training series – Data Structure on Nutanix with Hyper-V

To continue the NPP training series, here is my next topic: Data Structure on Nutanix with Hyper-V.
If you missed other parts of my series, check out the links below:
Part 1 – NPP Training series – Nutanix Terminology
Part 2 – NPP Training series – Nutanix Terminology
Cluster Architecture with Hyper-V

To give credit where it is due, most of this content was taken from Steve Poitras’s “Nutanix Bible” blog, as his content is the most accurate; I then put a Hyper-V lean on it and updated the graphics for Hyper-V.

Data Structure on Nutanix

The NDFS (Nutanix Distributed Filesystem) is composed of the following high-level structs:

Storage Pool

  • Key Role: Group of physical devices
  • Description: A storage pool is a group of physical storage devices including PCIe SSD (Solid State Drive), SSD, and HDD (Hard Disk Drive) devices for the cluster.  The storage pool can span multiple Nutanix nodes and is expanded as the cluster scales.  In most configurations only a single storage pool is leveraged.

Container

  • Key Role: Group of VMs/files
  • Description: A container is a logical segmentation of the Storage Pool and contains a group of VMs (Virtual Machines) or files (vDisks).  Some configuration options (e.g., Replication Factor (RF)) are configured at the container level, but are applied at the individual VM/file level.  Containers typically have a 1-to-1 mapping with a datastore (an SMB share).

vDisk

  • Key Role: vDisk
  • Description: A vDisk is any file over 512KB on NDFS including VM hard disks.  vDisks are composed of extents which are grouped and stored on disk as an extent group.

Below we show how these map between NDFS and Hyper-V:
[Image: SP_structure – storage pool, container, and vDisk mapping]
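
To make the Storage Pool → Container → vDisk hierarchy concrete, here is a minimal Python sketch. The class names, attributes, and sample device names are my own illustration rather than Nutanix code; the 512KB vDisk threshold and the container-level Replication Factor come from the descriptions above.

    from dataclasses import dataclass, field
    from typing import List

    VDISK_THRESHOLD_BYTES = 512 * 1024   # any file over 512KB on NDFS is tracked as a vDisk

    @dataclass
    class VDisk:
        name: str
        size_bytes: int
        replication_factor: int          # inherited from the container, applied per VM/file

    @dataclass
    class Container:
        name: str                        # typically maps 1-to-1 to the SMB share seen by Hyper-V
        replication_factor: int = 2      # configured at the container level
        vdisks: List[VDisk] = field(default_factory=list)

        def add_file(self, name: str, size_bytes: int) -> VDisk:
            if size_bytes <= VDISK_THRESHOLD_BYTES:
                raise ValueError("files of 512KB or less are not tracked as vDisks")
            vdisk = VDisk(name, size_bytes, self.replication_factor)
            self.vdisks.append(vdisk)
            return vdisk

    @dataclass
    class StoragePool:
        devices: List[str]               # PCIe SSD, SSD, and HDD devices across all nodes
        containers: List[Container] = field(default_factory=list)

    # One pool spanning the cluster, one container presented to Hyper-V as an SMB share.
    pool = StoragePool(devices=["node1-ssd0", "node1-hdd0", "node2-ssd0", "node2-hdd0"])
    ctr = Container(name="HyperV-Container")
    pool.containers.append(ctr)
    print(ctr.add_file("vm01-disk0.vhdx", 40 * 1024**3))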

Extent

  • Key Role: Logically contiguous data
  • Description: An extent is a 1MB piece of logically contiguous data which consists of n contiguous blocks (varies depending on guest OS block size).  Extents are written/read/modified on a sub-extent basis (aka slice) for granularity and efficiency.  An extent’s slice may be trimmed when moving into the cache depending on the amount of data being read/cached.

Extent Group

  • Key Role: Physically contiguous stored data
  • Description: An extent group is a 1MB or 4MB piece of physically contiguous stored data.  This data is stored as a file on the storage device owned by the CVM (Controller Virtual Machine).  Extents are dynamically distributed among extent groups to provide data striping across nodes/disks to improve performance (a simplified sketch of this grouping follows below).
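
The packing of extents into extent groups can be sketched in a few lines of Python. The 1MB extent size and the 4MB extent group size come from the descriptions above; the round-robin placement is a simplification for illustration, not the actual NDFS striping algorithm.

    EXTENT_SIZE = 1 * 1024 * 1024        # 1MB of logically contiguous data
    EXTENT_GROUP_SIZE = 4 * 1024 * 1024  # extent groups are 1MB or 4MB; 4MB used here

    def split_into_extents(vdisk_size_bytes):
        """Number of 1MB extents needed to hold a vDisk (the last one may be partially filled)."""
        return (vdisk_size_bytes + EXTENT_SIZE - 1) // EXTENT_SIZE

    def place_extent_groups(num_extents, nodes):
        """Pack extents into 4MB extent groups and stripe the groups across nodes round-robin."""
        extents_per_group = EXTENT_GROUP_SIZE // EXTENT_SIZE
        groups = []
        for first in range(0, num_extents, extents_per_group):
            extent_ids = list(range(first, min(first + extents_per_group, num_extents)))
            node = nodes[len(groups) % len(nodes)]          # simplistic striping for illustration
            groups.append({"extents": extent_ids, "stored_on": node})
        return groups

    num_extents = split_into_extents(40 * 1024 * 1024)      # a 40MB vDisk -> 40 extents
    for group in place_extent_groups(num_extents, ["node1", "node2", "node3"]):
        print(group)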

Below we show how these structs relate between the various filesystems:
[Image: NDFS_DataLayout_Text – data layout across the filesystems]

Here is another graphical representation of how these units are logically related:

[Image: NDFS_DataStructure3 – data structure relationships]

Next up, I/O Path Overview

Until next time, Rob…

NPP Training series – Cluster Components with Hyper-V

To continue the NPP training series, here is my next topic: Cluster Components.

If you missed other parts of my series, check out the links below:
Part 1 – NPP Training series – Nutanix Terminology
Part 2 – NPP Training series – Nutanix Terminology
Cluster Architecture with Hyper-V

Data Structure on Nutanix with Hyper-V
I/O Path Overview

To give credit where it is due, most of this content was taken from Steve Poitras’s “Nutanix Bible” blog, as his content is the most accurate; I then put a Hyper-V lean on it.

Cluster Components

The Nutanix platform is composed of the following high-level components:

[Image: NDFS cluster components]

Cassandra

  • Key Role: Distributed metadata store
  • Description: Cassandra stores and manages all of the cluster metadata in a distributed, ring-like manner based upon a heavily modified Apache Cassandra (a simplified ring sketch follows below).  The Paxos algorithm is utilized to enforce strict consistency.  This service runs on every node in the cluster.  Cassandra is accessed via an interface called Medusa.
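
To picture the “ring-like” distribution, here is a tiny consistent-hashing sketch in Python: metadata keys hash onto a ring of CVMs and each key is owned by the next few nodes clockwise. This is a generic consistent-hashing illustration with made-up node names, not Nutanix’s actual partitioning scheme or its Paxos implementation.

    import hashlib
    from bisect import bisect_right

    def ring_position(value: str) -> int:
        # Hash a node name or metadata key onto a 128-bit ring (illustrative only).
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    class MetadataRing:
        def __init__(self, nodes, replicas=3):
            self.replicas = replicas
            self.ring = sorted((ring_position(node), node) for node in nodes)

        def owners(self, key: str):
            """Nodes holding copies of this key, walking clockwise from the key's ring position."""
            start = bisect_right(self.ring, (ring_position(key), chr(0x10FFFF)))
            return [self.ring[(start + i) % len(self.ring)][1] for i in range(self.replicas)]

    ring = MetadataRing(["cvm-a", "cvm-b", "cvm-c", "cvm-d"])
    print(ring.owners("vdisk:vm01-disk0"))   # the CVMs that would hold this metadata row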

Medusa

  • Key Role: Abstraction layer
  • Description: Medusa is the Nutanix abstraction layer that sits in front of the cluster’s distributed metadata database, which is managed by Cassandra (a minimal facade sketch follows below).
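
A minimal way to picture an abstraction layer like Medusa: other services call a thin interface rather than the metadata store directly, so the backing store can change without touching the callers. The class and method names below are hypothetical, and a plain dict stands in for the Cassandra-based store.

    class MedusaLikeFacade:
        """Hypothetical facade: services read and write metadata only through this layer."""

        def __init__(self, backend=None):
            # A plain dict stands in for the Cassandra-backed store; callers never touch it directly.
            self._backend = backend if backend is not None else {}

        def lookup_vdisk(self, vdisk_id):
            return self._backend.get(("vdisk", vdisk_id))

        def update_vdisk(self, vdisk_id, metadata):
            self._backend[("vdisk", vdisk_id)] = metadata

    medusa = MedusaLikeFacade()
    medusa.update_vdisk("vm01-disk0", {"size_bytes": 40 * 1024**3, "container": "HyperV-Container"})
    print(medusa.lookup_vdisk("vm01-disk0"))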

Zookeeper

  • Key Role: Cluster configuration manager
  • Description: Zookeeper stores all of the cluster configuration including hosts, IPs, state, etc., and is based upon Apache Zookeeper.  This service runs on three nodes in the cluster, one of which is elected as the leader.  The leader receives all requests and forwards them to the peers.  If the leader fails to respond, a new leader is automatically elected.  Zookeeper is accessed via an interface called Zeus.

Zeus

  • Key Role:  Library interface
  • Description: Zeus is the Nutanix library interface that all other components use to access the cluster configuration, such as IP addresses.  Currently implemented using Zookeeper, Zeus is responsible for critical, cluster-wide data such as cluster configuration and leadership locks (a toy sketch of the ensemble and this interface follows below).
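
The Zookeeper/Zeus split can be pictured as a three-member configuration store with a single leader, plus a thin library interface that other components call. This toy example does not implement the real ZooKeeper protocol or a true election; the class names and config keys are assumptions for illustration only.

    class ZookeeperLikeEnsemble:
        """Toy three-member config store with one leader (not the real ZooKeeper protocol)."""

        def __init__(self, members):
            self.members = list(members)    # e.g. the three CVMs hosting this service
            self.leader = self.members[0]   # stand-in for a real leader election
            self.config = {}                # cluster configuration: hosts, IPs, state, ...

        def write(self, key, value):
            # All requests go through the leader, which forwards them to its peers (omitted here).
            self.config[key] = value

        def fail(self, member):
            # If the leader stops responding, another member takes over automatically.
            self.members.remove(member)
            if member == self.leader and self.members:
                self.leader = self.members[0]

    class ZeusLikeInterface:
        """Hypothetical library layer other components call instead of the ensemble directly."""

        def __init__(self, ensemble):
            self.ensemble = ensemble

        def get_config(self, key):
            return self.ensemble.config.get(key)

    ensemble = ZookeeperLikeEnsemble(["cvm-a", "cvm-b", "cvm-c"])
    ensemble.write("cluster_external_ip", "10.0.0.50")
    ensemble.fail("cvm-a")                  # leader failure triggers a new leader
    zeus = ZeusLikeInterface(ensemble)
    print(ensemble.leader, zeus.get_config("cluster_external_ip"))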

Stargate

  • Key Role: Data I/O manager
  • Description: Stargate is responsible for all data management and I/O operations and is the main interface from Hyper-V (via SMB 3.0).  This service runs on every node in the cluster in order to serve localized I/O (a simplified routing sketch follows below).
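
One way to picture localized I/O: every node runs its own Stargate, and a VM’s reads and writes are served by the instance on the node where that VM runs (reached from Hyper-V over SMB 3.0). The routing below is a deliberately simplified stand-in for the real data path; all names are made up.

    class StargateLikeService:
        """Hypothetical per-node data I/O service."""

        def __init__(self, node):
            self.node = node

        def write(self, vdisk, offset, data):
            return f"{self.node}: wrote {len(data)} bytes to {vdisk} at offset {offset}"

    # One Stargate per node; a VM's I/O is handled by the instance on the VM's own node.
    stargates = {node: StargateLikeService(node) for node in ["node1", "node2", "node3"]}
    vm_placement = {"vm01": "node2"}        # the Hyper-V host where the VM currently runs

    def handle_vm_write(vm, vdisk, offset, data):
        local_node = vm_placement[vm]
        return stargates[local_node].write(vdisk, offset, data)   # served locally

    print(handle_vm_write("vm01", "vm01-disk0.vhdx", 0, b"hello"))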

Curator

  • Key Role: MapReduce cluster management and cleanup
  • Description: Curator is responsible for managing and distributing tasks throughout the cluster, including disk balancing, proactive scrubbing, and many other items (a toy MapReduce-style scan follows below).  Curator runs on every node and is controlled by an elected Curator Master, which is responsible for task and job delegation.  There are two scan types for Curator: a full scan, which occurs around every 6 hours, and a partial scan, which occurs every hour.
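
To make the MapReduce idea concrete, here is a toy scan in Python: a map step emits the size of each extent group per disk and a reduce step totals usage per disk so an unbalanced disk can be flagged for rebalancing. The scan intervals come from the text above; the records, threshold, and logic are illustrative stand-ins for what Curator actually computes.

    from collections import defaultdict

    FULL_SCAN_INTERVAL_HOURS = 6      # full scan roughly every 6 hours
    PARTIAL_SCAN_INTERVAL_HOURS = 1   # partial scan every hour

    # (extent_group_id, disk, size_in_MB) records, standing in for cluster metadata.
    extent_groups = [
        ("eg1", "node1-ssd0", 4), ("eg2", "node1-ssd0", 4), ("eg3", "node1-ssd0", 4),
        ("eg4", "node2-ssd0", 4), ("eg5", "node3-ssd0", 1),
    ]

    def map_phase(records):
        for _eg_id, disk, size_mb in records:
            yield disk, size_mb                      # emit (disk, usage) pairs

    def reduce_phase(pairs):
        usage = defaultdict(int)
        for disk, size_mb in pairs:
            usage[disk] += size_mb                   # total usage per disk
        return dict(usage)

    usage = reduce_phase(map_phase(extent_groups))
    average = sum(usage.values()) / len(usage)
    overloaded = [disk for disk, mb in usage.items() if mb > 1.5 * average]   # arbitrary threshold
    print(usage, "-> rebalance candidates:", overloaded)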

Prism

  • Key Role: UI and API
  • Description: Prism is the management gateway for components and administrators to configure and monitor the Nutanix cluster.  This includes the nCLI, the HTML5 UI, and the REST API (a short REST example follows the screenshots below).  Prism runs on every node in the cluster and uses an elected leader, like all components in the cluster.

[Images: Prism UI screenshots – cluster components]
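
Because Prism exposes a REST API, the cluster can be queried with a few lines of Python. The sketch below assumes the requests package, the common Prism gateway port 9440, and a v2.0 endpoint path; treat the URL, credentials, and response fields as assumptions to verify against the API Explorer for your AOS version.

    import requests

    PRISM = "https://prism.example.local:9440"    # placeholder cluster virtual IP / FQDN
    # Endpoint path and port are assumptions based on common deployments; check your API Explorer.
    url = f"{PRISM}/PrismGateway/services/rest/v2.0/cluster"

    # verify=False tolerates the self-signed certificate many clusters ship with; replace in production.
    resp = requests.get(url, auth=("admin", "REPLACE_ME"), verify=False, timeout=30)
    resp.raise_for_status()
    cluster = resp.json()
    print(cluster.get("name"), cluster.get("version"))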

Genesis

  • Key Role: Cluster component & service manager
  • Description: Genesis is a process which runs on each node and is responsible for service interactions (start/stop/etc.) as well as for the initial configuration.  Genesis runs independently of the cluster and does not require the cluster to be configured or running; the only requirement for Genesis to be running is that Zookeeper is up and running (a minimal sketch of that dependency follows below).  The cluster_init and cluster_status pages are displayed by the Genesis process.
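
The practical takeaway is the dependency order: Genesis only needs Zookeeper to be reachable before it can start or stop the other services. The snippet below is a generic illustration of that gating; the health check and service list are placeholders, not the real Genesis logic.

    def zookeeper_is_up():
        # Placeholder health check; the real Genesis checks its local Zookeeper ensemble.
        return True

    SERVICES = ["Medusa", "Stargate", "Curator", "Prism", "Chronos", "Cerebro", "Pithos"]

    def start_cluster_services():
        if not zookeeper_is_up():
            raise RuntimeError("Zookeeper must be up before services can be managed")
        for service in SERVICES:
            print(f"genesis: starting {service}")    # start/stop/restart handled per node

    start_cluster_services()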

Chronos

  • Key Role: Job and Task scheduler
  • Description: Chronos is responsible for taking the jobs and tasks resulting from a Curator scan and scheduling/throttling those tasks among nodes (a toy throttling sketch follows below).  Chronos runs on every node and is controlled by an elected Chronos Master, which is responsible for task and job delegation and runs on the same node as the Curator Master.
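
A simple way to picture Chronos is as a throttle in front of the Curator task queue: only a limited number of background tasks run per node at a time so user I/O is not starved. The per-node limit and the queueing logic below are purely illustrative.

    from collections import deque

    MAX_CONCURRENT_TASKS_PER_NODE = 2   # illustrative throttle, not a real Nutanix setting

    def schedule(tasks, nodes):
        """Hand out Curator-generated tasks to nodes without exceeding the per-node limit."""
        queue, running = deque(tasks), {node: [] for node in nodes}
        while queue:
            progressed = False
            for node in nodes:
                if queue and len(running[node]) < MAX_CONCURRENT_TASKS_PER_NODE:
                    running[node].append(queue.popleft())
                    progressed = True
            if not progressed:            # every node is at its limit; remaining tasks wait
                break
        return running, list(queue)

    tasks = [f"migrate-eg{i}" for i in range(7)]
    running, waiting = schedule(tasks, ["node1", "node2", "node3"])
    print(running)
    print("still queued:", waiting)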

Cerebro

  • Key Role: Replication/DR manager
  • Description: Cerebro is responsible for the replication and DR capabilities of the DSF (Distributed Storage Fabric).  This includes the scheduling of snapshots, the replication to remote sites, and the site migration/failover (a toy schedule sketch follows below).  Cerebro runs on every node in the Nutanix cluster, and all nodes participate in replication to remote clusters/sites.
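
As a rough sketch of the scheduling side, here is a toy protection schedule: snapshots are taken at a fixed interval and each one generates replication work for the configured remote sites. The interval, site names, and data structures are hypothetical; they are not Cerebro’s actual schedule format.

    from datetime import datetime, timedelta, timezone

    SNAPSHOT_INTERVAL = timedelta(hours=1)   # hypothetical protection schedule
    REMOTE_SITES = ["dr-site-east"]          # hypothetical replication targets

    def replication_work_due(last_snapshot, now, protected_vms):
        """Return the snapshot/replication tasks that have come due since the last run."""
        work = []
        while last_snapshot + SNAPSHOT_INTERVAL <= now:
            last_snapshot += SNAPSHOT_INTERVAL
            for vm in protected_vms:
                for site in REMOTE_SITES:
                    # In the real system, every node participates in pushing the replica data.
                    work.append((vm, last_snapshot.isoformat(), site))
        return last_snapshot, work

    now = datetime.now(timezone.utc)
    last, work = replication_work_due(now - timedelta(hours=3), now, ["vm01", "vm02"])
    print(len(work), "replication tasks due, e.g.", work[0])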

Pithos

  • Key Role: vDisk configuration manager
  • Description: Pithos is responsible for vDisk (DSF file) configuration data (a minimal record sketch follows below).  Pithos runs on every node and is built on top of Cassandra.
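
Since Pithos is essentially a vDisk configuration layer on top of the metadata store, a minimal mental model is a typed record keyed by vDisk ID. The fields below are invented for illustration; the real Pithos schema is internal to Nutanix.

    from dataclasses import dataclass, asdict

    @dataclass
    class VDiskConfig:
        # Hypothetical fields; the real Pithos schema is internal to Nutanix.
        vdisk_id: str
        container: str
        size_bytes: int
        replication_factor: int

    vdisk_config_store = {}   # stands in for the Cassandra-backed store Pithos builds on

    def save_vdisk_config(cfg: VDiskConfig):
        vdisk_config_store[cfg.vdisk_id] = asdict(cfg)

    save_vdisk_config(VDiskConfig("vm01-disk0", "HyperV-Container", 40 * 1024**3, 2))
    print(vdisk_config_store["vm01-disk0"])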

Next up, Data Structure on Nutanix, which covers the high-level structs of the Nutanix Distributed Filesystem (NDFS).

Until next time, Rob….