HPE Nimble Storage dHCI – What is it?

At Discover 2019, Hewlett Packard Enterprise (HPE) announced its new Nimble Storage dHCI solution: an "intelligent platform with the flexibility of converged and the simplicity of Hyperconverged Infrastructure (HCI)."1 So, what is it?

To give the premise some context, it's first pertinent to examine the application landscape.

Traditionally, organisations look at this environment from the bottom up: vendors, the marketplace and staff view the application from the storage layer upwards, creating a narrowly focused view that is typically a product play. Although the products themselves may be fantastic, this approach doesn't translate well to understanding business outcomes.

To address this, HPE has taken a top-down approach, looking at the entire ecosystem from the application downwards, right down to storage. This makes it possible to decompose how an application consumes compute, storage and the network across an organisation's entire environment – how these elements are utilised on a virtual platform, and how they're ultimately consumed as storage.

This fosters a strong understanding of the application's behaviour, which in turn helps the organisation align its capabilities to more effectively deliver customer outcomes based on the workloads it uses to operate the business.

The top-down approach is what organisations should be striving to implement, but to orchestrate it correctly, a clear understanding of an application's orientation must exist. There are three such orientations.

  1. Block-based – Traditional virtualised workloads with structured data that is latency sensitive. Problems are solved using SSD, NVMe and storage-class memory.
  2. Scale-out – Typically very large, multi-petabyte data lakes that are bandwidth sensitive. Read/write workloads that require very large amounts of throughput to process large datasets.
  3. Cloud – Workloads that are born in the cloud, lightweight enough to be shifted to the cloud, or utilised in Internet of Things (IoT) environments.

With many organisations leveraging applications across all three orientations, the next step is to critically evaluate the challenges each workload presents. Take block-based workloads, for example: because of the contrasting orientations, it's extremely difficult to move apps and data into a scale-out environment, run analytics to create actions and insights, and then retrieve them again.

Consider a large, multi-petabyte environment. Organisations move data and apps from these environments into the cloud for agile testing, but then retrieve it again to run internal business analytics and create better outcomes, products, and services.

Data has gravity, and it's very challenging to take an app and its data from one category, move it to another, process and extract it, run deep analytics, and move it all back again. Scaling across environments and application orientations is extremely challenging.

But with workloads becoming increasingly unpredictable because of the thirst for innovation, how can we simplify the complexity?

You need a common data play. Whether for analytics on-premises, at the edge or in the cloud, a common data play enables optimised and composable applications and workloads. To have complete analytics – whether on a workload running on a block-based array like Nimble or Primera, in the cloud, or in a large scale-out node environment – you need apps that can move from on-premises to the cloud or the edge and back again, with the data always following the workload. In short, you need an intelligent data platform founded on analytical functionality.

Hence, say hello to HPE Nimble Storage dHCI!

It's a storage solution that is fundamentally about the workloads and data organisations are creating. No matter where the data sits, Nimble dHCI gives organisations the mobility to move data unbound by gravity, develop intelligence from it, contextualise it and predict what it needs – and, as a result, take advantage of a number of key benefits:

  • VM and cloud administration staff can deliver very large virtualisation or container workloads on-premises or in the cloud.
  • Data scientists can run genuine analytics and derive insights and actions in a matter of hours rather than months.
  • Application owners can accelerate the delivery of new workloads and DevOps practices, speeding up the continuous development pipeline.
  • Admin staff can focus on innovation rather than managing the environment.

For any organisation "looking for the easy button on a platform built for the unpredictable future, this is the answer."2 It predictably scales but can also accommodate unpredictable scale – hence the 'disaggregated' component of the platform.

Whether you're in the SMB space, a mid-market organisation or conducting business in the enterprise space, Nimble Storage dHCI is applicable to you because it enables you to start small and grow as needed.

It’s the simplicity of HCI with the flexibility of converged!

Amidata is proud to partner with HPE to help businesses like yours optimise their IT environments, and through the HPE Nimble Storage dHCI solution can also ensure:

  • Complete integration with HPE InfoSight
  • Complete integration with HPE Cloud Volumes
  • The ability to span out to other cloud environments
  • Deployment to green-field or brown-field sites

Taking just 15 minutes to get the entire environment up and running, Nimble dHCI is simple to deploy, simple to manage, simple to scale – and the entire stack is simple to support through predictive analytics.

For more information about HPE Nimble Storage dHCI, don’t hesitate to get in touch with one of our friendly staff.

1,2 https://community.hpe.com/t5/Around-the-Storage-Block/Give-me-more-hyperconverged-please/ba-p/7050469#.XXHFlCgzZPZ