BLOG | OFFICE OF THE CTO

Constrained Compute: The Case for Hardware Optimization at the Edge

Lori MacVittie
Published September 20, 2021


There is a scene in the movie Apollo 13 that drives home how critical power is to operating equipment. The power needed to run and subsequently restart the craft is central to (spoiler alert) the eventual success of returning the astronauts to Earth.

Ed Harris, playing Gene Kranz, illustrates that power is everything in the movie Apollo 13.

The reality that many of us ignore—until a storm or other external event knocks out the power—is that every application we run consumes power. Our reliance on applications to operate our lights, lock our doors, and run our cars means that power must be counted in two forms: electrical consumption and CPU cycles.

We may joke about how slowly our browser is running today and sheepishly admit it may be because we have thirty or more tabs open, but the truth is that compute power is not limitless. In a constrained environment—like the edge—there is even less computing power available to execute the automation, data processing, and communications we rely on virtually every day for work, for life, and for play.

Although we have pushed the engineering boundaries of what is possible, the recurring cries that Moore’s Law is ending remind us that there are only so many transistors we can squeeze into a square inch. There are only so many components we can pack into a phone, and there is only so much computing power we can expect from a rack of servers installed in a cell tower.

Thus, the edge—comprising all of those devices, endpoints, and constrained compute nodes—needs a way to increase its available computing power without a corresponding increase in size and space. This need is driving the infrastructure renaissance: a movement, flying under the radar of most people, that focuses on leveraging specialized (optimized) computing power to effectively increase the overall capacity of these constrained environments.

The Evolution of Hardware-Optimized Compute

The evolutionary path of hardware-optimized compute began long ago with specialized acceleration “cards” targeting cryptography, eventually produced the GPU (graphics processing unit), and has now arrived at the DPU (data processing unit).

Each evolution extracted specific processing tasks that were hard-coded, literally, into silicon, producing exponentially more capacity to process data faster and more efficiently. This is the basis for the cryptographic acceleration cards of the mid-2000s, which dramatically improved the performance of encryption and decryption (cryptographic) processing and eventually encouraged the adoption of SSL Everywhere. Similar advances occurred in adjacent markets with a focus on improving storage processing speeds. The TOE (TCP offload engine), for example, is “a networking device that implements TCP/IP protocols on a hardware card. The TOE interface also gives Data ONTAP an interface to either the 1- or the 10-GbE infrastructure. The 10-GbE PCIe TOE card fully supports NFS, CIFS, and iSCSI TCP applications in Data ONTAP.”
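
To make that payoff concrete, here is a minimal sketch of the kind of throughput measurement used to observe hardware-accelerated cryptography at work. It assumes Python with the third-party cryptography package installed; on CPUs with AES-NI, the OpenSSL backend behind that package typically uses the dedicated instructions automatically, so the measured rate reflects hardware-assisted encryption.

```python
# Minimal sketch: measure AES-256-GCM encryption throughput in Python.
# Assumes the third-party "cryptography" package (pip install cryptography),
# whose OpenSSL backend typically uses AES-NI hardware instructions when
# the CPU provides them.
import os
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
nonce = os.urandom(12)             # a fixed nonce is acceptable only in a benchmark
payload = os.urandom(1024 * 1024)  # 1 MiB of random data

rounds = 100
start = time.perf_counter()
for _ in range(rounds):
    aead.encrypt(nonce, payload, None)
elapsed = time.perf_counter() - start

print(f"AES-256-GCM throughput: {rounds * len(payload) / (1024 * 1024) / elapsed:.0f} MiB/s")
```

Running the same measurement on a machine without hardware support (or with acceleration disabled) is the simplest way to see the gap that drove adoption of the acceleration cards described above.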

Basically, every time we’ve needed to improve capacity in constrained environments, whether the constraint was economic or physical, we’ve seen the introduction of optimizing hardware components.

The DPU, the darling du jour thanks to NVIDIA and rising interest in AI- and ML-related applications, is the latest manifestation of our efforts to overcome physical constraints on compute.

The Role of the DPU at the Edge

Environments like the edge need the power boost that comes from hardware-optimized compute. Whether it’s in manufacturing, where the IIoT (Industrial Internet of Things) requires real-time processing of data with extremely low latency (less than 20 ms), or in healthcare, where the speed of processing health data can mean the difference between life and death, hardware-optimized compute is a requirement.
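
As a rough illustration of what that budget means in practice, the sketch below wraps an edge-side processing step in a 20 ms deadline check. The handler and the processing step are hypothetical stand-ins, not a real IIoT API; the point is simply that every cycle spent processing counts against the budget.

```python
import time

LATENCY_BUDGET_S = 0.020  # the 20 ms real-time budget cited above

def process_reading(reading: float) -> float:
    # Hypothetical stand-in for real edge-side work such as filtering,
    # inference, or protocol translation.
    return reading * 0.5

def handle(reading: float) -> float:
    start = time.perf_counter()
    result = process_reading(reading)
    elapsed = time.perf_counter() - start
    if elapsed > LATENCY_BUDGET_S:
        # On a constrained edge node, misses like this are the signal
        # that work must be shed or offloaded to optimized hardware.
        print(f"budget exceeded: {elapsed * 1000:.2f} ms > 20 ms")
    return result

print(handle(42.0))
```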

That means any application-centric platform seeking to enable organizations to take advantage of the edge must include hardware-optimized compute as a key capability.

The DPU represents a democratization of optimized compute power. Combined with the right software stack and enabled by the right platform, the edge will be able to offer the enterprise the same efficiencies and benefits currently enjoyed by large, hyperscale providers.

That’s why we continue to work with partners like NVIDIA. While software is eating everything, including the edge, it is still hardware that powers everything. And by taking advantage of hardware optimization, that power can be increased without requiring more space.