Figure: Traffic changes from January 2020 until June 2020 at multiple vantage points.
One Forbes article notes that the amount of data created and consumed has grown by almost 5,000% over the last decade, from 1.2 trillion gigabytes to 59 trillion gigabytes. International Data Corporation (IDC) reports,
which measure the amount of data created and consumed in the world each year, predict that this growth will continue through 2024 at a five-year compound annual growth rate (CAGR) of 26%.
That rate is alarming, and it leaves us wondering what data will do in the coming decade.
The latest Cisco Visual Networking Index (2019) forecasts that global IP traffic "will grow at a CAGR of 26 percent from 2017 to 2022," reaching 4.8 zettabytes of IP traffic per year (396 exabytes per month).
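As a quick sanity check on those figures: 396 exabytes per month times 12 months is roughly 4,750 exabytes, or about 4.8 zettabytes per year, and growing at 26 percent per year for five years multiplies traffic by about 1.26^5, roughly 3.2 times.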
Also, with the advent of next-generation workloads such as Big Data, Machine Learning (ML), the Internet of Things (IoT), and Artificial Intelligence, the pressure on data centers will only intensify.
The Bottleneck
The increasing burden on data centers, driven largely by the work-from-home shift during COVID-19, has presented architects with a new challenge: processor speed versus bandwidth.
You are likely familiar with the famous Moore's law, which holds that the transistor count of a silicon device doubles roughly every two years. Unfortunately, conventional processors have not improved in performance at anything close to that rate.
Many factors, including the breakdown of Dennard scaling and the leveling-off of gains from the von Neumann architecture, have conspired to slow performance growth.
Network port speeds, however, have kept growing dramatically to meet the rising demand for internet services, and researchers estimate that server processors built on current silicon technology can no longer keep pace.
With port speeds rising exponentially, servers must be either upgraded or replaced to handle the growth.
The Way Out
Simply adding more servers is neither a practical nor an economical solution; it only increases complexity and cost. Enterprises needed a way to slice and dice big data without adding servers.
Envision, for a moment, a server whose core hardware is configurable to help offload tasks from the CPU while providing high-speed bandwidth capabilities.
This is entirely feasible by turning to accelerators to offload some of an application's algorithms, either to perform the necessary computations more quickly or to deliver more performance with less power consumption, easing the load on the data center's electrical power and cooling.
One or both of these enhancements, performance and performance per watt, are essential for various applications.
New workloads targeted for acceleration include:
Data storage and analytics
Networking applications and cybersecurity
Media transcoding
Financial analysis
Genomics
These workloads employ algorithms that other computational hardware can accelerate, yielding higher data throughput and lower response latency. It is therefore clear that an accelerator can do the job exactly the way data centers demand.
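To make the offload idea concrete, here is a minimal sketch in C of how a host program might hand a simple computation to an accelerator card through the standard OpenCL API. The device choice, the toy "scale" kernel, and the scaling factor are illustrative assumptions rather than any vendor's actual recipe, and most error handling is trimmed for brevity.

/* Hypothetical sketch: offloading a simple transform to an OpenCL
 * accelerator device (e.g. a GPU or FPGA card). */
#include <CL/cl.h>
#include <stdio.h>

/* A toy kernel: scale each element of a buffer. In an FPGA flow this
 * source would normally be compiled offline into a bitstream instead. */
static const char *kernel_src =
    "__kernel void scale(__global float *data, const float factor) {"
    "    size_t i = get_global_id(0);"
    "    data[i] = data[i] * factor;"
    "}";

int main(void) {
    enum { N = 1024 };
    float host_data[N];
    for (int i = 0; i < N; ++i) host_data[i] = (float)i;

    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    /* Prefer an accelerator (e.g. an FPGA card); fall back to any device. */
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, NULL);
    if (err != CL_SUCCESS)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    /* Copy the working set to device memory. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(host_data), host_data, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "scale", &err);

    float factor = 2.0f;
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kernel, 1, sizeof(float), &factor);

    /* The CPU only orchestrates; the accelerator performs the arithmetic. */
    size_t global = N;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(host_data), host_data,
                        0, NULL, NULL);

    printf("host_data[10] = %f\n", host_data[10]); /* expect 20.0 */

    clReleaseMemObject(buf);
    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}

Even in this toy the pattern is visible: the CPU sets up buffers and queues work, while the accelerator does the heavy computation.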
FPGAs – A state-of-the-art contender in the acceleration arena
FPGAs, short for "Field Programmable Gate Arrays," are superb contenders for the acceleration crown.
Although they have roughly 30 years of history in the electronics industry, their use as server accelerators in the data center is relatively new.
FPGAs have the capability to break the bottlenecks that hold back performance on analytical tasks, without blowing your power or cooling budgets.
FPGAs, which are integrated circuits like microprocessors, can be dynamically reprogrammed to match the exact computational requirements of a workload or algorithm.
This close match results in faster computation and lower power and energy consumption.
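As a sketch of what that reprogramming can look like from the host's point of view, the snippet below loads a precompiled FPGA binary (a bitstream such as a Xilinx .xclbin or an Intel .aocx file) onto an accelerator through OpenCL's clCreateProgramWithBinary. The file name compress_kernel.xclbin and the kernel name compress are hypothetical placeholders; swapping in a different bitstream is what tailors the FPGA's datapath to a different workload.

/* Hypothetical sketch: configuring an FPGA accelerator from the host by
 * loading a precompiled bitstream. File and kernel names are illustrative. */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

/* Read the whole bitstream file into memory. */
static unsigned char *load_file(const char *path, size_t *size) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    *size = (size_t)ftell(f);
    fseek(f, 0, SEEK_SET);
    unsigned char *buf = malloc(*size);
    size_t got = fread(buf, 1, *size, f);
    fclose(f);
    if (got != *size) { free(buf); return NULL; }
    return buf;
}

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);

    /* Swapping this file is what retargets the FPGA to a new workload. */
    size_t bin_size;
    unsigned char *bin = load_file("compress_kernel.xclbin", &bin_size);
    if (!bin) { fprintf(stderr, "bitstream not found\n"); return 1; }

    const unsigned char *bins[] = { bin };
    cl_int bin_status;
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &bin_size,
                                                bins, &bin_status, &err);
    /* For a prebuilt binary this step mainly triggers device configuration. */
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);

    /* The kernel name must match one compiled into the bitstream. */
    cl_kernel kernel = clCreateKernel(prog, "compress", &err);
    printf("FPGA configured: %s\n", err == CL_SUCCESS ? "yes" : "no");

    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    free(bin);
    return 0;
}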
Analysts estimate that the FPGA market will grow at the highest CAGR of any technology competing for the acceleration crown, which also follows from the rationale described above.
But let's not rely on analysts' opinions alone. Let's look at some real deals that support the statement.
Intel spent more than $16B a few years ago to acquire Altera, apparently because FPGAs were going to be a pretty serious deal in the server farm.
Also, cloud service providers like Amazon, Tencent, Microsoft, Alibaba, and Baidu have adopted FPGAs as a reconfigurable, heterogeneous processing resource.
Xilinx, for its part, is positioning its most recent FPGAs as a new class of device altogether: the "ACAP," or Adaptive Compute Acceleration Platform.
The FPGA value proposition is not a one-liner about reprogrammability and compute capability alone; FPGAs also adapt to a complex workload's datapath and memory architecture, with room to scale out without paying significant penalties.
However, accessible and scalable deployment remains a considerable hurdle on the way to FPGAs becoming the obvious, optimal choice.
A large part of the difficulty of programming FPGAs is their long compilation times, which keep deployment from being as accessible and scalable as it is for CPUs.
Researchers are working hard to close this gap, which could lead to a boom in the FPGA market.
Atock Electronics Pte. Ltd., one of the contributing R&D houses in the FPGA market, offers innovative solutions to meet these workload demands and can support organizations of any size with end-to-end solutions.