Field-programmable gate arrays (FPGAs) are widely used as accelerators for compute-intensive applications. A critical phase of FPGA application development is finding, and mapping to, the appropriate computing model. FPGA computing enables models with highly flexible fine-grained parallelism and associative operations such as broadcast and collective response. Several case studies demonstrate the effectiveness of these computing models in developing FPGA applications for molecular modeling.
FPGA stands for "Field-Programmable Gate Array". As you may already know, an FPGA is essentially a huge array of gates that can be programmed and reconfigured at any time, anywhere. "Huge array of gates" is an oversimplified description, though.
An FPGA is in fact much more complex than a simple array of gates. Some FPGAs have built-in hard blocks such as memory controllers, high-speed communication interfaces, PCIe endpoints, and so on. But the point stands: there are a lot of gates inside an FPGA that can be arbitrarily connected together to make a circuit of your choice, more or less like connecting individual logic-gate ICs (again oversimplified, but a good mental picture nonetheless).
FPGAs are manufactured by companies such as Xilinx, Altera, Microsemi, and FII. FPGAs are fundamentally similar to CPLDs, but CPLDs are much smaller in capacity and capability than FPGAs.
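The "programmable gates" idea above can be made concrete with a small software sketch. This is not FPGA code: it is an illustrative Python model of the lookup table (LUT), the basic configurable cell in most FPGA fabrics, where "reprogramming" simply means loading a different truth table. The function and gate names here are hypothetical, chosen for the example.

```python
# Illustrative sketch only (not real FPGA tooling): an FPGA logic cell is
# essentially a small lookup table (LUT) whose truth table is loaded at
# configuration time. Here a 2-input LUT is modeled as a 4-entry list of
# output bits, indexed by the pair of input bits.

def make_lut2(truth_table):
    """Return a 2-input 'gate' defined by its 4-entry truth table."""
    assert len(truth_table) == 4
    def gate(a, b):
        # Inputs (a, b) select one entry of the configured table.
        return truth_table[(a << 1) | b]
    return gate

# "Configure" two cells with different truth tables:
AND = make_lut2([0, 0, 0, 1])
XOR = make_lut2([0, 1, 1, 0])

# "Wire" the configured cells together into a half adder:
def half_adder(a, b):
    return XOR(a, b), AND(a, b)  # (sum, carry)

print(half_adder(1, 1))  # -> (0, 1)
```

Reloading the tables (the "bitstream" in this toy model) turns the same two cells into a completely different circuit, which is the essence of reconfigurability.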
OVH and Accelize today announced that they have entered into a partnership to better enable OVH’s cloud customers to leverage the processing capabilities of FPGAs in the form of FPGA Acceleration-as-a-Service.
Cloud users seeking to evaluate FPGA acceleration can rent an FPGA-equipped server from OVH RunAbove and immediately evaluate the pre-built accelerators loaded on that server. They can then implement and test their own FPGA accelerator ideas by getting a QuickPlay subscription and using its complete, end-to-end, software-defined tool flow to target the FPGA server, all without any prior FPGA design expertise.
Xilinx FPGAs are available in the Amazon Elastic Compute Cloud (Amazon EC2) F1 instances. F1 instances are designed to accelerate data center workloads including machine learning inference, data analytics, video processing, and genomics.
With F1 instances you can:
- Quickly deploy custom hardware acceleration
- Enjoy predictable performance with dedicated FPGAs
- Use your existing FPGA algorithms
Alibaba Cloud Selects Xilinx for FPGA Cloud Acceleration
Alibaba Cloud currently offers high-performance, elastic computing power to over two million customers. Based on Xilinx FPGAs, the new 'F2' instances will give Alibaba Cloud customers access to acceleration for data analytics, genomics, video processing, and machine learning workloads.
“Xilinx FPGAs deliver the performance, flexibility, and application breadth needed by today’s ever changing cloud workloads,” said Steve Glaser, Senior Vice President, Corporate Strategy, Xilinx. “F2 instances offer the opportunity to develop and publish hardware-accelerated data center applications to millions of customers on Alibaba Cloud.”
FPGA accelerators complement CPU-based architectures, working in tandem with a server's CPU to deliver both performance and power efficiency. Alibaba Cloud has recently revealed that the processing efficiency of F2 instances can be up to 30 times higher than that of CPUs alone, "resulting in more cost-effective cloud solutions."
“FPGAs are popular general-purpose parallel accelerators meeting the evolving computing needs of data center workloads,” said Jin Li, Vice President of Alibaba Cloud. “We look forward to working with Xilinx to harness advancements in heterogeneous computing.”
The announcement follows the news of another China-based company deploying FPGA accelerators from Xilinx. Three months ago, Baidu also deployed Xilinx FPGAs to accelerate the performance of application services in its public cloud offering.