Semiconductor, or chip, design firms are most interested in time to market (TTM); TTM is often predicated upon the time it takes for workloads, such as chip design validation, and pre-foundry work, like tape-out, to complete. That said, the more bandwidth available to the server farm, the better. Azure NetApp Files (ANF) is the ideal solution for meeting both the high bandwidth and low latency storage needs of this most demanding industry. Read on to discover why you should consider this HPC on Azure solution.
More bandwidth means more jobs can run in parallel and more checks can be performed simultaneously, all without the added expense of time. ANF's quality of service offering allows scale-out to 200, 500, or even 1,000 concurrent jobs, as various Azure customers have done, without affecting run time.
Please note that all test scenarios documented in this paper are the result of running a standard industry benchmark for electronic design automation (EDA) against Azure NetApp Files.
Scenario One answers the most basic question: How far can a single volume be driven? We ran Scenarios Two and Three to evaluate the limits of a single Azure NetApp Files endpoint, looking for potential benefits in terms of I/O upper limits and/or latency.
Scenario Results Explained
Single volume: This represents the most basic application configuration; as such, it is the baseline scenario for follow-on test scenarios.
6 volumes: This scenario demonstrates a linear increase (600%) relative to the single volume workload.
More information about this configuration: In most cases, all volumes within a single virtual network are accessed over a single IP address, which was the case in this instance.
12 volumes: This scenario demonstrates a general decrease in latency over the 6 volumes scenario, but without a corresponding increase in achievable throughput.
Pictures are Worth a Thousand Words:
The Layout of the Tests
[Table: total number of directories and files in the test layout]
The complete workload is a mixture of concurrently running functional and physical phases and, as such, overall represents a typical flow from one set of EDA tools to another.
The functional phase consists of initial specifications and logical design. The physical phase takes place when converting the logical design into a physical chip. During the sign-off and tape-out phases, final checks are completed, and the design is delivered to a foundry for manufacturing. Each of these phases presents differently when it comes to storage, as described below.
The functional phases are metadata intensive—think file stat and access calls—though they do include a mixture of both sequential and random read and write I/O as well. Although metadata operations are effectively without size, the read and write operations range between less than 1K and 16K; the majority of reads are between 4K and 16K, and most writes are 4K or less. The physical phases, on the other hand, are entirely composed of sequential read and write operations, with a mixture of 32K and 64K OP sizes.
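To make the contrast between the two phases concrete, the op-size mix described above can be sketched as a toy workload generator. This is an illustration only, not the benchmark's actual code; the function name and the exact size distributions are assumptions chosen to match the ranges stated above.

```python
import random

# Illustrative sizes (bytes), assumed from the ranges described above.
FUNCTIONAL_READ_SIZES = [4096, 8192, 16384]        # most reads are 4K-16K
FUNCTIONAL_WRITE_SIZES = [512, 1024, 2048, 4096]   # most writes are 4K or less
PHYSICAL_OP_SIZES = [32768, 65536]                 # sequential 32K and 64K ops

def sample_op(phase):
    """Return a simulated (operation, size_in_bytes) pair for one EDA I/O."""
    if phase == "functional":
        # Functional phases are metadata heavy; stat/access carry no payload.
        op = random.choice(["stat", "access", "read", "write"])
        if op in ("stat", "access"):
            return op, 0
        if op == "read":
            return op, random.choice(FUNCTIONAL_READ_SIZES)
        return op, random.choice(FUNCTIONAL_WRITE_SIZES)
    # Physical phases are entirely sequential reads and writes.
    op = random.choice(["read", "write"])
    return op, random.choice(PHYSICAL_OP_SIZES)
```

Sampling many operations from each phase reproduces the pattern the benchmark exercises: small, metadata-dominated I/O in the functional phases and larger sequential transfers in the physical phases.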
Most of the throughput shown in the graphs above comes from the sequential physical phases of the workload, whereas the I/O comes from the small, random, and metadata-intensive functional phases; both happen in parallel.
In conclusion, pair Azure Compute with Azure NetApp Files for EDA design to get bandwidth delivered at scale. After all, bandwidth drives business success.