
How to be successful in your artificial intelligence deployment

25.08.2021
4 min
Eddy Resnick, Senior Professional Service Engineer, Bynet Data Communications Ltd.

Analysts forecast the artificial intelligence (AI) market growing at over 30% CAGR over the coming 10 years [1], [2], and warn that if your company is not leveraging AI in some way, your business may be left behind. AI, machine learning (ML), natural language processing (NLP), whatever you prefer to call it, is being used in virtually every industry imaginable, from the obvious uses in hi-tech, automotive, energy, finance, healthcare, and transportation to the less obvious education, farming, insurance, and marketing. If for no other reason than market pressure, your company's management is probably looking into AI.

But like many other new advancements, the path to success has many hurdles and no "silver bullet". Just throwing some hardware and data scientists together may not deliver the expected results. Spending some time on the details will improve your chances of success with this new technology.

Considerations for a successful AI deployment

Data and the scientist

As with any computing system, GIGO (garbage in, garbage out) applies. If the data driving the decision-making process is not correct or not properly understood, the conclusions drawn from it will be worthless, which is why basic data quality checks, like those sketched below, belong at the start of any project.
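As a concrete illustration, here is a minimal sketch of the kind of automated sanity checks that catch bad data before it reaches a model. The file name, column assumptions, and thresholds are all hypothetical placeholders; pandas is assumed to be available.

```python
# A minimal sketch of sanity checks that catch "garbage in" before it becomes
# garbage out. File name, columns, and thresholds are hypothetical; adapt them
# to your own data set.
import pandas as pd

df = pd.read_csv("sensor_readings.csv")   # hypothetical input file

issues = []
if df.isna().mean().max() > 0.05:                      # >5% missing values in any column
    issues.append("excessive missing values")
if df.duplicated().any():                              # exact duplicate rows
    issues.append("duplicate rows")
if (df.select_dtypes("number") < 0).any().any():       # negative readings where none expected
    issues.append("out-of-range values")

if issues:
    raise ValueError(f"Data failed basic quality checks: {issues}")
print(f"{len(df)} rows passed basic sanity checks")
```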

Scientists have been working on artificial intelligence since the 1950s, but only recently have data sets grown to sufficient scale, and neural network technologies matured enough, to make real use of them. As a result, recognizing a cat from a dog has become quite easy, as the sketch below illustrates.
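To show how approachable this has become, the sketch below classifies an image with a pretrained network. It assumes a recent PyTorch/torchvision installation and a hypothetical image file, pet.jpg; the ImageNet class ranges used for the cat/dog decision are approximate.

```python
# Minimal "cat vs. dog" sketch with a pretrained model.
# Assumes recent PyTorch/torchvision; "pet.jpg" is a hypothetical input image.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for a pretrained ResNet
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

img = Image.open("pet.jpg")
batch = preprocess(img).unsqueeze(0)     # add a batch dimension

with torch.no_grad():
    top_class = model(batch).argmax(dim=1).item()

# Approximate ImageNet label ranges: 151-268 are dog breeds, 281-285 are cats
print("dog" if 151 <= top_class <= 268 else
      "cat" if 281 <= top_class <= 285 else "something else")
```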

But creating your own data sets and choosing the correct model is a job for the data scientist. The data scientist differs from other scientists, though, in that she needs to stay in touch with the customers and stakeholders in the business chain to make sure that what is being discovered makes sense. The data scientist needs to work with the domain experts and keep the noise in the data to a minimum. Otherwise the generated models will not only be wrong, they can be harmful to your business.

Does hardware solve everything?

Let's assume that your company has put together a blended team of data scientists, domain experts, and others, and has worked out the kinks around data gathering and normalization. The engineers have validated the models and produced several small-scale pilots to show that the path to artificial intelligence is well understood. Now the task is to scale up to production and deliver results in a timely fashion.

One obvious direction is to go to "the cloud" and lease virtualized servers with GPUs and storage. Depending on the data set sizes and models, this can be attractive, but it can also become very expensive over time; a rough break-even calculation like the one below is worth doing early.
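A rough way to frame that trade-off is a back-of-the-envelope break-even calculation like the sketch below. Every number in it is a hypothetical placeholder; substitute real quotes from your cloud provider and hardware vendor.

```python
# Back-of-the-envelope break-even between leasing cloud GPUs and buying
# on-premises hardware. All figures are hypothetical placeholders.
CLOUD_RATE_PER_GPU_HOUR = 3.0      # hypothetical $/GPU-hour for an A100-class instance
GPUS = 8                           # GPUs the team keeps busy
UTILIZATION_HOURS_PER_MONTH = 500  # hours of actual training per month

ON_PREM_CAPEX = 250_000            # hypothetical purchase price of an 8-GPU server
ON_PREM_OPEX_PER_MONTH = 3_000     # power, cooling, support contracts, etc.

cloud_monthly = CLOUD_RATE_PER_GPU_HOUR * GPUS * UTILIZATION_HOURS_PER_MONTH

# Months until cumulative cloud spend exceeds buying and running your own box
months = ON_PREM_CAPEX / (cloud_monthly - ON_PREM_OPEX_PER_MONTH)
print(f"Cloud cost per month: ${cloud_monthly:,.0f}")
print(f"Approximate break-even: {months:.1f} months")
```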

Your company, though, may need to keep its data on premises. Your data scientists may need to scale their models over multiple GPUs with reduced latency, as in the sketch below. Another consideration is the type of models being run and whether the workload involves training, inference, or both. Any of these considerations may lead the company to look towards acquiring its own hardware.
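For illustration, here is a minimal sketch of one common way to spread training across several local GPUs, using PyTorch's DistributedDataParallel. The model, data, and hyperparameters are placeholders, and a multi-GPU CUDA machine with NCCL is assumed.

```python
# Minimal sketch of multi-GPU training with PyTorch DistributedDataParallel (DDP).
# Launch with:  torchrun --nproc_per_node=<num_gpus> train.py
# The model and data are placeholders; substitute your own.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE for each process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    model = torch.nn.Linear(1024, 10).to(device)        # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(100):                              # placeholder training loop
        x = torch.randn(64, 1024, device=device)         # stand-in for a real batch
        y = torch.randint(0, 10, (64,), device=device)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()                                   # gradients are all-reduced across GPUs
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```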

Some companies may go the route of purchasing standalone servers loaded with GPUs and large NVMe storage, where the models run entirely within a single computer. Scaling up from this type of infrastructure will require custom software and jerry-rigging, and the lack of enterprise storage may limit your models' access to data once the data set no longer fits into the local computer's storage.

A piecemeal solution like the one above may look like an inexpensive path to meeting your deep learning needs, but the cost down the line may be higher than expected. If you are just starting the journey, it may make more sense to build an AI center that is easily scalable and already implements well-understood software components.

Bynet Data Communications Ltd.'s work with the NVIDIA DGX server line has demonstrated that not all hardware solutions are equivalent. Designing a well-balanced deep learning environment requires forethought and expertise.

Case Study: Road2’s AI Center of Excellence

Road2 is an AI service company based in Haifa supporting the needs of the start-up community.

“We were looking for an integrator who is experienced and can take ownership of the whole process from A to Z. Bynet was selected for just that, being able to physically construct the room all the way to server installation and SW definitions.” Eitan Kyiet, CEO Road2

Road2 asked Bynet to design a scalable AI platform around the NVIDIA DGX A100 that would support the wide range of data sets and models expected from entrepreneurs in the north of the country, yet still fit within Road2's modest budget. The DGX A100 is NVIDIA's newest server, integrating eight A100 Tensor Core GPUs to support training, inference, and data analytics.

Bynet Data Communications Ltd. is an ELITE-level partner in the NVIDIA Network Solution Provider Partner Program. Bynet works closely with NVIDIA to deliver complex AI deployments, including fully air-gapped environments, for many Israeli customers.

Bynet proposed a rack containing a NetApp FAS 2750 storage array to provide network file storage and a Dell EMC R440 running VMware ESXi to host virtual machines, with all systems connected through a Cisco Nexus 3000 switch to support high-bandwidth data transfer between the storage and the GPUs.

Bynet delivered the system taking into consideration not just raw performance but also the user requirements for segregated access between the start-ups and remote access to the AI center. Using an experienced integrator can jumpstart an AI implementation and reduce future costs.


[1] Fortune Business Insights, Artificial Intelligence Market, 2020-2027, May 2021
