YaiGlobal PR


Saturday, 08 August 2020 12:20

Social Media Analytics for Sales Prediction

 


 

1.     Introduction

Social data analytics has recently gained prominence in predicting the outcomes of important events such as major political elections and box-office movie revenues.  Actions such as tweeting, liking, and commenting can provide valuable insights about consumers' attention to a product or service.  This information stream presents an interesting opportunity to harness data from various social media outlets and generate specific predictions of public acceptance and valuation of new products and brands.  Gauging consumer interest through the analysis of social media content gives the sales team a vital new tool to predict sales figures with a great deal of accuracy.

This use case focuses on forecasting product sales based on social media and time-series analysis. We present a predictive model of product sales using sentiment and consumer reactions gathered from social media over time. The model illustrates how predictors derived from sentiment at different time scales can improve the prediction of future sales.

The widespread belief that social media data is simply too noisy and too biased to correlate accurately with sales data has been proven wrong by efficient AI models.  We developed a process that collects relevant data from influential social media outlets and uses state-of-the-art machine learning algorithms to predict sales with high accuracy.

The ultimate goal is to develop an accurate estimate of product sales before release, giving the sales team valuable knowledge of the potential profit and allowing it to decide release quantities in different regions based on customer demand.  An interesting case that can be detected via social media is negative feedback that can prevent the business from earning leads.  To manage this and similar situations, companies need access to authentic feedback from potential customers so they can react in a timely manner, either by finding a way to satisfy customers or by improving product quality.

In addition to predicting future product success or failure, the model can be easily configured to provide a detailed map of consumer satisfaction with an already launched product. Other criteria related to consumer demographics, such as geographic location and age group, can also be extracted and studied to build better sales strategies and targeted marketing campaigns.

 

 

 

2. Methodology

 

The adopted approach is to collect customer sentiment data via social media analytics and train a Machine Learning model that predicts the evolution of a commercial product or service.  The proposed model predicts the success or failure of commercial products and services and highlights the most important trends based on sentiment analysis of social media feedback. It aims to help the sales team improve existing sales strategies or develop new ones to increase customer loyalty and retention.  In addition, the tool can help detect false information and protect the business brand and reputation.

 

Here are the main steps taken towards building the predictive model:

 

·         Extract data from social media (e.g. posts, comments, reactions, etc.)

·         Analyze the sentiment of social media feedback

·         Generate datasets from Facebook, Instagram, and Twitter

·         Predict the impact of those sentiments on future product performance

 

3.  Technical approach

 

The first step consists of extracting data, including posts, comments, and reactions, from social media (namely Twitter, Facebook, and Instagram) through web scraping and the relevant APIs.

The second step involves preprocessing the extracted data by applying a proprietary sentiment analysis algorithm and using well-known lexicon and rule-based libraries that are specifically attuned to sentiments expressed in social media. A dictionary of lexical features is used to score sentiments with a set of five heuristics. A lexical feature in this context refers to anything used for textual communication, including words, emoticons like ":-)", acronyms like "LOL", and slang like "meh".  These colloquialisms get mapped to intensity values in order to associate a numerical value with each lexical feature.  Lexical features are not the only elements of a sentence that affect the sentiment: other contextual elements, such as punctuation, capitalization, modifiers, and conjunctions, also impact the expressed emotion.

All these details are accounted for in the set of five heuristics. The effect of these heuristics is quantified using human raters in well-documented processes that have shown strong performance when analyzing the sentiment of movie reviews and opinion articles.
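The description above closely matches widely used lexicon and rule-based tools such as VADER. The sketch below is illustrative only: it shows how such a library scores social media text, not the proprietary algorithm mentioned above.

```python
# Minimal sketch: scoring social media text with a lexicon and rule-based
# sentiment analyzer (VADER). Illustrative only; it does not reproduce the
# proprietary algorithm described in this section.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

posts = [
    "I LOVE the new Big Mac :-)",      # positive words and an emoticon
    "meh, nothing special",            # slang mapped to a negative intensity
    "The service was OK, not great.",  # mixed sentiment
]

for post in posts:
    scores = analyzer.polarity_scores(post)
    # 'compound' is a normalized score in [-1, 1]; pos/neu/neg are proportions
    print(post, scores)
```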

After extracting data and applying the sentiment analysis algorithm to it, the next step in the methodology is to generate a dataset that includes the percentages of positive, neutral, and negative feedback over a specific period, also dubbed the timestamp.  The developed model is flexible: it can generate a dataset with different timestamps, including months, weeks, days, minutes, or seconds, or any arbitrary period. A dataset is defined by a name, a start date, an end date, and a timestamp.
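As an illustration of this dataset generation step, the following sketch (with assumed column names and placeholder data) aggregates per-post sentiment labels into percentages of positive, neutral, and negative feedback per period using pandas.

```python
# Minimal sketch (assumed column names): aggregate per-post sentiment labels
# into percentages of positive / neutral / negative feedback per period.
import pandas as pd

# 'posts' is assumed to have a datetime column 'created_at' and a 'label'
# column with values 'positive', 'neutral', or 'negative'.
posts = pd.DataFrame({
    "created_at": pd.to_datetime(
        ["2020-01-03", "2020-01-17", "2020-02-05", "2020-02-20", "2020-03-10"]),
    "label": ["positive", "negative", "positive", "neutral", "positive"],
})

# 'M' = monthly periods; any pandas offset alias (W, D, T, S, ...) works,
# mirroring the flexible timestamp described above.
dataset = (
    posts.set_index("created_at")["label"]
         .groupby(pd.Grouper(freq="M"))
         .value_counts(normalize=True)
         .unstack(fill_value=0.0) * 100
)
print(dataset)  # one row per period with % positive / neutral / negative
```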

 

4.  Application

Recall that the mission of the predictive model built per the steps explained in the previous section is to estimate the evolution of commercial products/services based on sentiment analysis of feedback from social media.

We developed and tested several ML-based algorithms and trained them using social media data that we collected and curated following the rigorous process detailed earlier. To account for seasonal fluctuations in sales, the model uses time-series forecasting to ensure steady, accurate predictions. Here are the sequential steps followed during the prediction process (a minimal code sketch follows the list):

 

i)       Select a dataset of interest.

ii)      Train all ML algorithms with the given dataset.

iii)     After the training algorithms converge, select the ML algorithm that provides the best accuracy.

iv)      Using the best algorithm identified in the previous step, predict the total sales revenue of the product of interest.
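The following sketch illustrates steps ii) through iv) with a few scikit-learn regressors and placeholder data; it is not the production pipeline, and the candidate list is only a subset of the algorithms evaluated later in this section.

```python
# Minimal sketch (placeholder data): train several candidate regressors on the
# sentiment dataset and keep the one with the best cross-validated score.
import numpy as np
from sklearn.ensemble import (BaggingRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# X: % positive / neutral / negative feedback per period; y: sales per period.
rng = np.random.default_rng(0)
X = rng.random((15, 3)) * 100                        # placeholder features
y = 5.0 + 0.02 * X[:, 0] + rng.normal(0, 0.1, 15)    # placeholder sales ($M)

candidates = {
    "Bagging": BaggingRegressor(random_state=0),
    "Decision Tree": DecisionTreeRegressor(random_state=0),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
    "KNeighbors": KNeighborsRegressor(n_neighbors=3),
    "SVM": SVR(),
}

# Step iii): keep the algorithm with the best cross-validated score
scores = {name: cross_val_score(model, X, y, cv=3).mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)

# Step iv): refit the best algorithm and predict the next period
best_model = candidates[best_name].fit(X, y)
next_period = X[-1:].copy()           # placeholder for the next period's features
print(best_name, scores[best_name], best_model.predict(next_period))
```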

 

We followed this prediction process to forecast the total revenue generated by sales of the Big Mac meal of the McDonald's chain.  The first part of the process is to build a dataset to train the model and gauge its accuracy on historical data before going live.  The dataset is divided into two parts:

·         The first part is based on features derived from people's feedback on social media. These features contain the percentages of positive, negative, and neutral feedback for a specific time span and timestamp defined by the user.

·         The second part includes sales or turnover following the same timestamp as the first part. This feature contains the sales figures provided by the customer.

Once the dataset is formed, it is used to train the ML-based algorithms.  For the McDonald's Big Mac application, we built a small dataset (15 rows) containing McDonald's sales from January 2016 until March 2020. Sales (average per period) are aggregated every three months, and the model predicts the average sales for the three months following March 2020.

The table below shows the performance of each ML algorithm that we tested including the best algorithm with the highest accuracy.

 

 

 

ML algorithm          Accuracy (%)
Bagging               87.68
Decision Tree         83.56
Gradient Boosting     83.25
Random Forest         87.53
XGBoost               82.36
KNeighbors            86.44
SVM                   58.54
Stacking              84.28

 

Table 1. Performance of various ML algorithms on the social media dataset. The Bagging algorithm performed best and predicted sales revenue for McDonald's Big Mac of $6.023M.

 

 

We linked all the studied models for Facebook, Instagram, and Twitter and created a desktop application in which the user selects the dataset parameters, including start date, end date, and time period, to get an estimate of the sales revenue for any product of interest.

 

5.  Conclusions and recommendations

This use case enumerated the steps needed to build an ML model based on social media content to predict the sales of commercial products.  All results presented here are based on sentiment analysis of social media feedback.

We built a desktop app that selects the optimal ML algorithm and provides a prediction of a given product's sales with an accuracy approaching 90%.  We are currently studying the effect of adding sales information from competitors to improve the model's accuracy.

 

Machine Learning Study for Predictive Maintenance

 

 

Table of Contents

Summary

Data Collection

Vibration data

Temperature data

Feature Extraction

Feature definition

Features across all data sets

Feature cross correlation

Machine Learning Methods for Classification

Supervised Classification via Neural Networks

Conclusion

 

Summary

This use case summarizes the findings of a health monitoring study using empirical vibration and temperature data to build a predictive maintenance model.  Sensors were placed in four different positions on the housing surface of three running motors at different health stages to study the model's performance and its robustness with respect to sensor mounting and varying operating conditions. Tens of thousands of data segments were processed to extract features and build supervised and unsupervised classification algorithms. A feed-forward neural network was deployed to classify signals (not previously seen by the network) from these 3 motors. Preliminary results look promising, with 99.2% classification accuracy. The algorithm's robustness with respect to sensor mounting is also worth noting.

Data Collection

Vibration and temperature data are collected from rotating machines with the purpose of classifying those machines into one of three predefined classes: "Warning" (scheduled maintenance), "Alarming" (under watch), and "Normal" (no action required). Vibration and temperature sensors are placed on the surface of the machine of interest to generate data that is used to classify machine health and raise warnings when necessary, in order to avoid shutdowns and unscheduled maintenance.

In the experimental setting of this study, 3 motors (numbered 1, 2, and 3) are used. Motor #1 is deemed by the operating personnel to be in a critical condition and may fail at any moment; it generated a distinctly loud noise, a relatively strong vibration profile, and a higher than usual surface temperature. Motor #3 sounded very quiet and smooth, thus exhibiting a "normal" behavior.  Motor #2 is in between the other 2 motors in terms of noise and vibration strength. Ideally, these motors would be run to failure with data captured at all stages of motor health for accurate labeling. Since this is unrealistic, for the purpose of this study the data generated by these 3 motors is labeled "Warning", "Alarming", and "Normal" respectively.

Vibration data


A high-quality vibration sensor with a sampling rate of up to 48 kHz is attached via a magnet to the housing surface of each of the 3 running motors.  Four different sensor positions are used for data gathering, as shown in Figure 1.  Varying the sensor position is useful for studying the model's sensitivity to sensor mounting and operating conditions.  The vibration sensor sampling frequency is set at its maximum value of 48 kHz, and each recording lasted about 90 seconds.

 

 

Figure 1. One of the 4 sensor positions used to collect vibration data. In all experiments the sensor is attached to the motor housing surface via a magnet.

 

Temperature data

This preliminary setup did not include a temperature sensor. For the purpose of this study, a temperature sensor response is simulated to allow building realistic machine learning models for classification. The temperature response is simulated as a constant base value plus a random component taken from a set of uniformly distributed pseudo-random numbers. Base temperature values for the 3 motors are set at 100, 99, and 98 degrees, with temperature spans of [96.6, 103.8], [95.6, 102.4], and [94.1, 101.8] respectively. These overlapping temperature spans seem representative of real sensor measurements given noise and variability in operating conditions.
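A minimal sketch of this simulation, using the base values quoted above and an assumed half-width for the uniform random component, could look as follows:

```python
# Minimal sketch: simulated temperature response as a constant base value plus
# a uniformly distributed random component, using the base values quoted above.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 1000                       # number of simulated readings per motor

bases = {"motor_1": 100.0, "motor_2": 99.0, "motor_3": 98.0}
spread = 4.0                           # assumed half-width, chosen so the spans
                                       # overlap roughly as reported above

temperature = {name: base + rng.uniform(-spread, spread, n_samples)
               for name, base in bases.items()}

for name, t in temperature.items():
    print(name, round(t.min(), 1), round(t.max(), 1))
```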

Feature Extraction

Each vibration track of T samples (e.g. T = 48,000 x 90 = 4,320,000 samples) is divided into non-overlapping segments of equal length (S = 1,024 samples, or about 21.3 milliseconds per segment) to generate features in the time-frequency domain.
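For illustration, this segmentation step can be sketched as follows (placeholder signal, parameters as described above):

```python
# Minimal sketch: split a vibration track of T samples into non-overlapping
# segments of S = 1024 samples (any trailing partial segment is dropped).
import numpy as np

fs = 48_000                      # sampling rate (Hz)
T = fs * 90                      # ~90-second track, as described above
S = 1024                         # segment length (about 21.3 ms at 48 kHz)

track = np.random.randn(T)       # placeholder for a real vibration recording

n_segments = T // S
segments = track[:n_segments * S].reshape(n_segments, S)
print(segments.shape)            # (4218, 1024)
```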

Feature definition

Preliminary features selected for this study are defined as follows:

1.      Time domain energy measure of the vibration signal. It is estimated as the root mean squared value of the vibration time series in the segment of interest, denoted $F_1(m)$:

$$F_1(m) = \sqrt{\frac{1}{S}\sum_{k=mS}^{(m+1)S-1} V(k)^2} \qquad (1)$$

where $V(k)$ is the vibration amplitude at time sample $k$, $S$ is the segment length in samples (i.e. 1024), and $m$ is the segment rank varying from 0 (first segment) to the rounded value of $T/S - 1$ (last segment), with $T$ being the total track length (in samples).

Since $S$ is a given constant, the feature notation can be simplified as follows:

$$F_1(m) = \operatorname{RMS}(V_m) \qquad (2)$$

with $V_m$ being the vibration time series at segment $m$, that is:

$$V_m = \left[V(mS),\, V(mS+1),\, \ldots,\, V\big((m+1)S - 1\big)\right] \qquad (3)$$

Studying the effect of the segment length $S$ and of overlap with neighboring segments is an interesting question that will be addressed in future studies.  This feature (time domain energy) is useful in classification as it provides a general indication of machine health: the lower its value, the healthier the machine.  Figure 2 shows an example of this feature's values across the 3 studied motors.

Figure 2. Time domain energy feature $F_1$ for the 3 motors

 

2.     Frequency domain energy measure in the 8 bins corresponding to the frequency band $[f(10), f(18)] = [422, 797]$ Hz.   This feature is driven by the frequency responses of the three motors, since most of the energy for motors #1 and #2 is concentrated in the [400, 800] Hz range; it is calculated as follows:

$$F_2(m) = \sum_{i:\, f(i) \in [422,\,797]\ \mathrm{Hz}} \left|\mathcal{F}\{V_m\}(i)\right|^2 \qquad (4)$$

where $\mathcal{F}$ represents the Fourier transform operator.  Figure 3 shows this feature's variation across the 3 studied motors.

Figure 3. Band-limited frequency domain energy feature $F_2$ for the 3 motors: according to this feature, motor #3 ("Normal") is well separated from the other 2 motors

 

3.      Peak energy value in the frequency domain. It is computed as follows:

$$F_3(m) = \max_{i} \left|\mathcal{F}\{V_m\}(i)\right|^2 \qquad (5)$$

This feature exhibited robustness across sensor positions, as will become apparent later. Figure 4 shows the feature's variation across the 3 studied motors (a short code sketch of features 1 to 3 follows this feature list).

Figure 4. Peak frequency domain energy feature $F_3$ for the 3 motors

 

4.      Simulated temperature

Figure 5 shows variation of the simulated temperature response across the 3 studied motors.

Figure 5.  Simulated temperature response across the 3 studied motors
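For illustration, the three vibration features defined above can be computed for a single segment as sketched below (the simulated temperature feature was sketched in the previous section). Bin indices are taken from the text; the exact indexing convention of the real model may differ.

```python
# Minimal sketch of the three vibration features defined above, computed for
# one segment V_m of S = 1024 samples recorded at 48 kHz.
import numpy as np

fs, S = 48_000, 1024
bin_hz = fs / S                               # ~46.9 Hz per FFT bin
v_m = np.random.randn(S)                      # placeholder segment

# Feature 1: time domain energy (RMS of the segment), eq. (1)-(2)
f1 = np.sqrt(np.mean(v_m ** 2))

# Features 2 and 3 are computed from the power spectrum of the segment
spectrum = np.abs(np.fft.rfft(v_m)) ** 2

# Feature 2: frequency domain energy in bins 10..18 (~422-797 Hz), eq. (4)
f2 = spectrum[10:19].sum()

# Feature 3: peak energy value in the frequency domain, eq. (5)
f3 = spectrum.max()

print(f1, f2, f3)
```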

Features across all data sets

Figure 6 shows the variations of the 4 features defined in the previous section, based on all data captured from the 3 motors at the 4 sensor positions, for a total of 51,456 data segments with a segment length of 1,024 samples. Note that data captured with the sensor in position #3 is the weakest; in fact, it coincides with the least audible vibration noticed during data gathering.  Note also that feature #3 (peak energy value in the frequency domain) is effective (compared to the other features) at discriminating between close cases such as motors #2 and #3 in sensor position #3.

 Figure 6. Time domain energy (upper left), band-limited frequency domain energy (upper right), peak frequency domain energy (lower left), and simulated temperature (lower right) computed over segments of time for the 3 studied motors across the 4 sensor positions. A data set for each feature is comprised of 4 segments juxtaposed horizontally corresponding to the 4 sensor positions. Each segment is comprised of 3 staircase-like pieces corresponding to the 3 motors.

 

Feature cross correlation

An important element of feature extraction is to study the correlation between features, since it is a measure of their dependency. If the correlation index associated with two features is relatively high, then those two features are highly correlated and it is more beneficial to carry only one of them in order to reduce overfitting and improve model generalization. There are many ways of calculating correlation coefficients depending on the nature of the dependency between the features of interest (e.g. linear versus nonlinear). The Pearson correlation method is typically used, as it provides a measure of linear dependency between features; it is defined as follows:

$$\rho(A, B) = \frac{1}{N-1}\sum_{i=1}^{N}\left(\frac{A_i - \mu_A}{\sigma_A}\right)\left(\frac{B_i - \mu_B}{\sigma_B}\right) \qquad (6)$$

where $N$ is the number of scalar observations of both features, $\mu_A$ and $\sigma_A$ are the mean and standard deviation of feature $A$, and $\mu_B$ and $\sigma_B$ are the mean and standard deviation of feature $B$.  The correlation coefficient matrix of two features $A$ and $B$ is the matrix of correlation coefficients for each pairwise combination:

$$R(A, B) = \begin{pmatrix} \rho(A, A) & \rho(A, B) \\ \rho(B, A) & \rho(B, B) \end{pmatrix} = \begin{pmatrix} 1 & \rho(A, B) \\ \rho(B, A) & 1 \end{pmatrix} \qquad (7)$$

Using equation (6), the correlation coefficient matrix for the 4 studied features across all gathered data is given by the following 4 by 4 matrix:

1.0000    0.9674    0.4086    0.4383

0.9674    1.0000    0.4011    0.4263

0.4086    0.4011    1.0000    0.4335

0.4383    0.4263    0.4335    1.0000

 

Note that features #1 and #2 (signal energy in the time domain and in the [400, 800] Hz frequency band) are highly correlated.  Features #3 (peak frequency domain energy) and #4 (temperature), on the other hand, are less correlated with the rest of the features, making them potentially more useful for model generalization. The selection of the final set of features is determined by the performance of the classification algorithm across various operating conditions.
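A minimal sketch of this computation with numpy and a placeholder feature matrix is shown below; numpy.corrcoef implements the Pearson coefficient of equations (6) and (7).

```python
# Minimal sketch: Pearson correlation matrix of the 4 extracted features.
# Rows of `features` are segments, columns are the 4 features.
import numpy as np

rng = np.random.default_rng(0)
features = rng.random((51_456, 4))            # placeholder feature matrix

corr = np.corrcoef(features, rowvar=False)    # 4 x 4 matrix, as in eq. (6)-(7)
print(np.round(corr, 4))

# One of any pair whose |correlation| exceeds a chosen threshold (e.g. 0.95)
# could then be dropped to reduce redundancy, as discussed above.
```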

 

Machine Learning Methods for Classification

The predictive modeling problem at hand is a classic case of machine learning. There are many supervised and unsupervised techniques that can be used to classify the three studied motors into their appropriate classes (i.e. "Warning", "Alarming", and "Normal"). At this early stage of the project, with only a few data tracks collected, two classical algorithms will be tested: unsupervised K-means clustering and supervised feed-forward neural networks. As more data is collected, more complex algorithms and architectures will be tried and tested for better classification performance.

Supervised Classification via Neural Networks

A feed-forward neural network with a 50-neuron hidden layer and 4 inputs (features), using 70% of the data for training (36,019 segments), 15% for validation (7,718 segments), and 15% for testing (7,718 segments), achieved a classification accuracy of 99.2%, as shown by the confusion matrix in Figure 7.   The confusion matrix, also known as the error matrix, is typically used to visualize system performance. Each row of the matrix represents the instances in a predicted class, while each column represents the instances in an actual class.

In this case, 19 data points of class #1 (warning) are mistakenly labeled as class #2 (alarming), 29 points of class #2 (alarming) are mistakenly labeled as class #1 (warning), 5 data points of class #3 (normal) are mistakenly labeled as class #2 (alarming), and similarly 5 data points of class #2 (alarming) are labeled as class #3 (normal). No data point of class #1 (warning) was misclassified as class #3 (normal) or vice versa. 7,660 out of 7,718 data points, or 99.2%, are classified correctly in their appropriate classes.

Figure 7. Confusion matrix of a 50-neuron hidden layer Feed Forward Neural Network model shows 99.2% classification accuracy
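The sketch below illustrates this classification setup with scikit-learn and placeholder data; it is not the tool used in the study, and note that scikit-learn's confusion matrix uses rows for actual classes, the transpose of the convention described above.

```python
# Minimal sketch (not the original tool): a feed-forward network with one
# 50-neuron hidden layer trained on the 4 features to predict one of the
# 3 classes ("Warning", "Alarming", "Normal"), with a held-out test split.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.random((51_456, 4))                        # placeholder feature matrix
y = rng.integers(0, 3, 51_456)                     # placeholder class labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.15, random_state=0)          # 15% held out for testing

clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))            # rows: actual, cols: predicted
```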

Conclusion

This report presented an initial machine learning model for predictive maintenance based on empirical vibration and temperature data.  Vibration sensors were placed in four different positions on the housing surface of three running motors at different health stages to study the model's performance and its robustness with respect to sensor mounting and varying operating conditions. The gathered data was analyzed to extract features and build supervised and unsupervised classification algorithms.  Initial results using feed-forward neural networks look promising, both in terms of robustness to feature selection and sensor position and in terms of algorithm performance, with 99.2% classification accuracy.

In more complex settings, such as manufacturing floors with hundreds of machines and millions of signal segments, a more complex structure such as a deep learning architecture with recurrent neural networks is more suitable for classification towards an efficient predictive health monitoring approach.

In the case of unlabeled data with no a priori knowledge of machine health, other methods can be used to estimate the machine state (e.g. normal, alarm, warning), including clustering, as well as advanced techniques to estimate the remaining useful life of machines.  Such a scenario will be addressed in a future case study.

 

AI: A Paradigm Shift in the Pharmaceutical Industry - Use Case of Cancer Detection

 

 

Introduction

The current business model of the pharmaceutical industry, in which a new drug may take a decade and billions of dollars to develop, is no longer viable in this digital era of big data and cloud computing.  Giant IT companies such as Amazon and Google are leveraging their deep pockets and strong AI footprints to lower the entry barrier to this vital sector and render classical models of drug discovery and development obsolete.

AI, and particularly its Deep Learning branch, can empower translational pharma research at each phase of drug discovery and development, from the initial candidate selection phase, with its aim of drug and target selection, up to the post-launch phase III, with its aim of life-cycle management. Each phase in the drug discovery chart can be accelerated by developing and deploying accurate predictive models trained on relevant historical data.  For example, modeling diseased human cells by varying the levels of sugar and oxygen the cells are exposed to, and then tracking their lipid, metabolite, enzyme, and protein profiles, is an area where AI and cloud computing can add value and save both time and money.   Some pharmaceutical companies, including Novartis and AstraZeneca, have demonstrated impressive results in drug discovery and development by embracing AI over the last five years [1].

In the spirit of showing the benefits of AI and data analytics in pharmaceutical research, we present here the results of using a specific class of AI to detect ovarian cancer.

Data collection and formatting

Data used in this study is courtesy of the Food and Drug Administration - National Cancer Institute (FDA-NCI) Clinical Proteomics Program Databank.  The data consists of mass spectrometry signatures of the protein profiles of 216 patients, including 121 patients with ovarian cancer and 95 cancer-free persons used as the control group in this study.   Signature extraction and identification is performed using serum proteomic pattern diagnostics, where proteomic signatures from high-dimensional mass spectrometry data are used as a diagnostic classifier [2].  Profile patterns are generated using surface-enhanced laser desorption and ionization (SELDI) protein mass spectrometry [3]. The objective is to build a classifier that assigns patients to one of two classes (cancer and cancer-free) based on a limited number of features selected from the SELDI data of the studied samples.

Raw data is pre-processed and arranged in a 216-by-15,000 matrix.  The 216 rows represent the patients, of which 121 are ovarian cancer patients and 95 are normal (i.e. cancer-free). The 15,000 columns represent the mass-charge values M/Z, where M stands for mass and Z stands for the charge number of ions; M/Z (or simply |MZ|) represents mass divided by charge number, and the horizontal axis of a mass spectrum is expressed in units of m/z. Each entry of the data matrix is the ion intensity level of a given patient at one of the 15,000 mass-charge values indicated in |MZ|.

Another 2-by-216 index matrix holds the index information that associates data samples with the appropriate class of patients. For instance, the first 121 elements of the first row of this matrix have the value 1, indicating cancer patients, whereas the remaining 95 elements of this row are set to zero, indicating cancer-free patients.  To reduce dimensionality, only the ion intensities at the 100 highest mass-charge values are retained as features, so the reduced dataset considered for this study is a 100-by-216 matrix: each column represents one of the 216 patients and each row represents the ion intensity level at one of the 100 highest mass-charge values for that patient. A 3-D representation of this dataset is shown below in Figure 1.

Figure 1. Ion intensity levels at the 100 highest mass-charge values for the 216 patients
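For illustration, the data formatting described above can be sketched as follows with placeholder spectra; the selection of the 100 mass-charge values is simplified here.

```python
# Minimal sketch (placeholder data): reduce the raw 216 x 15,000 mass
# spectrometry matrix to the ion intensities at 100 selected mass-charge values
# and build the binary labels (1 = cancer, 0 = cancer-free).
import numpy as np

n_patients, n_mz = 216, 15_000
raw = np.random.rand(n_patients, n_mz)        # placeholder for the real spectra
mz = np.linspace(0, 20_000, n_mz)             # placeholder mass-charge axis

# Keep the 100 columns with the largest mass-charge values, as described above;
# in practice a more elaborate feature-selection step could be used instead.
selected = np.argsort(mz)[-100:]
features = raw[:, selected].T                 # 100 x 216: one column per patient

labels = np.concatenate([np.ones(121), np.zeros(95)])   # 121 cancer, 95 control
print(features.shape, labels.shape)
```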

 

Classification Using a Feed Forward Neural Networks

Various clustering and classification techniques have been tested. We present in this section the results of classification using Feed Forward Neural Networks (FFNN), an important Machine Learning technique widely used in classification problems. The set of features identified in the previous section (i.e. the 100 highest mass-charge values) will be used to classify cancer and normal samples.

A feed-forward neural network with one hidden layer, comprising 100 input neurons, 8 hidden neurons, and 2 output neurons, is created and trained to classify the data samples. Figure 2 shows the FFNN structure used in this classification study.

 

Figure 2. Feed-forward neural network architecture used for classification

 

The input and target samples are automatically divided into training, validation, and test sets. The training set is used to teach the FFNN, and training continues as long as the FFNN performance keeps improving.

Data is distributed over the training, validation, and test sets with 152 samples (about 70% of the entire set of 216 samples), 32 samples (15%), and 32 samples (15%) respectively. The network performance on the test set gives an estimate of how well the network will perform on real-world data.  Figure 3 shows how the network's performance improved during training using the well-known Scaled Conjugate Gradient (SCG) algorithm. Training minimizes the cross-entropy loss function, shown on a logarithmic scale, which decreased rapidly as the network was trained.

 

Figure 3. Training performance of the FFNN of Figure 2. Note that the validation error was minimal at training epoch 11; the optimal network parameters are those identified at that epoch
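A minimal sketch of this classifier with scikit-learn and placeholder data is shown below; scikit-learn does not provide the Scaled Conjugate Gradient optimizer used in the study, so a default solver stands in for it.

```python
# Minimal sketch (not the original tool): a 100-8-2 feed-forward classifier
# for the cancer / cancer-free labels, with a held-out evaluation split.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.random((216, 100))                            # placeholder: patients x features
y = np.concatenate([np.ones(121), np.zeros(95)])      # 1 = cancer, 0 = cancer-free

# ~70% training, 30% held out (which can be split again into validation and test)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print(clf.score(X_test, y_test))                      # held-out accuracy
print(confusion_matrix(y_test, clf.predict(X_test)))  # rows: actual classes
```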

 

 

Classification Results

The trained neural network can now be tested with the testing samples that were partitioned from the main dataset. The testing data was excluded from training and hence provides an "unseen" dataset with which to test the network.  One measure of how well the FFNN performs is the confusion plot, also known as the error matrix, which visualizes the system's classification accuracy, as shown in Figure 4. Each row of the matrix represents the instances in a predicted class, while each column represents the instances in an actual class.  The confusion matrix shows the percentages of correct and incorrect classifications: correct classifications are the green squares on the matrix diagonal, while red squares represent incorrect classifications. Class 1 indicates cancer patients and class 2 indicates cancer-free patients.

 

Figure 4. Confusion matrix showing the proposed FFNN's classification performance on previously unseen data, with an accuracy exceeding 96%

 

Figure 5 shows another way of measuring the FFNN performance, using an error histogram across the three datasets (training, validation, and test). As can be seen, most instances fall in the smallest error bins for all three datasets.

 

Figure 5. FFNN performance on the three datasets (training, validation, and test). Most instances result in small errors, showing accurate classification.

 

 

Conclusions

In this study, based on the ion intensity levels of 216 individuals, comprising 121 cancer patients and a 95-person cancer-free control group, a simple feed-forward neural network classifier showed excellent classification results approaching 97% accuracy.  This use case is just one example of the promise of Artificial Intelligence in pharma R&D, including drug discovery and drug development. Chronic diseases such as Alzheimer's, diabetes, and cancer are expected to benefit from this new research paradigm in pharmaceutical companies, built around AI and cloud computing.

YaiGlobal is excited to have its mission set on the promises and challenges of this structural transformation that is touching almost every field of the economy. With its resolute commitment to develop and deploy AI and Cloud computing to address real complex issues, YaiGlobal is looking forward to being an active part of this paradigm shift of digital transformation.

 

References

[1] Alex Zhavoronkov, "Deep Dive Into Big Pharma AI Productivity: One Study Shaking The Pharmaceutical Industry”, Retrieved from  https://www.forbes.com/sites/alexzhavoronkov/2020/07/15/deep-dive-into-big-pharma-ai-productivity-one-study-shaking-the-pharmaceutical-industry/#b3cda10567d7

 

[2] T.P. Conrads, et al., "High-resolution serum proteomic features for ovarian cancer detection", Endocrine-Related Cancer, 11, 2004, pp. 163-178.

 

[3] E.F. Petricoin, et al., "Use of proteomic patterns in serum to identify ovarian cancer", Lancet, 359(9306), 2002, pp. 572-577.


SAN FRANCISCO, June 15, 2020 – YaiGlobal, a global IT firm specializing in Artificial Intelligence, is pleased to be cooperating with AppTek, a leader in Artificial Intelligence and Machine Learning for Automatic Speech Recognition and Machine Translation. Working jointly with AppTek, YaiGlobal is deploying its Yai365™ platform to develop innovative solutions in neural machine translation and automatic speech recognition.  The Yai365™ platform includes a set of innovative tools, such as QualityGates™, that streamline the process of building successful AI applications.

Yai365™ has a well-designed architecture that is compliant with the AWS technical baseline and fully satisfies business requirements in terms of performance, high availability, and data security. Web application servers run on Amazon Elastic Compute Cloud and store data in an Amazon Aurora cluster with a read replica in a different availability zone and a full (read/write) replica in a different region. Files are stored in Amazon S3 with a full replica in a second region.  "The security and high availability of our platform, thanks to the AWS infrastructure, provide peace of mind to our customers," explained Mr. Mourad Othman, operations director of YaiGlobal.  "We are delighted to partner with AppTek in support of their innovative AI-based applications.  Working closely with the AppTek team, we were able to rapidly develop secure and accurate solutions built on AWS," said Dr. Mokhtar Sadok, CEO of YaiGlobal.

About YaiGlobal

YaiGlobal is a global IT firm specializing in Artificial Intelligence.  It helps organizations process data in any world language. The company provides industry solutions built around cloud computing with a focus on security, availability, and efficiency. With offices in Silicon Valley, California, and Tunis, Tunisia, YaiGlobal provides professional consulting services and technology solutions to AI companies and to industries seeking digitization and cloud computing services.  Our proven deployment platforms and trusted design patterns enable security-conscious companies to increase performance and unleash innovation. www.yaiglobal.com

Media Contact:

Leila Hansen
North America: (+1) 408 - 3354341
Europe: (+33) 9 - 75181116

 