Machine Learning (3)
Social Media Analytics for Sales Prediction
Social data analytics has recently gained prominence in predicting the outcomes of important events such as major political elections and box-office movie revenues. Actions such as tweeting, liking, and commenting can provide valuable insights into consumers' attention to a product or service. This information venue presents an interesting opportunity to harness data from various social media outlets and generate specific predictions of public acceptance and valuation of new products and brands. Gauging consumer interest through the analysis of social media content gives the sales team a new and vital tool for predicting sales numbers with a great deal of accuracy.
This use case focuses on forecasting product sales based on social media and time-series analysis. We present a predictive model of product sales using sentiment and consumer reactions gathered from social media over time periods. Our predictive model illustrates how different time scale-based predictors derived from sentiment can improve the prediction of future sales.
The widespread belief that social media data was simply too noisy and too biased to correlate accurately with sales data was proven wrong using efficient AI models. We developed a unique process that collects relevant data from influential social media outlets and applies state-of-the-art machine learning algorithms to predict sales with high accuracy.
The ultimate goal is to develop an accurate estimate of a product's sales before its release, providing the sales team with valuable knowledge of its potential profit and allowing them to decide the release quantity in different regions based on customer demand. An interesting case that can be detected via social media is negative feedback that can hinder the business from earning leads. To manage this and similar situations, companies need access to authentic feedback from potential customers so they can react in a timely manner, either by finding a way to satisfy customers or by improving product quality.
In addition to predicting future product success or failure, the model can easily be configured to provide a detailed map of consumer satisfaction with an already launched product. Other criteria related to consumer demographics, such as geographic location and age group, can also be extracted and studied to build better sales strategies and targeted marketing campaigns.
The adopted approach is to collect customer sentiment data via social media analytics and use it to train a Machine Learning model that predicts the commercial evolution of a product or service. The proposed model predicts the success or failure of commercial products/services and highlights the most important trends based on sentiment analysis of social media feedback. It aims to help the sales team improve or develop new sales strategies to increase customer loyalty and retention. In addition, the tool can help detect false information and protect the business brand and reputation.
Here are the main steps taken towards building the predictive model:
Extract data from social media (posts, comments, reactions, etc.)
Analyze sentiments of social media feedback
Generate datasets from Facebook, Instagram, and Twitter
Predict the impact of those sentiments on future product performance
3. Technical approach
The first step consists of extracting data, including posts, comments, and reactions, from social media, namely Twitter, Facebook, and Instagram, through web scraping and the relevant APIs.
The second step involves preprocessing the extracted data by applying a proprietary sentiment analysis algorithm along with well-known lexicon- and rule-based libraries that are specifically attuned to sentiments expressed in social media. A dictionary of lexical features is used to score sentiments together with a set of five heuristics. A lexical feature in this context is anything used for textual communication, including words, emoticons like ":-)", acronyms like "LOL", and slang like "meh". These colloquialisms are mapped to intensity values so that a numerical value is associated with each lexical feature. Lexical features are not the only elements of a sentence that affect sentiment: contextual elements such as punctuation, capitalization, modifiers, and conjunctions also impact the emotion.
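To make the lexicon-and-heuristics idea concrete, here is a minimal illustrative scorer. The lexicon values and heuristic weights below are made-up assumptions for demonstration only; they are not the proprietary algorithm or the exact five heuristics described above.

```python
# Minimal lexicon- and rule-based sentiment scorer (illustrative sketch;
# lexicon values and heuristic weights are assumptions, not the real tool).

# Lexical features (words, emoticons, acronyms, slang) mapped to intensities.
LEXICON = {
    "love": 2.0, "great": 1.8, "good": 1.2, "meh": -0.5,
    "bad": -1.5, "awful": -2.2, ":-)": 1.3, ":-(": -1.3, "lol": 1.0,
}

def score_sentiment(text: str) -> float:
    score = 0.0
    for raw in text.split():
        token = raw.strip(".,!?")
        value = LEXICON.get(token.lower(), 0.0)
        # Heuristic example 1: ALL-CAPS tokens intensify the sentiment.
        if token.isupper() and len(token) > 1:
            value *= 1.5
        score += value
    # Heuristic example 2: exclamation marks amplify the overall score.
    score *= 1.0 + 0.1 * text.count("!")
    return score

print(score_sentiment("I LOVE this burger :-) !"))   # positive score
print(score_sentiment("meh, the fries were awful"))  # negative score
```

Production tools of this kind additionally handle modifiers ("very good"), negation ("not good"), and conjunctions ("but"), which this sketch omits.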
All these details are accounted for in the set of five heuristics. The effect of these heuristics was quantified using human raters in well-documented processes that showed exceptional efficiency when analyzing the sentiment of movie reviews and opinion articles.
After extracting the data and applying the sentiment analysis algorithm to it, the next step is to generate a dataset that includes the percentages of positive, neutral, and negative feedback per specific time period, also dubbed the timestamp. The developed model is flexible: it can generate a dataset with different timestamps, including months, weeks, days, minutes, seconds, or any arbitrary interval. A dataset is defined by a name, a start date, an end date, and a timestamp.
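The dataset-generation step described above can be sketched as follows. The field names and the sample feedback records are illustrative assumptions; the point is the bucketing of labeled feedback into fixed intervals and the computation of per-period percentages.

```python
from datetime import datetime, timedelta
from collections import Counter

def build_dataset(feedback, start, end, interval):
    """Bucket timestamped feedback into fixed intervals.

    feedback: list of (datetime, label) with label in {'pos', 'neu', 'neg'}.
    Returns one row per interval with the share of each sentiment class.
    """
    rows = []
    t = start
    while t < end:
        bucket = [lbl for ts, lbl in feedback if t <= ts < t + interval]
        counts = Counter(bucket)
        n = max(len(bucket), 1)  # avoid division by zero on empty buckets
        rows.append({
            "period_start": t,
            "pct_pos": 100 * counts["pos"] / n,
            "pct_neu": 100 * counts["neu"] / n,
            "pct_neg": 100 * counts["neg"] / n,
        })
        t += interval
    return rows

# Illustrative feedback stream already labeled by the sentiment step.
feedback = [
    (datetime(2020, 1, 1, 10), "pos"),
    (datetime(2020, 1, 1, 12), "neg"),
    (datetime(2020, 1, 2, 9),  "pos"),
]
rows = build_dataset(feedback, datetime(2020, 1, 1), datetime(2020, 1, 3),
                     timedelta(days=1))
for r in rows:
    print(r["period_start"].date(), r["pct_pos"], r["pct_neg"])
```

Changing the `interval` argument (a day above) to a week, month, or a few seconds yields the different timestamps the text mentions.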
Recall that the mission of the predictive model built per the steps explained in the previous section is to estimate the evolution of commercial products/services based on sentiment analysis of social media feedback.
We developed and tested several ML-based algorithms and trained them using the social media data that we collected and curated following the rigorous process detailed earlier. To account for seasonal fluctuations in sales, the model uses time series forecasting to ensure steady, accurate predictions. Here are the sequential steps followed during the prediction process:
i) Select a dataset of interest.
ii) Train all ML algorithms on the given dataset.
iii) After training converges, select the ML algorithm that provides the best accuracy.
iv) Using the best algorithm identified in the previous step, predict the total sales revenues of the product of interest.
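Steps ii) through iv) can be sketched as a select-and-predict loop. The two candidate models below are deliberately simple stand-ins (a historical-mean forecaster and a last-value forecaster) for the ML algorithms mentioned in the text, and the sales series is illustrative; the pattern of scoring each candidate on a hold-out window and predicting with the winner is the point.

```python
# Sketch of the model-selection loop: score each candidate forecaster on a
# hold-out portion of the series, then predict with the most accurate one.

def mean_forecaster(history):
    return sum(history) / len(history)

def last_value_forecaster(history):
    return history[-1]

def select_and_predict(series, candidates, holdout=3):
    train, valid = series[:-holdout], series[-holdout:]
    best_model, best_err = None, float("inf")
    for model in candidates:
        # Accumulate one-step-ahead absolute error over the hold-out window.
        err, hist = 0.0, list(train)
        for actual in valid:
            err += abs(model(hist) - actual)
            hist.append(actual)
        if err < best_err:
            best_model, best_err = model, err
    # Predict the next period with the winning model.
    return best_model.__name__, best_model(series)

sales = [5.1, 5.3, 5.6, 5.5, 5.8, 6.0]  # illustrative quarterly revenues
name, forecast = select_and_predict(sales, [mean_forecaster, last_value_forecaster])
print(name, round(forecast, 2))
```

In the real pipeline the candidates would be trained ML regressors and the score would be their validation accuracy, but the selection logic is the same.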
We followed this prediction process to forecast the total revenues generated by sales of the Big Mac meal at the McDonald's chain. The first part of the process is to build a dataset to train the model and to gauge its accuracy on historical data before going live. The dataset is divided into two parts:
· The first part is based on features defined by people's feedback on social media. These features contain the percentages of positive, negative, and neutral feedback for a specific time span and timestamp defined by the user.
· The second part includes sales or turnover following the same timestamp as the first part. This feature contains the sales figures provided by the customer.
Once the dataset is formed, it is used to train the ML-based algorithms. For the McDonald's Big Mac application, we built a small dataset (15 rows) containing McDonald's sales from January 2016 until March 2020. The sales (average mean) are aggregated every three months, and the model predicts the average sales for the three months following March 2020.
The table below shows the performance of each ML algorithm that we tested, including the best algorithm with the highest accuracy.
Table 1. Performance of various ML algorithms based on social media dataset. The Bagging algorithm performed best and predicted sales revenues for the Big Mac of McDonald’s in the amount of $6.023M
We linked all the studied models for Facebook, Instagram, and Twitter and created a desktop application where the user selects the parameters of the dataset, including start and end dates along with the time period, to get an estimate of the sales revenues for any product the user aims to forecast.
4. Conclusions and recommendations
This use case enumerated the steps needed to build an ML model based on social media content to predict the sales of commercial products. All results presented here are based on sentiment analysis of social media feedback.
We built a desktop app that selects the optimal ML algorithm and provides a prediction of a given product's sales with an accuracy approaching 90%. We are currently studying the effect of adding sales information from competitors to improve the model's accuracy.
AI: A Paradigm Shift in the Pharmaceutical Industry - Use Case of Cancer Detection
The current business model of the pharmaceutical industry, where a new drug may take a decade and billions of dollars to develop, is no longer viable in this digital era of big data and cloud computing. Giant IT companies such as Amazon and Google are leveraging their deep pockets and strong AI footprints to lower the entry barrier to this vital sector and render classical models of drug discovery and development obsolete.
AI, particularly its Deep Learning subfield, can empower translational pharma research at each phase of drug discovery and development, from the initial candidate selection phase, with its aim of drug and target selection, up until phase III post-launch, with its aim of life-cycle management. Each phase in the drug discovery chart can be accelerated by developing and deploying accurate predictive models trained on relevant historical data. For example, modeling diseased human cells by varying the levels of sugar and oxygen the cells are exposed to, and then tracking their lipid, metabolite, enzyme, and protein profiles, is an area where AI and cloud computing can add value and save both time and money. Several pharmaceutical companies, including Novartis and AstraZeneca, have demonstrated impressive results in drug discovery and development by embracing AI over the last five years.
In the spirit of showing the benefits of AI and data analytics in pharmaceutical research, we present here the results of using a specific class of AI to detect ovarian cancer.
Data collection and formatting
The data used in this study is courtesy of the Food and Drug Administration-National Cancer Institute Clinical Proteomics Program Databank. It consists of mass spectrometry signatures of the protein profiles of 216 patients: 121 patients with ovarian cancer and 95 cancer-free persons used as the control group in this study. Signature extraction and identification are performed using serum proteomic pattern diagnostics, where proteomic signatures from high-dimensional mass spectrometry data are used as a diagnostic classifier. Profile patterns are generated using surface-enhanced laser desorption and ionization (SELDI) protein mass spectrometry. The objective is to build a classifier that assigns patients to one of two classes (i.e., cancer and cancer-free) based on a limited number of features selected from the SELDI data of the studied samples.
The raw data is pre-processed and placed in a 216-by-15,000 matrix. The 216 rows represent the patients, of whom 121 are ovarian cancer patients and 95 are normal (i.e., cancer-free). The 15,000 columns represent the mass-charge values M/Z, where M stands for mass and Z for the charge number of ions; M/Z (or simply |MZ|) is mass divided by charge number, and the horizontal axis of a mass spectrum is expressed in units of m/z. Each row of the data matrix therefore holds one patient's ion intensity levels at each of the 15,000 mass-charge values.
Another 2-by-216 index matrix holds the index information that associates each data sample with its class of patients. For instance, the first 121 elements of the first row of this matrix have the index value "1", indicating association with cancer patients, whereas the remaining 95 elements of this row are set to zero, indicating association with cancer-free patients. To reduce dimensionality, only the 100 highest mass-charge values are retained as features, so the reduced dataset considered in this study is a 216-by-100 matrix: each row represents one of the 216 patients, and each column represents the ion intensity level at one of the 100 selected mass-charge values. A 3-D representation of this dataset is shown below in Figure 1.
Figure 1. Ion intensity levels at the 100 highest mass-charge values of the 216 patients
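The reduction from 15,000 columns to 100 can be sketched as column selection on the intensity matrix. The ranking rule used below (largest mean ion intensity across patients) is an illustrative assumption, since the text does not spell out the exact criterion, and the tiny 4-by-6 matrix stands in for the real 216-by-15,000 one.

```python
# Sketch of the dimensionality-reduction step: keep only the k columns of a
# patients-by-features intensity matrix judged most informative. The
# "largest mean intensity" criterion here is an assumption for illustration.

def select_top_features(matrix, k):
    n_rows, n_cols = len(matrix), len(matrix[0])
    # Rank columns by mean ion intensity across all patients.
    col_means = [sum(row[j] for row in matrix) / n_rows for j in range(n_cols)]
    top_cols = sorted(range(n_cols), key=lambda j: col_means[j], reverse=True)[:k]
    top_cols.sort()  # preserve the original column (M/Z) order
    return [[row[j] for j in top_cols] for row in matrix], top_cols

# Tiny stand-in for the 216 x 15,000 matrix: 4 patients x 6 M/Z bins.
data = [
    [0.1, 0.9, 0.2, 0.8, 0.1, 0.3],
    [0.2, 0.8, 0.1, 0.9, 0.2, 0.2],
    [0.1, 0.7, 0.3, 0.7, 0.1, 0.4],
    [0.3, 0.9, 0.2, 0.8, 0.3, 0.3],
]
reduced, kept = select_top_features(data, 2)
print(kept)        # column indices kept
print(reduced[0])  # first patient restricted to those columns
```

With k = 100 on the full matrix, this yields exactly the 216-by-100 feature matrix described above.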
Classification Using a Feed Forward Neural Network
Various clustering and classification techniques were tested. We present in this section the results of classification using a Feed Forward Neural Network (FFNN), an important Machine Learning technique widely used in classification problems. The set of features identified in the previous section (i.e., the 100 highest mass-charge values) will be used to classify cancer and normal samples.
A 1-hidden-layer feed forward neural network with 100 input neurons, 8 hidden neurons, and 2 output neurons is created and trained to classify the data samples. Figure 2 shows the FFNN structure used in this classification study.
Figure 2. Feed Forward Neural Networks architecture used for classification
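A single forward pass through this 100-8-2 architecture can be sketched as follows. The sigmoid hidden activation and softmax output are common choices assumed here for illustration, and the weights are random; in the study they are learned during training.

```python
import math
import random

# Sketch of one forward pass through the 100-8-2 FFNN of Figure 2.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: 8 sigmoid neurons over the 100 inputs.
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w_row, x)) + b)
              for w_row, b in zip(w_hidden, b_hidden)]
    # Output layer: 2 linear neurons over the hidden activations.
    logits = [sum(wi * hi for wi, hi in zip(w_row, hidden)) + b
              for w_row, b in zip(w_out, b_out)]
    # Softmax turns the two logits into class probabilities
    # (class 1 = cancer, class 2 = cancer-free).
    exps = [math.exp(z - max(logits)) for z in logits]
    return [e / sum(exps) for e in exps]

random.seed(0)
n_in, n_hid, n_out = 100, 8, 2
w_hidden = [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_hid)]
b_hidden = [0.0] * n_hid
w_out = [[random.gauss(0, 0.1) for _ in range(n_hid)] for _ in range(n_out)]
b_out = [0.0] * n_out

sample = [random.random() for _ in range(n_in)]  # stand-in for 100 intensities
probs = forward(sample, w_hidden, b_hidden, w_out, b_out)
print(probs)  # two class probabilities summing to 1
```

The predicted class is simply the output neuron with the larger probability.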
The input and target samples are automatically divided into training, validation, and test sets. The training set is used to train the FFNN, and training continues as long as the network's performance keeps improving.
The data is distributed over training, validation, and test sets with 152 samples (70% of the entire set of 216), 32 samples (15%), and 32 samples (15%), respectively. The network's performance on the test set gives an estimate of how well it will perform on data from the real world. Figure 3 shows how the network's performance improved during training using the well-known Scaled Conjugate Gradient (SCG) algorithm. Training performance is improved by minimizing the cross-entropy loss function, shown on a logarithmic scale; it decreased rapidly as the network was trained.
Figure 3. Training performance of the FFNN of Figure 2. Note that at training epoch 11, validation error was minimal; optimal network parameters are identified at such a training epoch
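The 70/15/15 partition and the loss being minimized can both be sketched briefly. The shuffling seed and the example probabilities are illustrative; the split sizes match those quoted above for 216 samples.

```python
import math
import random

# Sketch of the random 70/15/15 data split and of the cross-entropy loss
# minimized during training.

def split_indices(n, val=0.15, test=0.15, seed=0):
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_val, n_test = round(n * val), round(n * test)
    n_train = n - n_val - n_test  # remainder (~70%) goes to training
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

def cross_entropy(p, target):
    """p: predicted class probabilities; target: one-hot label."""
    return -sum(t * math.log(max(pi, 1e-12)) for pi, t in zip(p, target))

tr, va, te = split_indices(216)
print(len(tr), len(va), len(te))  # 152 32 32, as in the study

# A confident, correct prediction yields a low loss; training drives the
# average of this quantity down, as seen in Figure 3.
print(round(cross_entropy([0.9, 0.1], [1, 0]), 4))
```

Early stopping on the validation set, as noted in the Figure 3 caption, picks the epoch where validation loss is minimal.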
The trained neural network can now be tested with the testing samples that were partitioned from the main dataset. The testing data was excluded from training and hence provides an "unseen" dataset on which to evaluate the network. One measure of how well the FFNN performs is the confusion plot, also known as the error matrix, which visualizes the classification accuracy as shown in Figure 4. Each row of the matrix represents the instances of a predicted class, while each column represents the instances of an actual class. The confusion matrix shows the percentages of correct and incorrect classifications: correct classifications are the green squares on the matrix diagonal, and red squares represent incorrect classifications. Class 1 indicates cancer patients and class 2 indicates cancer-free patients.
Figure 4. Confusion matrix showing the proposed FFNN classification performance on unseen data, with an accuracy exceeding 96%
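The computation behind a confusion matrix like Figure 4 is short enough to sketch. The eight labels below are a made-up example (class index 0 standing for cancer, 1 for cancer-free); the layout matches the text: rows are predicted classes, columns are actual classes, and accuracy is the on-diagonal share.

```python
# Sketch of the confusion-matrix and accuracy computation of Figure 4.

def confusion_matrix(predicted, actual, n_classes=2):
    m = [[0] * n_classes for _ in range(n_classes)]
    for p, a in zip(predicted, actual):
        m[p][a] += 1  # row = predicted class, column = actual class
    return m

# Illustrative labels: index 0 = cancer, index 1 = cancer-free.
actual    = [0, 0, 0, 1, 1, 1, 1, 0]
predicted = [0, 0, 1, 1, 1, 1, 1, 0]

cm = confusion_matrix(predicted, actual)
correct = sum(cm[i][i] for i in range(len(cm)))
accuracy = correct / len(actual)
print(cm)        # [[3, 0], [1, 4]]
print(accuracy)  # 0.875
```

The off-diagonal cell cm[1][0] above is the one cancer sample misclassified as cancer-free, the kind of error the red squares of Figure 4 display.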
Figure 5 shows another way of measuring the FFNN's performance: an error histogram across the three datasets (i.e., training, validation, and test). As can be seen, most instances resulted in very small errors for all three datasets.
Figure 5. FFNN performance on the three datasets (i.e., training, validation, and test). Most instances resulted in small errors, showing accurate classification.
In this study, based on the ion intensity levels of 216 individuals, including 121 cancer patients and a 95-person cancer-free control group, a simple Feed Forward Neural Network classifier showed excellent classification results approaching 97% accuracy. This use case is just one example of the promise of Artificial Intelligence in pharma R&D, including drug discovery and drug development. Chronic diseases such as Alzheimer's, diabetes, and cancer are expected to benefit from this new research paradigm in pharmaceutical companies, built around AI and Cloud computing.
YaiGlobal is excited to have its mission set on the promises and challenges of this structural transformation that is touching almost every field of the economy. With its resolute commitment to developing and deploying AI and Cloud computing to address real, complex issues, YaiGlobal looks forward to being an active part of this paradigm shift of digital transformation.
Alex Zhavoronkov, "Deep Dive Into Big Pharma AI Productivity: One Study Shaking The Pharmaceutical Industry", Forbes, July 2020. Retrieved from https://www.forbes.com/sites/alexzhavoronkov/2020/07/15/deep-dive-into-big-pharma-ai-productivity-one-study-shaking-the-pharmaceutical-industry/#b3cda10567d7
T.P. Conrads, et al., "High-resolution serum proteomic features for ovarian cancer detection", Endocrine-Related Cancer, 11, 2004, pp. 163-178.
E.F. Petricoin, et al., "Use of proteomic patterns in serum to identify ovarian cancer", Lancet, 359(9306), 2002, pp. 572-577.