import warnings
warnings.filterwarnings("ignore")
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_curve, auc, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
np.random.seed(0)
Predicting viewer engagement with educational videos
Note: data from the Coursera course Applied Machine Learning in Python
About the prediction problem
With the accelerating popularity of online educational experiences, the role of online lectures and other educational videos continues to grow in scope and importance. Open-access educational repositories such as videolectures.net, as well as Massive Open Online Courses (MOOCs) on platforms like Coursera, have made many thousands of lectures and tutorials accessible to millions of people around the world. Yet this impressive volume of content has also created a challenge: how to find, filter, and match these videos with learners.
One critical property of a video is engagement: how interesting or “engaging” it is for viewers, such that they decide to keep watching. Engagement is critical for learning, whether the instruction comes from a video or any other source. There are many ways to define engagement with video, but one common approach is to estimate it by measuring how much of the video a user watches. If the video is not interesting and does not engage a viewer, they will typically abandon it quickly, e.g. watching only 5 or 10% of the total.
A first step towards providing the best-matching educational content is to understand which features of educational material make it engaging for learners in general. This is where predictive modeling can be applied, via supervised machine learning. Here the task is to predict how engaging an educational video is likely to be for viewers, based on a set of features extracted from the video’s transcript, audio track, hosting site, and other sources.
About the dataset
We extracted training and test datasets of educational video features from the VLE Dataset put together by researcher Sahan Bulathwela at University College London.
Two data files are provided: train.csv and test.csv. Each row in these two files corresponds to a single educational video and includes information about diverse properties of the video content, as described further below. The target variable is engagement, which was defined as True if the median percentage of the video watched across all viewers was at least 30%, and False otherwise.
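To make the labeling rule concrete, here is a minimal sketch using made-up per-viewer watch fractions (the per-viewer data itself is not part of this dataset):

import numpy as np

watch_fractions = np.array([0.05, 0.40, 0.35, 0.80, 0.10])  # hypothetical fraction watched per viewer
engagement = np.median(watch_fractions) >= 0.30  # median is 0.35, so the label is True
print(engagement)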
File descriptions
- train.csv - the training set
- test.csv - the test set
Data fields
train.csv & test.csv:
- title_word_count - the number of words in the title of the video.
- document_entropy - a score indicating how varied the topics covered in the video are, based on the transcript. Videos with smaller entropy scores tend to be more cohesive and focused on a single topic (a rough sketch of this kind of feature follows the field list).
- freshness - the number of days elapsed between 01/01/1970 and the lecture's publication date. More recent videos have higher freshness values.
- easiness - a text difficulty measure applied to the transcript. A lower score indicates more complex language used by the presenter.
- fraction_stopword_presence - a stopword is a very common word like ‘the’ or ‘and’. This feature is the fraction of all words in the video lecture transcript that are stopwords.
- speaker_speed - the average speaking rate of the presenter, in words per minute.
- silent_period_rate - the fraction of time in the lecture video that is silence (no speaking).
train.csv only:
- engagement - Target label for training. True if learners watched a substantial portion of the video (see description), or False otherwise.
More details on the original VLE dataset, and on other datasets related to video engagement, are available here.
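As a rough illustration of how transcript-based features like document_entropy and fraction_stopword_presence can be computed (the VLE authors' exact method is not documented here, so this is only a sketch of the idea):

from collections import Counter
import numpy as np

transcript = "the cat sat on the mat and the dog sat too"  # hypothetical transcript
words = transcript.split()

# Shannon entropy of the word distribution: higher means more varied content
counts = np.array(list(Counter(words).values()), dtype=float)
p = counts / counts.sum()
entropy = -(p * np.log2(p)).sum()

# fraction of words that appear in an assumed stopword list
stopwords = {'the', 'and', 'on', 'too'}
stopword_fraction = sum(w in stopwords for w in words) / len(words)
print(entropy, stopword_fraction)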
Load dataset
df_train = pd.read_csv('data/train.csv')
df_test = pd.read_csv('data/test.csv')
df_train['engagement'] = df_train['engagement'].astype(int)
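Before modeling, it is worth checking the dataset sizes and the class balance, since an imbalanced target affects how metrics like AUPRC should be read later (a quick check, assuming the files loaded as above):

print(df_train.shape, df_test.shape)
print(df_train['engagement'].value_counts(normalize=True))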
Exploratory Data Analysis
Feature Distribution
import seaborn as sns
fig, subaxes = plt.subplots(3, 3, figsize=(10, 10))
i = 1
for row in subaxes:
    for this_axis in row:
        sns.histplot(df_train.iloc[:, i], ax=this_axis)
        this_axis.set_title('{}'.format(df_train.columns[i]))
        i += 1
plt.tight_layout()
plt.show()
Feature Correlation
From the heatmap below we can observe that there are no large correlations between the variables, except between easiness and normalization_rate.
df_corr = df_train.iloc[:, 1:].corr()
plt.figure(figsize=(8, 8))
sns.heatmap(df_corr, cbar=True, square=True, annot=True, fmt='.2f', annot_kws={'size': 7},
            xticklabels=df_corr.columns,
            yticklabels=df_corr.columns,
            cmap=sns.diverging_palette(120, 10, as_cmap=True))
plt.show()
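The same observation can be verified programmatically; this small check (using the df_corr computed above) lists the strongest off-diagonal correlations:

# mask the diagonal, then rank the absolute pairwise correlations
corr_pairs = df_corr.where(~np.eye(len(df_corr), dtype=bool)).abs().unstack().dropna()
print(corr_pairs.sort_values(ascending=False).head(4))  # each pair appears twice (symmetry)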
Feature Selection
According to the univariate ANOVA F-scores computed below, the most important features are document_entropy, freshness and easiness.
from sklearn.feature_selection import SelectKBest, f_classif

X = df_train.iloc[:, 1:-1]
y = df_train.iloc[:, -1]  # engagement

selector = SelectKBest(f_classif, k='all')
selector.fit(X, y)

# get the ANOVA F-score for each feature
scores = selector.scores_

feature_scores = pd.DataFrame({'Feature': X.columns, 'Score': scores})
total = feature_scores['Score'].sum()
feature_scores['Score'] = feature_scores['Score'] / total
feature_scores.sort_values('Score', ascending=False, inplace=True)

plt.figure(figsize=(6, 3))
sns.barplot(x='Score', y='Feature', data=feature_scores)
plt.xlabel('Score')
plt.ylabel('Features')
plt.title('Feature importance (normalized)')
plt.show()
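For reference, f_classif computes a one-way ANOVA F-test per feature, comparing its values across the two classes. The score for a single feature can be reproduced with scipy as a sanity check (optional, not required by the pipeline):

from scipy.stats import f_oneway

f_stat, p_val = f_oneway(X.loc[y == 0, 'document_entropy'],
                         X.loc[y == 1, 'document_entropy'])
print(f_stat, p_val)  # f_stat matches selector.scores_ for this feature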
Random Forest
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

top_features = ['document_entropy', 'freshness', 'easiness']
X_train, y_train = df_train[top_features], df_train['engagement'].astype(int)
X_test = df_test[top_features]

param_grid = {
    'n_estimators': [50, 100, 200],
    'max_depth': [10, 20, 30],
    'min_samples_split': [2, 5, 10],
    'min_samples_leaf': [1, 2, 4],
    'max_features': ['sqrt', 'log2']
}

rfc = RandomForestClassifier(random_state=0)
grid_search = GridSearchCV(rfc, param_grid, cv=5, scoring='roc_auc', n_jobs=-1)
grid_search.fit(X_train, y_train)
print(grid_search.best_params_)
print(grid_search.best_score_)
{'max_depth': 10, 'max_features': 'sqrt', 'min_samples_leaf': 1, 'min_samples_split': 5, 'n_estimators': 100}
0.8750181867018478
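Beyond the single best configuration, the full grid can be inspected as a table, which helps judge how sensitive the score is to each hyperparameter (a small optional check using grid_search from above):

cv_results = pd.DataFrame(grid_search.cv_results_)
print(cv_results.sort_values('rank_test_score')[['params', 'mean_test_score', 'std_test_score']].head())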
Gradient Boosting
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    'n_estimators': [50, 100, 200],
    'max_depth': [10, 20, 30],
    'min_samples_split': [2, 5, 10],
    'min_samples_leaf': [1, 2, 4],
    'max_features': ['sqrt', 'log2']
}

gbc = GradientBoostingClassifier(random_state=0)
grid_search = GridSearchCV(gbc, param_grid, cv=5, scoring='roc_auc', n_jobs=-1)
grid_search.fit(X_train, y_train)
print(grid_search.best_params_)
print(grid_search.best_score_)
{'max_depth': 10, 'max_features': 'sqrt', 'min_samples_leaf': 2, 'min_samples_split': 10, 'n_estimators': 50}
0.8646361089042813
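One caveat: the grid above reuses the random-forest hyperparameters and leaves learning_rate at its default of 0.1, even though for gradient boosting it typically interacts strongly with n_estimators. A possible extension (a sketch, not run here) would be:

# extend the existing grid with a learning-rate axis (values are illustrative)
param_grid_lr = dict(param_grid, learning_rate=[0.05, 0.1, 0.2])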
Gaussian Naive Bayes
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import GridSearchCV

param_grid = {'var_smoothing': [1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5]}

gnb = GaussianNB()
grid_search = GridSearchCV(gnb, param_grid=param_grid, cv=5, n_jobs=-1, scoring='roc_auc')
grid_search.fit(X_train, y_train)
print(grid_search.best_score_)
print(grid_search.best_params_)
0.8271618585290608
{'var_smoothing': 1e-10}
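For reference, var_smoothing adds a portion of the largest feature variance to every feature's variance for numerical calculation stability; the smallest value in the grid winning suggests the features need essentially no extra smoothing.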
Explore model performance
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clfs = ['GaussianNB', 'GradientBoostingClassifier', 'RandomForestClassifier']

fig, subaxes = plt.subplots(1, 3, figsize=(12, 4))
for clf, this_axis in zip(clfs, subaxes):
    nbclf = eval(clf)().fit(X_train, y_train)  # instantiate the classifier by name
    y_probabilities = nbclf.predict_proba(X_test)
    fpr_lr, tpr_lr, _ = roc_curve(y_test, y_probabilities[:, 1])
    roc_auc_lr = auc(fpr_lr, tpr_lr)
    this_axis.plot(fpr_lr, tpr_lr, lw=3, label='{} ROC curve (area = {:0.2f})'.format(clf, roc_auc_lr))
    this_axis.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--')
    this_axis.legend(loc='lower right', fontsize=7)
    this_axis.set_xlabel('False Positive Rate', fontsize=8)
    this_axis.set_ylabel('True Positive Rate', fontsize=8)
    this_axis.set_title('ROC curve {}'.format(clf), fontsize=8)

plt.tight_layout()
plt.show()
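A side note on style: eval() works here because the class names are imported, but an explicit mapping from names to classes is safer and easier to extend (a sketch of the alternative):

models = {'GaussianNB': GaussianNB,
          'GradientBoostingClassifier': GradientBoostingClassifier,
          'RandomForestClassifier': RandomForestClassifier}
# e.g. models['GaussianNB']().fit(X_train, y_train) replaces eval(clf)()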
for clf in clfs:
    nbclf = eval(clf)().fit(X_train, y_train)
    y_probabilities = nbclf.predict_proba(X_test)
    fpr_lr, tpr_lr, _ = roc_curve(y_test, y_probabilities[:, 1])
    roc_auc_lr = auc(fpr_lr, tpr_lr)
    plt.plot(fpr_lr, tpr_lr, lw=3, label='{} ROC curve (area = {:0.2f})'.format(clf, roc_auc_lr))
plt.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--')
plt.legend(loc='lower right', fontsize=7)
plt.xlabel('False Positive Rate', fontsize=8)
plt.ylabel('True Positive Rate', fontsize=8)
plt.show()
Performance evaluation of the best models
Since labels for the 2309 test set videos are not provided, I evaluate the models' performance on a held-out validation set.
top_features = ['document_entropy', 'freshness', 'easiness']

X = df_train[top_features]
y = df_train['engagement'].astype(int)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

## results from grid search
# {'max_depth': 10, 'max_features': 'sqrt', 'min_samples_leaf': 1, 'min_samples_split': 5, 'n_estimators': 100}
clf_rf = RandomForestClassifier(max_depth=10, max_features='sqrt', min_samples_leaf=1,
                                min_samples_split=5, n_estimators=100, n_jobs=-1,
                                random_state=0)

## results from grid search
# {'max_depth': 30, 'max_features': 'sqrt', 'min_samples_leaf': 2, 'min_samples_split': 5, 'n_estimators': 200}
clf_gb = GradientBoostingClassifier(max_depth=30, max_features='sqrt', min_samples_leaf=2,
                                    min_samples_split=5, n_estimators=200, random_state=0)

clf_rf.fit(X_train, y_train)
y_pred = clf_rf.predict_proba(X_val)[:, 1]
roc_auc = roc_auc_score(y_val, y_pred)
print(f"Random Forest AUROC: {roc_auc:.3f}")

clf_gb.fit(X_train, y_train)
y_pred = clf_gb.predict_proba(X_val)[:, 1]
roc_auc = roc_auc_score(y_val, y_pred)
print(f"Gradient Boost AUROC: {roc_auc:.3f}")
Random Forest AUROC: 0.854
Gradient Boost AUROC: 0.837
Cross-validated performance
metrics = ['roc_auc', 'average_precision', 'balanced_accuracy']
perfname = ['AUROC', 'AUPRC', 'Balanced Accuracy']

clfs = [clf_rf, clf_gb]
clfsname = ['Random Forest', 'Gradient Boost']

for clf, clfname in zip(clfs, clfsname):
    for perf, name in zip(metrics, perfname):
        scores = cross_val_score(clf, X, y, cv=5, scoring=perf)
        print(f"{clfname} - Averaged {name}: {scores.mean():.3f}")
Random Forest - Averaged AUROC: 0.875
Random Forest - Averaged AUPRC: 0.610
Random Forest - Averaged Balanced Accuracy: 0.699
Gradient Boost - Averaged AUROC: 0.849
Gradient Boost - Averaged AUPRC: 0.562
Gradient Boost - Averaged Balanced Accuracy: 0.712
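The nested loop refits each model once per metric; cross_validate can score all three metrics on a single set of fits. A sketch of the equivalent call for the random forest:

from sklearn.model_selection import cross_validate

cv = cross_validate(clf_rf, X, y, cv=5, scoring=metrics)
for perf, name in zip(metrics, perfname):
    print(f"Random Forest - Averaged {name}: {cv['test_' + perf].mean():.3f}")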
Prediction on the 2309 test set videos
top_features = ['document_entropy', 'freshness', 'easiness']

X_train, y_train = df_train[top_features], df_train.iloc[:, -1].astype(int)
X_test = df_test[top_features]

## results from grid search
# {'max_depth': 10, 'max_features': 'sqrt', 'min_samples_leaf': 1, 'min_samples_split': 5, 'n_estimators': 100}
clf = RandomForestClassifier(max_depth=10, max_features='sqrt', min_samples_leaf=1,
                             min_samples_split=5, n_estimators=100, n_jobs=-1,
                             random_state=0)

## results from grid search
# {'max_depth': 30, 'max_features': 'sqrt', 'min_samples_leaf': 2, 'min_samples_split': 5, 'n_estimators': 200}
# clf = GradientBoostingClassifier(max_depth=30, max_features='sqrt', min_samples_leaf=2,
#                                  min_samples_split=5, n_estimators=200, random_state=0)

clf.fit(X_train, y_train)
y_pred = clf.predict_proba(X_test)

indexes = df_test['id'].values
probabilities = y_pred[:, 1]

pred = pd.Series(probabilities, index=indexes)
pred
9240 0.010444
9241 0.031429
9242 0.066196
9243 0.751450
9244 0.012729
...
11544 0.018296
11545 0.023312
11546 0.015856
11547 0.863286
11548 0.053050
Length: 2309, dtype: float64
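If these probabilities needed to be exported, e.g. as a submission file, the Series could be written out directly (a sketch; the file name and expected format are assumptions):

pred.rename('engagement').to_csv('submission.csv', index_label='id')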