\subsection{Recommending Unexpected Relevant Items}
Once the forgotten items have been identified, we need to distinguish the relevant ones from the rest. Given user taste shifts, as well as changes in the system as a whole, not all unexpected items remain relevant, and hence useful for recommendation. The key concept for identifying relevant items is the \textbf{relevance score} of each item at a given moment. We propose four strategies to define the relevance score of each unexpected item. The goal is to recommend to each user only the Top $N$ items with the highest scores.\looseness=-1

\subsubsection{Temporal-Distance Heuristic - TDH-2}
A simple strategy is to recommend items that have been unexpected for long intervals. We assume that users are more willing to re-consume items that they consumed long ago: the longer the period during which an item has been unexpected, the higher its final relevance score. We define the score of each item $i$ as the interval between the test moment $M_t$ and the most recent moment at which $u$ consumed $i$ in the training set. We then sort the items in descending order by this score and recommend the Top $N$ ones.\looseness=-1
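Denoting by $T_{u,i}$ the most recent moment at which $u$ consumed $i$ in the training set (a symbol introduced here only for illustration), this score can be written as
\[
\mathrm{score}_{\mathrm{TDH}}(u,i) = M_t - T_{u,i},
\]
so larger temporal distances yield higher relevance.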
\subsubsection{Relevance-Estimation Heuristic - REH-2}
This strategy estimates the relevance of the unexpected items with a collaborative filtering model: we use the set of identified unexpected items as input for the UserKNN method. UserKNN defines the $K$-nearest neighbors of the target user $u$ and derives for each unexpected item $i$ a score based on the mean rating assigned to it by the neighbors of $u$ in the training set. Items are then sorted in descending order by this score and the Top $N$ items are issued. We implemented our version of \textit{UserKNN} using cosine similarity, as presented in \cite{adomavicius2005tng}. This version also incorporates the sample bias regularization approach proposed in MyMediaLite, with its original parameters \cite{mymedialite}.\looseness=-1
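As an illustration (using an unweighted neighbor mean and omitting the MyMediaLite regularization), the cosine similarity between users and the resulting relevance score of an unexpected item $i$ can be written as
\[
\mathrm{sim}(u,v)=\frac{\sum_{j} r_{u,j}\, r_{v,j}}{\sqrt{\sum_{j} r_{u,j}^{2}}\,\sqrt{\sum_{j} r_{v,j}^{2}}},
\qquad
\mathrm{score}(u,i)=\frac{1}{|N_K(u,i)|}\sum_{v\in N_K(u,i)} r_{v,i},
\]
where $r_{u,j}$ is the rating of user $u$ for item $j$ and $N_K(u,i)$ denotes the $K$ nearest neighbors of $u$ (under cosine similarity) who rated $i$.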