Modern web platforms and CMSs follow a general trend of letting users attach collections of *tags*, i.e. attributes describing the content (a corpus, a dataset, a collection…) that can take multiple values. This is not limited to web data (think about surveys where respondents can answer with multiple values). So multiple tag collections usually decorate even discrete datasets, and the values are free-form by definition. At the end of the day, we end up with thousands of different tags, most of them with very low frequency and almost useless for explaining the data and the relations among observations. My idea is to classify the tags into *topics*, both to better understand the data and for dimensionality reduction, thanks to topic modeling solutions like LDA with Gibbs sampling.

For this tutorial, I’m going to use the BoardGameGeek dataset, a collection that describes more than 94,000 board games and their expansions, spanning the whole of human history. The dataset has several columns which host various kinds of *tags*: categories, mechanics, implementations, families, designers and so on. Multiple tags can be added for a given property, and they are encoded in the table in comma-separated form. At the end of the day, we have more than 5,000 distinct tags in the data, most of them with very low frequency (and explaining very little variance).

This tutorial makes extensive use of the tidyverse collection, so a basic understanding of its programming patterns is recommended. Here’s how to load the data and all the needed dependencies:

```r
# install.packages("devtools")
devtools::install_github("9thcirclegames/bgg-analysis")
require("bggAnalysis")

data(BoardGames)
```

### Preprocessing the data

Before diving into the topic modeling stage, the data need some sanitation first. To start, I removed game expansions from the database, as they almost always share the same tags as the base game: they would not add variance to the data, while hugely increasing the computation time. Then I decided to remove all games that have received fewer than 5 ratings from the community. This helps wipe out unpublished games, vaporware and, in general, all those forgotten games that nobody has tagged yet or that have been tagged poorly.

```r
bgg.useful <- BoardGames %>%
  bgg.prepare.data() %>%
  filter(stats.usersrated >= 5, game.type == "boardgame") %>%
  mutate(stats.average.factor = arules::discretize(stats.average,
                                                   method = "frequency",
                                                   categories = 5,
                                                   ordered = TRUE))

rownames(bgg.useful) <- make.names(bgg.useful$details.name, unique = TRUE)
```

Since tag columns can host more than one tag (comma-separated), I created a set of dummy variables from **all the tags** I found in the columns **category**, **mechanic**, **family** and **implementation**. Other tag collections exist in the data, for example the column **expansions**, but they are so sparse and the quality of their tags so low (in terms of frequency distribution) that they would add more noise than value, so I decided to ignore them. This is done with the handy **cSplit_e** function from the splitstackshape package.

```r
bgg.dummy <- cSplit_e(bgg.useful, "attributes.boardgamecategory", type = "character", fill = 0, drop = TRUE)
bgg.dummy <- cSplit_e(bgg.dummy, "attributes.boardgamemechanic", type = "character", fill = 0, drop = TRUE)
bgg.dummy <- cSplit_e(bgg.dummy, "attributes.boardgamefamily", type = "character", fill = 0, drop = TRUE)
bgg.dummy <- cSplit_e(bgg.dummy, "attributes.boardgameimplementation", type = "character", fill = 0, drop = TRUE)

colnames(bgg.dummy) <- gsub(" ", "", colnames(bgg.dummy))
colnames(bgg.dummy) <- gsub("/", "-", colnames(bgg.dummy))
colnames(bgg.dummy) <- gsub("_", ".", colnames(bgg.dummy))
colnames(bgg.dummy) <- make.names(colnames(bgg.dummy))

bgg.dummy.cat <- bgg.dummy %>%
  select(matches("attributes.boardgame(category|mechanic|family|implementation)."))
```

I ended up with a quite impressive set of **5841** dummy variables, very sparsely distributed. And here the problems start to arise…
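To get a feel for how sparse this matrix really is, its fill rate can be measured directly. A minimal sketch, assuming the `bgg.dummy.cat` data frame built in the previous step:

```r
# Share of zero cells in the dummy matrix (every cell is 0/1,
# so the mean of the matrix is the fill rate)
dummy.mat <- as.matrix(bgg.dummy.cat)
sparsity  <- 1 - mean(dummy.mat)
sparsity   # very close to 1, i.e. almost all cells are zero
```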

### Why topic modeling?

*(Figure: relative frequency distribution of the **game category** tags.)*

Looking at the relative frequencies, we can see that, excluding the top one or two elements, the tags distribute themselves along a very long tail, with most of them (outside the Top 20) sitting in less than 1% of games. If you put tags in columns and games in rows, you end up with a very sparse document-term matrix where the “documents” are the games and the “terms” are the tags themselves.
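The long-tail claim can be checked directly on the dummy matrix. A quick sketch, again assuming `bgg.dummy.cat` from the preprocessing step:

```r
# Relative frequency of each tag across games, sorted descending
tag.freq <- sort(colSums(bgg.dummy.cat) / nrow(bgg.dummy.cat), decreasing = TRUE)

head(tag.freq, 20)              # the short head
mean(tag.freq[-(1:20)] < 0.01)  # share of the remaining tags present in <1% of games
```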

The problem is that with such a sparse matrix, typical dimensionality reduction techniques simply will not work. I trained several MCA models and ended up with the primary dimension explaining only ~2% of the total variance, the secondary dimension adding ~0.005%, and so on. No clear cutoff, thus no meaningful dimensionality reduction.

But we must consider that tags are filled in by users, so synonyms are really common. Thus, it’s natural to expect that relations between at least some of them exist. As I said, this data shape really resembles the document-term matrix you would get while working on a corpus of text documents. So it’s reasonable to try models usually applied to text mining problems, like **topic modeling**.

The point is: does a topic model exist for a very sparse document-term matrix? Yes, it does!

### Topic modeling: Build the model with LDA and Gibbs sampling

If you read the aforementioned paper, you’ll see that **Latent Dirichlet Allocation** is the best model so far for classifying short and sparse text with hidden topics (i.e. without knowing the topics in advance). LDA requires you to specify the number of topics… but what is the *optimal* number of topics for a given dataset?

To answer this question, I could compute a robust battery of indices, but it would take **days**, and I’m not sure they would work in this case, as they tend to overestimate the number of topics in some circumstances. Since I need minimization more than accuracy (remember, I’m pursuing dimensionality reduction), I decided not to follow this path.

I think it’s much easier to use this function by trinker, which basically performs a trial-and-error simulation of various LDA models and selects the number of topics that maximizes the harmonic mean of the log-likelihood. After executing the code chunk below on my data I got 37 as the optimal number of topics, but thanks to the plot I could see that **25** was very close. Thus, I decided to keep this value instead.

```r
devtools::source_url("https://raw.githubusercontent.com/trinker/topicmodels_learning/master/functions/optimal_k.R")

k.control <- list(burnin = 500, iter = 1000, keep = 100)

# Only games with at least one tag can be used for LDA
bgg.lda.valid <- bgg.dummy.cat[which(rowSums(bgg.dummy.cat != 0) >= 1), ]

# Clean the column names for better viz
colnames(bgg.lda.valid) <- gsub("attributes\\.boardgame(category|mechanic|family|implementation)\\.(.*)",
                                "\\2\\.\\1", colnames(bgg.lda.valid), perl = TRUE)

k <- optimal_k(bgg.lda.valid, 50, control = k.control)
plot(k)

# Forcing 25
k <- 25
```

Now we’re ready to train the model using **Gibbs sampling**. If you want to try this at home, you could have a nap in the meanwhile, as it takes a while (if you have more than 30 topics and a mid-size dataset, you had better go to bed instead!).

```r
# Set parameters for Gibbs sampling
gibbs.control <- list(burnin = 2000,
                      iter = 200,
                      thin = 200,
                      seed = list(2003, 5, 63, 100001, 765, 287, 899, 101, 49, 3),
                      nstart = 10,
                      best = TRUE)

# Run LDA using Gibbs sampling for topic modeling with k topics
bgg.ldaOut <- LDA(bgg.lda.valid, k = as.numeric(k), method = "Gibbs", control = gibbs.control)

# Best candidate topic column
bgg.ldaOut.main.topics <- as.matrix(topics(bgg.ldaOut))

# Top 10 tags for each topic and their beta
bgg.ldaOut.top.terms <- tidy(bgg.ldaOut, matrix = "beta") %>%
  group_by(topic) %>%
  top_n(10, beta) %>%
  ungroup() %>%
  arrange(topic, -beta)

# Top 10 most representative documents for each topic and their gamma
bgg.ldaOut.top.documents <- tidy(bgg.ldaOut, matrix = "gamma") %>%
  group_by(topic) %>%
  top_n(10, gamma) %>%
  ungroup() %>%
  arrange(topic, -gamma)
```

Here, I’ve also sliced a **Top 10 terms-topic** matrix, evaluated with the index *beta*, which represents the probability of a given term (tag) being generated in each topic. The same can be done with the index *gamma*, which gives the probability of a given document (game) being assigned to each topic. As a practical example, let’s see the top 10 tags that you will most probably meet in **topic 24**.

| pos | term | beta |
|---|---|---|
| 1 | AbstractStrategy.category | 0.346259871356088 |
| 2 | TilePlacement.mechanic | 0.166877521940046 |
| 3 | PatternBuilding.mechanic | 0.106151203206776 |
| 4 | GridMovement.mechanic | 0.0543473918312448 |
| 5 | Combinatorial.family | 0.0384999533898869 |
| 6 | AreaEnclosure.mechanic | 0.0375677511286306 |
| 7 | MensaSelect.family | 0.0162602708713428 |
| 8 | Puzzle.category | 0.0149285533552623 |
| 9 | ChessGames.family | 0.0119987748198852 |
| 10 | LooneyPyramids.family | 0.00986802679415642 |

### Visual Inspection of Results

Inspecting the results is crucial to understand whether the topics are realistic and reliable, and this must be done manually (another good reason to keep the number of topics low!). The tag and document plots, inspected alone, are probably difficult to interpret, so I’m going to build an integrated plot. Staying with topic 24:

Now it’s clear! This is the cluster of abstract strategy games which have probably been inspired by Chess! This iconic game is not in the Top 10 of documents (games), but I expect it to fall into topic 24 as well, so let’s have a look at where it is:
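One way to produce the cross-tabulation below is to pick the games carrying the Chess family tag and count their topic assignments. A sketch only: the exact dummy column name (`ChessGames.family` here) follows the cleaned-up naming scheme from earlier, so check it against `colnames(bgg.lda.valid)` first.

```r
# Games carrying the Chess family tag, and the topics they were assigned to
# (assumes bgg.lda.valid and bgg.ldaOut.main.topics from the chunks above)
chess.idx <- which(bgg.lda.valid[, "ChessGames.family"] == 1)
table(bgg.ldaOut.main.topics[chess.idx, 1])
```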

```
## chess.variants
##  1  2  3  4  7 10 13 16 18 19 24 25
##  1  1  2  1  1  1  1  1  1  1 53  1
```

It’s nice to see that, of the **65** Chess games (and variants) present in the database, **53** have been correctly assigned to topic 24. If this is not a good sign, nothing else will be! 🙂

Repeat those steps for each topic and you’ll be able to assign a label to all of them and add a *topic* column to the original dataset. Luckily enough, in this case it seems that all the topics are reliable.

```r
# After manual inspection and validation...
topic.labels <- c("Co-op Games", "SciFi Games", "Solitaires", "Corporate Strategy Games",
                  "Traditional Card Games", "Napoleonic Wargames", "Modern Wargames",
                  "Monopoly-inspired", "Dexterity Games", "Collectible Games",
                  "Party Tabletop Games", "Party Card Games", "Children's Games",
                  "Turnless Games", "Horror Miniatures", "Crowdfunded Card Games",
                  "XIX Century Wargames", "Memory Games", "Fantasy Miniatures",
                  "Classical Dice Games", "Licensed Games", "Area Control Games",
                  "Sport Simulations", "Abstract Strategy Games", "Modern Card Games")

bgg.topics <- left_join(bgg.useful %>% mutate(symbol = rownames(bgg.useful)),
                        as.data.frame(topics(bgg.ldaOut)) %>%
                          mutate(symbol = rownames(bgg.ldaOut.main.topics)),
                        by = "symbol") %>%
  select(-symbol) %>%
  dplyr::rename(topic = `topics(bgg.ldaOut)`)

bgg.topics$topic <- factor(bgg.topics$topic, labels = topic.labels)
```

### Distributions of results and further explorations

If you remember, the category distribution (the most important tag collection) presents a short-head, long-tail shape, with no tags explaining a relevant amount of variance. So let’s see the frequency plot for topics instead.
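The frequency plot can be drawn directly from the labelled dataset. A minimal ggplot2 sketch, assuming the `bgg.topics` data frame built above (games that entered the LDA with no valid tags may carry an `NA` topic, hence the filter):

```r
library(dplyr)
library(ggplot2)

# Number of games assigned to each topic, sorted for readability
bgg.topics %>%
  filter(!is.na(topic)) %>%
  count(topic) %>%
  ggplot(aes(x = reorder(topic, n), y = n)) +
  geom_col(fill = "orange") +
  coord_flip() +
  labs(x = NULL, y = "Number of games", title = "Games per topic")
```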

Aaaah…much better, isn’t it? A finite set of discrete topics all having a relevant weight in expressing variance into the data! A perfect starting point for further analysis.

Let’s say, for example, we want to study the correlation between **rating** and **weight** (difficulty). It’s natural to use topics to slice data here.
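One way to slice this relation by topic is a per-topic correlation. A sketch only: I’m assuming `stats.average` and `stats.averageweight` are the rating and weight columns in the BGG data; check the actual names with `names(bgg.topics)`.

```r
library(dplyr)

# Per-topic correlation between average rating and weight (difficulty)
bgg.topics %>%
  filter(!is.na(topic)) %>%
  group_by(topic) %>%
  summarise(cor.rating.weight = cor(stats.average, stats.averageweight,
                                    use = "complete.obs")) %>%
  arrange(desc(cor.rating.weight))
```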

This suggests to me not only that BGG users prefer difficult games (something already known, actually), but that this is especially true for **Corporate Strategy Games**. This is realistic, since this topic includes games like *Puerto Rico* and *Le Havre*, those German-style games famous for being strategically rich but very hard to play. Kinda cool, isn’t it?

### Relations between topics

While I assigned only one topic to a particular game (the one with the highest probability), the Gibbs posterior actually computes probabilities for **all** terms and **all** topics. This is logical, as documents can be made of a variable mixture of all the topics (each with a different probability). This gives me the possibility to explore the **relations** between topics based on term (tag) probabilities, and to plot them in a very impressive network graph:

```r
bgg.post <- posterior(bgg.ldaOut)

bgg.cor_mat <- cor(t(bgg.post[["terms"]]))
bgg.cor_mat[bgg.cor_mat < .001] <- 0
diag(bgg.cor_mat) <- 0

bgg.graph <- graph.adjacency(bgg.cor_mat, weighted = TRUE, mode = "lower")
bgg.graph <- delete.edges(bgg.graph, E(bgg.graph)[weight < 0.04])

E(bgg.graph)$edge.width <- E(bgg.graph)$weight * 5
V(bgg.graph)$label <- as.character(factor(V(bgg.graph), labels = topic.labels))
V(bgg.graph)$size <- colSums(bgg.post[["topics"]]) / 200

par(mar = c(0, 0, 3, 0))
set.seed(110)
plot.igraph(bgg.graph,
            edge.width = E(bgg.graph)$edge.width,
            main = "Strength Between Topics Based On Word Probabilities",
            edge.color = "orange",
            vertex.color = "orange",
            vertex.frame.color = NA,
            vertex.label.color = "grey10",
            vertex.label.cex = .8)
```

### Conclusions

In this Kernel I introduced an LDA approach to the topic modeling problem for data with several discrete, many-to-many collections of tags. Although this model is usually used in text mining, if you have enough tags you can successfully find a finite number of topics, even when the document-term matrix is very sparse. After some labeling through manual inspection of the results, the topics appear quite reliable and the model robust. Thus, topics can be used for better inspection of the data and, last but not least, for dimensionality reduction. Thanks for reading this post, and don’t forget that all the code presented here can be run, modified and discussed on Kaggle. Aloha!