
Hopping on the deep learning bandwagon

I’ve been meaning to get into deep learning for the last few years. Now that the stars have finally aligned, I have the time and motivation to work on a small project that will hopefully improve my understanding of the field. This is the first in a series of posts that will document my progress on this project.

As mentioned in a previous post on getting started as a data scientist, I believe that the best way of becoming proficient at solving data science problems is by getting your hands dirty. Despite being familiar with high-level terminology and having some understanding of how it all works, I don’t have any practical experience applying deep learning. The purpose of this project is to close that experience gap by working on a real problem.

The problem: Inferring genre from album covers

Deep learning has been very successful at image classification. Therefore, it makes sense to work on an image classification problem for this project. Rather than using an existing dataset, I decided to make things a bit more interesting by building my own dataset. Over the last year, I’ve been running BCRecommender – a recommendation system for Bandcamp music. I’ve noticed that album covers vary by genre, though it’s hard to quantify exactly how they vary. So the question I’ll be trying to answer with this project is how accurately can genre be inferred from Bandcamp album covers?

As the goal of this project is to learn about deep learning rather than make a novel contribution, I didn’t do a comprehensive search to see whether this problem has been addressed before. However, I did find a recent post by Alexandre Passant that describes his use of Clarifai’s API to tag the content of Spotify album covers (identifying elements such as men, night, dark, etc.), and then using these tags to infer the album’s genre. Another related project is Karayev et al.’s Recognizing image style paper, in which the authors classified datasets of images from Flickr and Wikipedia by style and art genre, respectively. In all these cases, the results are pretty good, supporting my intuition that the genre inference task is feasible.

Data collection & splits

As I’ve already been crawling Bandcamp data for BCRecommender, creating the dataset was relatively straightforward. Currently, I have data on about 1.8 million tracks and albums. Bandcamp artists assign multiple tags to each release. To create the dataset, I selected 10 of the top tags: ambient, dubstep, folk, hiphop_rap, jazz, metal, pop, punk, rock, and soul. Then, I randomly selected 10,000 album covers that have exactly one of those tags, with 1,000 albums for each tag/genre. Each cover image is 350×350 pixels. The following image shows a sample of the dataset.

Bandcamp album covers by genre

It is apparent that some genres can be inferred more easily than others, especially when browsing through the full dataset. For example, metal albums tend to be pretty distinct. I doubt that predictive accuracy would be very high, but I think that it can definitely be much better than the random baseline of 10%.

For training, validation and testing, I decided to use a static stratified 80%/10%/10% split of the dataset. It quickly became apparent that the full dataset is too big for development purposes, making it hard to quickly test code on my local machine. To address this, I created a local development dataset, using an 80%/10%/10% split of 1,000 images from the full training subset.
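For illustration, here’s a minimal sketch of how such a static stratified split can be produced with scikit-learn (the variable names and toy data are hypothetical – the actual split code is in the project repository):

    from sklearn.model_selection import train_test_split

    # Hypothetical parallel lists: image paths and their genre labels.
    paths = ['covers/%d.jpg' % i for i in range(10000)]
    genres = [i % 10 for i in range(10000)]  # 10 genres, 1,000 covers each

    # 80% train; then split the remaining 20% evenly into validation and test.
    train_p, rest_p, train_g, rest_g = train_test_split(
        paths, genres, test_size=0.2, stratify=genres, random_state=0)
    valid_p, test_p, valid_g, test_g = train_test_split(
        rest_p, rest_g, test_size=0.5, stratify=rest_g, random_state=0)

Passing the labels as the stratify argument ensures that each subset preserves the equal 10% share of each genre.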

The code for downloading the dataset and creating the splits is available from the project repository on GitHub. This repository will include all the code for the project as it evolves. I will try to keep it well-documented enough to be useful for others, though it assumes some familiarity with Python. If you experience any issues running the code or find any bugs, please let me know.

Getting started

One of the things that has stopped me from playing with deep learning in the past is the feeling that there is a bit of a steep learning curve around the tools and methods. A lot of the deep learning libraries out there don’t seem as mature as general machine learning libraries, such as scikit-learn. There are also many more parameters to play with when building deep neural networks than when using linear models or algorithms such as random forests. Further, to enable any kind of meaningful experimentation, using a GPU is essential.

Fortunately, the tools and documentation have matured a lot in recent years. Motivated by Daniel Nouri’s excellent tutorial on detecting facial keypoints with convolutional neural nets, I decided to use the Lasagne package as my starting point. My plan was simple: convert the MNIST example code to work on my dataset locally, set up an AWS machine with a GPU for full-scale experiments, and then play with various network architectures and techniques to improve accuracy and gain a deeper understanding of deep learning.

Initial environment setup

While Lasagne’s MNIST example code is pretty clear – especially once you get your head around the way Theano works – it doesn’t really lend itself to easy experimentation. I addressed this by refactoring the code in several iterations, until I got to the current state, where there’s a simple command-line interface that allows me to experiment with different datasets and architectures. This will probably change and become more complex as I start doing more sophisticated things.

To enable rapid experimentation, I had to set up an AWS machine with a GPU (a g2.2xlarge instance). I wrote some simple deployment code using Fabric, which allows me to set up a machine from scratch, install all the requirements, package the project, and copy it to the remote machine.
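To give a feel for what this looks like, here’s a minimal Fabric sketch (the host, file names, and task body are simplified assumptions – the actual scripts do more and live in the repository):

    from fabric.api import env, put, run, task

    env.hosts = ['ubuntu@gpu-instance']  # placeholder for the g2.2xlarge host

    @task
    def deploy():
        # Upload the locally-packaged project and install its requirements.
        put('project.tar.gz', '/home/ubuntu/')
        run('tar xzf /home/ubuntu/project.tar.gz')
        run('pip install --user -r project/requirements.txt')

Running fab deploy then executes these steps on the remote machine.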

Getting the code running on the CPU was trivial, but I hit several issues when running on the GPU. First, the vanilla Ubuntu 14.04 server I used didn’t come with CUDA installed. After trying and failing to get it working by following some tutorials, I ended up going down the easier path of using the AMI supplied by Caffe. This AMI has the added advantage of coming with Caffe installed (unsurprisingly), and I may end up using Caffe at some point.

The second issue I encountered was that using the GPU to run Lasagne’s enhanced example code on my full dataset was impossible due to memory constraints. The problem was that the example assumes that the entire dataset can fit in the GPU’s memory (as discussed here and here). This took a while to resolve, even though the solution is conceptually simple – just copy the dataset to the GPU in chunks rather than attempt to copy it all in one go. Resolving this issue was a good way of getting a better understanding of what the code does, since I ended up rewriting most of the original example code.
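Here’s a minimal sketch of the chunked-transfer idea, with toy sizes and hypothetical variable names (the real training loop is in the repository):

    import numpy as np
    import theano

    n_samples, n_features = 8000, 1024  # toy sizes; real covers are 3 * 350 * 350
    X_train = np.random.rand(n_samples, n_features).astype(theano.config.floatX)

    CHUNK_SIZE = 1000  # number of rows that comfortably fits in GPU memory
    X_chunk = theano.shared(X_train[:CHUNK_SIZE])  # allocated on the GPU

    for start in range(0, n_samples, CHUNK_SIZE):
        # Copy just this chunk to the GPU, overwriting the previous one.
        X_chunk.set_value(X_train[start:start + CHUNK_SIZE])
        # ...run mini-batch training steps that index into X_chunk here...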

Next steps

So far, I’ve left the network architecture from the original example mostly untouched, as I was busy collecting the dataset, getting the environment set up, and resolving various issues. One thing I did notice was that the example’s architecture diverges on my dataset, so I tested my code using a basic multi-layer perceptron architecture with a single hidden layer instead. This performs about as well as a random classifier on my dataset, but at least it converges. I also tested the modified code on the MNIST dataset and the results are decent, so now it is time to move forward and actually do some modelling, starting with convolutional neural nets.
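For reference, a single-hidden-layer MLP looks roughly like this in Lasagne (the layer sizes are illustrative assumptions, not the exact configuration I used):

    import lasagne
    from lasagne.layers import DenseLayer, InputLayer

    network = InputLayer(shape=(None, 3 * 350 * 350))  # flattened cover images
    network = DenseLayer(network, num_units=512,       # assumed hidden layer size
                         nonlinearity=lasagne.nonlinearities.rectify)
    network = DenseLayer(network, num_units=10,        # one output unit per genre
                         nonlinearity=lasagne.nonlinearities.softmax)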

The high level plan is to iteratively read tutorials/papers/books, implement ideas, play with parameters, and visualise parts of the network until I’m satisfied with the results. The main goal remains to learn as much as possible and get a good intuition of how things work. I’ll write more about my experiences in subsequent posts. Stay tuned!

Update: The second post in the series is now available.

BCRecommender Traction Update

This is the fifth part of a series of posts on my Bandcamp recommendations (BCRecommender) project.
Check out previous posts on the general motivation behind this project, the system’s architecture, the recommendation algorithms, and initial traction planning.

In a previous post, I discussed my plans to apply the Bullseye framework from the Traction Book to BCRecommender, my Bandcamp recommendations project. In that post, I reviewed the 19 traction channels described in the book, and decided to focus on the three most promising ones: blogger outreach, search engine optimisation (SEO), and content marketing. This post discusses my progress to date.

Goals

My initial traction goals were rather modest: get some feedback from real people, build up steady nonzero traffic to the site, and then increase that traffic to 10+ unique visitors per day. It’s worth noting that I have four other main areas of focus at the moment, so BCRecommender is not getting all the attention I could potentially give it. Nonetheless, I have made good progress towards my goals (the first two have been achieved, but traffic still fluctuates), and learnt a lot in the process.

Things that worked

Blogger outreach. The most obvious people to contact are existing Bandcamp fans. It was straightforward to generate a list of prolific fans with blogs, as Bandcamp allows people to populate their profile with a short bio and links to their sites. I worked my way through part of the list, sending each fan an email introducing BCRecommender and asking for their feedback. Each email required some manual work, as the vast majority of people don’t have their email address listed on their Bandcamp profile page. I was careful not to be too spammy, which seemed to work: about 50% of the people I contacted visited BCRecommender, 20% responded with positive feedback, and 10% linked to BCRecommender in some form, with the largest volume of traffic coming from my Hypebot guest post. This approach doesn’t scale, but the most valuable thing I got out of it was learning that people like the project and that there’s a real need for it.

Twitter. I’m not sure where Twitter falls as a traction channel. It’s probably somewhere between (micro)blogger outreach and content marketing. However you categorise Twitter, it has been working well as a source of traffic. Simply finding people who may be interested in BCRecommender and tweeting related content has proven to be a rather low-effort way of getting attention, which is great at this stage. I have a few ideas for driving more traffic from Twitter, which I will try as I go.

Things that didn’t work

Content marketing. I haven’t really spent time doing serious content marketing apart from the Spotlights pilot. My vision for the spotlights was to generate quality articles automatically and showcase music on Bandcamp in an engaging way that helps people discover new artists, even if they don’t have a fan account. However, full automation of the spotlight feature would require a lot of work, and I think that there is lower-hanging fruit that I should focus on first. For example, finding interesting insights in the data and presenting them in an engaging way may be a better content strategy, as it would be unique to BCRecommender. For the spotlights, partnering with bloggers to write the articles may be a better approach than automation.

SEO. I expected BCRecommender to rank higher for “bandcamp recommendations” by now, as a result of my blogger outreach efforts. At the moment, it’s still on the second page for this query on Google, though it’s the first result on Bing and DuckDuckGo. Obviously, “bandcamp recommendations” is not the only query worth ranking for, but it’s very relevant to BCRecommender, and not too competitive (half of the first page results are old forum posts). One encouraging outcome from the work done so far is that my Hypebot guest post does appear on the first page. Nonetheless, I’m still interested in getting more search engine traffic. Ranking higher would probably require adding more relevant content on the site and getting more quality links (basically what SEO is all about).

Points to improve and next steps

I could definitely do better work on all of the above channels. Contrary to what’s suggested by the Bullseye framework, I would like to put more effort into the channels that didn’t work well. The reason is that I think they didn’t work well because of lack of attention and weak experiments, rather than due to their unsuitability to BCRecommender.

As mentioned above, my main limiting factor is a lack of time to spend on the project. However, there’s no pressing need to hit certain traction milestones by a specific deadline. My stretch goals are to get all Bandcamp fans to check out the project (hundreds of thousands of people), and have a significant portion of them convert by signing up to updates (tens of thousands of people). Getting there will take time. So far I’m finding the process educational and enjoyable, which is a pleasant surprise.

Applying the Traction Book’s Bullseye framework to BCRecommender


This is the fourth part of a series of posts on my Bandcamp recommendations (BCRecommender) project.
Check out previous posts on the general motivation behind this project, the system’s architecture, and the recommendation algorithms.

Having used BCRecommender to find music I like, I’m certain that other Bandcamp fans would like it too. It could probably be extended to attract a wider audience of music lovers, but for now, just getting feedback from Bandcamp fans would be enough. There are about 200,000 fans that I know of – getting even a fraction of them to use and comment on BCRecommender would serve as a good guide to what’s worth building and improving.

In addition to getting feedback, the personal value for me in getting BCRecommender users is learning some general lessons on traction building. Like many technical people, I like building products and playing with data, but I don’t really enjoy sales and marketing (and that’s an understatement). One of my goals in working independently is forcing myself to get better at the things I’m not good at. To that end, I recently started reading Traction: A Startup Guide to Getting Customers by Gabriel Weinberg and Justin Mares.

The Traction book identifies 19 different channels for getting traction, and suggests a simple framework (named Bullseye) for ranking and quickly exploring the channels. The authors explain that many technical founders tend to focus on traction channels they’re familiar with, and that the effort invested in those channels tends to be rather small compared to the investment in building the product. They rightly note that “Almost every failed startup has a product. What failed startups don’t have is traction – real customer growth.” They argue that following a rigorous approach to gaining traction via their framework is likely to improve a startup’s chances of success. From personal experience, this is very likely to be true.

The key steps in the Bullseye framework are brainstorming ideas for each traction channel, ranking the channels into tiers, prioritising the most promising ones, testing them, and focusing on the channels that work. This is not a one-off process – channel suitability changes over time, and one needs to go through the process repeatedly as the product evolves and traction grows.

Here are the traction channels, listed in the same order as in the book. Each channel is marked with a letter denoting its ranking tier, from A (most appropriate) to C (unsuitable right now), followed by a short explanation.

  • [B] viral marketing: everyone wants to go viral, but at the moment I don’t have a good-enough understanding of my target audience to seriously pursue this channel.
  • [C] public relations (PR): I don’t think that PR would give me access to the kind of focused user group I need at this phase.
  • [C] unconventional PR: same as conventional PR.
  • [C] search engine marketing (SEM): may work, but I don’t want to spend money at this stage.
  • [C] social and display ads: see SEM.
  • [C] offline ads: see SEM.
  • [A] search engine optimisation (SEO): this channel seems promising, as ranking highly for queries such as “bandcamp recommendations” should drive quality traffic that is likely to convert (i.e., play recommendations and sign up for updates). It doesn’t seem like “bandcamp recommendations” is a very competitive query, so it’s definitely worth doing some SEO work.
  • [A] content marketing: I think that there’s definitely potential in this channel, since I have a lot of data that can be explored and presented in interesting ways. The problem is creating content that is compelling enough to attract people. I started playing with this channel via the Spotlights feature, but it’s not good enough yet.
  • [B] email marketing: BCRecommender already has the subscription feature for retention. At this stage, this doesn’t seem like a viable acquisition channel.
  • [B] engineering as marketing: this channel sounds promising, but I don’t have good ideas for it at the moment. This may change soon, as I’m currently reading this chapter.
  • [A] targeting blogs: this approach should work for getting high-quality feedback, and help SEO as well.
  • [C] business development: there may be some promising ideas in this channel, but only worth pursuing later.
  • [C] sales: not much to sell.
  • [C] affiliate programs: I’m not going to pay affiliates as I’m not making any money.
  • [B] existing platforms: in a way, I’m already building on top of the existing Bandcamp platform. One way of utilising it for growth is by getting fans to link to BCRecommender when it leads to sales (as I’ve done on my fan page), but that would be more feasible at a later stage with more active users.
  • [C] trade shows: I find it hard to think of trade shows where there are many Bandcamp fans.
  • [C] offline events: probably easier than trade shows (think concerts/indie events), but doesn’t seem worth pursuing at this stage.
  • [C] speaking engagements: similar to offline events. I do speaking engagements, and I’m actually going to mention BCRecommender as a case study at my workshop this week, but the intersection between Bandcamp fans and people interested in data science seems rather small.
  • [C] community building: this may be possible later on, when there is a core group of loyal users. However, some aspects of community building are provided by Bandcamp and I don’t want to compete with them.

Cool, writing everything up explicitly was actually helpful! The next step is to test the three channels that ranked the highest: SEO, content marketing and targeting blogs. I will report the results in future posts.

Bandcamp recommendation and discovery algorithms


This is the third part of a series of posts on my Bandcamp recommendations (BCRecommender) project.
Check out the first part for the general motivation behind this project and the second part for the system architecture.

The main goal of the BCRecommender project is to help me find music I like. This post discusses the algorithmic approaches I took towards that goal. I’ve kept the descriptions at a fairly high-level, without getting too much into the maths, as all recommendation algorithms essentially try to model simple intuition. Please leave a comment if you feel like something needs to be explained further.

Data & evaluation approach

The data was collected from publicly-indexable Bandcamp fan and track/album (aka tralbum) pages. For each fan, it consists of the tralbum IDs they bought or wishlisted. For each tralbum, the saved data includes the type (track/album), URL, title, artist name, and the tags (as assigned by the artist).

At the moment, I have data for about 160K fans, 335K albums and 170K tracks. These fans have expressed their preference for tralbums through purchasing or wishlisting about 3.4M times. There are about 210K unique tags across the 505K tralbums, with the mean number of tags per tralbum being 7. These figures represent a fairly sparse dataset, which makes recommendation somewhat challenging. Perhaps this is why Bandcamp doesn’t do much algorithmic recommendation.

Before moving on to describe the recommendation approaches I played with, it is worth noting that at this stage, my way of evaluating the recommendations isn’t very rigorous. If I can easily find new music that I like, I’m happy. As such, offline evaluation approaches (e.g., some form of cross-validation) are unlikely to correlate well with my goal, so I just didn’t bother with them. Having more data would allow me to perform more rigorous online evaluation to see what makes other people happy with the recommendations.

Personalised recommendations with preferences (collaborative filtering)

My first crack at recommendation generation was using collaborative filtering. The broad idea behind collaborative filtering is using only the preference matrix to find patterns in the data, and generate recommendations accordingly. The preference matrix is defined to have a row for each user and a column for each item. Each matrix element value indicates the level of preference by the user for the item. To keep things simple, I used unary preference values, where the element that corresponds to user/fan u and item/tralbum i is set to 1 if the fan purchased or wishlisted the tralbum, or set to missing otherwise.
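To make this concrete, here’s a toy sketch of building such a unary preference matrix with scipy (the IDs and sizes are made up):

    import numpy as np
    from scipy.sparse import csr_matrix

    # (fan index, tralbum index) pairs – one per purchase or wishlisting.
    preferences = [(0, 1), (0, 3), (1, 1), (2, 0), (2, 3)]
    n_fans, n_tralbums = 3, 4

    rows, cols = zip(*preferences)
    values = np.ones(len(preferences))  # unary: 1 = purchased or wishlisted
    P = csr_matrix((values, (rows, cols)), shape=(n_fans, n_tralbums))
    # Entries absent from the sparse matrix are treated as missing, not as zeros.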

A simple example of collaborative filtering is shown in the following image, which was taken from the Wikipedia article on the topic.

Simple collaborative filtering example

I used matrix factorisation as the collaborative filtering algorithm. This algorithm was a key part of the winning team’s solution to the Netflix competition. Unsurprisingly, it didn’t work that well. The key issue is that there are 160K * (335K + 170K) = 80.8B possible preferences in the dataset, but only 3.4M (0.004%) preferences are given. What matrix factorisation tries to do is to predict the remaining 99.996% of preferences based on the tiny percentage of given data. This just didn’t yield any music recommendations I liked, even when I made the matrix denser by dropping fans and tralbums with few preferences. Therefore, I moved on to employing an algorithm that can use more data – the tags.
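For readers who haven’t seen it before, here’s a generic SGD matrix factorisation sketch, continuing the toy example above (it illustrates the technique, not necessarily the exact algorithm I ran):

    import numpy as np

    n_factors, lr, reg, epochs = 10, 0.05, 0.01, 20
    rng = np.random.RandomState(0)
    U = rng.normal(scale=0.1, size=(n_fans, n_factors))      # fan factors
    V = rng.normal(scale=0.1, size=(n_tralbums, n_factors))  # tralbum factors

    for _ in range(epochs):
        for u, i in preferences:          # iterate over observed preferences only
            err = 1.0 - U[u].dot(V[i])    # every observed value is 1
            u_row = U[u].copy()
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * u_row - reg * V[i])

    # Score all tralbums for a fan; recommend unseen ones with the highest scores.
    scores_for_fan_0 = U[0].dot(V.T)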

Personalised recommendations with tags and preferences (collaborative filtering and content-based hybrid)

Using data about the items is referred to as content-based recommendation in the literature. In the Bandcamp recommender case, the content data that is easiest to use is the tags that artists assign to their work. The idea is to build a profile for each fan based on the tags of their tralbums, and recommend tralbums with tags that match the fan’s profile.

As mentioned above, the dataset contains 210K unique tags for 505K tralbums, which means that this representation of the dataset is also rather sparse. One obvious way of making it denser is by dropping rare tags. I also “tagged” each tralbum with a fan’s username if that fan purchased or wishlisted the tralbum. In addition to yielding a richer tralbum representation, this approach makes the recommendations likely to be less obvious than those based only on tags. For example, all tralbums tagged with rock are likely to be rock albums, but tralbums tagged with yanir are somewhat more varied.

To make the tralbum representation denser I used the latent Dirichlet allocation (LDA) implementation from the excellent gensim library. LDA assumes that there’s a fixed number of topics (distributions over tags, i.e., weighted lists of tags), and that every tralbum’s tags are generated from its topics. In practice, this magically yields clusters of tags and tralbums that can be used to generate recommendations. For example, the following word cloud presents the top tags in one cluster, which is focused on psychedelic-progressive rock. Each tralbum is assigned a probability of being generated from this cluster. This means that each tralbum is now represented as a probability distribution over a fixed number of topics – much denser than the raw tag data.

psychedelic-progressive-rock tag cloud
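Here’s a minimal gensim sketch of the idea (toy tag lists – real tralbums average about seven tags, plus the fan-username tags mentioned above):

    from gensim import corpora, models

    tralbum_tags = [
        ['psychedelic', 'progressive-rock', 'rock', 'yanir'],
        ['ambient', 'drone', 'electronic'],
        ['psychedelic', 'space-rock', 'rock'],
    ]
    dictionary = corpora.Dictionary(tralbum_tags)
    corpus = [dictionary.doc2bow(tags) for tags in tralbum_tags]
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2)

    # Each tralbum is now a distribution over topics, e.g. [(0, 0.93), (1, 0.07)].
    print(lda[corpus[0]])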

Using LDA for generating recommendations is straightforward, as each fan can be represented as the concatenation of the tags assigned to their tralbums, together with their own user tag. This representation is then converted to a topic distribution, which is compared to all the tralbums to yield the most similar ones.
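Continuing the sketch above, this is roughly how a fan profile can be matched against all tralbums (tags unknown to the model are simply ignored by doc2bow):

    from gensim import similarities

    fan_tags = ['psychedelic', 'rock', 'space-rock', 'yanir']  # hypothetical fan
    fan_topics = lda[dictionary.doc2bow(fan_tags)]

    index = similarities.MatrixSimilarity(lda[corpus], num_features=lda.num_topics)
    scores = index[fan_topics]  # cosine similarity to every tralbum
    recommendations = sorted(enumerate(scores), key=lambda x: -x[1])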

This approach yielded much better results than collaborative filtering. I actually found albums I like and made some purchases, more than just the three that are annotated on my fan page (I didn’t want to be too spammy). Woohoo!

However, the problem with this approach is that it doesn’t take my mood into account, as it is based on my entire profile. To address this, I introduced similar music and cluster-based discovery.

Beyond static personalisation: similar music and cluster-based discovery

It is easy to see that the LDA-based tralbum representation makes it straightforward to calculate similarity between tralbums, and to explore tralbums that belong to the same topic/cluster. Adding this functionality to BCRecommender means that users can explore tralbums that are similar to a given tralbum or cluster in the style they are interested in right now – based on their mood. Implementing these features helped me find more music I like, so again, I’m happy.

Tweaking the similarity algorithms is still a work in progress, as is finding a scalable way to generate useful cluster/spotlight pages. However, my focus now (in the time that I can allocate to working on this project) is on getting some people to use it and iterating on their feedback.

Future extensions

It would be awesome to make BCRecommender’s discovery process smoother. For example, it’d be fairly straightforward to stream the recommendations continuously (like Pandora, Spotify, etc.) rather than making users click album by album. Iterating on the above approaches to improve the user experience is also likely to yield good results.

However, as mentioned above, my current focus is on getting more people to use BCRecommender. While the target audience is rather small, it doesn’t matter because I’m not trying to make money from this project. I am certain that many fans would discover new music using the website. At this stage, I just need to get them to visit, which is something that I will write about in future posts.

Building a recommender system on a shoestring budget (or: BCRecommender part 2 – general system layout)


This is the second part of a series of posts on my BCRecommender – personalised Bandcamp recommendations project.
Check out the first part for the general motivation behind this project.

BCRecommender is a hobby project whose main goal is to help me find music I like on Bandcamp. Its secondary goal is to serve as a testing ground for ideas I have and things I’d like to explore.
One question I’ve been wondering about is: how much money does one need to spend on infrastructure for a simple web-based product before it reaches meaningful traffic?
The answer is: not much at all. It can easily be done for less than $1 per month.
This post discusses my exploration of this question by describing the main components of the BCRecommender system, without getting into the algorithms that drive it (which will be covered in subsequent posts).

The general flow of BCRecommender is fairly simple: crawl publicly-available data from Bandcamp (fan collections and tracks/albums = tralbums), generate recommendations based on this data (static lists of tralbums indexed by fan for personalised recommendations and by tralbum for similarity), and present the recommendations to users in a way that’s easy to browse and explore (since we’re dealing with music it must be playable, which is easy to achieve by embedding Bandcamp’s iframes).

First iteration: Django & AWS

The first iteration of the project was implemented as a Django project. Having never built a Django project from scratch, I figured this would be a good way to learn how it’s done properly. One thing I was keen on learning was using the Django ORM with an SQL database (in the past I’ve worked with Django and MongoDB). This ended up working less smoothly than I expected, perhaps because I’m too used to MongoDB, or because SQL forces you to model your data in unnatural ways, or because I insisted on using SQLite for simplicity. Whatever it was, I quickly started missing MongoDB, despite its flaws.

I chose AWS for hosting because my personal account was under the free tier, and using a micro instance is more than enough for serving a website with no traffic. I considered Google App Engine with its indefinite free tier, but after reading the docs I realised I don’t want to jump through so many hoops to use their system – Google’s free tier was likely to cost too much in pain and time.

While an AWS micro instance is enough for serving the recommendations, it’s not enough for generating them. Rather than paying Amazon for another instance, I figured that using spare capacity on my own laptop (quad-core with 16GB of RAM) would be good enough. So the backend worker for BCRecommender ended up being a local virtual machine using one core and 4GB of RAM.

After some coding I had a nice setup in place:

  • AWS webserver running Django with SQLite as the database layer and a simple frontend, styled with Bootstrap
  • Local backend worker running Celery under Supervisor to collect the data (with errors reported to a dedicated Gmail account), Dropbox for backups, and Django management commands to generate the recommendations
  • Code and issue tracker hosted on Bitbucket (which provides free private repositories)
  • Fabric scripts for deployments to the AWS webserver and the local backend worker (including database sync as one big SQLite file)
  • Local virtual machine for development (provisioned with Vagrant)

This system wasn’t going to scale, but I didn’t care. I just used it to discover new music, and it worked. I didn’t even bother registering a domain name, so it was all running for free.

Second iteration: “Django” backend & Parse

A few months ago, Facebook announced that Parse’s free tier would include 30 requests per second. That’s over 2.5 million requests per day, which is quite a lot – probably enough to run the majority of websites on the internet. It seemed too good to be true, so I had to try it myself.

It took a few hours to convert the Django webserver/frontend code to Parse. This was fairly straightforward, and it had the added advantages of getting rid of some deployment scripts and having a more solid development environment. Parse supplies a command-line tool for deployment that constantly syncs the code to an app that is identical to the production app – much better than the Fabric script I had.

The disadvantages of the move to Parse were having to rewrite some of the backend in JavaScript (= less readable than Python), and a more complex data sync command (no longer just copying a big SQLite file). However, I would definitely use it for other projects because of the generous free tier, the availability of APIs for all major platforms, and the elimination of most operational concerns.

Current iteration: Goodbye Django, hello BCRecommender

With the Django webserver out of the way, there was little use left for Django in the project. It took a few more hours to get rid of it, replacing the management commands with Commandr, and the SQLite database with MongoDB (wrapped with the excellent MongoEngine, which has matured a lot in recent years). MongoDB has become a more natural choice now, since it is the database used by Parse. I expect this setup of a local Python backend and a Parse frontend to work quite well (and remain virtually free) for the foreseeable future.
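For a taste of what the MongoEngine-based models look like, here’s a minimal sketch (the field names are hypothetical – the actual schema lives in the project code):

    from mongoengine import Document, ListField, StringField, connect

    connect('bcrecommender')  # assumes a local MongoDB instance

    class Tralbum(Document):
        tralbum_id = StringField(primary_key=True)
        title = StringField(required=True)
        artist = StringField(required=True)
        tags = ListField(StringField())

    Tralbum(tralbum_id='123', title='Some Album', artist='Some Band',
            tags=['psychedelic', 'rock']).save()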

The only fixed cost I now have comes from registering the bcrecommender.com domain and managing it with Route 53. This wasn’t required when I was running it only for myself, and I could have just kept it under bcrecommender.parseapp.com, but I think it would be useful for other Bandcamp users. I would also like to use it as a training lab to improve my (poor) marketing skills – not having a dedicated domain just looks bad.

In summary, it’s definitely possible to build simple projects and host them for free. It also looks like my approach would scale way beyond the current BCRecommender volume. The next post in this series will cover some of the algorithms and general considerations of building the recommender system.

Building a Bandcamp recommender system (part 1 – motivation)

I’ve been a Bandcamp user for a few years now. I love the fact that they pay out a significant share of the revenue directly to the artists, unlike other services. In addition, despite the fact that fans may stream all the music for free and even easily rip it, almost $80M has been paid out to artists through Bandcamp to date (including almost $3M in the last month) – serving as strong evidence that the traditional music industry’s fight against piracy is a waste of resources and time.

One thing I’ve been struggling with since starting to use Bandcamp is the discovery of new music. Originally (in 2011), I used the browse-by-tag feature, but it is often too broad to find music that I like. A newer feature is the Discoverinator, which is meant to emulate the experience of browsing through covers at a record store – sadly, I could never find much stuff I liked using that method. Last year, Bandcamp announced Bandcamp for fans, which includes the ability to wishlist items and discover new music by stalking/following other fans. In addition, they released a mobile app, which made the music purchased on Bandcamp much easier to access.

All these new features definitely increased my engagement and helped me find more stuff to listen to, but I still feel that Bandcamp music discovery could be much better. Specifically, I would love to be served personalised recommendations and be able to browse music that is similar to specific tracks and albums that I like. Rather than waiting for Bandcamp to implement these features, I decided to do it myself. Visit BCRecommender – Bandcamp recommendations based on your fan account to see where this effort stands at the moment.

While BCRecommender has already helped me discover new music to add to my collection, building it gave me many more ideas on how it can be improved, so it’s definitely a work in progress. I’ll probably tinker with the underlying algorithms as I go, so recommendations may occasionally seem weird (but this always seems to be the case with recommender systems in the real world). In subsequent posts I’ll discuss some of the technical details and where I’d like to take this project.


It’s probably worth noting that BCRecommender is not associated with or endorsed by Bandcamp, but I doubt they would mind since it was built using publicly-available information, and is full of links to buy the music back on their site.