Soundchain

Making music from blockchain discourse on Twitter

The Soundchain project was created by Dr Pedro Jacobetty, from the sociology department at the University of Edinburgh, with the support of:

The project is currently hosted on a free-tier virtual machine in the Eleanor cloud computing service, from where it live streams 24/7 to YouTube and TwitchTV.


Dr Jacobetty provides a brief description below.

'This project started with an exploration of natural language processing (NLP) and the vast amounts of qualitative (textual) data available on the internet for sociological research. My goal was to understand what people were writing about blockchain technologies, so I decided to use topic modelling, a machine learning technique for NLP. Topic modelling is useful for analysing a large volume of documents, treating their content as a mixture of meaningful latent structures ("topics"). Its objective is to uncover the latent semantic structures that organise the distributions of words (what we commonly refer to as topics) throughout the documents. In this case, the model was trained on 10,000 posts from the blogging platform Medium that their authors had tagged "blockchain".

I was then fortunate enough to attend the "Block that chain" work lab by the mur.at artist collective in Graz. This was an opportunity to join artists and technologists in a discussion about blockchain technology.

Since I had already trained the topic model, I thought it would be interesting to adapt it to a different purpose: generating aesthetic commentary (i.e. "music") about the discourse on blockchain technologies. The Soundchain music generator monitors online discourse about blockchain technology in real time using the Twitter streaming API (filtered by the hashtag #blockchain). Tweets are interpreted, as they are published, through the previously trained topic model.

The "genesis tweet" (the first tweet captured when the generator starts) is used to create the melody and the rhythm. I mapped the identified "topics" to parameters of the music generator (e.g. financial technology and cryptocurrencies increase distortion, technological development increases the BPM), thus transforming the Twitter stream into a never-ending sound piece.

This work in progress is still at an early stage. There is much to do on each of the components: the initial data collection should be extended, the text pre-processing and topic modelling techniques require improvement, and the musical component could also be refined. But the goal of the project was to explore how creative thinking, computation and the critical commentary of the social sciences can be linked in practical and aesthetic ways.'
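The core idea described above, treating each incoming tweet as a mixture of latent topics, can be sketched in a few lines. The sketch below is illustrative only: the topic names and topic-word weights in `TOPIC_WORDS` are invented for this example, whereas in the actual project they would come from a topic model trained on the 10,000 tagged Medium posts.

```python
# Minimal sketch of inferring a document's topic mixture.
# TOPIC_WORDS is an invented, hand-made topic -> word-weight table;
# a trained topic model would supply these weights in practice.

TOPIC_WORDS = {
    "fintech": {"finance": 2.0, "bank": 1.5, "crypto": 2.5, "currency": 1.8},
    "tech_dev": {"protocol": 2.0, "developer": 1.8, "code": 1.5, "network": 1.2},
}

def infer_topic_mixture(text):
    """Score a text against each topic and normalise into a distribution."""
    tokens = text.lower().split()
    scores = {
        topic: sum(weights.get(tok, 0.0) for tok in tokens)
        for topic, weights in TOPIC_WORDS.items()
    }
    total = sum(scores.values())
    if total == 0:
        # No recognised words: fall back to a uniform mixture.
        return {topic: 1 / len(scores) for topic in scores}
    return {topic: s / total for topic, s in scores.items()}
```

Running `infer_topic_mixture("new crypto currency protocol")` would weight the invented "fintech" topic more heavily than "tech_dev", since two of its words match against the fintech entry.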
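The topic-to-sound mapping Dr Jacobetty describes can likewise be sketched. Only the direction of each mapping (financial topics raise distortion, technological development raises the BPM, the genesis tweet seeds the melody) comes from the description above; the base values, scaling constants, topic names and the character-to-note scheme below are invented for illustration.

```python
# Illustrative mapping from a tweet's topic mixture to sound parameters.
# Constants and topic names are assumptions, not the project's actual values.

BASE_BPM = 90
BASE_DISTORTION = 0.1

def sound_parameters(topic_mixture):
    """Derive generator parameters from a topic mixture (weights in [0, 1])."""
    fintech = topic_mixture.get("fintech", 0.0)
    tech_dev = topic_mixture.get("tech_dev", 0.0)
    return {
        # Financial/cryptocurrency discourse pushes distortion up (capped at 1).
        "distortion": min(1.0, BASE_DISTORTION + 0.8 * fintech),
        # Technological-development discourse speeds up the tempo.
        "bpm": BASE_BPM + int(60 * tech_dev),
    }

def genesis_melody(tweet_text, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Map the genesis tweet's characters onto a major scale (semitone offsets)."""
    return [scale[ord(c) % len(scale)] for c in tweet_text if c.isalnum()]
```

With this sketch, a tweet whose mixture is half "fintech" and a quarter "tech_dev" yields a distortion of 0.5 and a tempo of 105 BPM, so the character of the stream audibly tracks the character of the discourse.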

Dr Jacobetty is currently using Eddie to improve the topic model.

