ERC funding

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the DigiScore project, grant agreement No 101002086

Friday, October 7, 2022

Final Session

The final session concluded successfully, with all the musicians agreeing that the results are fascinating and musically powerful. To summarise, the process involved three steps:

Step 1

Four remotely located musicians (in Sweden, Sri Lanka/London, Brighton and Leicester) create or record sounds that are meaningful to them. Wearing Emotiv Insight EEG brain readers, they then categorise the sounds based on six performance metrics: engagement, excitement, focus, interest, relaxation, stress. These categorised sounds are then navigated by an app programmed in Python by Craig Vear, which triggers sounds every ten seconds based on the response of one of the brains to the heard sounds. This creates worlds of sound that represent or express the neurological presence of each musician.
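Purely as an illustration of the kind of triggering loop involved (the folder layout, metric-reading function and playback function below are hypothetical placeholders, not Craig Vear's actual code):

import random
import time
from pathlib import Path

# Assumed layout: one folder of categorised sound files per performance metric.
METRICS = ["engagement", "excitement", "focus", "interest", "relaxation", "stress"]
SOUND_BANK = {m: list(Path("sounds", m).glob("*.wav")) for m in METRICS}

def read_metrics():
    # Placeholder for the live stream of performance metrics from the headset.
    return {m: random.random() for m in METRICS}

def play(sound_file):
    # Placeholder for actual audio playback.
    print("Triggering", sound_file)

while True:
    values = read_metrics()
    strongest = max(values, key=values.get)         # the metric the brain responds to most
    if SOUND_BANK[strongest]:
        play(random.choice(SOUND_BANK[strongest]))  # trigger a sound from that category
    time.sleep(10)                                  # one trigger every ten seconds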

Step 2

The process of creating syzygies begins. A syzygy is an alignment or correspondence of different entities. In psychology, it represents the communication of conscious and unconscious minds; in astronomy, it refers to events such as an eclipse. By merging the categorised sounds of two, then three, then all four participating musicians, syzygies emerge when the brainwave reader selects sound files. These may be heard on the Listen page. The alignments occur as the result of an unconscious process, yet the sounding results reveal the correspondences, connections and differences between the four musicians.
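As a rough sketch of the merging step, assuming each musician's categorised sounds sit in per-metric folders (all names and paths here are placeholders): a syzygy simply pools the sound banks of the chosen musicians, so the headset-driven selection can land on any of them.

from pathlib import Path

METRICS = ["engagement", "excitement", "focus", "interest", "relaxation", "stress"]

def load_bank(musician):
    # Assumed folder layout: sounds/<musician>/<metric>/*.wav
    return {m: list(Path("sounds", musician, m).glob("*.wav")) for m in METRICS}

def merge_banks(*musicians):
    # Pool the categorised sounds of two, three or four musicians into one bank.
    merged = {m: [] for m in METRICS}
    for name in musicians:
        bank = load_bank(name)
        for m in METRICS:
            merged[m].extend(bank[m])
    return merged

syzygy_1 = merge_banks("andrew", "anya")   # e.g. Syzygy 1: Andrew and Anya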

Step 3

Since one of the musicians, Elisabeth Wiklander, is a world-class cellist, some of the others (who are all composers) create notations for her to perform. Simon Allen makes beautiful colour images, with suggestive instructions and graphical elements. Andrew Hugill creates more conventionally notated music, featuring some extended techniques. These cello elements are recorded by Elisabeth and then added to the quartet syzygies, creating a richer musical texture. At this stage, the brainwave readers are not used for this element; that is being saved for a live performance.

This has resulted in the album and accompanying material, now available on the Digital Syzygies website.

Saturday, August 20, 2022

Fourth session

This blog entry describes not only the session held on August 12th but also the subsequent progress.

The session did a great deal to consolidate progress so far. Each musician has a clear sense of the sounds they want to use, and what emerged most strongly was how different these are.

Andrew is working mostly with composed sounds that combine the digital and the acoustic, using the image of a weather vane to convey the arbitrariness of the performance metrics emanating from the brainwave readers.

Anya's sounds include minute examinations of materials such as velcro and her favoured metallic sounds. Her world is one that is apparently everyday, but in reality betrays a distinctive form of listening.

Elisabeth writes: "As someone who grew up in the Swedish outback, the sounds of nature always make an impression on me, catch my attention and are connected with deep sensory experience and memory. It’s a source which I tap into as a musician when performing and rely upon in my daily life for my overall well-being. My sounds are collected from the outside and inside of the Swedish log cabin where I reside and are particularly tweaked towards “life” because during this project I became a mother of a baby boy".

Simon's sounds deploy an array of self-built instruments which are described in detail in the section below. These sounds have a great deal of personal significance, for reasons that he gives in detail.

With four such distinctive sound worlds, compositional attention begins to focus on how these might be combined. The software is now ready in the form of a digital_syzygies app which takes the output streams of the six performance metrics provided by the headsets and uses them to trigger the sounds. Since the sounds are chosen precisely because of their effect on the brain of the musician, these pieces will naturally exhibit the neurological processes of the participants. Since the workshop, the software has been sent to the musicians, so they are now working with this "instrument". The software includes visual displays of cello notation. Decisions will be taken in the future about how the cello is to be deployed.

Meanwhile, I have devised the following structure to explore the potential combinations. We'll see which of these works well:


3 mins

01. Andrew's world
02. Anya's world
03. Elisabeth's world
04. Simon's world

4 mins

05. Syzygy 1: Andrew and Anya
06. Syzygy 2: Anya and Elisabeth
07. Syzygy 3: Elisabeth and Simon 
08. Syzygy 4: Simon and Andrew
09. Syzygy 5: Andrew and Elisabeth
10. Syzygy 6: Anya and Simon

5 mins

11. Syzygy 7: Andrew, Anya and Elisabeth 
12. Syzygy 8: Anya, Elisabeth and Simon
13. Syzygy 9: Elisabeth, Simon and Andrew
14. Syzygy 10: Simon, Andrew and Anya

6 mins

15. Syzygy 11: Full quartet
16. Syzygy 12: Full quartet


Simon Allen - Digital Syzygies - Sounds 16/08/2022.

When first searching for sounds to test the Emotiv Insight headset, I anticipated that a collection of suitable sources would draw upon extreme differences in sonic quality, technique, or physicality, leading to results of pleasing variety, measurable across the indicators offered by Emotiv’s software. In practice this was only partially true: the instrumental choices that showed the most interesting results were less predictable than expected, revealing themselves only through experiment using the headset. Notable across the final choice of sounds is some personal significance attached to each source, pertaining either to the object itself or to its importance within my own compositional language. Barrel piano, goldfish bowls and clock chimes are habitual sounds within my musical palette; the remaining three instruments of mouthbow, horn and rattles are rarely used, but the objects themselves have enduring personal value.

The Barrel Piano, popular in the 19th century, contains a sizeable hand-cranked wooden barrel that is pinned across its surface to trigger hammers against strings and a handful of tuned bicycle bells. The instrument has been prepared multiple times over the last 20 years, most recently in 2019. Since that project the strings and their preparations have remained untouched. The sounds produced by slow revolutions of the barrel are occasionally familiar as ghosts from past projects. Although the hammers are visible, the precise moment of their release is unpredictable.

Goldfish bowls & wineglass are tuned with water to D, Eb (bowls) & C# (glass), rising from the middle of the bass clef. Played in circular motion with wet palms and fingers, the bowls can produce a high overtone by reducing the area of contact between skin and glass. The wine glass also sounds a crude undertone series when excess pressure is applied. Friction applied transversely to the glass rims also gives very high squeals or ‘whistle’ type tones. Playing these instruments is particularly tactile, tone and colour being a product of speed and pressure – skin against glass.

Resonated through attachment to the casing of the barrel piano are two clock chimes of three and four pitches, belonging to mantel clocks. Their tines are bowed at different points to give two or three harmonics, and struck with a piano hammer.

The Mouthbow is of a similar kind to the Hawaiian Ukeke. It is c.35cm long, self-made from a wooden batten and a wound guitar string. Resonated by resting the lips against the wood and varying the volume of the buccal cavity to reinforce different harmonics, it can be bowed or struck. The highest partials of the string are beyond my hearing. It is pitched according to a ‘sweet spot’ that works best for its dimensions.

The Rama Double Twist rickshaw horn, a gift received in Kolkata, is shiny chrome, complete with flyscreen and green rubber bulb. This very loud, outdoor instrument finds its dynamic range through muffling and choking the bell of the horn to varying degrees.

Navajo medicine rattles – a pair of male and female rattles made from cowhide, painted and decorated with turkey feathers and river otter fur. Without technological assistance I find these high transient sounds virtually inaudible. The faster streams of recorded material are physically exacting in execution, requiring extreme concentration and drawing to an extent upon my memory of the sound and its physicality before hearing loss.

Each sound source was tested for c.1 minute in the order shown below, the whole process then being repeated three times. The three sets of results are shown separated by slashes.

The numbers, rounded to the nearest 10, refer to the maximum fluctuation in response, i.e. the difference between the highest and lowest values observed. Dashes (---) indicate negligible fluctuation in value.
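In other words, each entry in the table below was arrived at roughly as follows (a restatement of the method just described, not Emotiv's own calculation):

def max_fluctuation(readings):
    # Difference between the highest and lowest values observed during one run,
    # rounded to the nearest 10; negligible fluctuation is shown as '---'.
    span = round((max(readings) - min(readings)) / 10) * 10
    return "---" if span == 0 else span

print(max_fluctuation([42, 55, 71, 48, 60]))   # -> 30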

          Barrel piano     Fishbowls & glass   Mouthbow       Rattle         Horn           Clock chimes

Engage    10/---/---       30/40/---           10/20/10       50/20/40       20/10/10       10/20/10
Excite    30/20/40         40/50/30            20/10/10       40/10/10       60/50/50       20/50/30
Focus     ---/---/---      10/10/20            10/10/10       30/20/30       40/10/20       20/10/10
Interest  ---/---/---      ---/---/---         ---/---/---    ---/10/---     ---/---/---    20/---/20
Relax     30/30/30         20/30/30            10/10/10       20/10/---      ---/10/10      ---/20/10
Stress    20/20/20         20/10/20            10/10/10       10/---/30      ---/---/---    ---/---/---

Recordings of these sounds were subsequently made without wearing the Insight headset.

__________________________

Thursday, July 14, 2022

Third session

The session began with a sharing of our progress with the Emotiv Insight headsets. I had an individual session with Elisabeth a week before, in which we realised several key things. Simon shared his struggles with the headsets as well, which were causing him to question many aspects of the process. We all agreed that these brainwave readers provoke many thoughts and questions, regardless of how "scientific" they may be. There was a real dialogue taking place between us, with the Emotiv BCI acting as a genuine digital score by forcing a sharing and mutual understanding between the musicians.

A number of things were decided. First: the level of connectivity does not seem to matter too much in respect of the performance metrics. This is important, because the training aspects seemed to demand high levels of connectivity, whereas in practice there were consistent spikes which could be understood if we ignored the training. Second: it does not matter whether the signal increases or decreases; both are indications of a change in state. One might think that an increase means more engagement, for example, but certain sounds consistently have the opposite effect. This is important for understanding how the system will interpret the data stream. Third: our feelings about a sound are not necessarily good indicators of our neurological reactions. Fourth: being consistent about keeping the eyes open or closed is important to maintain an even set of responses; visual and indeed other distractions can have a powerful effect. Fifth: the names given to the six performance metrics are fairly arbitrary, subjective interpretations of electrical activity in the brain and are therefore not particularly important, although they will serve a purpose for compartmentalising the sounds and structuring the composition(s).
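The second point suggests one way the data stream might eventually be read: treat any sufficiently large departure from a recent baseline as a response, whichever direction it moves in. A minimal sketch of that idea (the window length and threshold are arbitrary assumptions):

from statistics import mean

def is_response(readings, baseline_length=10, threshold=0.15):
    # True if the latest reading departs from the recent baseline by more than
    # the threshold, in either direction.
    baseline = mean(readings[-baseline_length - 1:-1])
    return abs(readings[-1] - baseline) > threshold

print(is_response([0.5] * 10 + [0.2]))   # a drop counts as a response: True
print(is_response([0.5] * 10 + [0.9]))   # so does a rise: True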

We discussed the cello part. I introduced the group to Neoscore and described how it would be used to generate a score in real time for Elisabeth to play from. Anya, Simon and I will write some pseudocode, which will then be converted into Python for the neoscore library. I showed the group my example, reproduced below. We discussed whether the notation could also be text and graphics (both are possible), and Elisabeth asked whether it had to be delivered in real time or whether there could be a printed version in advance for her to prepare. While that is possible, I felt that it would be more in the spirit of syzygy to have something unfold in real time. We all agreed to try this and see how it goes.

At the end of the discussions we had a way forward. Everybody will spend the next few weeks working with the brainwave readers to assemble/record a collection of sounds, placed in folders corresponding to consistent changes in performance metrics. In the meantime, I will write pseudocode for the entire system, making decisions such as: what stream(s) will the system read? How many sounds will play at once? How will the piece be structured?

***

Cello solo - pseudocode

Tempo quaver = c.90

Note durations: quaver, crotchet, dotted crotchet, triplet quavers.

Mode: Phrygian on E over four octaves (starting from the E one ledger line below the bass clef, but including the D below that).

Phrase lengths (in crotchets): 3, 5, 7, 9, 11, 13

Each phrase must end on a crotchet or dotted crotchet followed by a quaver rest.

Phrases may begin and end on any note.

The opening and final phrases of the piece must begin and end on an E.

Phrase endings on E must be preceded by either D or F (in any octave).

Any phrase containing a triplet must be played quietly.

Any phrase containing a majority of quavers should be played loudly. 

All other dynamics are mp.

Triplets should never occur more than once in a phrase. 

Triplets should always move by step, either up or down.

Surprise accidentals may be Eb, Bb or Ab, which replace E, B or A respectively in a phrase.

Surprise accidentals should be infrequent, a maximum of one every three phrases, but ideally one every six phrases.

Ornamentation (turn, mordent or appoggiatura) may be included at a rate of one every three phrases.

Between one and three phrases to be played sul ponticello.

Between one and three phrases to be played sul tasto.

Between one and three phrases to be played pizzicato.

Every third or fourth dotted crotchet to be played as a harmonic.

Every fifth or sixth dotted crotchet to be played tremolo.
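Purely by way of illustration, a few of these rules might eventually be encoded along the following lines before being rendered with neoscore (the generator below is a hypothetical sketch covering only the mode, the note durations, the phrase lengths and the phrase endings; it assumes the closing quaver rest counts towards the phrase length):

import random

PHRYGIAN_E = ["E", "F", "G", "A", "B", "C", "D"]                      # Phrygian on E, no accidentals
DURATIONS = {"quaver": 0.5, "crotchet": 1.0, "dotted crotchet": 1.5}  # values in crotchets
PHRASE_LENGTHS = [3, 5, 7, 9, 11, 13]                                 # in crotchets

def make_phrase():
    # Fill the chosen length with notes, then close with a crotchet or
    # dotted crotchet followed by a quaver rest.
    target = random.choice(PHRASE_LENGTHS)
    ending = random.choice(["crotchet", "dotted crotchet"])
    budget = target - DURATIONS[ending] - DURATIONS["quaver"]
    phrase, total = [], 0.0
    while total < budget:
        name, value = random.choice(list(DURATIONS.items()))
        if total + value <= budget:
            phrase.append((random.choice(PHRYGIAN_E), name))
            total += value
    phrase.append((random.choice(PHRYGIAN_E), ending))
    phrase.append(("rest", "quaver"))
    return phrase

print(make_phrase())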




Thursday, June 23, 2022

Second session

It has been quite a long time since the first session. In the meantime, Elisabeth has had a baby and we have finally taken delivery of the Emotiv Insight brainwave readers and had a chance to experiment with them. Unfortunately, Anya was unwell and could not join in this session, but we have recorded everything for her.

It emerged rapidly that all of us have struggled with the headsets to some degree. The major problem is connectivity, which seems very difficult to achieve steadily. Since there are two types of connectivity, there also seem to be issues with one or the other type at any given moment, which can be frustrating. Despite this, everybody had made at least some progress. Simon had been the most systematic and had experimented not just with sounds, such as shaking a cereal packet, but also with taste (chillis and cherries). This synaesthetic approach seemed potentially quite productive. Meanwhile, Elisabeth and Andrew had made the cube move a little, but were both unsure about achieving consistent results. We shared ideas for approaches and agreed that we would persist. If it comes to the point that we have to abandon this technology, then we will do so and record a negative scientific result. But we are not yet at that stage. In the meantime, the creative process is forming in an interesting way, so the headsets are providing a valuable shared location for developing the digital score regardless of their success.

The approach Andrew has proposed is that each of us should assemble a catalogue of sounds that consistently produce a spike on one of the six performance metrics (focus, engagement, interest, excitement, stress, relaxation) of the Emotiv BCI. Each of us should then have a catalogue of six sounds, making a total of 24 sounds as a basic library for composition. As the project develops, we may expand on these, but this is a basic set. We can then combine these sounds to create a shared focus, engagement, and so on. That is a starting point for making the digital score, which could take any form that triggers appropriate sounds and responses at appropriate times.

This way forward seems promising and we agreed to meet in a couple of weeks to discuss progress. We also propose to extend the project to September 30th, if the scientists agree, to take account of the delays. We concluded with a fascinating discussion about knowing one's emotions, alexithymia, the greeting "how are you?" and other mysteries of neurotypical/neurodivergent communication.

Monday, March 14, 2022

First session

What a first meeting this was! Elisabeth joined from her log cabin in Sweden, Simon from a shrine room in Sri Lanka, Anya from Brighton and I from Market Harborough. We talked for three hours, in three parts with ten-minute breaks in between. There are great similarities between us, but also great differences. Anya Ustaszewski is a Composer, Sonic Artist, Musician and active member of the Autistic Pride movement and various disability charities and organisations. Elisabeth Wiklander is a cellist with the London Philharmonic Orchestra and Cultural Ambassador of the National Autistic Society. Simon Allen is an aurally divergent composer, improviser and instrument-maker. And I am an autistic and aurally divergent composer, musicologist and Professor of Music and Creative Computing.


In the first part of the meeting we discussed the aims of the research project as a whole and described what the participants could expect. The key to the project is exploring the differences and connections between us and our lived experiences. To that end, we developed a number of ideas:

1. A hearing-aid mediated piece developed by Simon and Andrew (the two hearing-aid wearers in the group). In this piece, the hearing aids would become the digital score, combining with the Audio Orchestrator to deliver music that works for two very contrasting hearing needs.

2. A ‘sound seeker’ piece developed by Anya and Simon, with computational input from Andrew as required. The idea is to introduce a layer of mixing control into the Audio Orchestrator to allow specific sound types (e.g. metal sounds, distortions, etc.) to be diffused and focused by the musicians.

3. A collection of three compositions for cello and digital sounds written for Elisabeth to play and composed by Anya, Andrew and Simon, diffused by the Audio Orchestrator.

4. A quartet for the ensemble using neural devices (e.g. https://www.emotiv.com/) to control the system.

The last of these allows for the collection of a data stream that tracks neurological difference, something that is really key to the project.

We had a concluding discussion about data handling and ethics, focusing in particular on creating a safe space in which the group felt comfortable to share freely on the understanding that they could withdraw at any time and that any data would only be held with informed consent.

Wednesday, February 16, 2022

BBC Audio Orchestrator first meeting

I have just had a first meeting with Emma Young and Kristian Hentschel, who are part of the BBC R&D Makerbox team that developed the Audio Orchestrator. Please do experience 'Spectrum Sounds' to get an idea of the unique capabilities of this software. Essentially, it enables you to configure your own listening environment using a number of devices networked to a main machine.

We discussed the Digital Syzygies project. I explained that this would involve four neurodivergent musicians with differences in hearing. The Audio Orchestrator is the tool we will use to create the digital score(s), which may be defined as a communications interface between musicians. We will see how this may transform or enhance musical exchange.

Of course, Emma and Kristian wanted to know exactly what I would be doing in the project and I had to confess that this is unclear at this stage! This is not the kind of composition project where I, the composer, write a piece and then people perform it. It is very much a process of mutual discovery as we try to find a way to create something. It may end in a co-located performance, or it may not - we don't even have a clear idea about that at this stage! 

Having listened to me talk in this way for a while, Emma then brilliantly summarised the proposition: four musicians will realise the score(s) in four different ways that meet their hearing needs. This will then offer a comparison between each musician's different hearing. I added that we may well make duets, trios and quartets too. This could be delivered live, or it could be performed locally by the individual musicians, or across the network.

I suggested that the BBC team could be involved in future meetings and discussions, including potentially interviews as part of the research. They seemed happy enough with that idea. They are keen to see the software being used and want to learn about its characteristics. They are happy to help with any technical issues we encounter along the way.

Good progress!

Friday, December 10, 2021

The participants

I am pleased to announce the following participants in the Digital Syzygies project:

Anya Ustaszewski, Composer, Sonic Artist, Musician and active member of the Autistic Pride movement and various disability charities and organisations.

Elisabeth Wiklander, cellist with the London Philharmonic Orchestra and Cultural Ambassador of the National Autistic Society.

Simon Allen, aurally divergent composer, improviser, instrument-maker.

