Week 13

I did my final performance on 11/24/2018 at Soft Surplus.

The performance was divided into three parts.

Each part contains input songs, an output soundtrack, and a survey.

First part:

  • input - Adele - Someone Like You

  • output - Demo 1 (created from 12 inputs)

  • output - Mr. Cabbage (I used Demo 1 as inspiration to compose it)

In the following two parts, I played two output demos to test whether the audience could tell which output was created from which set of inputs.

Second part:

  • input - Taylor Swift - Two Is Better Than One

  • input - Taylor Swift - White Horse

  • output - Demo 2

  • output - Demo 3

In this part, I made a visual projection with two virtual musicians. I wanted to see how the audience would interact with the virtual musicians, and also how I would feel as a performer.

Third part:

  • input - Lady Gaga - Million Reasons

  • input - Lady Gaga - You and I

  • output - Demo 2

  • output - Demo 3

Here is a video of my performance.

Week 12

I made a visual background containing multiple versions of myself playing different instruments. I also used Processing code for music visualization in the background projection.

I did a rehearsal for my performance on Sunday, Nov. 18, 2018.

Here’s a short video of my rehearsal.

I got feedback from a friend to change the background visual, since she thought it was distracting the audience from listening to my music.

Another friend gave me the feedback that I should change only the colorful dots in my visual background.

Week 11

This week I used the MIDI file generated by the computer as inspiration to write a song.

The first short version is a soundtrack where, after listening to the output, I copied and pasted the parts I liked and repeated them to form a short hook.

After the short version, I transformed the music based on the output into a structure closer to pop music.

You can also hear it here.

Week 10

This week I worked more on the coding part.

I added a function to train the computer with multiple inputs. I planned to train it with a large number of inputs, hoping it would learn music theory from them. I trained it with 15 songs, but the output became more random than when I trained it with just one song.
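Here is a rough sketch of the multiple-input training idea (simplified; it assumes a mido-based parser, and the file names and chain structure are illustrative rather than my exact code):

```python
import mido

def train_on_file(chain, path):
    """Add the note transitions from one MIDI file to a shared Markov chain."""
    previous = None
    for msg in mido.MidiFile(path):  # iterating a MidiFile merges all tracks
        if msg.type == "note_on" and msg.velocity > 0:
            if previous is not None:
                # chain maps each note to the notes that have followed it
                chain.setdefault(previous, []).append(msg.note)
            previous = msg.note

# One shared chain accumulates transitions from every input song.
chain = {}
for path in ["song01.mid", "song02.mid", "song03.mid"]:
    train_on_file(chain, path)
```

One likely reason the 15-song output sounded more random: the songs are in different keys and styles, so their transitions pull the chain in conflicting directions and no single pattern dominates.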

The MIDI files I used as training data were found on this website.

Here is the song list I used for training.

[Screenshot: the song list used for training]

The output from that data is here.

Week 7

We did a user-testing session on 10/16 at MAGNET. The feedback from this session showed that I do need to work on the function for collecting more input data, so that users can input more than just one MIDI score. That way, we can use the inputs to target the emotion and music genre the user prefers. Second, since the testers in this session were not music professionals, the feedback on the melody all leaned toward the positive side. So, for my next user test, I can focus more on musicians and think about specific questions, such as melody lines and chord progressions.

I also met with Liz, our writing advisor, this week. After I introduced my project to her, she suggested combining the methodology part with the project and method sections of my paper. This would help me better explain my whole idea.

So, this week I worked more on my thesis paper draft.

Week 6

This week I worked more on my prototype.

Below are the issues I solved:

  1. I fixed the function for calculating the duration of each note.

[Screenshot: the contents of a MIDI file]

Above is the content of a MIDI file. Each message's time value is a delta: time=0 means the message happens at the same moment as the previous one, while time=10 means it happens 10 ticks after the previous message.

But in the first version of this program, we only read the "note_on" messages. This caused a bug, because time keeps accumulating in other messages such as "note_off", "set_tempo", etc. The duration of a note would be wrong if we did not add up the time values from those other messages.

So, I kept adding up the time variable whenever the message was not "note_on". When we reach a "note_on" message, we use the function _sequence(previous_chunk, current_chunk, timer) to calculate the duration and build the Markov chain.
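Here is a simplified reconstruction of the fix, using the mido library (the _sequence stub below just records the transition; the real function also builds the full Markov chain):

```python
import mido

markov = {}

def _sequence(previous_chunk, current_chunk, timer):
    # Stub: record that previous_chunk was followed by current_chunk
    # after `timer` ticks of accumulated delta time.
    markov.setdefault(previous_chunk, []).append((current_chunk, timer))

mid = mido.MidiFile("input.mid")
timer = 0
previous_chunk = None

for msg in mid.tracks[0]:
    # Every message carries a delta time, so we accumulate it even for
    # "note_off", "set_tempo", etc.; otherwise durations come out wrong.
    timer += msg.time
    if msg.type == "note_on" and msg.velocity > 0:
        if previous_chunk is not None:
            _sequence(previous_chunk, msg.note, timer)
        previous_chunk = msg.note
        timer = 0  # reset for the gap to the next note
```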

  2. We needed to detect chords.

The first version read the notes in the chord track of each bar separately, and then compared them with the notes in the chord list I defined.

The problem is that music does not follow the rules all the time. Sometimes there is an extra note in a chord that makes the exact-match detection fail. So, I compare the notes and record the best-matching chord from the list as the result.
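Here is a rough sketch of this best-match approach (the chord templates below are illustrative examples, not my full chord list):

```python
# Chord templates as pitch classes (0 = C, 1 = C#, ..., 11 = B).
CHORDS = {
    "C":  {0, 4, 7},
    "F":  {5, 9, 0},
    "G":  {7, 11, 2},
    "Am": {9, 0, 4},
}

def detect_chord(notes):
    """Return the chord whose pitch classes overlap the bar's notes the most.

    Exact matching fails when a bar contains passing or extra notes,
    so every template is scored and the best match wins.
    """
    pitch_classes = {note % 12 for note in notes}
    best_name, best_score = None, -1
    for name, template in CHORDS.items():
        score = len(pitch_classes & template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A bar of C major plus an extra passing note (D) still detects as C.
print(detect_chord([60, 64, 67, 62]))  # -> C
```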