I had the second meeting with my thesis advisor Prof. Ivan.
I did some research on different means of algorithmic composition. Algorithmic composition has two major types:
The computer composes by itself.
The computer is used as an aid for human composing.
There are also two major purposes of algorithmic composition:
To provide notation information (to create scores). (This will be my main focus.)
To provide an independent way of sound synthesis (to create new sounds).
There are six ways to process data:
The most common way is stochastic processes,
e.g. Markov chains and Gaussian distributions.
Using integer sequences as 12-tone equal temperament music.
Choosing a specific input as training data, isolating it to a certain music genre.
Music structure is iterated by transforming a simple composition into a complex one.
Systems which learn:
Given no music genre, the algorithm models the style of the inputs. (Also mentioned in Sony's Flow Machines.)
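To make the Markov chain idea above concrete, here is a minimal sketch of a first-order chain over MIDI pitches. The training fragment and pitch numbers are made-up illustrations of mine, not from any of the projects discussed:

```python
import random

def build_transitions(sequence):
    """Count which pitch follows which in a training sequence."""
    table = {}
    for cur, nxt in zip(sequence, sequence[1:]):
        table.setdefault(cur, []).append(nxt)
    return table

def generate(table, start, length, rng=random):
    """Walk the chain: each next pitch is sampled from the pitches
    that followed the current pitch in the training data."""
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:          # dead end: no observed successor
            break
        out.append(rng.choice(choices))
    return out

# Example: a C-major fragment as MIDI pitch numbers (60 = middle C).
training = [60, 62, 64, 65, 64, 62, 60, 64, 67, 65, 64, 62, 60]
table = build_transitions(training)
melody = generate(table, start=60, length=8)
```

Every transition in the output is one that actually occurred in the training data, which is also why too little training data makes the result sound like the input itself.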
No single method can perfectly produce music algorithmically; using a hybrid system helps get more musical results. But complexity grows when combining different methods.
The project was done in 1994; he used Turbo C++ on a DOS console.
Based on his own composing style, the software first generated CHORDS, then drew a curve based on the chords to match the melody notes (so the music would stay in tune). Then it generated rhythm by using a RHYTHM PATTERN.
What we can take from this project is that his algorithm is based on his composing style, which also happens to be my composing style. So I might be able to use the same workflow.
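The original 1994 code is not available, but the chords-first workflow could be sketched roughly like this. The chord set, the C-F-G-C progression, and the rhythm pattern are all illustrative choices of mine, not his:

```python
import random

# Chord tones as MIDI pitches (C-major triads), my own example data.
CHORDS = {
    "C": [60, 64, 67],
    "F": [65, 69, 72],
    "G": [67, 71, 74],
}
PROGRESSION = ["C", "F", "G", "C"]    # step 1: generate chords
RHYTHM = [1.0, 0.5, 0.5, 1.0]         # step 3: a fixed rhythm pattern (beats)

def melody_for(progression, rhythm, rng=random):
    """Step 2: pick each melody note from the current chord's tones,
    so the melody stays in tune with the harmony."""
    notes = []
    for chord in progression:
        for dur in rhythm:
            notes.append((rng.choice(CHORDS[chord]), dur))
    return notes

song = melody_for(PROGRESSION, RHYTHM)
```

Constraining melody notes to chord tones is the simplest reading of the curve-matching step; the real curve drawing presumably did something more refined.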
The author uses Python and a MIDI library (mido) to build the software. The MIDI library reads a MIDI file into a vector containing parameters for instrument/note/duration/volume.
In his example, he only uses one song as input data. (Too little training data causes the result to be too similar to the data itself.) So, based on the MIDI input, he generates a graphic of the note progression.
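As a rough illustration of that note vector, here is a stdlib-only sketch. The `Msg` namedtuple stands in for mido's real `Message` (which carries the same `type`/`note`/`velocity`/`time` fields used here), and the delta-time values are made up:

```python
from collections import namedtuple

# Stand-in for a mido-style message; time is a delta in ticks.
Msg = namedtuple("Msg", "type note velocity time")

def note_vector(messages):
    """Pair each note_on with its note_off to build (pitch, duration)
    entries, roughly the per-note vector described above."""
    now, starts, out = 0, {}, []
    for m in messages:
        now += m.time
        if m.type == "note_on" and m.velocity > 0:
            starts[m.note] = now
        elif m.type in ("note_off", "note_on"):  # note_on with velocity 0 also ends a note
            if m.note in starts:
                out.append((m.note, now - starts.pop(m.note)))
    return out

track = [
    Msg("note_on", 60, 64, 0),
    Msg("note_off", 60, 0, 480),
    Msg("note_on", 64, 64, 0),
    Msg("note_off", 64, 0, 240),
]
vec = note_vector(track)   # → [(60, 480), (64, 240)]
```

With real files, `mido.MidiFile(path)` yields messages of this shape per track, so the same pairing logic applies.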
Sony also uses Markov chains in this project. The machine generates the scores, but the composer, acting as a human filter, listens to the music and filters out the parts they are not satisfied with.
Other input: emotion
Machine learning is often used to let the computer learn the relation between emotion and music.
Other related research:
Some papers also discuss the authorship of music generated by machines. I think this would also be an important topic to talk about.
My challenge will be to acquire MIDI scores for training data. (Too little training data causes the result to be too similar to the data itself.)
Also, based on the categorization of emotion in music, it would be better if I categorize the inputs I will use to train the machine. The simpler the better.
I can also use my own music to test the algorithm, to check whether it actually learns the traits of my composing style.
First I will build the simplest software, and then add music theory into my Markov chain data sets to get more constraints. Hopefully the results will be more musical.
Also, putting music theory into the database would help build more musical results. I could use existing MIDI data to test the simple music-theory algorithm prototype to find the right random parameter. (Sometimes music sounds better with notes that break the rules.)
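A minimal sketch of what a music-theory constraint plus a random rule-breaking parameter might look like. The scale set, the `p_break` value, and the candidate notes are all assumptions of mine:

```python
import random

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}   # pitch classes allowed by the scale constraint

def constrained_choice(candidates, scale=C_MAJOR, p_break=0.1, rng=random):
    """Prefer candidates whose pitch class is in the scale, but with
    probability p_break pick freely (sometimes out-of-rule notes sound better)."""
    in_scale = [n for n in candidates if n % 12 in scale]
    if in_scale and rng.random() >= p_break:
        return rng.choice(in_scale)
    return rng.choice(candidates)

rng = random.Random(0)
# With p_break=0.0 the constraint is strict: 61 (C#) and 63 (D#) are never chosen.
picks = [constrained_choice([60, 61, 62, 63], p_break=0.0, rng=rng) for _ in range(20)]
```

Tuning `p_break` against existing MIDI data is one way to find how much rule-breaking still sounds musical.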
TO DO NEXT:
Keep researching algorithmic composition and music analysis.
I will build a prototype this week using the simplest rules.