(suggestion) Composer for pop music #24

Open

ghost opened this issue Jul 9, 2019 · 3 comments

Comments

@ghost

ghost commented Jul 9, 2019

First of all, I want to thank you for sharing this code. It's really impressive.

I trained my model using around 60 melodies of pop music and I am able to produce several catchy melodies. There are still some random notes scattered around, but that can be solved with a little polishing and more song samples.

I'm wondering if you could make a pop music generator using the same principle. The song MIDIs could be divided into three parts (verse, pre-chorus, and chorus), with another layer of autoencoder on top to tie the sections together.

[Image: diagram of the proposed model]

For the structure, it could be selected by the user in the live edit.
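
A minimal sketch of what that second autoencoder layer might look like, assuming the existing measure-level model already encodes each section into a fixed-size latent vector; every layer size and name here is hypothetical:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical sizes: each section (verse, pre-chorus, chorus) is already
# encoded by the measure-level autoencoder into a 120-dim latent vector.
SECTION_LATENT = 120
NUM_SECTIONS = 3          # verse, pre-chorus, chorus
SONG_LATENT = 60          # latent size for the whole song

# Song-level autoencoder: concatenated section latents in, same shape out.
inputs = keras.Input(shape=(NUM_SECTIONS * SECTION_LATENT,))
encoded = layers.Dense(SONG_LATENT, activation='relu')(inputs)
decoded = layers.Dense(NUM_SECTIONS * SECTION_LATENT, activation='linear')(encoded)

song_autoencoder = keras.Model(inputs, decoded)
song_autoencoder.compile(optimizer='adam', loss='mse')

# section_latents: (num_songs, NUM_SECTIONS * SECTION_LATENT), produced by
# running each labeled section through the trained measure-level encoder.
# song_autoencoder.fit(section_latents, section_latents, epochs=100)
```

Generating a new song would then mean sampling the song-level latent, decoding it into three section latents, and running each of those through the existing measure-level decoder.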

@HackerPoet
Owner

This would be an excellent way to do it if the dataset conforms to this model. It makes a lot of assumptions such as all songs in the dataset having labeled sections as well as all sections being exactly 4 measures long. I'm not sure if a dataset like that exists, or if you're planning to spend a lot of human-hours labeling one, but I'd love to see the result! You may also want to add intro and outro to make 5 sections if that's an option.
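
As a concrete example of those assumptions, the labeled dataset could be nothing more than a manifest mapping each song to its ordered, fixed-length sections (file names hypothetical):

```python
# Hypothetical manifest: each song lists its sections in order, with each
# section pointing at a MIDI clip exactly 4 measures long.
dataset = {
    "song_001": [
        ("verse",      "song_001_verse.mid"),
        ("pre_chorus", "song_001_pre_chorus.mid"),
        ("chorus",     "song_001_chorus.mid"),
    ],
    # Optional 5-section variant with intro/outro:
    "song_002": [
        ("intro",      "song_002_intro.mid"),
        ("verse",      "song_002_verse.mid"),
        ("pre_chorus", "song_002_pre_chorus.mid"),
        ("chorus",     "song_002_chorus.mid"),
        ("outro",      "song_002_outro.mid"),
    ],
}
```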

@ghost
Author

ghost commented Jul 9, 2019

Right now, I manually edit the MIDIs myself. I mostly take them from the internet and just remove the parts I don't need (e.g. the verse, pre-chorus, bassline). There are plenty of Synthesia-like piano tutorials on YouTube. Perhaps you could write some simple code that reads the video and translates it into MIDI; since they're just falling blocks, I think it'd be fairly easy. The tricky part is classifying the sections. I'm not sure if a classifier for that exists right now, so I'm going to do it manually for now.
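
A rough sketch of that falling-block idea, assuming the keyboard sits on a known pixel row, lit keys are brighter than unlit ones, and the tempo is constant; the row position, key spacing, brightness threshold, and 120 BPM are all guesses you'd tune per video:

```python
import cv2
import mido

# Sample one pixel per key along the keyboard row and treat a brightness
# jump as note-on, a drop as note-off. KEY_ROW_Y, the key x-positions, and
# BRIGHTNESS_THRESHOLD are all video-specific guesses.
KEY_ROW_Y = 700                      # pixel row crossing the keyboard
NUM_KEYS = 88
LOWEST_NOTE = 21                     # MIDI note of the leftmost key (A0)
BRIGHTNESS_THRESHOLD = 200

cap = cv2.VideoCapture("tutorial.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
key_xs = [int((i + 0.5) * width / NUM_KEYS) for i in range(NUM_KEYS)]

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)
ticks_per_frame = int(mido.second2tick(1.0 / fps, mid.ticks_per_beat,
                                       mido.bpm2tempo(120)))

active = [False] * NUM_KEYS
delta = 0                            # MIDI delta time since the last message
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for i, x in enumerate(key_xs):
        lit = gray[KEY_ROW_Y, x] > BRIGHTNESS_THRESHOLD
        if lit != active[i]:
            msg = 'note_on' if lit else 'note_off'
            track.append(mido.Message(msg, note=LOWEST_NOTE + i,
                                      velocity=80, time=delta))
            delta = 0
            active[i] = lit
    delta += ticks_per_frame

mid.save("extracted.mid")
```

Real videos would need a bit more than this (black keys sit on a different row, and compression noise calls for debouncing), but the core loop really is just per-key brightness over time.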

Intro and outro are also good ideas, but they're often just a simple chord or melody that you can derive from the chorus. I think it'd be best to leave them out at first to save time if you're manually classifying the sections. But with the help of a classifier, they'd be a great addition.

I'm a complete amateur at Python and machine learning, but I'll see what I can do. Thanks!

@ghost
Author

ghost commented Aug 7, 2019

@HackerPoet What does 'O' do when running live_edit? It seems like it's trying to output its recreation of the training MIDIs, but I've noticed that most of the outputs have a bunch of mistakes, while some are so near-perfect that they were probably taken directly from the training MIDI files. If it is indeed outputting its recreation of the training MIDIs, then I might have a mistake in my dataset.

I trained my model on 8-bar MIDIs, and each MIDI spans 2-4 octaves. When writing the code, did you put any limitations on the MIDI? I saw that you use a 96x96 grid, so I think 4 octaves should still be acceptable.
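
For checking a dataset against that grid, here is a minimal sketch that renders one measure of a MIDI file into a 96x96 piano roll with pretty_midi; the 96-step time resolution, 4/4 meter, and the pitch offset are assumptions based only on the grid size mentioned above:

```python
import numpy as np
import pretty_midi

def measure_to_grid(midi_path, measure_idx=0, pitch_low=16, steps=96):
    """Render one measure of a MIDI file to a 96x96 binary grid.

    Assumes 4/4 time and a constant tempo. pitch_low is the MIDI note
    mapped to row 0 (a guess -- 96 rows cover 8 octaves, and notes
    outside [pitch_low, pitch_low + 95] are dropped).
    """
    pm = pretty_midi.PrettyMIDI(midi_path)
    tempo = pm.estimate_tempo()
    measure_len = 4 * 60.0 / tempo              # seconds per 4/4 measure
    start = measure_idx * measure_len
    grid = np.zeros((steps, 96), dtype=np.uint8)  # time x pitch
    for inst in pm.instruments:
        for note in inst.notes:
            if note.start < start or note.start >= start + measure_len:
                continue
            t = int((note.start - start) / measure_len * steps)
            p = note.pitch - pitch_low
            if 0 <= p < 96:
                grid[t, p] = 1
    return grid
```

Since 96 rows span 8 octaves, a 2-4 octave melody fits easily; if the recreations look wrong, a more likely culprit is measures not lining up with the grid's timing assumptions.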
