This is all the music software I've developed! NanoTone Synth is my microtonal tuner, and Celody Life is my cellular automata music generator. Both were created with the Seraphim Automata engine I developed in 2016.
NanoTone Synth is a microtonal synthesizer and tuner. It's a useful tool for comparing the accuracy of different temperaments and exploring unique harmonies. You can use the software in your desktop browser, or buy it on itch.io here.
Examine any temperament from 5-TET to 240-TET, and compare its stats to 12-TET.
Examine the accuracy of each harmony, up to 13-limit ratios (a sketch of this comparison appears below).
Play music in any temperament, so you can hear the notes yourself.
You can also use it to tune your instruments; for example, tune a guitar in 24-TET to explore quarter tones.
Use the mouse, touchscreen, or keyboard to play notes. Use the left and right arrow keys to change temperament. Use the up and down arrow keys to change the menu. Use the Z and Q rows to play music. Press L to view the help page.
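To make the temperament comparison concrete, here is a minimal sketch in Python. It is not NanoTone's actual code, just an illustration of how an n-TET scale's accuracy can be measured against just-intonation ratios up to the 13-limit; the particular ratios and temperaments listed are assumptions for the example.

```python
# Illustrative sketch (not NanoTone's actual code): how closely an n-TET
# temperament approximates some common just-intonation ratios, in cents.
import math

JUST_RATIOS = {                      # an assumed selection of 13-limit ratios
    "perfect fifth (3/2)":  3 / 2,
    "major third (5/4)":    5 / 4,
    "harmonic 7th (7/4)":   7 / 4,
    "11th harmonic (11/8)": 11 / 8,
    "13th harmonic (13/8)": 13 / 8,
}

def cents(ratio: float) -> float:
    """Size of an interval in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

def tet_error(n: int, ratio: float) -> float:
    """Error in cents between a just ratio and its nearest n-TET step."""
    target = cents(ratio)
    step = 1200 / n                  # size of one n-TET step in cents
    nearest = round(target / step) * step
    return nearest - target

for n in (12, 19, 24, 31, 53):       # a few temperaments to compare
    errors = [f"{name}: {tet_error(n, r):+.1f}c" for name, r in JUST_RATIOS.items()]
    print(f"{n}-TET  " + ",  ".join(errors))
```

Running it shows, for example, that 12-TET's fifth sits about 2 cents below a pure 3/2, while its closest note to the harmonic seventh (7/4) is roughly 31 cents sharp; these are exactly the kinds of stats the software lets you compare across temperaments.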
Celody Life is generative music software. It uses cellular automata from Conway’s Game of Life to generate chords and melodies. You can buy the software on itch.io here.
Use the mouse to select cells. Press play to activate them. You can change instruments, tempo, musical scales, keys, and cellular automata rules.
It was created one night by merging the keyboard code from my game Seraphim Automata with Cameron Penner's code for the Game of Life. Hopefully this can inspire someone in the fields of music or game design.
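For a sense of the general idea, here is a minimal Python sketch, not Celody Life's actual code: it runs Conway's Game of Life and reads each generation's live cells as a chord. The column-to-scale-degree mapping and the pentatonic scale are assumptions made only for illustration.

```python
# Minimal sketch of the general idea (not Celody Life's actual code):
# run Conway's Game of Life and read each generation's live cells as notes.

SIZE = 8
SCALE = [60, 62, 64, 67, 69]          # assumed scale: C major pentatonic (MIDI)

def step(grid):
    """One Game of Life generation on a SIZE x SIZE toroidal grid of 0/1 cells."""
    new = [[0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            n = sum(grid[(y + dy) % SIZE][(x + dx) % SIZE]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            new[y][x] = 1 if (n == 3 or (grid[y][x] and n == 2)) else 0
    return new

def chord(grid):
    """Map each column containing a live cell to one note of the scale."""
    cols = {x for row in grid for x, cell in enumerate(row) if cell}
    return sorted({SCALE[x % len(SCALE)] for x in cols})

# Seed with a glider and print the chord produced by each generation.
grid = [[0] * SIZE for _ in range(SIZE)]
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y][x] = 1
for gen in range(8):
    print(f"gen {gen}: {chord(grid)}")
    grid = step(grid)
```

Celody Life applies the same basic principle to both chords and melodies, with the scales, keys, and automaton rules all selectable.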
Additionally, I have some music generators that I've chosen to keep private until now. I developed Algorithm for Angel Wings in 2017. It was originally a sequel game to Seraphim Automata, tackling ambient music rather than SA's theme of generative jazz music.
However, I quickly came to the conclusion that Angel Wings was too powerful to release to the public at that time. There was already a very human-like quality I recognized in the generated music. I could see the possibilities, but also the apocalyptic implications for ordinary musicians. That was a major turning point for me as far as public releases go.
You can hear Angel Wings at work in the ambient bbydoll album, I Am Anastasia. This is the 1.0 generative ambient style, with predominant usage of piano. I plan to switch up the samples and generation methods for future bbydoll releases.
In the wake of the "AI boom", I decided to create a new type of music generator on July 4, 2023, inspired by the use of beat breaks in hip-hop. It's called remixgen. It's great for dance music and remixes, and it can generate anything from fast-paced jungle to slow trip-hop.
You can get a taste of it on bbydoll's debut live jam la vita nuova, which remixes my old theme song "My Fire Opal". I hope to use remixgen more soon, once I add more to the code!
I'd like to clarify that my music generators are procedural and do not involve machine learning or neural networks. Music can be broken down into very simple universal laws compared to, say, language. I think procedural music generation will always outmatch machine learning music in terms of quality and sophistication.
Anyway yeah, enjoy the music software I've released, and check out my bbydoll albums if you're interested in generative music!
Project KOTONOHA (Japanese for "language" or "words") refers to a collection of notes I have pertaining to life sims, creature raisers and a-life systems that could potentially understand language.
This project began to take shape after attempting to write a book on game development. I was gathering up all my notes on games in a file called "The Gamer's Codex" to see how much of the writing was worth expanding upon.
I realized what I actually had was a detailed document on how to create life simulators in various fashions (think games like The Sims, SimCity, Animal Crossing, Tamagotchi, etc). I have been quietly exploring these ideas since then, mostly on paper.
I was considering making a prototype in 2021-2022, but ChatGPT's public release in late 2022 made me seriously reconsider how safe it is to release certain creations to the public - particularly when they're still unfinished and unfocused.
Just think of all the variations that fall under the umbrella of "living beings". Or all the nuances and depth of a well-lived "life". Or all the little details that can be experienced in one day of a single "life". A "life" simulator could go in so many directions that I'd like to carefully outline my goals and models before proceeding.
For now, the goal is simply to create an "olam" or "world" with individuals (NPCs) that rely on their senses. It has to be complicated enough for emergent, unpredictable phenomena to occur, yet written in as few lines as possible.
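As a rough illustration of that bare-minimum goal, here is a tiny Python sketch of a world whose individuals act only on what they can sense. Every name and rule in it is an assumption made for the example; it is not Project KOTONOHA's actual design.

```python
# A minimal sketch of the "olam" idea: a tiny world whose individuals act
# only on what they can sense. All names and rules here are assumptions
# for illustration, not Project KOTONOHA's design.
import random

WIDTH, HEIGHT, SENSE_RANGE = 20, 20, 3

class Individual:
    def __init__(self):
        self.x, self.y = random.randrange(WIDTH), random.randrange(HEIGHT)
        self.hunger = 0

    def sense(self, food):
        """Return only the food this individual can perceive nearby."""
        return [(fx, fy) for fx, fy in food
                if abs(fx - self.x) <= SENSE_RANGE and abs(fy - self.y) <= SENSE_RANGE]

    def act(self, food):
        """Move toward sensed food when hungry, otherwise wander."""
        seen = self.sense(food)
        if seen and self.hunger > 2:
            fx, fy = min(seen, key=lambda f: abs(f[0] - self.x) + abs(f[1] - self.y))
            self.x += (fx > self.x) - (fx < self.x)
            self.y += (fy > self.y) - (fy < self.y)
            if (self.x, self.y) in food:
                food.remove((self.x, self.y))
                self.hunger = 0
        else:
            self.x = (self.x + random.choice((-1, 0, 1))) % WIDTH
            self.y = (self.y + random.choice((-1, 0, 1))) % HEIGHT
        self.hunger += 1

food = {(random.randrange(WIDTH), random.randrange(HEIGHT)) for _ in range(30)}
world = [Individual() for _ in range(5)]
for tick in range(100):
    for being in world:
        being.act(food)
print(f"food left after 100 ticks: {len(food)}")
```

Even a toy loop like this can produce behavior that wasn't scripted, such as several individuals converging on the same patch of food; the real system would need far richer senses and drives than this.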
More lofty goals include teaching the individuals "language" in a fundamental manner that allows them to have context and understanding that systems like ChatGPT do not have. But this will come later. One long-term goal is to make "communication software" that can help people learn how to understand each other and communicate better.
While much of the data for this project comes from psychology and neuroscience, I also find a lot of practical uses in ancient systems like the Chakra system and Kabbalah's Tree of Life (it's an algorithm!).
The rest is classified for now, I suppose. Still, feel free to reach out via my contact form on bandcamp if you want to chat about these ideas, or about creating something together.