some fat fingered chords via thumbjam…
Pulled out the ipad and fired up thumbjam to add some steel guitar to yesterday's pad. I like it! This is mostly just a proof of concept - but I think I'll take this basic idea and try to make something out of it. This is a new way of using thumbjam - directly recording via Logic instead of using the TJ loop recorder and exporting loops. Fits my process mo' betta.
changed to Carlos's "super just" tuning and boosted the gain - which unfortunately accentuates the low-volume noise/distortion in the plugin - can't wait for Jari to fix this! But yikes! The G (second note of the "melody" in the 3rd section) really sounds out of tune with this tuning. It's quite different from the just tuning Terry Riley uses - which I found quite subtle.
Repetition legitimizes. Repetition legitimizes. Or so I hope. Inserted a variation of the intro transposed down to male voices and with some chord inversions and went ahead and appended the new section with some overlap. Not enough of a variation, but maybe something along these lines can work. (though now it sounds even less coherent). Some of the dynamics are still a bit unnatural…
I think my tools are getting in the way - for this style of music, I "hear" stuff in my brain, but the piano roll view in the DAW doesn't lend itself to capturing it. Dare I say that I may need to get better at using a score editor? Should that be a goal for the new year?
Read up on how others are coping with Spitfire's orchestral samples. They are very quiet compared to everything else I use. Apparently it's just a known thing due to their sampling philosophy. So nothing fundamentally wrong with cranking the gain to get things to a more normalized loudness.
Doodling a bit with choir samples. I like the start; don't like the cadence - went a little off the rails (or perhaps more accurately, fell into a cliche).
Revisiting Iasos's harp - updating the windows app to support the new timing params in the new firmware. Now need to think about how to have him update his firmware.
For this track, mostly just fooling around with the mix and some eq - added a couple pendulate patches at end that don't quite fit, but might work with some rearrangement.
(Finally) released new collection at https://chinenual.bandcamp.com/album/shift-reduce. Dithered for weeks trying to decide whether to release these all together or try to make separate more coherent albums (one more sequence/loopy, one more dark and ambient, perhaps even another with the more experimental tracks). Decided to just lump them together and call it a day.
Still don't have a real plan for what to do with some of the better "acoustic" / "orchestral" stuff I've come up with…
No new music production today. Revisited the track listing for an upcoming bandcamp release. Still coming up with nothing for some tracks (I really like the tracks, but can't "describe" them…) - here's what I have so far…
* Open Loop - ?? <sequence heavy>
* Swamp Thing - This started as a polyrhythmic sequence exercise and took a left turn into the jungle.
* Chaos Theory - Polyrhythms gone amok. Twelve overlapping sequences in 6, 9, 16, 18, 24, and 28.
* Luminescence - An exercise in pure additive synthesis that turned into something almost orchestral.
* Depth of Field - ?? <pianobook samples, mens choir, and reversed electric piano>
* Free Fall - Floating back to the seventies…
* Primordial - ??
* Diffusion - ??
* Ejecta - ??
* Laminar - An exercise in the use of shimmer reverb.
* Zoom In - 5 seconds from the opening of a Scarlatti keyboard sonata stretched 50x.
* Resonance - Steve Reich meets Philip Glass meets Vangelis.
Also frustrating that the latest orchestral piece doesn't "fit" - I think it's probably the best thing I did all year, but it really doesn't fit with the other tracks…
revisit the symphonic doodle from several weeks ago. Axed the most recently added section and added this. not sure it's an improvement… This improvisational approach works well when it works, but it has its limitations. May need to get more mechanical/theoretical to finish this one off.
I still really like the first section (up to about 1:40). Second section (to about 2:30) is OK. I'll sleep on this last part. It seems to be getting better as I relisten to it, so maybe some of it is salvageable.
First attempt at putting sounds to an existing video. Video from wedistll.io by Loris Lamuniere. The music isn't polished and the ending is abrupt, but it gave me a chance to try some new things in Logic. Not sure how often I'll be able to do this - most of the free video I can find on the net is very short - this was one of the few examples as long as 30s.
work on the tangerine dreamy thing some more. double up (actually triple up) the Synergy drone track and put some automation on a bandpass eq moving up and down as the track progresses. Adds a chord change to D. Where else can I take this?
watched Christian Henson's latest video: give yourself constraints. Something I've found very productive in the past. Will definitely double down on that going forward.
Christian released the Winter Voices choir samples today (I'm in there somewhere…). Sound pretty nice! This also uses the "Family Grand" samples he released yesterday - his family baby grand sampled with a binaural mic - even nicer. A little out of tune just like my home piano, and the binaural sound is scary - with headphones, it sounds like I'm sitting at the keyboard.
Some experiments with sidechain input to Sculpture instruments. Mostly intended as a cheater way to implement Iasos's approach of using a vocoder to modulate envelopes and filters on his other synths. I don't like how the original audio bleeds through on these patches. But will continue to experiment.
woot! even though you can't change the size of the pattern region once you start editing it, you can split it – so I can take an 80-beat (16-bar) pattern and lop off 10 beats to allow me to follow chord changes at 5/8 boundaries. I'm sure I'll find other annoying limitations, but for now, this looks usable. Will continue this going forward. Couldn't help but add some virtual Synergy (the internal patch I mentioned a few days ago with a "spread" effect).
spent most of the day working on DX7 vs. Synergy algorithm analysis. Found some time to work on the sequence. Reimplemented the 16-bar bass sequence in the new step sequencer and as long as I keep things aligned at 80 beats, it can work. But even this early in the game I'm tempted to shorten one section by one measure and I can't do that with this technique. The lead-in bass variations are more to prove that I can do it - not 100% happy with how they turned out.
experiment with the new Step Sequencer in the new Logic. Frustrating! Adapted the 5-note sequence from Alchemy and like the basic functionality BUT. Within the pattern region I can adjust each row to be a 5-16th-note pattern. But I can't figure out how to tell the sequencer this is a 5-note (or 10-note) sequence. It insists on multiples of 4 (in this case 12 16th notes). When I put this sequence back-to-back, you hear 9 repetitions of the 5-note sequence then a glitch where it starts over after only 3 notes (each row apparently restarts whenever the whole pattern region loops, and the region length isn't a multiple of 5). A workaround seems to be to ensure that I resize the pattern region before making any edits. Then change step length and adjust each row to a multiple of the measure length. Yuck. First sequence (in E) is before the workaround (if you listen carefully at about the 5s mark, you can hear the discontinuity when the sequence restarts mid-pattern) - the second sequence in F# is the workaround.
This is now the 3rd (4th?) re-implementation of the original embedded sequence. This new UI is really nice and I'd prefer it if it wasn't for this stupid length behavior. Need to think on which approach to take forward…
helped Don with the DX7 translator. (adapted his notes to working Go code so he can see how it works and can start making his own edits).
Added a "no flow control" option to Synergize in hopes of helping a user having trouble with his USB cable.
Have been admiring a Synergy demo on youtube for months thinking it was some really clever custom patch only to learn it is in fact one of the factory patches with one of the knobs turned extreme left. I'm going to have to find a way to use this in a track Real Soon…
another day of software stuff. Worked with Finland to capture more hardware samples from my Synergy for his analysis/comparison with his virtual machine. More back and forth with the new user in the UK having serial port problems. Built a clean VM to confirm my Go project is "good" and then walked the collaborator in AR through getting his machine happy.
Not a whole lot of time for music, but some. Ensuring the patches I've been using for sequences can also be played musically with the sequencer turned off. While I think I've managed to do it, this actual sequence isn't good. Throw away.
spent an hour and a half on the phone with Apple and they managed to get the app store to upgrade my Logic via numerous reboots, cache flushes, login/out of the app store. (props to Apple support even if their software is getting worse and worse)
… only to discover that the new synergia virtual instrument fails validation with the new AU validator. So reported that to Finland…
meanwhile spent the morning helping a new Synergy owner to diagnose his serial port config – we still haven't fixed it, but he found me on bandcamp and said nice things about my music :)
and discovered an edge-case bug in Synergize due to some discussion between a user and Hal Alles (! the Bell Labs guru who designed the Alles Machine and is affectionately known as "the father of the Synergy"). Fixed it and will work on getting an emergency release out today.
Found some time to work on the sequence deconstruction. Moved the internal sequencer modulator into the Arp layer so I can create numbered patterns that I can switch to through automation… Definitely has that Berlin school vibe…
Implemented a DX7 sysex parser for a collaboration with a fellow Synergy guy who wants to try to auto-convert some DX7 patches to Synergy. Will be awfully cool if we can pull it off, even if we only manage to convert some of the parameters…
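For the record, a rough sketch (in Go, since that's what the project uses) of just the first step - recognizing a 32-voice bulk dump and validating its checksum before any of the parameter mapping. This is only the framing, not the actual converter code:

```go
// Sketch only (not the actual converter): recognize a DX7 32-voice bulk dump
// and validate its checksum. Framing: F0 43 0n 09 20 00 <4096 data bytes>
// <checksum> F7, where the checksum is the two's complement (low 7 bits) of
// the sum of the data bytes.
package main

import (
	"errors"
	"fmt"
)

const bulkDataLen = 4096 // 32 voices x 128 packed bytes

func parseDX7Bulk(sysex []byte) ([]byte, error) {
	if len(sysex) != bulkDataLen+8 {
		return nil, fmt.Errorf("unexpected length %d", len(sysex))
	}
	if sysex[0] != 0xF0 || sysex[1] != 0x43 || sysex[len(sysex)-1] != 0xF7 {
		return nil, errors.New("not a Yamaha sysex message")
	}
	if sysex[3] != 0x09 || sysex[4] != 0x20 || sysex[5] != 0x00 {
		return nil, errors.New("not a 32-voice bulk dump")
	}
	data := sysex[6 : 6+bulkDataLen]
	var sum byte
	for _, b := range data {
		sum += b
	}
	if (sum+sysex[len(sysex)-2])&0x7F != 0 {
		return nil, errors.New("checksum mismatch")
	}
	return data, nil // 32 packed 128-byte voices, ready for per-parameter unpacking
}

func main() {
	_, err := parseDX7Bulk([]byte{0xF0, 0xF7}) // obviously too short
	fmt.Println(err)
}
```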
wanted to try to deconstruct the sequence from a month ago so I could morph the sequence during the track. Decided this would be a good use case for the new step sequencer in the newer version of Logic. So I tried to update Logic Pro only to find it refuses to update unless running Catalina. The previous version (10.6) was available for Mojave, but they updated to 10.6.1 5 days ago and the current one refuses to install on Mojave and does not offer a "more recent" version compatible with my OS. So playing some games in Parallels to see if I can get the older version via a "clean install". Doesn't work! Posted to the forum and sent in a support request to Apple…
I decided to just keep trying. After about 10 attempts to install on the new Mojave vm (each failed saying Catalina required), I got the "would you like to download an older version" prompt.
Why yes!
But of course that failed. (dialog says "could not install: cancelled"). So I tried again (and again (and again)). On the 5th attempt to download the older version, it started a download! Unfortunately, it just installed the version I'm trying to upgrade. The newer versions listed as available only 5 days ago are not offered!?
Why do we put up with this sort of thing? :(
Pondering adding some small tempo changes to Introspection. This speeds up a teeny bit in the middle section and then slows back down to the original tempo in the last dozen or so measures. I like this. Should I try something more exaggerated? I don't think so - I don't want this to be very noticeable, but I think it helps give the central section some extra movement. Ugh - bounced a "normalized" export which is much louder and reveals more phrasing problems I didn't notice when quieter. Rephrased and re-bounced with an additional 5 dB gain. Guess it wasn't done! I don't know that anyone else will notice any of these changes, but I just can't leave it alone until the really obvious stuff is fixed…
experimented with pitch shifting the choir samples to get some extended bass for doing men's ensembles – sounds reasonably good down to about 5 semitones (which gets me to a low Bb2, which should work for all but the most extreme 'basso profundo' (F1!?)). Didn't use it in the current piece, but nice to know I can if I need to.
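For my own notes, the interval math behind that transposition (whatever the sampler actually does internally, the ratio is the same): a shift of n semitones is a frequency ratio of 2^(n/12), so 5 semitones down is roughly a 0.75x rate. A throwaway Go check:

```go
// Not what the sampler actually does internally, just the interval arithmetic:
// a shift of n semitones corresponds to a frequency (or naive playback-rate)
// ratio of 2^(n/12).
package main

import (
	"fmt"
	"math"
)

func semitoneRatio(n float64) float64 {
	return math.Pow(2, n/12)
}

func main() {
	fmt.Printf("down 5 semitones:  ratio %.4f\n", semitoneRatio(-5))  // ~0.7492
	fmt.Printf("down 12 semitones: ratio %.4f\n", semitoneRatio(-12)) // 0.5000
}
```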
thought I was done, but on relisten the intro dynamics were quite poor. One minor phrasing tweak in the bass intro, but mostly completely revamped expression and dynamics over the top of the tracks with my nifty touchosc x-y controller. Sounds better. Not sure it's "done", but definitely better.
major touch up of the intro (dropped 5 measures, reworked the bass/violin harmonization). Removed the pulse from the bass line at the end. Fixed the pan on the bass (and had to ease off its volume to compensate). Had to tweak Kontakt MIDI setup to keep two track modulations from interfering with one another. Are we done? (something is still bugging me with the initial bass/violin section…) Needs a name… Introspection
explore some of the non-orchestral parts of the tundra instrument. Thought I'd try to limit this to a 60s blurb, but the droney style doesn't really fit into a minute… There's definitely some useful stuff here - will need to finish up the orchestral piece and get on to something new…
got a call from Iasos - he received the arduino, plugged it in and… it works! He is very excited (I am very relieved :))
coming to realize the way I've set up these Kontakt patches isn't "right". (should have taken the time to create a multi-instrument on a single MIDI channel and allow keyswitching to route to the necessary instance). But given how much I've broken things (and tried to reapply phrasing/articulation) over the past few days, I'll probably leave it as is and just remember to do it "right" on the next piece…?
Fix the dynamics of the sustained violin at the beginning. Extend the melodic intro - I think this is directionally good, but need to tweak the melody a bit - I like the hint of repetition, but this reuses too much of the first phrase. (will also need to tweak dynamics to make for smooth legato melodies) Adds some brass and choir at the end, fixes the quarter note bass pulses.
spent some time trying to create short song description blurbs for a bandcamp release and got depressed that they aren't very coherent. Might need to separate things into separate collections (and wait a while longer before I have enough to "release" something). Should I try to explicitly focus on a certain style to build up more output for such a collection? (not inclined to at the moment since I'm having fun working on this moody orchestral thing which isn't like anything I'd probably post to bandcamp…)
extend the choral mmms at the end and add some string pulses under it. the mix at the beginning seems to keep changing each time I revisit this - probably due to the articulation setting futzing.
more prerelease testing and coordination with Finland. Loaded an open source articulation set for BBCSO - works! Ran into problems creating my own for the Choir library - have a support query for guidance…
Tweak choir articulation in the tundra doodle. Add a few more bars… Am getting reluctant to extend this too much before I solve the articulation configuration.
add some choir to the tundra doodle. These samples are really quiet. Even when expression and dynamics are maxed, it's pretty dang quiet. Also, none of these new libraries are going to be very usable until I spend some time creating Logic articulation maps - frustrating that Spitfire doesn't provide them.
spent the day hacking javascript and managed to turn the envelope graphs into interactive editors. The way envelopes are defined in the original editor (which maps directly to the low-level hardware representation) is probably the most confusing thing about the Synergy - I think the users are going to like this…
on a lark, recorded 3 poor vocal tracks (intended to be a bit sloppy, but more out of tune than I intended) and then messed around in Logic to find some sort of interesting effect. Tried distortion of various types, filters, delays, chorus, flanging, etc. Ended up with a Leslie cabinet, big reverb and some eq. Submitted it to the #PianobookWinterVoices project.
Trying to make room on the laptop, so moved the iTunes library, bounce folders and large sample libraries to one of my linux servers. Tried to load via NFS, but Kontakt can't load off a case-sensitive file system(!?). So tried via Samba - VERY SLOW (3-5 min load time for a 4G sample set). Then it crashed Logic with a stack trace showing I/O via SMB. Screw it. Repartitioning my backup drive to make room for putting samples on a local drive.
Meanwhile, revisit some older tracks in prep for a new bandcamp release. Changed some sirens in Primordial to a scraped cymbal noise.
Tried to come up with clever names based on the Steve Reich-like marimbas, the Philip Glass-like triplet arpeggios and the Vangelis Chariots of Fire Oberheim, but came up with nothing. Named it: Resonance. Cleaned up some unwanted delay automation that snuck in during some copy/pastes. Removed some delay which seemed to be what Laurie was objecting to yesterday. Tweaked some velocities. Added some teensy randomization to the mallet note positions to try to keep it from sounding too robotic. I'm growing weary of this and want to call it done, but I think I'm going to need to review the backing tracks after the triplets - it's quite muddy and cluttered.
beta testing the integration of the next version of Synergia/Synergize. Decided to try revisiting my transcription of my old Synergy piece ("Atmos"). Sounds pretty good without any tweaking. Will need to work on the bass and tweak timbre/amplitude sensitivity and portamento - it's not sounding quite like the original yet. This is now the 5th attempt at the piece: 1) original Synergy recording on 4-track tape; 2) attempt earlier this year to replicate it by ear in Logic using Logic instruments; 3) attempt to drive my Synergy as 4 separate external MIDI sessions (tedious!); 4) attempt to sample my Synergy; 5) finally this, using the Synergia virtual instrument. Discovered some new aliasing noise in some voices; working with Jari to fix. And every time I listen to this I cringe at the timing of some of the keyboard/mallet phrases. Really need to go back and fix that…
assembled the other two harp controllers and all works as expected. Drilled holes in the first enclosure - the large holes for the MIDI connections are not quite as neat as I envisioned. Will think on how to clean it up…
more marimbas. tweaked the dynamics on the main marimba sequences and added a triplet counterpoint at the end. I like it as an ending - to extend this, I'm going to need to add at the beginning - or splice something into the middle? mmmm…
created a second circuit board from first principles and… it works! Desoldered the arduino from the first board and tested with the prototype breadboard and… It's Dead Jim. Happy it wasn't due to bad soldering or poor board design, but bummed I have a dead cpu.
stumbled on this patch in alchemy - surely I need to make some tangerine dreamy something out of it… However the sequences are embedded in per-voice modulation and I don't know how to control that via automation (want to be able to evolve the sequence over time - can do that with the main sequencer, but don't see a way to do it with the per-voice modulation…) May need to deconstruct the patch into independent tracks.
bought another nano, connected it up to the prototype KEY/MIDI connectors and confirmed that it runs properly. That confirms that the problem is in the stripboard and/or soldered connectors/connections.
no connectivity between 5V and GND in the prototype; about 1.6kOhm between them on the board. https://forum.arduino.cc/index.php?topic=549573.0 suggests this is normal, but it's different and the only thing I see "wrong" about the circuit. Have checked/double/triple checked connectivity between pins, wires, DIN connectors. Checked/double/triple checked that all drilled out parts of the board are in fact disconnecting each side of the hole. May have to wire up a new board and just abandon this one.
play a bit more with the marimbas. More simple patterns in other tracks. This isn't going where I expected, but still like it…
cut the remaining traces, soldered in the arduino, uploaded the firmware and… nothing! time to debug…
play around with a new instrument - Pendulate - a bit of west-coast-y synthesis. Nice glitchy presets, but many are wired to be modulated by control signals my controller doesn't support. Might be tempting to find an MPE enabled controller to play around with this…
final touches on new Synergize release - fixes some bugs, improves the editor workflow and most importantly, changes the mechanism to find the virtual instrument in a much more reliable way.
about 80% done with the initial stripboard based harp adapter. Haven't soldered in the arduino yet and have some traces that need to be cut before I can power it up and see if it works… This was definitely one of those measure-4-times-before-you-cut sort of things. Had some wires misplaced even after the third "review". Fits into the case, but I don't have the courage to drill/cut holes in the case yet…
an attempt to reproduce a sequence and modular patch Christian Henson used to demonstrate "systems music" at https://youtu.be/KH6g_IXrKSQ?t=146 Not 100% accurate, but I like it… the 32 over 31 polyrhythm may be a bit too subtle though.
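A quick sanity check on just how subtle: the two lines drift by only one step per cycle and don't line back up until lcm(32, 31) steps have gone by. A throwaway Go calculation (not part of the patch, just the arithmetic):

```go
// Just the arithmetic, not part of the patch: the 32-step and 31-step lines
// drift by one step per cycle and only realign after lcm(32, 31) steps.
package main

import "fmt"

func gcd(a, b int) int {
	for b != 0 {
		a, b = b, a%b
	}
	return a
}

func lcm(a, b int) int { return a / gcd(a, b) * b }

func main() {
	fmt.Println(lcm(32, 31), "steps before the two patterns line up again") // 992
}
```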
Decided I really like the stretched Scarlatti but am uncomfortable distributing it more widely for fear of a copyright infringement claim from Sony. So recreated the first 9 seconds of the sonata starting with some MIDI from kunstderfuge.com by John Sankey, then editing it a bit in an attempt to emulate Horowitz's ornamentation. Used Logic's Boesendorfer samples - sounds pretty decent, and definitely good enough to be stretched.
A bit dry on inspiration, so I opened up the Oblique Strategies for advice. "Reverse". (so reversed the chord buildup and also stepped down the shimmer pitch shift bit by bit). Mmmm. Sounds OK - but is this an ending or a transition?
Released Synergize 2.2.0 and started testing the latest VST beta. It's still not handling state saving properly :( and… Logic grew to 79G of RAM. Force Quit. Now any time I try to open even older projects, it grows to about 22G and freezes. Uh oh… Hope this isn't due to the VST beta…
Extend the intro vamp and add some shimmer reverb (ramp the reverb as the chords ascend at the end). Probably want to keep this fairly sparse so might not be adding too much to this. Stumbled on a nice bass line playing triplets with two delays over the 4/4 piano giving a bit of a 3 over 4 feel. I like the shimmer and the bass line, but the synth lead falls flat for me…
need to try something new. Here's a just-tuned felt piano sketch of a simple progression that I think might work. Might extend the intro as a long vamp and then give in to the rising progression as a way to give a sense of movement. These are simple triads; I'm trying to resist the urge to add too much color (7ths, 9ths).
rewired the arduino and basic MIDI is now working. Had to fool with timing, but a short delay in the input processing seems to avoid the need for explicit debounce logic. Added support for loading presets - testing here with a major and minor triad. Sent video to Iasos and he is stoked. Next up: use NVRAM for storing presets across reboots, then support for programming it via USB, then finally a Windows app that uses that USB interface to visualize and configure the harp.
extend the cicada intro a bit, experiment with some bass, add another vibes phrase. (I hummed the bass line, then used flex-pitch to convert it to MIDI - which did an awful job, but at least preserved the timing and some of the pitch, then lots of manual tweaking to get a fluid bass line). (temporarily(?) back to equal temperament)
Not synthy at all (even the spectral synth "cicadas" almost sound like a "real" field recording) - the rest is sampled "jazz combo" instrumentation.
saw a recommendation for a different image/spectral analyzer tool to "choose images with a dark background" – so I decided to play with the spectral analyzer in alchemy again - this time importing an Unsplash image of some leaves and then tweaking some spectral shimmer and sheer. This sounds kinda cool.
Finally got possession of the harp controller hardware. It may not look like much but this is one of Iasos's spare controllers – the heart of the instrument is the two rows of touch sensitive pads along the top of the controller (he doesn't currently use the musical keyboard at all, but I plan to use it as a cheater way to do preset selection/scale modification during performance). I was able to confirm that the Arduino code the previous developer had used to reverse engineer the serial protocol it spews works to scan the state of the hardware. Now to get down to business to implement the scaling and MIDI logic.
Also got Jari's latest Synergia beta and am starting final(?) integration testing.
continue to try to extend the orchestral doodle. Basic approach is: stick with legato phrasing, offset note changes to keep things sounding fluid, improvise a 20-ish bar melody in some instrument, then add improvised lines in each of the others and then tweak here and there when something sounds particularly off (that worked particularly well for the intro). It still sounds OK (though this last bit is pretty uninspired - well, pretty not so good). There are limits to the approach - it's not going to get me a unified piece with a clear "direction". Perhaps try some repetition to reinforce earlier themes instead of just continuing to create new melodies?
frustrated by inability to use the midi controller to "latch" dynamics after the fact - some sort of config setting is preventing the CC's from registering except during initial recording.
experimented a bit with just intonation - trying to emulate the tuning Terry Riley used on Shri Camel - but none of the systems I've tried sounds like what I'm shooting for and I can't find any details on what Riley was using at the time… Was able to find a paper analyzing a later album (Harp of the New Albion) which does have precise tuning ratios.
So here I'm trying to use that. First phrase is played with just intonation. Then played again with equal temperament. Then again with both laid on top of each other. The differences are very subtle. I guess the "trick" to accentuate the tuning is going to be to choose scales and tonality very carefully…
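For reference (generic interval math, not the specific Harp of the New Albion ratios): a ratio r spans 1200·log2(r) cents, so just fifths land within a couple of cents of equal temperament while thirds and septimal intervals are where the tuning actually shows - which is probably why this phrase sounds so similar in both tunings. A quick Go sketch of the comparison:

```go
// Generic just-intonation vs. 12-TET comparison (standard interval math, not
// the specific Harp of the New Albion ratios): cents(r) = 1200 * log2(r).
package main

import (
	"fmt"
	"math"
)

func cents(ratio float64) float64 { return 1200 * math.Log2(ratio) }

func main() {
	intervals := []struct {
		name  string
		ratio float64
		et    float64 // equal-tempered size in cents
	}{
		{"perfect fifth 3/2", 3.0 / 2.0, 700},
		{"major third 5/4", 5.0 / 4.0, 400},
		{"harmonic seventh 7/4", 7.0 / 4.0, 1000},
	}
	for _, iv := range intervals {
		c := cents(iv.ratio)
		fmt.Printf("%-22s %8.2f cents (%+6.2f vs ET)\n", iv.name, c, c-iv.et)
	}
}
```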
revisit the mix on the orchestral intro - brass less prominent and better balanced. Similarly violin dynamics tweaked towards the end to keep them from getting too strident. Extend the violin intro phrase a couple of measures and add a bunch of stuff after the intro climaxes - but it's not (yet?) living up to the intro.
got touchosc to work as a dynamics/expression controller w/ the orchestra samples. Wasted a lot more time trying to find out why Logic gets confused by the noisy MIDI CC messages that I am filtering in the Environment - they keep getting picked up when I try to "Learn" mappings. Can't explain it.
not much music time today. barely had time to do some doodling around looking for some interesting chord voicings. and that's all I've got to post for the day…
add a bit to the shimmery thing. Play with the pitch shift in small continuous variations during the final section. It really sounds nice, but now I need to make it "go somewhere" :) tried altering the shimmer piano phrases a bit at the end, but not sure it really helps. Tried introducing some variations on the sequencer patterns, but it sounds too fussy/busy. Need to sleep on this…
Revisiting the orchestral thingee I was working on a couple weeks ago with the new (to me) Spitfire Discovery free samples. Wow! These are still not "great" orchestral samples, but they are a lot easier to make sound good than what I was using… No new composition - just substituting in new instruments and tweaking dynamics. (well, not quite true - I added a measure at the beginning, staggered the entrance of the celli and basses, doubled the bass line with bass trombone and shifted some lines by a 1/4 or 1/2 note to keep the note changes more fluid).
spent the day banging my head against development tool experimentation (trying to find a simple way to create a UI for a musician running 32-bit Win7 - finding tools that support Win7 in 2020 is a challenge). I originally thought I could just create an old school Win32/MFC statically linked C++ application, but I was not able to get modern MS tooling to cooperate. Perhaps if I were a more Microsoft-y kinda guy, I would have found a way to convince the latest Visual Studio to target Windows 7 – but alas this project's encounter with Microsoft tooling reminded me why I've avoided them for most of my career :). I considered Qt or GTK, but frankly, working with C++ is something I swore I'd never do after writing the Tower Eiffel compiler (our motto: "You Deserve Better than a C+"). Eiffel never really caught on, but thankfully Java gave the world a decent alternative to C++. But I digress…
In the end I found a pure Go library [tadvi/winc] that provides very simple GUI support with no external dependencies - easy to compile, no unexpected DLL conflicts on his machines. I would have preferred to use the [andlabs/ui] library, but its dependency on the mingw compiler toolchain turns out to be too fragile (link issues caused by incompatible versions of mingw?). I wasted a lot of time trying to get it to build a 32-bit windows exe and gave up when I found tadvi/winc. Hopefully winc proves stable enough…
so some fugue machine noodling to avoid being called a futureland slacker… In 7 this time…
Arduino day. Finally moved my prototype fridge alarm from my Uno based breadboard into a Teensy based soldered perfboard. Making room for doing some prototype development for Iasos's golden harp project.
had a little time left over to add a counter sequence and try adding a bass with shimmer. Tried a new chord modulation but it's not yet working…
Created a "shimmer reverb" effects chain via a tutorial on the web. May try to save this for reuse. This sample is a simple electric piano fed through that effects chain with gradually increasing (a bit beyond what sounds "good") then decreasing amounts of pitch shift/shimmer. Then fooling with the pitch shift amounts to be fractions of an octave and continuing down to a negative octave.
experiments with Paul Nasca's audio stretching algorithm. I've always liked Leif Inge's 24-hour Beethoven 9 transmogrification (9 Beet Stretch). Now I can do my own! Here's the opening 4 seconds of one of Scarlatti's piano sonatas stretched 50x. Bach organ also sounds good. Gregorian Chant may work the "best" for reuse as a backing track, but ends up as a fairly common-sounding ambient choral drone… In any case, as unoriginal as this might be, I really like this stretched Scarlatti…
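The core trick, as I understand the algorithm (this is just my sketch of the idea, not Nasca's code): take analysis windows from the source far more densely than the stretch factor would suggest, keep each window's spectral magnitudes but randomize the phases before resynthesis, and overlap-add the results so the output smears into a drone instead of stuttering. The per-window spectral step looks roughly like this in Go (windowing, FFT/IFFT and overlap-add assumed handled elsewhere):

```go
// My sketch of the core Paulstretch idea (not Nasca's code): each analysis
// window keeps its spectral magnitudes but gets fresh random phases before
// resynthesis, so heavily overlapped windows smear instead of stuttering.
package main

import (
	"fmt"
	"math"
	"math/cmplx"
	"math/rand"
)

func randomizePhases(spectrum []complex128, rng *rand.Rand) []complex128 {
	out := make([]complex128, len(spectrum))
	for i, bin := range spectrum {
		mag := cmplx.Abs(bin)                // keep the magnitude...
		phase := rng.Float64() * 2 * math.Pi // ...replace the phase
		out[i] = cmplx.Rect(mag, phase)
	}
	return out
}

func main() {
	rng := rand.New(rand.NewSource(1))
	spec := []complex128{1, 2i, complex(3, 4)} // fake 3-bin spectrum, just to show the call
	fmt.Println(randomizePhases(spec, rng))
}
```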
half heartedly tried to start a new piece but sounds trite - don't like it.
so… new ipad sequencer: Fugue Machine. This has potential (it meshes with my polyrhythmic proclivities nicely and is easy to stumble on stuff that sounds interesting). This is a minor elaboration of one of the demo patterns (added a beat to make it 5/4).
Discovered another source of free samples (pianobook.co.uk). This is built on two of Christian Henson's F*ckbox samples. (plus a choir sample I found long long ago and an Alchemy based bass and LABS Monochord sample towards the end). Tried my hand at a reversed electric piano effect - it worked much better than I had hoped… Need to work a bit on the bass choir dynamics, but it's shaping up very nicely…
Downloaded and substituted a different string sample (still 100% LABS) which sounds less raspy and works better for this. Some of the overall dynamics (especially the choir swells) are still not right and the mix is muddy, but I'm not sure I'll try to fix. Also still debating whether to continue/extend this.
Change out the initial felt piano for a strummed dulcimer and double it throughout. Add 24 measures in a slightly different chord progression. Lots of tweaks to the cello dynamics to keep it from getting too raspy. Some snail's-pace "ornamentation" to the felt piano and eq to make it sound a bit less soft. 100% Spitfire LABS samples with only a bit of EQ. I particularly like the looped piano that sounds like an organ (first and last sounds you can hear in the mix). At 2:11 it's a bit short (unless you think of it as 'film' music) but it sort of feels done-ish. May insert another verse or two before putting a fork in it. Naming this Soft Focus.
need to test dynamics with these sample libraries - all of the initial measures are at very lowest velocity and intended to be very quiet. How do these samples sound when played "loud"?
So… Studio Strings don't respond to velocity at all when the mod wheel is enabled. Mod wheel makes things respond reasonably. The Orchestral strings don't have equivalent settings(!?). Similarly the orchestral horns (which I was using) don't seem to have a mod wheel dynamics option. The studio horns work differently but are a much more "chamber" / individual instrument sounding set of samples. This is frustrating.
Inspired by a film score masterclass from Jono Buchanan on youtube, decided to try to put together something orchestral. Initial phrases weren't working too well. Then listened to the last bit of the Labyrinth of Music Theory podcast which has some interesting brass drones under some of the sections. Tried to adapt that idea, but ended up with this instead - I like it so far… Need to take care to not let this get too similar to a brass choir piece I did several months back - I'm starting to hear some echoes of that in my head… Just as the dude said in the class, tweaking orchestral samples to make them sound realistic is hard and takes time…
re-imported MIDI from the Appelian project - over the wire this time (more accurate MIDI capture) and created new instruments from scratch to more or less match the basic sound from Appelian. This was more a case of "because I left it in such a sorry state" than "I'm inspired to make music with this".
Spent the day fighting mDNS. Decided to try to import some midi from the other day's Appelian project. What a mess - I don't do this often enough and 1) remembering the quirks in Appelian (apparently need to cycle through midi interfaces for each track in order to "arm" the output), 2) remembering that the network session needs to get joined on the mac side, 3) setting the "auto demux" setting on the Logic project. Then you end up with no spare patience to make the actual import sound good. Here ya go…
wanted to start something Berlin school so playing with the alchemy sequencer - 4 variations on each of the two main patterns. This is too busy for what I was planning, but it sounds kinda cool on headphones (but awful on my speakers). Maybe I'll run with it. Same sus chords as yesterday's piano doodle, FWIW.
revisited arpeggiated piece i set aside a week ago. Small tweak to the beginning of the melodic track. Re-recorded the second half of the melodic track - using the first live take and I sorta like it. Relistened to the fade out and it's actually not too bad. Mix is not quite right, but I think it's basically done - will need to fix the mix and come up with a name…
Double down on the C chord for the final several measures and feed everything into the same exaggerated reverb that the organ uses throughout for a finale. Added a HP filter on one of the string derived instruments to get rid of an unwanted low frequency "thumping" noise in some sections and smoothed out the modulation on the upper partials on the sustained bell track. Not sure it's done - but basically, I really like where this has ended up! Calling this one Luminescence.
Experimented with the Alchemy spectral "image" import. I may return to that for a future piece, but the sounds - bizarre and interesting as they are - aren't fitting this piece… (first sample is a picture of a tomato tart, the second is from one of my wife's recent pen and ink sketches).
Trying to build a bit of a climax (in an oh so ambient way :)). I suspect I'll be rejecting nearly as much as I keep…
played with spectral resynthesis and got some interesting results, but didn't end up with anything suitable for this piece. Instead added a bit of the bells, "organ" and "tuba" voices and brought back the sustained bell sound on top of yesterday's "strings". It's definitely taking a turn for the ambient, but I think with some pseudo orchestral overtones. The fundamental theme is a periodic reminder of tonality through the "organ". Everything is in A aeolian mode, but the overtones of the bells and resynthesized strings sometimes camouflage that. It's getting difficult to edit this - some of the notes are so long and evolve over 10s of seconds - just going back and tweaking something requires listening to 20-30s of prologue to ensure all the supporting voices are audible while auditioning… (new stuff starts at about 4:00)
tweaks to the sustained bell track. Noticed some discontinuities in yesterday's mix. Played with the "organ" track - tried various treatments, even pitch bending, but ended up with a really ringing reverb - sounds nice and feeds into a new "strings" track. Experiment with resynthesis (the new track after the organ started life as a cello, then was run through a granular resynthesis with some noise added in via a slow LFO). Not sure it's going to stay, but that's what I did today…
inspired by yesterday's additive experiments, I'm going to try to compose something sort of 'symphonic' (but it will probably end up pretty ambient) with purely synthetic sounds and no presets. So far so good, but I'm going to need to reverse engineer this improvised tonality if I have any hope of extending it very much. Sketched out some basics on paper first in an attempt to have a more strategic view of how to proceed. We'll see how this turns out…
After some successful but largely lucky (monkeys typing Shakespeare) modifications to various Alchemy patches, I've decided to try to better understand its fundamentals. Running through some online tutorials focussing on its additive features and reminding myself just how flexible its modulation routing is. I really like this second patch - modulation mapped to the number of additive partials. First patch is more subtle, modulating 3 partial volumes with some slow LFOs; third patch is more monkey-typing variation on the second, but I now have a better idea of what I'm trying to do… Finally an organ-like patch with some oscillator tuning modulation via a multi-step envelope.
Testing without cover art
Level and filter tweak on the lead arpeggio track - sounds a bit better on the reference speakers. Cut 3 measures before the bass starts. Repeat the bass section and add some chord changes in 4/4 in the arpeggiated track. Improvised a melody over the repeated bass section. String pad voicings are sloppy, but still sound OK - maybe leave as is? (but need to diagnose/fix the unwanted swells). In search of an "ending" now…
Switch the intro to 7/8 and alternate with some 4/4. Makes it a little more interesting. (yay) Tweak the main ES2 patch to be a bit softer and less "piano" like (yay) - sounds ok on headphones, but still sounds a bit harsh on my reference speakers. (boo). Some unexplained volume changes on the pad towards the end. (boo).
had an idea for building something based on simple one-note rhythmic patterns. Sketching it out on piano, but will likely move this to synths if I can convince myself the basic idea will work. First section is in 4/4 repeating every 2 bars; second section chops up each pattern into various lengths and repeats each independently.
Learned how to filter MIDI events in Logic's somewhat intimidating "MIDI environment". Have finally silenced the noisy pot (CC event) that has been an irritant for months. HOWEVER, the settings apparently can't be persisted as a default so that the next project uses them. I will need to create a new "project template" (and remember to use it :( )
Back to the arpeggios. a few more measures with the bass, change the mix to emphasize some of the dirtier arpeggios and an abrupt "finish"? Naming it: Open Loop
Heard what sounded a little like a Peter Gabriel instrumental (but wasn't) as background music in a TV episode and decided to see what I could do with the basic idea. (on further thought, a bit of Fripp's Exposure version of Here Comes the Flood must have also been in the back of my head). So nice to not have my software fighting me! The result isn't very Fripp- or Gabriel-esque, but I still sorta like it. The piano is intentionally very repetitive, with the thought that most of the movement could come from the synths. But my test audience was focussed on the piano. So maybe I need to introduce some variation there…
Fixed! In trying to isolate a possible problem with one of my USB audio or midi devices, I started pulling plugs. When I had pulled them all (the last one being my serial cable (!?)), Logic started working normally! And better, when I put them all back in it's still working normally. This will probably remain a mystery, but at least I'm not stuck.
Removes the annoying ring modulation in the intro and adds another 32nd arpeggio at the end. But basically this is just where I left it 2-3 weeks ago. Need to think about where to take it.
restored the previous version. It also has the transpose error. The previous version to that was prior to most of the new tracks. It's getting weird. Started a brand new project. Imported only MIDI data into it and created new instruments from first principles. Verified that the MIDI data has no hidden automation (e.g. pitch bend). Yet it's still hosed when I audition directly from Logic (the first track has a bad wobble at the start and the third track is definitely transposed). But when I exported the audio to prove my point, the exported mp4 file actually sounds correct. (though it's not captured on this screen recording as intended). Arrrgghhhh. Hopefully this is a clue, but I'm still very very confused…
decided to revisit the arpeggio piece from a week or so ago and… I did it again. Managed to screw up note transpositions and have no clue how. Notes show where they are supposed to be, the track, regions and instrument have no built-in transpose, there's no midifx transpose on the track, yet some tracks are playing 4 semitones higher than they should.
just discovered the pitch tracking feature in TJ. Here I'm experimenting with whistling vs. humming. It's not perfect, but it's as good as any I've ever tried (have tried many pitch tracking approaches over the years and this seems pretty good - many won't track at all when I hum in my normal range - this seems to do fine whether hummed in my bass voice or whistled). However, it's been sooo long since I sang regularly I've lost a lot of my pitch control :)
this may not sound like much of an accomplishment, but it is the result of importing yesterday's TJ improv as MIDI into Logic, then using TJ as an external MIDI / audio source to re-record. There's some tempo fixup too now that I am in an editor where that is possible. The TJ samples are expressive, but not great (some clicking artifacts and the bowing is not something I can actually control), but it sounds 'good enough' to make me want to try to extend this and see where I can take it.
chasing ui test automation glitches - brain is fuzzy, but futzed with the last section's left hand voicings and tweaked the bass line. Have been trying to think of how to revisit the A section, but can't make it work. Maybe this can be an ending?
And with that… it's my one year anniversary on Futureland! Only the 350th day of output - not 365 - since I didn't upload each and every day (I'm still conflicted on whether @internetvin's "upload anything even if it's facile" approach is better than "I'm sick/traveling/whatever and don't have anything meaningful to say today"). I've been practicing the upload-at-all-costs approach for the past couple months and it's working out - even though many days' uploads are pretty paltry… I do think it helps combat the tendency to overthink how "motivated" or "inspired" I am on any given day and helps me avoid getting 100% absorbed into software projects as is my wont.
Next section uses a technique I first tried a few months back for the solos in King Kong (I tried to automate it via javascript then; this is manual) - apply a rhythmic filter to a busier set of notes. I sort of like it, but now need to figure out how to pick up the original bass riff or something like it again…
Slight change of approach to TJ looping. Recorded a number of smaller snippets and then arranged them in Logic. Also recorded most of the instruments dry and added reverb and some delay in Logic. Not quite as inspired as the loop from the day before yesterday, but technically this approach should work better going forward.
Synergize 2.0.0-beta2 released. Got two semi-luminaries from the Synergy community to join the Slack group - one of the original developers, and the son of the author of the user manual. They seem to be interested in trying to get the lead developer to join. The only thing better would be if they convince Wendy Carlos to pop in to say Hi.
Meanwhile, I tried to remind myself how to use Aphelian - here's a short sequence…
revisit the extreme portamento string thing I did some weeks back, but reworked to use the Synergy portamento/sustain functionality. A bit tricky to deal with "external MIDI" devices in Logic, so there's not a lot of new music here - it's mostly been about juggling midi/audio/track configuration. Not to mention trying to cope with the polyphonic portamento behavior on the synergy…
this is just wrong. thought I'd try to work on overlaying more simplified "riffs", and had a Doobie Brothers riff floating around in my head all morning. But then had trouble finding something more or less compatible. Who would have thought the stick riff from King Crimson's Elephant Talk would fit??
Kubrick's 2001 floating around in my brain, so decided to try to adapt Aram Khachaturian's Adagio from Gayane. Found a decent MIDI file to kick start this, and thought I'd try to work on tempo, articulation and instrument design to try to make this "my own" (?). The feedback on the guitar instrument is hard to control - in some cases it sounds pretty good, but by the end it is obscuring the harmony… But this, like other times I've tried to adapt a classical piece (e.g. Eric Whitacre and Palestrina), is quickly turning into disappointment. This spring's Bach oriented stuff was an exception since it was less literal in the use of the original piece - more transformation/munging which made it more fun to play around with. Probably won't finish this. Gives me pause on ever trying the Barber or Shostakovich 5 adagios… Gives me new appreciation for Isao Tomita and Wendy Carlos!
Figured it out. TJ claims to use the filename as a hint about a sample's pitch, but it doesn't seem to do a very good job in practice. Here's one of the patches I used in Distancing auto-sampled in Mainstage and imported into TJ with each sample manually fixed up for proper pitch - sounds completely different (but still kinda cool - and adding a lowpass filter and using the 2D touchpad to control it is really expressive).
Try to build something around a rhythmic looping patch from the Synergy library. It's much harder to use this via Logic than it was on the onboard sequencer since I have to find a way to match the loop frequency with Logic's meter and tempo. This isn't quite right, but perhaps with enough patience I can make it work.
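The arithmetic I keep doing by hand to line the loop up: if the patch's internal loop repeats every T seconds and I want it to span N beats, the project tempo needs to be 60·N/T BPM. Trivial Go version (the numbers below are made up for illustration):

```go
// Just the arithmetic (made-up numbers): if the patch's loop repeats every T
// seconds and should span N beats in the project, the tempo must be 60*N/T.
package main

import "fmt"

func tempoForLoop(loopSeconds, beatsPerLoop float64) float64 {
	return 60 * beatsPerLoop / loopSeconds
}

func main() {
	// e.g. a loop that repeats every 2.4 s, treated as one 4/4 bar:
	fmt.Printf("%.2f BPM\n", tempoForLoop(2.4, 4)) // 100.00 BPM
}
```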
fooling around with the looper on thumbjam again as I try to decide what to do about yesterday's piece…
In the end I like it - though it could use more fine tuning in some of the phrasing. I don't feel bad about using the sampled piano, but even though it sounds pretty good, I'm not completely comfortable with the fake sampled guitar. Need to decide where I draw the line…
put aside some time to experiment with the Logic Scripter to see if I could use it to "split" a track - can't. The API is very restrictive. So went back to the gomidi project which I'd played with a bit a few months ago – it's in the midst of a big reorg and what worked before no longer works. Will let it stabilize a bit before I try again. So I decided to play with Sculpture (physical modeling synth). It really is pretty slick. Here is some sort of mutant bowed combo steel / nylon string thing with some noise overlay.
Intended to take yesterday's arpeggiation section, slow it down and extend each individual note to make legato glidey phrases. Seemed like a simple thing to try - but I can't find a reasonable way in Logic to split a track with overlapping notes into separate tracks. Select lowest/highest is error prone since I keep deleting the remaining event when there are no more overlapping notes in some sections. Split by notes is way too fine grained - I end up with single measure tracks. Do I need to write a program to do this!? This is what failure sounds like (unable to really get things separated out so they can be articulated properly).
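If I do end up writing that program, the splitting itself is just interval partitioning - walk the notes in start order and hand each one to the first voice whose previous note has already ended, opening a new voice when none is free. A hypothetical sketch in Go (abstract note events, no MIDI I/O; a real version might prefer pitch-ordered assignment for SATB-style splits):

```go
// Hypothetical little utility (not Logic's behavior, just classic interval
// partitioning): assign each note to the first voice that is free, opening a
// new voice when every existing one is still sounding.
package main

import (
	"fmt"
	"sort"
)

type Note struct {
	Start, End float64 // beats
	Pitch      uint8   // MIDI note number
}

func splitVoices(notes []Note) [][]Note {
	sort.Slice(notes, func(i, j int) bool { return notes[i].Start < notes[j].Start })
	var voices [][]Note
	for _, n := range notes {
		placed := false
		for v := range voices {
			last := voices[v][len(voices[v])-1]
			if last.End <= n.Start { // this voice has gone quiet - reuse it
				voices[v] = append(voices[v], n)
				placed = true
				break
			}
		}
		if !placed { // every existing voice is still sounding - open a new one
			voices = append(voices, []Note{n})
		}
	}
	return voices
}

func main() {
	notes := []Note{{0, 4, 60}, {0, 2, 64}, {2, 4, 67}, {4, 8, 59}}
	for i, v := range splitVoices(notes) {
		fmt.Println("voice", i, v)
	}
}
```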
Need to be careful to not let the synth programming get in the way of making some music… The attempt to adapt a Palestrina motet last week was an utter failure, but I've picked up this failed attempt from last fall at adapting Eric Whitacre's Lux Aurumque to synth. What the heck - he's arranged it for chorus, horn choir, even strings. Why not give some synthiness a try? It's going to take a lot of fine tuning to get some patches to do it justice, but here's a start…
Wasted the day trying to get my head around D3.js. Taking a break and revisiting another 30 year old Synergy doodle. This time I'm actually using one of the original patches (one of Carlos's pipe organs). The rest via logic soft synths… I was mostly interested in deconstructing the organ chord progression. Not sure I've really got it - the resolution to Eb minor sounds close, but not quite what I hear on the tape… Also the brass samples don't really replicate the original. Not sure I'll continue with this, but if I do I might try a more synthetic brass sound as in the original.
For the most part Mainstage's auto sampler works fine, but I hit what seems to be a bug when sampling two of the voices. When processing note D#-1, Mainstage went nuts, flashed the screen at each velocity range, displaying a noisy waveform. Retried several times, but always the same. Worked around by shifting the resampling one semitone lower to avoid D#-1. Weird. Wasted a few hours figuring that out.
The resulting sample instruments are very quiet. Cranked them up to max in EXS24 and the mixer and still not loud enough. Adjusting the velocity helps a bit and will probably be required to get proper articulations, especially with the mallet. But still not there. So now decisions… double down and try to make these samples work, or just go back to the original soft instruments and tweak the phrasing…
Today's audio excerpt is a few measures of Original, a few of the Logic redux, then a few of the Logic version with the multisampled synergy (pre-tweaks).
various disappointing starts. Tried to adapt Palestrina's Sicut Cervus as an atmospheric synthy thing - sounds awful. Relistened to my transcription of my old Atmos synergy piece - some of the mallet parts sound klunky. May revisit that and fix up the articulation and timing. But today I'm going to try to determine which synergy patches were actually used and multi-sample them so I can try to use them more easily from Logic and use them in the transcription…
Most of the day spent programming. A breakthrough on VRAM decoding, then revisiting the financial java app to add new features for expense tracking. Spent a bit of time trying to make some headway on music. Revisit the piece from a couple weeks ago. Can hear something to add over the top of the transitioning sequence, but can't make it happen. Ended up with this - which I'll probably end up throwing away.
I don't see any way to merge the new riff with the original, so split the project, calling the new riff "ambient2".
Fine tune the rhythmic patch - some of the extreme ends of the morphing don't actually sound good in context. "simplify" the arpeggiated morphs, but add a bit of embellishment in other instruments as the riff progresses.
Wasted(?) most of the day on an experimental MIDI proxy for the Synergy. It's not working yet and the more I think through some of the limitations in the firmware, the less enthusiastic I'm getting to get to the bottom of the bugs…
Revisiting the distortion exercise from a couple weeks ago - I messed it up trying to "improve" it, so restored it from backup and did some little tweaks. Calling it done.
very encouraging reactions from the Synergy users group! I think I'll get pressured into adding all kinds of stuff. Today was hardening the serial I/O - found some bugs when trying to port to linux for one of the more visible users.
today's challenge: do something in a major key. I'll start with a chord progression from a pretty famous piece and see if I can hide that I stole it. I think I've got some of the chords wrong here - will need to revisit before I progress much further…
final push to get the app into a releasable state. No brain cells left over for music… Announced the beta to the synergy users groups – let's see what happens…
https://chinenual.github.io/synergize/
lots of progress on the ui - all functionality is now hooked into the ui. sanity tested that it also works correctly on windows. Some minor styling issues on both mac and windows (different issues on each - grrrr.). Need to come up with a robust way to handle serial connection resets and hangs.
a quick three-track "live" improvisation using acoustic instruments. I like the arrangement, even if the melodies are not quite there. Might try to make something of this…
preferences menus work, some error handling fixes.
frustrating session trying to use the Synergy as an external instrument from Logic (keyup events being randomly ignored even when playing the native keyboard - do I have another hardware problem!?), so fooling around with some alchemy presets. I like the vibe, but this is not what I thought I wanted to try next so will probably set it aside.
Experiment with driving the Synergy from Logic. Its MIDI implementation is pretty primitive - despite it being able to play up to 4 voices simultaneously via its onboard sequencer, it can't assign a voice per MIDI channel. So to get this, I had to record it like an old-fashioned multitrack - one voice at a time. 4 channels to drive the MIDI, another 4 to capture each voice's audio, recorded one at a time (which made auditioning voice changes and note changes very difficult). I hope I can find a more straightforward way to do this… (tempted to add a mode in the Synergize librarian to act as an OSC/MIDI host and then turn around and drive the Synergy through the serial port). For another day…
Meanwhile in software development land, the Synergy librarian UI is getting a bit more full featured. CRT display can now drill down to show voice details. Ripped out some old javascript libraries that affect notifications and spinners – will need to revisit all the error handling to make sure new replacements work as expected.
Yoooge progress on the Synergy file parsers. Can now decode CRT files and display them in the UI. UI is converted to bootstrap and behaves as expected.
Not a lot of brain cells left over for composition, so played around on aiva.ai again - unfortunately, more dross than inspiration. 20th century cinematic, electronic and jazz styles are not generating much of interest. The "modern cinematic" style seems to generate more variety and (to me) better pieces. Better training set? Here's a sample straight from aiva - not reorchestrated in logic.
Discovered aiva.ai via a comment on Hacker News. Hoped to use it as a source of compositional inspiration. So far, most of the "electronic" samples I've generated sound pretty similar. Exported this one to .MID and re-orchestrated it in Logic. Will probably keep dabbling with this a bit, but not sure it's going to be terribly useful…
Is it possible to hate CSS any more than I do? Simple stuff is simple but even the most mundane layout can get so HARD… I have it basically looking OK, but with a fragile hack. Tomorrow I think I'll try converting to Bootstrap.
Have confirmed that the Synergy is using CRC16/BUYPASS algorithm. The STDUMP command now works, including the CRC check!
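For the record, the check boils down to a few lines of Go. A sketch (not the exact Synergize code, but the same published CRC-16/BUYPASS parameters: poly 0x8005, init 0x0000, no reflection, no final XOR):

```go
// crc16Buypass is a bit-by-bit CRC-16/BUYPASS: polynomial 0x8005,
// init 0x0000, no input/output reflection, no final XOR.
// Sanity check: crc16Buypass([]byte("123456789")) should be 0xFEE8.
func crc16Buypass(data []byte) uint16 {
	var crc uint16 // init = 0x0000
	for _, b := range data {
		crc ^= uint16(b) << 8
		for i := 0; i < 8; i++ {
			if crc&0x8000 != 0 {
				crc = (crc << 1) ^ 0x8005
			} else {
				crc <<= 1
			}
		}
	}
	return crc
}
```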
Audience feedback of yesterday's piece is "nice, but isn't it a little repetitive"? :) That's becoming a recurring theme. Need to work on strategies to avoid that (first is to stop trying to turn a short melody/idea into something more without a more strategic plan on how the piece will be structured). Strategy vs. Tactics…
Not too inspired to start something new, but wondering if the last project (repetitive or not…) can be re-orchestrated as an acoustic piece. Not sure i have the energy or desire to complete this, but spent a little time to try some things.
Was able to log some useful data through the pseudo-terminal so have a new lease on life on getting the Synergize serial routines working. Worked most of the day on understanding the STDUMP opcode and have it mostly coded, but not yet working – have discovered another limitation of the serial port library I'm using; may pause and look for one that fits the problem better…
Cleaned up the descending scale segue and minor refinements to the ending. I think it is really done now…
spent all afternoon battling socat and interceptty in an effort to get a logging serial proxy between the SYNHCS emulator and the Synergy. In the process, I re-learned the fun of carrier detect and how a tty open will hang until CD is raised; and by the end of the afternoon, I have a working proxy. Tomorrow I start logging stuff…
Set aside 45 mins to get some real work done: Some tweaks to the final variation and then fade it out.
Maybe "done"? calling this "done" and naming it "Perimeter"
more Synergy diagnostics. Managed to get the original Kaypro software to work inside a MSDOS based CP/M Z80 emulator running via DOSBox. Some serial diagnostics work but it's failing to do most of the interesting serial comms. Working with the Synergy users group to see if we can pinpoint virtual machine config settings that might work better…
Hardly any time or brain power to produce any sounds today - but did add a few bars…
Beefed up debug scaffolding for the Synergy serial port IO. Found the bug in the CRT upload – for the record 0x6B != 0x68 :). Reverse engineered the CRC algorithm as CRC-16/ARC. However, the Synergy is NAK'ing the CRC. Not sure how to debug this - can't get a handle on the CRC the Synergy has computed as a comparison; don't have a way to run the Z80 code to validate the go based implementation… Need to sleep on this…
added another ~30 bars to the sequence piece… Some good, some not so good.
Installed a new EPROM in the Synergy and… it seems to work! While I had it opened up, I replaced the AC fan with a 12V DC one and soldered up a 12V voltage regulator. Old fan was like a jet engine, this one is practically silent! Here's a demo of the semi-unique way the Synergy handles portamento polyphonically
More head bashing with the go/electron integration. I have an open question to the developer, but I think i've convinced myself that I can't use one of his convenience modules along with the templating library I was using with webview. So I either need to stop using the convenience bootstrap or change the templating approach to use IPC messaging instead. Time for more musical distraction…
Going over to the dark side. converting my webview app to electron. there's a really nice golang package supporting it, but a dearth of examples (and the main example doesn't do things the way i want to…), so banging head a bit trying to make things fit. Started a new sequence based doodle to clear my head
Timbre and amplitude key velocity graphs displayed. This version tries to match the computation I read for the timbre and amp proportion doc generators, but had to scale the velocity values by a factor of 10 to get it to match the old doc output… mmm…
Hit some limitations of the webview package; am going to need to use something else.
Got inspired to try to reverse-engineer the 40-year-old Synergy Kaypro Z80 ASM based librarian/editor (SYNHCS) software. Spent too much time trying to think of a good name and then getting frustrated by crappy SVG editors when constructing the logo (but ended up with something that will fit well if I can pull this off). Threw together a quick POC of a golang/webview based app and will be attempting to decode the binary VCE files next…
Dodged a bullet. Took the daughterboard back off, removed the processor board from the chassis and re-sprayed everything in sight with contact cleaner, then reseated the daughterboard (this time able to apply back pressure on the processor board, so made a better connection I guess). Also recleaned and reseated the Internals EPROM just in case… Also replaced the dried out masking tape holding down the washer spacers with duct tape. I'm sure that made a difference too. LED's on powerup, so that's a good sign. Ran self test and… back to the original internal ROM CRC error. Phew. At least I'm back to where I started! Internal voices are all pretty bad (subjectively seems worse than when I started), but external voices still seem to work ok. Have found someone in the local "old computer" scene willing to help burn new EPROMs – that will be next step.
Also started fooling around with a DX7 emulator - here's a ditty using factory presets
In hopes that the Synergy's EPROM is actually OK, but perhaps just has some corrosion on the contacts, I removed the INTERNALS EPROM, cleaned with contact cleaner and retested: same self-test error. So, removed the daughter board, cleaned all of its connections, and reseated. Retested and…: she's dead, Jim. No LED's light up, can't even invoke self test. Uh oh… Time to pause and re-plan.
So back to the Logic transcription project. But having a hard time getting motivated. No actual progress…
Ended up wasting some time with a phone-based sequencer toy called "seaquencer". Started it up with one of its presets and just started morphing stuff.
Installed Mainstage in hopes that it will make it easy-ish to sample the Synergy – want to try to get some sounds off of it into some sort of usable format before I risk damaging it by mucking around more with the circuit boards… Only semi-successful – some notes are sampled nicely, but some are at wrong pitch, many have weird high frequency artifacts. In any case, here's a short example of using the instrument derived from patch #21 "SPOOK"…
Removed the phaser from the choir patch - it was causing an unwanted low frequency tremolo. Learned how to transpose audio so I transposed the original recording up a semitone so i can transcribe it in the key of C rather than B.
Learned how to use the beat mapping track with a MIDI click to manually set a tempo map from a fluidly played recording.
Reasonably good first cut at a transcription of the horn, bass and choir parts. Some of the phrasing is still stilted compared to the original though. Tomorrow finish the vibes…
Stalled on the Synergy repair, so back to tweaking Logic instruments. More attempts to tweak software synths to sound like the Synergy atmos track. The choir sound still doesn't really sound much like, or as good as, the SPOOK preset on the Synergy, but some reverb and phasing has helped a bit… I think a few added partials via the Additive editor have made it a bit more interesting. ADSR tweaks have improved the bass part.
Found great resources on the net (https://lanterman.ece.gatech.edu/synergy and dragonslair.ca/synergy.htm) with tons of original Synergy documentation, SYNHCS software, repair manuals, etc. I believe that back in the day I had a user manual but none of the voice library docs and nothing re: MIDI or Kaypro integrations.
So… brought the Synergy out of storage - couldn't find all the cables I needed, so out to Microcenter (had to wait in line to get into the store - only 25 customers allowed in the store at a time…) for a couple of RCA adapters. And… At first it seemed to work except only output on the Left channel. But then after a while (and after experimenting with MIDI), most voices were garbled and noisy. Took it apart and reseated some connections. No improvement.
Ran a self test: error 10/12: "internal voice rom crc test. The rom containing the internal voices is tested for proper data contents." Not good.
Turned it off and let it sit overnight. Turned it back on this morning and… it sounds normal. Ran the self test and… same error. And now back to garbled sound. So what does this mean…?
I really don't think that was right. The pulse I counted in 10 actually seems to repeat in 9. So it's 8 over 9 - this sounds much mo' betta to me. Also helps to have changed the bass pulse to a more similar sound that responds to dynamics more like the original. It still doesn't sound quite right - but is that the dynamics/instrumentation, or am I still fooling myself re: the rhythm?
I think I've figured out the rhythm of the piece I uploaded yesterday. Sounds like a "10 over 8" polyrhythm (8 pulses in 10/8 over 6 pulses in 8/8). Who knew?
This rhythm has been buzzing in my head since I first recorded it xx odd years ago and I never knew what it exactly was… I knew when I recorded it that I liked it, but never worked out exactly why.
This also reinforces how important dynamics and phrasing are - this sounded completely wrong until I worked on note velocities in all three tracks. Still doesn't sound 100% - I'm hoping that's due to the elec piano instrument choice which isn't really very similar to the original.
Finished archiving the old tapes. I did find the Synergy piece that I remembered and have been humming to myself for the past xx (don't ask!) years… I'm surprised that, despite the primitive tech and my clumsy keyboard skills, some of these still have moments of inspiration - I may mine them for ideas for new stuff.
Success! Worked around the problem with the tape deck: the tape height lifters are apparently stuck, keeping the tape too far away from the heads, so I threaded the tape above the lifters. Should be enough to archive what has survived.
I suspect I'll be busy with this for the next few days at least. Based on what I've heard on the first tape, there's not much I'm proud of from that era (it really brings home the power of the modern DAW - having to play live to a multitrack tape requires a lot more keyboard skill than I had at the time, or for that matter have now… but with a DAW I can more easily go back and fix stuff), but I hate to lose it - so I'll just archive everything without making too many judgements.
Including this one not because it's any good, but because it's the first successful tape digitization with this resurrected setup… FWIW, I believe this was made with my upright piano using a pressure zone microphone I made while an undergrad and a rented Korg Polysix.
Rush/Crimson style progressive rock may not be my norm, but it was liberating to have a clear "theme" and constrained instrument choices. Have had similar experiences when writing for acoustic instruments. Need to try to emulate that approach even when I'm working on other styles.
Meanwhile…
I resurrected my old 4-track tape deck in an attempt to digitize some ancient recordings from school days. Audio quality is abysmal – tried aligning the heads and no improvement. Could be that the audio preamps are toast? Not giving up (yet), but disappointed… I recall one short piece from the era that I'd love to recover – this excerpt is from a best-to-be-forgotten one. :)
added distorted "pedal" guitar above second half of the second section - some pitch bending, delay feedback and distortion automation changes to keep it semi interesting… Added some arpeggios and a similar guitar pedal to the slower last section and another chord change. Still not sure how best to link this all together
Spent most of my time trying to diagnose the "stuck" MIDI notes. To no avail. Also tried using the sequencer's "song mode" to automatically switch between sequence "scenes". Altered some of the patterns to try to mix things up a bit - but it was half hearted - mostly trying to get to the bottom of the MIDI errors…
A from scratch set of Aphelian sequences feeding string samples. There are some unexplained "hung" notes (but in the sequencer or in the MIDI connection from the ipad?) In any case, I sort of like some of them so i've left most of the "hung" notes in as "pedal" notes. The only cheat is that I've displaced each track by two measures to allow the transition to each new sequence to overlap instead of changing on the same beat. On repeated relistening, I may need to edit out some more of those hung notes. And each variation really isn't different enough from the last. But overall, I like this for a first try with a new tool.
Despite the new sequencer, I ended up spending time with a slow piano oriented piece inspired by a piece by Max Richter I heard on the radio (which this doesn't really resemble in the least…).
Need to sleep on this. I used string section samples - but I'm wondering how this would sound with solo cello and violin samples - eg. as a string quartet. May try that with VPO samples tomorrow…
The Bach Abuse Continues. This time freely mixing and matching, on a measure-by-measure basis, the original, inverted or reversed notes - this time not restricting which hand - so sometimes playing two left-handed measures, sometimes both right-handed, sometimes mixed. Sadly it sounds no better than the more constrained options I tried – I'm not even going to finish this - enough of the abuse! Time to get on with something less offensive :)
Found a free orchestral sample library (the Virtual Playing Orchestra) and am kicking the tires… Quite good samples but awkward to control articulations
Yet More Bach Abuse. Again the Two-Part Invention No.1. This time played simultaneously both forward and reversed, chopped up by measure, with each measure mixing forward and reversed material. Most of this sounds OK, but there are a couple measures where I really couldn't find a mix of forward and reverse that "fit". Still it's recognizably Bach…
Continuing the Bach Abuse project, here's the reversed 2-part invention, slowed way down and given some ambient treatment. I'm a bit embarrassed to admit that I like it… Calling this Inappropriate Appropriation
I look at this as a sort of generative music experiment. I made no note-by-note decisions. Basically, took Bach's music, altered it by reversing the notes and slowed it way down, then just added several instruments - some playing the notes in time, some delayed and/or transposed.
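The transformation itself is trivial to state in code - something like this throwaway sketch with a made-up note struct (I actually did it by hand in the piano roll):

```go
// note is a minimal stand-in for a MIDI note (times in beats).
type note struct {
	start, dur float64
	pitch, vel int
}

// reverseAndStretch mirrors the notes in time across the length of the
// piece, then stretches everything by factor (e.g. 8.0 to slow it way
// down). Pitches are left alone.
func reverseAndStretch(notes []note, length, factor float64) []note {
	out := make([]note, len(notes))
	for i, n := range notes {
		end := n.start + n.dur
		out[i] = note{
			start: (length - end) * factor, // mirror in time, then stretch
			dur:   n.dur * factor,
			pitch: n.pitch,
			vel:   n.vel,
		}
	}
	return out
}
```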
A bit rudderless at the moment. Not really inspired to do much of anything.
Played with xronomorph again, but didn't create anything worth recording.
Played around with MIDI automation to change Alchemy sequencer patterns - more awkward than I expected (MIDI values 1, 2, 3 do not correspond to ARP patterns 1, 2, 3 – there seems to be a threshold around 20 or 30). Perhaps this works well when modulating the sequence via an LFO, but it's not intuitive when using Logic automation.
Added a new solo via thumbjam and reviewed the rest. I think I'm calling this "as good as i can manage at the moment". Not sure how much of this is just having gotten fatigued listening to it over and over or how much it really has gotten a little better. But I need to move on. Calling this "done" for now (with apologies to Ed and Frank…).
Previous pitch-to-MIDI experiments haven't worked very well, but I'm finding that if I whistle, the pitch tracking is pretty accurate even though the timing of each note is usually quite bad. So this time I tried whistling while using a keyboard to record "timing" in real time. Can then post-process the derived MIDI events to fix up the timing. I may try to use this to make different solos. Or I may put this all aside again - that's a lot of work and I'm not feeling very inspired ATM.
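The post-processing step is simple in principle - pair the notes up in order, taking pitch from one take and timing from the other. A sketch with a made-up note type (and it naively assumes neither take dropped or added notes, which real whistling certainly will violate):

```go
// midiNote is a minimal stand-in for a note event (times in beats).
type midiNote struct {
	start, dur float64
	pitch      int
}

// mergePitchAndTiming takes pitches from the whistled (pitch-to-MIDI)
// take and start/duration from the keyboard "timing" take, pairing them
// up in order. If the takes disagree on note count, the extras are
// simply dropped - real life would need smarter alignment.
func mergePitchAndTiming(whistled, timing []midiNote) []midiNote {
	n := len(timing)
	if len(whistled) < n {
		n = len(whistled)
	}
	out := make([]midiNote, n)
	for i := 0; i < n; i++ {
		out[i] = midiNote{start: timing[i].start, dur: timing[i].dur, pitch: whistled[i].pitch}
	}
	return out
}
```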
ok, back to about where I started on Saturday. being very careful not to trigger the Logic bug, have tweaked the brass parts, splitting them into individual tracks and tweaking here and there. Brass sounds a bit better now vs. the various tweaks I'd tried back in December. Reenabled the solo track and have done some minor edits there. The parts that are "note for note" from the original arrangement sound ok - the solos are still pretty disappointing. not sure how much more tweaking will help vs. just starting over on some of the soloing…
ugh. found more aliased regions that got corrupted during the copy/paste (which I'd have realized yesterday if I'd listened more closely to the exported audio…). Fixed those and… the horn2 track samples are now hosed. WTF. This is bad. Suddenly I can't trust my tools.
I'd planned to split the brass and sax tracks into individual instruments as a way to try to get a more realistic sound. If the "stacked" section instrument is in fact the wonky part of the project, this should get me away from that problem.
but it doesn't! Split the first 26 bars of the horn section and exported a short section to show how nice the new arrangement sounds and… one of the trombone tracks is echoing… grrr.
aha - noticed a pattern. it’s always the “highlighted” track while exporting that gets hosed. as workaround, maybe i can just try to be careful and “highlight” a sacrificial track when exporting… worked once…
restored from backup. replicated the edits. All sounds good in run-through. But when I export the track to audio, the (@&^@ sax sample is hosed. Looks like the only way forward may be to start a brand new project and copy/paste tracks into it without carrying over any of the instrument/effects settings from the original. blech. Have done that, but now i remember all the work i put into trying to get the horn section sounding realistic… this is not going to be a "quick fix"…
Attended a master class with Gary Motley and Bob Mintzer and got re-inspired to finish up the Ed Palermo Zappa arrangement. Playing around with the solos to make them sound a bit more natural. These are still pretty poor solos, but timing and velocity tweaks really do make them a bit more palatable…
HOWEVER. Logic keeps corrupting settings for the solo sax and sometimes the horn section tracks - the sampler gets into an odd "too slow release" or too-much-reverb mushy sound. I can't determine why (even if I disable all effects, the raw sampler instruments still have the defects). In the process of trying to fix the sampler wonkiness, I managed to completely hose the file. Going to have to restore from backup and start over. Posting the wonky brass section sound for a collective WTF.
Slight change to the outro to get to the fast arpeggiation sooner and removed 8 bars in the middle. Audience feedback: leave the counting whispered throughout. Still not entirely happy with the next-to-last section, but may resist changing that. Had to re-record the count - some takes had breath plosives that were pretty annoying on my reference monitors. So… is it "done" now? FWIW, fades at decimal #147 now
another variation. try to string them together… Not entirely happy with some of the segues, but I'm getting tired of listening to myself count. Doubled the bass lines to add a higher octave - it's not really fair to expect speakers to get down to 30Hz (but it sure does feel good when they can :))
Despite some chords and leading tones not quite gelling, I may call this "done" - title "Irrational". For the record it fades out at the 172nd decimal position.
Have an idea for a piece harkening back to my very first computer program (which computed the square root of 2 to "infinite" precision) - and am reminded of Kate Bush's recitations of the digits of Pi.
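Getting the digits themselves is nearly trivial these days with Go's math/big - a quick sketch (obviously not the original program):

```go
package main

import (
	"fmt"
	"math/big"
)

// sqrt2Digits returns sqrt(2) as a decimal string with n digits after
// the point, computed as the integer square root of 2 * 10^(2n).
func sqrt2Digits(n int) string {
	pow := new(big.Int).Exp(big.NewInt(10), big.NewInt(int64(2*n)), nil)
	x := new(big.Int).Mul(big.NewInt(2), pow)
	s := new(big.Int).Sqrt(x).String() // "141421356237309504880..."
	return s[:1] + "." + s[1:]
}

func main() {
	// plenty of digits to count out where a fade at decimal #147 or #172 lands
	fmt.Println(sqrt2Digits(180))
}
```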
Experimented with the EVOC vocoder - it's much more limited than I'd hoped - control over unvoiced "consonants" is pretty binary - and hard to mix wet vs dry. Abandoned the idea of vocoded pads for this piece.
Learned how to create an EXS24 instrument from an audio recording. Sounds sort of cool underneath an ambient choir and bass patch. Hope I can keep this up and make it interesting…
this plucked string patch has some potential for a full on ambient treatment. Also gives me an excuse to finally use the tibetan bell samples I downloaded a few months ago. let's see… I don't intrinsically get excited about this sort of music, but it's kind of fun putting it together and it takes more effort to make it sound good than it may sound like. If I space out while "listening" to it, I sort of like it. It definitely hits the "ignorable" aspect of Eno's ambience definition - while not really achieving "interesting".
In the end I quite like this. Added delay feedback, filter cutoff and other patch/effects automation throughout. Fade out at the end. Calling this "done" and naming it "Spiralling Onward".
Back to being frustrated with mastering though - this sounds best in my Sennheiser 650 headphones, "ok" on my Spica's (which are a bit weak in bass), "ok" but differently on a pair of old Klipsch's (boomy bass and much of the subtle delay effects on some of the treble is lost in the mud…), and not too bad on a Fugoo bluetooth speaker. So who knows how this will sound to anyone else… :(
Doubling down. Rather than try to find some variance in melody/chord progression, I'm just going for more and more variations in rhythmic patterns. So it's a bit repetitive, but what the heck. a slight change to the intro, but mostly just a lot more added to the end. Some work better than others and prob need to shuffle order to allow things to build and/or decay in a more coherent way.
Started playing with a LFO driven bass and some slowly evolving delay / pickup settings on a Sculpture patch. But then extended with other rhythmic patterns. Needs some work to fine tune the delay feedback from being annoying and to keep it from being too repetitive, but I think there's potential here
Added an intro and outro focussing mainly on drum variations. Toned down the female voices a bit to control the unnatural "sampliness" of it but it still doesn't really sound right.
Rather than try to mesh this with the original drum/synth thing, I think i'll keep it separate and work on the original synth thing as a new piece. Giving this one a name: "Sunlight"
Personal insight: (somewhat embarrassingly late in the game given that it's explicitly part of the futureland manifesto :)): not every daily creation needs to be "art". Except for a few cases where I've explicitly uploaded a one-off throwaway, I keep getting frustrated when I start something that doesn't lead anywhere interesting. This current piece isn't likely to turn into anything I'm particularly proud of, but it does seem to offer a nicely portioned "daily exercise" (in this case, a simplistic 12-bar variation on a theme). Much easier to get motivated to create that next variation not expecting it to necessarily turn the piece into something more than it is.
Today I'll try to apply an Oblique Strategy – "Reverse". (after fixing up some poor note choices for the second variation)
added some delay on each trumpet and trombone in attempt to make the brass section sound "bigger" - sort of works. Extended bass lines where needed. Still more copy paste than I'd like. This is "complete", but the bass lines could be improved and the solos are still the "first takes" and are mostly too busy. The brass sometimes sounds decent, sometimes pretty thin and artificial. Still, it's not half bad and I'm not sure how much energy I have to improve this… The poor solos are very distracting, so posting the rest of the "finished" piece (with just the backing vamps and no solos).
Whether I return to this to try to create some decent solos remains to be seen…
Now that I have the basic theme transcribed (with help from written scores), I'm going to try to replicate Ed Palermo's arrangement by ear. (I have a dozen different recordings of King Kong in my library – the original Uncle Meat variations, Jean-Luc Ponty's, Zappa's later bands and Ed Palermo's big band - Ed's is my fave). His has an extended sax solo as the central core and I don't want to try to create a jazz solo to replace it - but I think it will be fun to just get all the rest of his arrangement figured out and replicated. Or so I think… The brass samples in Logic are pretty good, but missing a good sax section – the "section" samples also mix in brass so it's hard to separate out the parts. Making do with what I have, but it's difficult to make this really sound like a real band…
Tidal is wedded to the notion of a cycle with all sounds sharing that same cycle. Very difficult to break out of that model. Works well for dance music; might work OK for generating algorithmic drum tracks - but awkward for the sort of thing I'm trying to do. Not sure how much more effort I want to put into it…
So taking another break and transcribing some Zappa. Here's a beginning for King Kong. I cheated a little by finding a score on the net, but got all confused by the horn part key signatures. Need to read up more on how brass and horn parts are notated.
it wasn't straightforward, but also not that hard - I now have Tidal->SuperDirt->SuperCollider acting as both audio source and MIDI controller into Logic. As evidence, here's a short recording - first an audio sample from tidal (squeel), then another in track2 (saw synth), then a midi sequence in track 3 (drums), another midi sequence in track 4 (piano) and then some random "live" munging of the sequence patterns. Still not "music" - this is still just an exercise in making sure all the pieces can talk to one another as intended.
In a rut. So decided to try playing around with Tidal Cycles. Had an old version of SuperCollider installed - deleted it. Installed Tidal (haskell, SC, SuperDirt, Atom, etc.) Fired up SuperDirt the first time and… "Error - Index not an Integer" while initializing :( Eventually sorted it out and got basic Tidal examples to make sounds.
Don't get excited @internetvin - I haven't made any "music" yet - just making sure all the modules do what they're supposed to do so far… Have a vague idea of things to try when I get beyond raw newbie mode - but not sure if I want to concentrate on Tidal as a standalone audio generator vs. an OSC/MIDI controller (or some sort of hybrid).
resurrected the foot switch off my old Synergy to work with my current midi controller (had to disassemble and readjust the sustain spring tension.) It now works with the EXS24 piano – I had forgotten how much I depend on the sustain pedal when playing piano! I think going forward I'll have an easier time improvising piano parts.
Benefits of being on the beta test team - ipad sequencer is fixed, so I created three sets of looped patterns, exported them as individual loops and then rearranged/assembled in Logic. Could also play the loops live in the sequencer (that's more or less how the developer intended), but this gives me more control.
more experimentation with the new sequencer. found a bug in the pattern editor, but the developer seems pretty responsive; hoping for a fix soon. meanwhile, I imported some drum machine samples from Logic and created a pattern from scratch then fooled around with it in real time while recording. I think I'm going to like this.
Found better settings in thumbjam as a midi-controller for guitarish sounds (eg. pitchbend via H-tilt instead of V-tilt and try to avoid pitchbend via drag gesture.) Slightly better results. The larger layout on the tablet vs phone helps some too.
Searched for a simple-to-use step sequencer for the ipad - nothing "free" does what I want. Decided to try Lemur since it has gotten good reviews and is very customizable / programmable. Then wasted lots of time fighting app store problems - turns out apple's servers were down for a few hours.
Eventually installed Lemur on the ipad, but can't get the supporting daemon and editor utility to run on the mac. Frustrating! To be continued…
read up on Haskell (nightmare memories of APL from school daze…) and Tidal Cycles (live coding)
On relistening, the weakest part of this was the original rhythmic pulse :). Tried to remove altogether and that didn't work. It's specifically the "B" pattern that bothered me, so altered that. Messed around with the mix too - brought the "drum" and bass a bit up in the mix. Shortened it by 4 measures.
clean up the pads and build a bit via chord voicing, tweak the bass to be a bit less muddy (but it's still almost lost in the mix). Some of muddiness is inherent in the patches (especially the pad, but it's a bit better now that i reduced some of the exciter-driven harmonics) – but I like how they sound together… maybe this will just be a slightly cluttered/muddy piece. Giving it a name: Stubborn Hope
no matter how hard I try, I can't get excited about this dancy thing. I like the arpeggio and the transition leading to the kick - and even like the repurposing of the pattern to the bass and then its variation. But I can't really get excited about finishing this. Either I "kill my darling" and abandon the kick and danciness or put this whole piece back on hold yet again.
Tweaked some settings in thumbjam that make it work a teeny bit better for this patch - but it's still making tracks that need a lot of manual cleanup (very small thumb movement on the phone generates wrong notes - probably would work better on a larger screen). Still I managed to record something today - though there's not much of the original thumbjam take left after all the tedious manual cleanup… The result no longer sounds vaguely guitarish so I'll just call it a glidey synth. Maybe I should try letting thumbjam create audio on the phone and import that rather than trying to use it as a midi controller? (that allows for less manual tweaking tho - so not really practical in the long run). I think this is getting closer to the realm of "done enough", so I'll give it a name: "Mystified Stranger"
spent too much time trying to find a way to use my bass as a way to create guitar lines. thought if I could get a DI audio signal, I could use a pitch shifter to get it up an octave or two and what could be simpler…? Eventually managed to get the DI part working via an old Bassman preamp using the headphone output, but the Logic pitch shifter is CRAP. Completely useless at least in any way I've tried so far… The Wham pedalboard pedal plugin is a little better at pitch shifting, but it doesn't really solve the problem. So back to square one. Nothing worth recording…
added some cymbals and simple drum fills and replaced the bassline for a slightly more human feel. Need to re-record some of the piano (reduce copy/paste) and spend more time on the drum/bass - some of the phrasing still sounds a little clunky. Spent a lot of time trying to find a guitar-like sound for another track - have a close-enough sound (I think) but none of my takes via thumbjam are sounding good. may try using the keyboard. Added some filter swells in the pad.
got distracted playing around with the https://syntheshesie.buzzlight.com webapp. Reminds me a bit of Brian Eno and Peter Chilver's iPhone apps (trope, bloom) (without the generative aspect - this just plays what you draw - nothing more).
continuing on in more or less the intended style. trying hard not to fall into a rote copy of the song that i stole the inspiration from… (spent 30 minutes fooling around with a tape delay on the bass but that really did get too derivative).
it's a bit monotonous, but perhaps with some additional development I can turn it into something decent…
Not a lot of time and not super inspired today. Trying to at least find a chord progression for next piece. Settling in on a basic two chord "progression" from Pink Floyd. Fooling with the stock FM Piano patch sounds really nice - so maybe will just lay down something like this and see where it can go.
Naming things is exhausting (no new music created - just a lot of thrashing around with naming things). Came up with names for my "back catalog" of more or less completed pieces. Will try to do this as I create anything non-throwaway going forward.
amb -> Bigfoot Sleeps
brass2 -> It Is Possible
dreamy -> Lazy Water
dreamy2 -> Aimless Flight
dreamy5 -> Choose a Path
dreamy6 -> Smoking Circle
piano2 -> Unpaved Trail
piano5 -> Altered Fate
seven3 -> Captive Puzzle
spacey1 -> Deep Space One
string1 -> Obsession
strings2 -> Stalking Music
some more variations. some placeholders that probably still need to be fleshed out (or excised :)) Or maybe this is "done".
trying to start being more creative about titles - may start calling this one "Eastbound Countdown" instead of "strings3" … (ended up using an xkcd style password generator to generate random word triplets and selected some words. This was exhausting! Maybe I'll just go back to numbering things (opus1, opus2… :))
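For the curious, the "generator" is nothing fancier than this throwaway sketch (the word list file name is a placeholder):

```go
// Throwaway sketch of the title generator: print a handful of random
// word triplets to pick from. "wordlist.txt" (one word per line) is a
// placeholder for whatever word list is handy.
package main

import (
	"bufio"
	"fmt"
	"math/rand"
	"os"
	"strings"
	"time"
)

func main() {
	f, err := os.Open("wordlist.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var words []string
	scan := bufio.NewScanner(f)
	for scan.Scan() {
		if w := strings.TrimSpace(scan.Text()); w != "" {
			words = append(words, w)
		}
	}
	if len(words) == 0 {
		panic("empty word list")
	}

	rng := rand.New(rand.NewSource(time.Now().UnixNano()))
	for i := 0; i < 10; i++ {
		fmt.Println(words[rng.Intn(len(words))], words[rng.Intn(len(words))], words[rng.Intn(len(words))])
	}
}
```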
trite arpeggiator stuff. each track using the deadmau5 inspired basic sawtooth patch but each with slightly different filter and envelope settings. My excuse for this travesty: I'm learning how to use the arpeggiator to create basic sequences, then paste them into the editable piano roll for fine tuning.
still not thrilled with the bundled arpeggiator (I would like to be able to control pattern selection via MIDI CC (track automation), but can't find a way to do that). here's the basic sequence using it. Uses CC to control the basic rate. Tweaked the equalization a bit to get a bit more fizz into the patch while also emphasizing the bass.
My initial impression of the bundled arpeggiator in Logic was not good, so I've been avoiding arpeggiators and manually writing sequences into the DAW (time consuming and gets in the way…). Time to find and master an arpeggiator. Spent too much time looking at alternatives and fiddling with demos. Haven't found anything I like better than the bundled one. Maybe I ought to just suck it up and learn how best to use it. Nothing worth calling a "creation" today - so not posting audio.
inspired by a Deadmau5 excerpt heard on an amazing wingsuit video - https://www.youtube.com/watch?v=uXGowLvRPWc. As simple as his fat sawtooth patch sounds, I'm so far unable to replicate it with ES2, ESM or even ModulAir. Mine sounds thin in comparison. Here's as close as I've managed so far (via ES2). A slow opening of the filter and a gradual introduction of some delay effects just to make things interesting.
Have been thinking I should try some small snippets and call them "music for tv" – i.e., like incidental music for tv or movies – that wouldn't need full fledged development, just a snippet of rhythm or melody or interesting "sound". This is the first such output – though it turned into a more structured sappy song than originally envisioned. [multitracked, but melodies improvised live with minor touch up].
extended the solo and it now sounds a little better in context. here's the whole thing as it stands now… Still not sure these sections really work together - I've listened to them so much I'm convinced they do, but I've had comments that it seems a bit aimless. Still - that isn't completely uncommon for the style… mmm…
I just can't help myself - added a 4/4 arpeggio over the new (still 5/8) section, then layered a 3/4 over top of that. The 5/8 isn't as apparent here - maybe I'll morph the following section away from 5? Will probably also extend the initial melody section with an extra verse before transitioning to this.
Something citrus inspired again. Build up some 9th/11th chords to start then simplify, add some arpeggiated bells (in 6/8), a bass line and a melody (in 15/8). Let's see where this can take us… Complex meters, much less polyrhythms, aren't typical for this style – should I continue or modify the bass/melody to 6/8 or the arpeggio to 5/8? Decisions, decisions…
searching for a usable general purpose LFO controlled filter/gain modulation effect. Can do what i want with some built-in LFO's for some soft instruments, but am trying to find a standalone effect so I'm not tied to a given soft synth. so far the Logic supplied "Step FX" is close - but has icky clicks when i use it as a square gate and tonal instrument - works well for noise though
Struggling with the 2-bar bridge – which i think is also used in the outro. But have everything else mostly worked out (still need to add "vocals" and the guitar filler in the 2nd and 3rd chorus). But here's the interlude and 3rd chorus rhythm section… once i have everything worked out, i'll go back and refine the bass and drum parts.
Fairly complete rendition of Bill Schaeffer's Bigfoot (a note for note copy, attempting to be faithful to the PLATO IMS wavetable sound of the original). Some of the fast arpeggiation may not be quite right, but it's really hard to follow in the original. Pretty happy to be able to knock this out in a couple days - I'm getting a lot better with both Logic editing and analog synthesis patching.
another chord progression driven section - this time adding Am, Bb and C. Still needs some revoicing. More similar in style to the original from some months back. Not yet integrated into a whole (tried mashing together with the original and it just doesn't quite flow). But hope to find a way to make it work…
added some room ambiance and tweaked a few note timings (and removed two measures). Disappointed in both the piano and cello samples. Installed Kontakt player in order to use the pocketBlakus solo cello sample. Nice! but only "demo mode" - so spent time articulating the EXS24 sample instead. Piano sustain has some weird harmonic side effects, but still sounds better than without it
A lot of not much for the time spent. Added 3 or 4 tracks and took them away (cluttered the sound and didn't really add anything interesting). the remainder is not exactly as it was - drums have an extra kick, triplets pan from left to right, main sequence delay increases during first couple measures.
extension to the diversion: add percussion, flutey thing and drone to macoy groove. spent more time fooling with equalization trying to make it sound ok on speakers vs headphones than I spent laying down the tracks. Not happy with the percussion accents, but ran out of time due to the "mastering" time sink. still sounds 100% better on headphones than the speakers :(