fast 2d polygon rotation
i’m fairly new at 2d vector graphics but haven’t seen this mentioned. the naive approach would be to rotate each vector using brute force..
newx = x * cos(angle) - y * sin(angle);
newy = x * sin(angle) + y * cos(angle);
given vector [x,y] you can define four ‘cardinal’ vectors by inverting and swapping the components, eg. the two normals [-y,x] and [y,-x] and the inverse [-x,-y]. a cheaper method than rotating each point would be to rotate one point (say a unit vector), and map the polygon out using a cartesian grid. the four cardinal axes can be extended into space using vector scaling, vector addition and so forth.. eg. 135 degrees off the rotated vector can be achieved by summing a normal and the inverse, then scaling by 0.7071..
using a few such operations, perhaps with linear interpolation, and planning the polygons ahead of time with easily derived positions on a cartesian grid means that points after the first will require addition and multiplication instead of sine and cosine.
note
this is a developer’s blog, and this is a notable experience for a developer.
the vst scene ain’t what it used to be.. in the early 2000’s i could release blaster, or some plugin produced in a couple of hours that availed some simple yet otherwise unavailable process and you’d be guaranteed a party within five minutes of posting.
one of the big learning steps for me was releasing radian – there were a dozen pages of eager praise for the sound demos, the moment i released it everyone went to sleep, except for a few well-respected users.
i wonder, how is that? radian does something nothing else comes close to doing, it isn’t a solution for modeling *all* objects, but it does some things very well.
we’re all aware of the reality of detractive subterfuge in the marketplace; perhaps this was additive subterfuge. “where are my users? don’t people use drum sounds in music any more?” radian, 2011.
i sent out a beta to three people five days ago. understandably, two were too busy, the other told me it makes no sound. i *know* it’s going to make sound on someone’s system, but i have yet to receive any intimation that such a thing has occurred.
so i posted a public beta over 24 hours ago. at this point, there are over 400 views of the topic and i have garnered two statements of intent for further investigation.
which isn’t the same as actual use and verification, let alone the OMG THIS THING IS LIKE NOTHING ELSE EVER that, let’s be honest, the software warrants.
so, you decide for yourself why, today, 400 views result in zero feedback, when even a year ago 400 views would have meant two pages of discussion and dozens of pms and other verifications of use.
because i’m tired of telling you why that is, and why it matters so much in the vst marketplace, and in larger scopes of consideration.
the badness of badness
a few times a year i peep into the samples forum at kvr. while i have a fair (going on 2 decades online) collection, i rarely have them on a machine because i never use samples..
there are few sample products i find much application for besides removing capital from ambitious and inexperienced persons. one is percussion.
and often, as i have today, one encounters a collection of synthetic percussions.
while i am quick to condemn many participants in music software for their depraved cruelty towards living and breathing creatures, i prefer not to judge those who may simply be acting out of the same enthusiasm that such persons seek to exploit. let’s barge ahead anyway.
so: if you are thinking of releasing a pack of synthetic percussion (aah, memories..) allow me to inform you ahead of time that 95% of the content is likely total cack, a waste of everyone’s time (even your experience since you were thinking about how to make other people like things for money the whole time).
most synths do not have envelopes steep enough to use for patching kicks. the exponential contour of analogs simply cannot create an attack and release in one stage.
if you are adamant about this, may i suggest listening to a few vince clarke productions, as he has an aged mastery of patching ‘real’ percussion (eg. snares). here’s a big hint if you can’t – think short. big, flabby synth snares with a long tail may sound like a neat synth patch, but the buzziness you liked is useless as a snare and the attacks aren’t developed enough to truncate. those sounds will *never* be used in anyone’s music.
the hats are useless as well. when patching, you decided the timbre was noisy and different from white noise, but many are too grainy because of that focus on “interesting noise”.
(then again.. i noticed the other day the 909 oh or crash ROM sounds like the stick hit twice.. never noticed that in the post filtered sound).
if you really don’t have a good idea of what a utilitarian synth percussion patch is like, aim for brevity. that will help you focus on improving what’s there.
same response to just about every synth pack i’ve heard since i released mine (which were only marginally more general purpose ftr) (except for the straight-off-the-box 808/909 collection stuff.. they’re applicable but of course it’s straight profiteering and ought never be recompensed).
one of these days i’ll show them all. people rarely comment on percussion in my tracks; it’s probably because they’re patched so well (maybe not so much my kicks). eg. if i went through years of tracks and recorded all the hihat patches, people would probably think most were processed 909s etc.
i’m thinking of all the times i’ve heard better synthetic cymbals (ok, percussions in general) than mine. none of those times are associated with people who sell samples. the most useful synth hats in these collections are generally the highpassed synced stuff vince et al. use, which is where we get into that depravity thing again. synth1 and such are free.. there’s no need to buy a sample of a highpassed synced osc.
gosh! the extreme rightness of what i say, huh.
the goodness of goodness
given my general character within the marketplace one may anticipate this praise of those who enable. my background has bridged cultures allowing me some face with the requisite academics for dsp. i cannot profess the mathematical rigor my seniors in the field do.
it is good. it is what i say and what you know and what *they* try to hide. hehehe. without the mathematical rigor i apply procedures perhaps “not even worth bothering with”.
it is good recently also. harvested 4 open source zero delay filter algorithms which are augmenting the “stuff i did”, which is under wraps atm, beyond opening a cache of unused methods which i think are kinda neato.
if it wasn’t for the enablers, my stuff would be scantily accoutred, where a platform was found at all.
for all this talk, once i have a free moment i’ll wrap some of the filters as SEMs and document some more stuff. then it will actually be good.
sinc interpolation and FFT transforms
implemented a one-shot FFT analysis and learned a few things, eg. that i really ought to install octave or similar. i may even be able to fool some innocent academics into disclosing methodology to me.
zero padded sample to 2^n of course. i’ve been having poor luck with getting the fft to do much beyond frequency shifting. i have tried several approaches to pitch shifting.
tried another approach.. i used sinc interpolation to apply the bins in cartesian form to the output array. this achieved the same results as changing the read speed of the sample, and repeated the signal for the length of the buffer when the pitch was raised. of course this process is then effectively useless, but encouraging to hear good sound quality for once. wish i could get on the same page with someone who has accomplished this hint hint.
it occurred to me recently that sinc interpolation was not as intensive as it appeared – sine waves have two degrees of symmetry (if you like) and both can be exploited so that only one transcendental needs to be evaluated per interpolation (perhaps easy to prove with a diagram). it is only necessary to discern the correct polarity of the first point you are calculating and keep an extra variable to flip the sign between bins. quick copy and paste of a 32 point interpolation, hopefully you can sort it out:
for (i = 1; i <= nd2; i++) {
    o = (float)i * shift;           // fractional output position
    p = (int)o;
    d = o - (float)p;
    if (d == 0.f) {                 // landed exactly on a bin
        sleft[p] += ileft[i];
        sright[p] -= iright[i];
    } else {
        s = sin(d * pi);            // the single transcendental
        j = p - 15;                 // 32 point window, clamped at the edges
        if (j < 0) { j = 0; m = ((p - j) % 2) ? -1 : 1; }
        else m = -1;
        k = p + 17;
        if (k > nd2) k = nd2;
        for (l = j; l < k; l++) {
            o1 = (o - (float)l) * pi;
            o1 = m * s / o1;        // +/- sin(pi*d) / (pi*(o - l))
            sleft[l] += ileft[i] * o1;
            sright[l] -= iright[i] * o1;
            m *= -1;                // alternate the sign per bin
        }
    }
}
for (i = nd2 + 1; i < n; i++) {     // clear the mirrored half
    sleft[i] = 0;
    sright[i] = 0;
}
i’ve never seen squat about this anywhere.. perhaps because it’s obvious if you have a moment to consider it. not knowing that sinc interpolation could cost less than one transcendental per point led me to avoid exploring solutions that employ it. now i know better.
*edit* note that i didn’t copy the code for bin 0.. of course bin 0 is not scaled, so in this alg its values are just copied over, with the “right” channel inverted because this alg does the inversion 😉
polyphony II
whew! well, this is embarrassing.
i haven’t been afforded much in the way of an environment conducive to concentration lately, having had a relatively hectic and physically demanding week. my polyphony code sat there day after day with me being unable or unwilling to address it.
i’ve finally hammered the thing out (i think) and it is really quite simple. only the most elementary sorting is required once one has a scheme to address it.
my scheme has some necessities others may not have – it’s a module for synthedit to be used for various applications. it needed to have two operational modes: a conventional “synth” polyphony mode, and a “physical” polyphony mode, where the same voice is used for recurring pitches (in synth polyphony, the same note/pitch can be present on all voices, eg. retriggering a tom with a long release.. you wouldn’t want this behaviour for eg. a piano..)
the other suggestion is to use an array for recording which voice was most recently released. if you retrigger one key polyphonically, you want it to “round robin” the voices so that the release is as long as possible. my application, being a component for a modular environment, is limited by only operating on gate on/off events.. it doesn’t receive information on the release stage.
so my ‘voice age’ algorithm works like this: a higher age value indicates a more recent release. when a noteoff occurs, the highest age value is found, incremented by one, and assigned to the voice being released. the ages are only consulted for sorting when a noteon event occurs, so that is also when i find the lowest age value and subtract it from all the ages to keep them from overflowing.
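a minimal sketch of that bookkeeping, with hypothetical names (the stealing policy when every voice is busy isn’t covered above, so the sketch just grabs voice 0 in that case):

```c
#include <assert.h>

/* a sketch of the 'voice age' scheme; names and NVOICES are mine */
#define NVOICES 3

typedef struct { int age; int active; int note; } Voice;

/* noteoff: find the current highest age and stamp the released
   voice with highest + 1, marking it the newest release */
void voice_release(Voice v[], int idx) {
    int highest = 0;
    for (int i = 0; i < NVOICES; i++)
        if (v[i].age > highest) highest = v[i].age;
    v[idx].age = highest + 1;
    v[idx].active = 0;
}

/* noteon: take the free voice with the lowest age (the oldest
   release, for the longest tail), then subtract the minimum age
   from everyone so the counter never overflows */
int voice_allocate(Voice v[], int note) {
    int best = -1, lowest, i;
    for (i = 0; i < NVOICES; i++)
        if (!v[i].active && (best < 0 || v[i].age < v[best].age)) best = i;
    if (best < 0) best = 0;   /* all voices busy: stealing policy not covered here */
    lowest = v[best].age;
    for (i = 0; i < NVOICES; i++) v[i].age -= lowest;
    v[best].active = 1;
    v[best].note = note;
    return best;
}
```

releasing voices in some order and then retriggering should round-robin through them in that same order, giving each release the longest possible tail.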
i had tried two kookier schemes before this, involving the position of each voice in the held notes stack. the final solution is a lot neater and the only one that satisfied my desire for orderliness as well as functionality. it is quite dizzying to consider how convoluted the solution was before determining this method! 🙂
polyphony algorithm
i’m posting this because i found very little reference on the logistics of writing a polyphony system (there’s one thread on kvr that’s worth reading). i thought i’d mention something about it.
writing a sorting algorithm can be aggravating. it’s such an elementary programming skill that one feels as if one should have it mastered even without any experience. of course, you can write programs for years without needing to use one, then when you do, you’re unfamiliar with the terminology. it can be challenging if you aren’t afforded peaceable concentration and are pressured to produce.
my polyphony hint: don’t start writing code to handle noteon and noteoff. for conventional keyboard performance, the first thing to implement is a dynamic array of the currently held notes. activating and deactivating voices will reference this list (eg. to recall held notes at key release), so start with that (i’m still not sure what the correct term for this kind of data structure is.. stack, zipper, dynamic array..)
a bit obvious really 🙂
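for what it’s worth, a minimal sketch of such a held-note list (a plain array used as a stack, with removal from the middle on noteoff; names and the MAXHELD limit are mine):

```c
#include <assert.h>

/* a sketch of the held-note list suggested above */
#define MAXHELD 16

typedef struct { int notes[MAXHELD]; int count; } HeldNotes;

/* noteon: push onto the end, so the newest note is always last */
void held_noteon(HeldNotes *h, int note) {
    if (h->count < MAXHELD) h->notes[h->count++] = note;
}

/* noteoff: remove the note wherever it sits; returns the most
   recent still-held note (for last-note priority), or -1 if none */
int held_noteoff(HeldNotes *h, int note) {
    for (int i = 0; i < h->count; i++)
        if (h->notes[i] == note) {
            for (int j = i; j < h->count - 1; j++)
                h->notes[j] = h->notes[j + 1];
            h->count--;
            break;
        }
    return h->count ? h->notes[h->count - 1] : -1;
}
```

voice activation and deactivation then reference this list, eg. recalling the previous held note when the newest key is released.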
BLIT oscillators
i was quite happy to achieve rapid success with BLIT (bandlimited impulse train) oscillators. i didn’t run into a pitfall i’d heard about.
or maybe i misheard – my instant success was frustrated by the realisation that pitch transitions to triangles (and other twice-integrated contours, eg. hyperbola) generate nasty artifacts – BIG transients in the integrator resulting in significant dc offsets.
so i have a few notes on BLITs:
in the first place, what i have not seen mentioned elsewhere is scaling the integrator by the ratio of the old and new omega when the pitch changes. that’s fairly obvious, and thankfully, as simple as one could want.
it may be somewhat unfair of me to compare my implementation to another. i recorded the output to compare how triangle pitch transitions were handled, and discovered that the other instrument only updated pitch between wavecycles. this may have been a contemporary method of reducing cpu; i misinterpreted it as a way of stabilising the integration. it’s not, at least as far as i can tell.
what i do seem to have improved is an application of a high pass filter to the triangle integrator instead of to the signal. there are still artifacts on some note transitions (depending on phase and pitch) but they never offset the dc by more than half the amplitude of the wavecycle (which can probably be improved with a stronger filter).
so try that 🙂
top waveform is my first attempt at highpassing the integrator. lower waveform is a bad instance of another BLIT algorithm making a pitch transition, presumably with the signal highpassed (images captured separately, pitch is not to scale).
a nice example of the working algorithm. worst cases oscillate instead of having a single-sided dc offset, but rarely even halfway to clipping.
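a rough sketch of the two ideas above (rescale the integrator on pitch changes, and bleed dc out of the integrator state rather than out of the output signal). the struct, names and one-pole coefficients are placeholders of mine, not anyone’s shipping code:

```c
#include <assert.h>
#include <math.h>

/* sketch of a BLIT triangle integrator with the two fixes above */
typedef struct {
    double state;   /* integrator state */
    double dc;      /* slow one-pole tracking the integrator's dc */
    double omega;   /* current normalized frequency */
} TriIntegrator;

/* on a pitch change, rescale the state by the ratio of old to
   new omega so the triangle amplitude stays continuous */
void tri_set_pitch(TriIntegrator *t, double new_omega) {
    if (new_omega > 0.0 && t->omega > 0.0)
        t->state *= t->omega / new_omega;
    t->omega = new_omega;
}

/* integrate one BLIT sample; the highpass (dc removal) acts on
   the integrator itself instead of on the output signal.
   the 0.001 coefficients are untuned assumptions */
double tri_tick(TriIntegrator *t, double blit_in) {
    t->state += t->omega * blit_in;
    t->dc += 0.001 * (t->state - t->dc);  /* track the dc */
    t->state -= 0.001 * t->dc;            /* bleed it out of the state */
    return t->state;
}
```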
unison voicing
a very brief entry on unison voicing, as i’ve just finished another unison oscillator and coincidentally been asked a question. this should be useful to anyone experimenting with modular synthesis who hasn’t developed much familiarity with dsp yet –
for unison voicing gain, i use 1 / sqrt(# of voices), making:
1 voice: 1 / 1 = 1.0 gain
2 voices: 1 / sqrt(2) = ~.7071 gain
4 voices: 1 / sqrt(4) = .5 gain
this seems to give about the same volume no matter how many oscillators you stack together 🙂
other note on unison voicing is to avoid equidistant spacing between pitches – there’s a spectral analysis graphic in the manual of my ‘horizon’ unison synth that clearly illustrates this concept –
equidistant pitch (logarithmic scale, of course) between three or more oscillators (of course) is going to produce cyclic phase cancellation and pronounced beating, instead of the smooth, thick detune effect. pointedly, equidistant pitch is useless 🙂
i use a simple n*n nonlinearity crossfaded with equidistance as a ‘spread’ parameter – try using a hi-res spectral analysis (voxengo’s SPAN has longer fft block sizes, which give better frequency resolution) then use sine oscillators and exaggerate your detune amount.. if the frequencies are the same distance apart, you will hear phase cancellation/beating.
there are different effects you can achieve, eg. bunching the pitches towards the center is different from spreading them away from the center.. things to experiment with 🙂
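a sketch of the spread idea as i understand it: for a voice position x in [-1, 1], crossfade between the linear (equidistant) position and the signed square x*|x|, which bunches the pitches towards the center. the names and the exact crossfade are my guesses, not horizon’s actual code:

```c
#include <assert.h>
#include <math.h>

/* detune offset for a voice at position x in [-1, 1]:
   spread = 0 gives equidistant spacing, spread = 1 gives
   the fully bunched x*|x| curve; endpoints are unchanged */
float spread_offset(float x, float spread) {
    float bunched = x * (x < 0.f ? -x : x);   /* signed x*x */
    return x + spread * (bunched - x);        /* linear crossfade */
}
```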
horizon has several stereo voice distribution schemes.. after using horizon for ?? three years now ?? i almost never used the left>right or right>left schemes, which pan in order of lowest to highest pitch – the highest pitch will draw the most attention, so this strongly weights the stereo image… i left them off the new build.
i did find it useful to have a mode where voices are split between either the left or right channel (i include an option to alternate which channel receives the highest pitch, usually in patching, if i use both oscs in unison mode, the highest pitch of each osc will be towards the opposite side) and another mode where voices are panned lowest to highest from the center to the side, alternating back and forth.
both of these modes sit better in the mix for some sounds.. the split is “cleaner” and the center>side distribution makes a thicker timbre (obviously).
INT phasors
i haven’t had time to build this yet; my deep thought for today concerns the irksome float to int conversion when using phasors for delays. it is expensive.
a typical implementation is as such:
floatphasor += floatincrement;
if (floatphasor >= 65536) floatphasor -= 65536;
INTphasor = (int)floatphasor;              // the expensive conversion
decimal = floatphasor - (double)INTphasor;
// (+ whatever method of interpolation)
the float to int conversion, even with typecasting is nasty stuff (i know this because synthedit displays the cpu consumption of SEM modules made with the SDK).
as i’ve only thought about this, there could be a glaring error. but at present, and happy to blog it, i’m thinking that it would be faster to split the phasor into int and decimal components:
unsigned short int INTphasor, INTincrement;
double decimalphasor, decimalincrement;

INTincrement = (int)increment;             // one conversion, at rate changes only
decimalincrement = increment - (double)INTincrement;

loop {                                     // per-sample processing
    INTphasor += INTincrement;
    decimalphasor += decimalincrement;
    if (decimalphasor >= 1.0) {
        decimalphasor -= 1.0;
        INTphasor++;
    }
};
the IF (or a WHILE depending on modulation) is faster than the float to int conversion, i’m sure of it.
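here’s the same idea as a self-contained sketch (struct and names are mine), so you can check the split phasor tracks the naive float version:

```c
#include <assert.h>
#include <math.h>

/* split phasor: integer sample position plus fractional part,
   advanced with adds and a compare instead of a per-sample
   float-to-int conversion */
typedef struct {
    unsigned short ipart;   /* integer position, wraps at 65536 by itself */
    double frac;            /* fractional position in [0, 1) */
    unsigned short iinc;
    double finc;
} SplitPhasor;

void phasor_init(SplitPhasor *p, double increment) {
    p->ipart = 0;
    p->frac = 0.0;
    p->iinc = (unsigned short)increment;        /* the one conversion */
    p->finc = increment - (double)p->iinc;
}

/* per-sample tick: when the fraction rolls over 1, carry into
   the integer part */
void phasor_tick(SplitPhasor *p) {
    p->ipart += p->iinc;
    p->frac += p->finc;
    if (p->frac >= 1.0) { p->frac -= 1.0; p->ipart++; }
}
```

after n ticks, ipart + frac should equal n times the increment (modulo the 65536 wrap), same as the float version.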
of course, the person who is writing this has just inlined their oscillator routine with 32 variations, so they will have to correct all thirty-two of them individually in order to implement this in their current project 🙂