Delay feedback causing buzzing/aliasing

I’m trying to create a flanger effect, and I’m able to create a delay that works cleanly. However, when I try to implement feedback, the feedback signal has this buzzing sound. When I analyze it with a spectrum analyzer, I see several distinct peaks across the frequency spectrum, increasing in number towards the higher end. Would anyone know what might cause this?

If you have a feedback knob, make sure it doesn’t go all the way to 100%, because at that point you will hear this loud noise… also, if you have a sweep knob, make sure the allowed values are >0 and <1 (from 0.01 to 0.99, for example).
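For example, you could clamp the value right where you read it (a minimal sketch; I’m assuming you read the knob into a 0–1 variable somewhere, and the 0.99 ceiling is just an example value):

#include <algorithm> // for std::min

// assuming 'feedback' holds the knob value scaled to 0..1:
feedback = std::min(feedback, 0.99); // keep the loop gain strictly below 1.0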

What you describe is exactly what I’d expect to see with feedback on a short delay (comb filtering). The peaks should be linearly spaced at multiples of 1/T, where T is the delay time (e.g. every 1 kHz for a 1 ms delay), so they will look increasingly close together on a log frequency display. Have you tried modulating the delay to see if you get the desired flanger effect?

While I do get the comb filtering effect when I use shorter delay times, the buzzing I’m hearing is more like frequencies being added to the signal that didn’t exist before, and it happens even at longer delay times that produce more of an echo-type effect.

Very hard to say without seeing the code. If your feedback is very high you might distort the output, which would create new harmonics.

Here’s my process block. Even at lower feedback levels the noise is there.

void DKFlanger::ProcessBlock(sample** inputs, sample** outputs, int nFrames)
{
  const double gain = GetParam(kGain)->Value() / 100.;
  const double delayTime = GetParam(kDelay)->Value(); // in ms
  const double feedback = GetParam(kFeedback)->Value() / 100.;
  const double dryWet = GetParam(kDryWet)->Value() / 100.;

  const double sampleRate = GetSampleRate();

  // convert the delay time in ms to a whole number of samples
  const unsigned int sampleDelay = (unsigned int)((sampleRate / 1000.0) * delayTime);

  const int nChans = NOutChansConnected();
  double sampleTotal = 0.;

  for (int s = 0; s < nFrames; s++) {

    if (GetBypassed()) {
      for (int c = 0; c < nChans; c++) {
        outputs[c][s] = inputs[c][s];
      }
      return;
    }

    // sum all channels into one running total
    for (int c = 0; c < nChans; c++) {
      sampleTotal += inputs[c][s];
    }

    // average of all channels (mono mix)
    double sampleAvg = sampleTotal / (double)nChans;

    double oldSample = sampleAvg;

    // push the mono sample into the delay queue
    pathSamples.push(oldSample);

    double newSample = 0.;
    double feedbackSample = 0.;

    if (sampleDelay == 0) {
      newSample = sampleAvg;
    }
    else {
      // pop samples until the queue is no longer than the delay length
      while (pathSamples.size() > 0 && pathSamples.size() > sampleDelay) {
        newSample = pathSamples.pop();
      }
    }

    feedbackSample = newSample;

    for (int c = 0; c < nChans; c++) {
      double oneSample = applyGain((newSample + feedbackSample), gain);
      double twoSample = oneSample * dryWet; // wet signal

      double dryOriginal = inputs[c][s] * (1. - dryWet);

      outputs[c][s] = (twoSample + dryOriginal);

      // feed the scaled output back in via the running total
      sampleTotal = oneSample * feedback;
    }
  }
}

It looks as if you have only one delay line where you store the average of the left and right signal… why do you do this instead of having two dedicated delay lines? (or perhaps I am misinterpreting the code).

Also, I think it’s better to have an index that moves through the delay line array, treating it as a circular buffer where you overwrite old data, instead of pushing and popping values.
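Something like this, for example (a minimal sketch; the struct and names are mine, not from any framework, and one instance per channel gives you the dedicated delay lines mentioned above):

#include <vector>

struct DelayLine {
  std::vector<double> buf; // circular buffer of past samples
  int writeIdx = 0;

  explicit DelayLine(int maxSamples) : buf(maxSamples, 0.0) {}

  void Write(double x) {
    buf[writeIdx] = x;                           // overwrite the oldest sample
    writeIdx = (writeIdx + 1) % (int)buf.size(); // wrap the write index
  }

  double Read(int delaySamples) const {
    // the most recent sample sits at writeIdx - 1
    int readIdx = (writeIdx - 1 - delaySamples) % (int)buf.size();
    if (readIdx < 0) readIdx += (int)buf.size(); // wrap negative indices
    return buf[readIdx];
  }
};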

Another thing: you will have to interpolate between samples at some point, because the delay time will almost never fall on an integer number of samples.
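For example, with linear interpolation (a sketch building on the DelayLine above, where delaySamples is now fractional):

double ReadFrac(const DelayLine& dl, double delaySamples) {
  const int i = (int)delaySamples;      // integer part of the delay
  const double frac = delaySamples - i; // fractional part, in [0, 1)
  const double a = dl.Read(i);          // sample at floor(delay)
  const double b = dl.Read(i + 1);      // next-oldest sample
  return a + frac * (b - a);            // interpolate between the two
}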

And finally, I don’t see how you are modulating the delay; for a flanger you need parameters for depth and rate rather than a fixed delay time.
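Per sample, that could look something like this (a sketch; rateHz, depthMs, baseDelayMs and lfoPhase are hypothetical names for the parameters and state you’d keep):

#include <cmath>

const double kTwoPi = 6.283185307179586;

// sketch: compute this sample's delay time from rate/depth knobs
// (lfoPhase is state kept between calls, in cycles)
double NextDelaySamples(double& lfoPhase, double rateHz, double depthMs,
                        double baseDelayMs, double sampleRate)
{
  lfoPhase += rateHz / sampleRate;          // advance phase by one sample
  if (lfoPhase >= 1.0) lfoPhase -= 1.0;     // wrap once per cycle
  const double lfo = 0.5 * (1.0 + std::sin(kTwoPi * lfoPhase)); // 0..1
  const double delayMs = baseDelayMs + depthMs * lfo;           // modulated delay
  return delayMs * sampleRate / 1000.0;     // fractional samples, so interpolate
}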

For some reason your code looks very unfamiliar to me. It almost seems like you have a different approach to the flanger algorithm that I don’t know about.

I agree with all the above. I actually can’t see how you have any feedback here, as that would involve adding the output of the delay back into its input, and I can’t see that happening in this code.
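For reference, the usual per-sample structure looks something like this (a sketch reusing the DelayLine idea from above), where the scaled delay output is written back into the line together with the input:

// sketch: one sample of a feedback delay
double ProcessOne(DelayLine& delay, double input, int delaySamples,
                  double feedback, double dryWet)
{
  const double delayed = delay.Read(delaySamples); // delay line output
  delay.Write(input + feedback * delayed);         // feedback: output re-enters the line
  return dryWet * delayed + (1. - dryWet) * input; // wet/dry mix
}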