so glad you asked
there's two ways to understand fourier transforms. one starts with the continuous + calculus, the other with the discrete + linear algebra. they both end up at the same place. on the linear algebra side it makes sense to talk about dimensions: it's the number of components in your vector, the number of columns in your matrix. on the continuous side, if you want to talk that way, you start talking about infinite dimensional spaces. this makes a lot of sense if you know what it means, but nobody spoke this way when fourier came up with this stuff, so let's set it aside
historically the continuous side was easier to understand because computation was difficult. now computation is easy, so the discrete side is easier for programmer types. if you got calculus drilled into you early in life, the continuous side is still easier for many people
on the continuous side you can justify the fourier transform by observing what happens when you take an integral like integrate[ f(x)g(x)dx ] and make f(x) and g(x) two applications of the exponential function evaluated at imaginary arguments. this requires some mental infrastructure to be developed for it to be a good time
on the discrete side we use linear algebra. historically, linear algebra was developed to understand the f(x)g(x)dx pattern more generally, but in this case the generality means it can plug directly into your geometric intuitions, so the fourier transform becomes a lot easier to understand
discretely the integral becomes sum[ f[n]g[n] ] or in code
float dot_product(float f[n], float g[n]) {
    float result = 0.0f;
    for (int i = 0; i < n; i++) result += f[i]*g[i];
    return result;
}
n is the number of dimensions
in geometry if you have a 3d vector and you want to know its x component you can do this
dot_product(v, { 1, 0, 0 });
{1,0,0} is a direction in space that points along x. this works for any direction with length 1, not just the x, y, and z directions. so doing
dot_product(v, dir) * dir
will isolate the part of v pointing in the dir direction
in the DFT if you want to isolate a given frequency you do
dot_product(x, e_f) * e_f
where e_f is the complex sinusoid e_f[n] = exp(i*2*pi*f*n/N), sampled at whatever sample rate x is at (with complex vectors the dot product conjugates one side, which is where the usual exp(-i...) in the DFT comes from). that dot_product is giving you the value that goes in a given FFT bin, and the multiply is turning it back into the sine wave you wanted to isolate
and mathematically it's the same, except in nice 3D geometry you have 3 dimensions, which is sensible, and in the DFT you have as many dimensions as you have samples in your signal, which is also sensible but may be a different meaning of dimension than the one you are used to
the reason you can treat every frequency's e_f independently, doing a big stack of dot products without worrying about other frequencies interfering, is that the e_f vectors are orthogonal: the dot product of two different ones is zero. linear algebra gives you the machinery for understanding why. this is largely the same mental infrastructure i mentioned before, that needs to be developed for it to be a good time