One of my ongoing frustrations since first finding VirtualDub has been the requirement of working in the RGB colorspace. Since that time we have gained AVISynth and HuffYUV for working in the YUV colorspace, yet we still convert to RGB for the greater part of filtering. I don't get it. Why would we not want to sharpen detail in the luma channel before converting to RGB? Why mix luma with the color channels (which are at half the resolution) before refining detail? Why not attack chroma noise directly in the U and V channels, instead of distributing it through all three RGB channels and then trying to filter it out? It seems to me that we need a blender filter that breaks out the luma and chroma channels and treats each separately. Comments?
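To make the idea concrete, here is a rough sketch in Python/NumPy of what per-channel treatment might look like. This is not VirtualDub or AVISynth code; the plane layouts, the unsharp-mask strength, and the blur radius are placeholder assumptions, just to show luma being sharpened while chroma is smoothed, each in its native plane:

```python
import numpy as np

def box_blur(plane, radius=1):
    """Simple box blur with edge-clamped borders."""
    padded = np.pad(plane.astype(np.float32), radius, mode="edge")
    k = 2 * radius + 1
    out = np.zeros(plane.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + plane.shape[0], dx:dx + plane.shape[1]]
    return out / (k * k)

def filter_yuv(y, u, v):
    """Sharpen the luma plane; smooth the chroma planes.

    Y gets an unsharp mask (detail enhancement at full resolution);
    U and V get a mild blur (chroma noise reduction at half resolution).
    The 0.5 sharpening amount is an arbitrary example value.
    """
    y = y.astype(np.float32)
    y_sharp = np.clip(y + 0.5 * (y - box_blur(y)), 0, 255)
    u_smooth = box_blur(u)
    v_smooth = box_blur(v)
    return y_sharp, u_smooth, v_smooth
```

The point of the sketch is that the sharpening never touches the color planes and the denoising never touches the detail plane, which is exactly what gets lost once everything is mixed down into R, G, and B.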