I tried the Audacity noise-removal plugin recently and it's complete crap. I fed a high-quality audio stream from a Rode mic into a few different options to see which could remove the noise of my server rack. iMovie made the voice sound like a robot and Audacity barely did anything. The only thing that worked was DeepFilterNet, and it's free, open-source, and cargo-installable.
There's no reason to lock yourself into an Intel-only solution. Just use DeepFilterNet. The results on my noisy server-room recording were insanely good: almost no voice dropout, with 100% of the fan noise removed.
https://github.com/Rikorose/DeepFilterNet
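For anyone wanting to try it, installation and usage look roughly like this. Treat it as a sketch: the exact crate name, binary name, and flags are from memory and may be wrong; the repo README is the authority.

```shell
# Sketch only -- crate/binary names and flags are assumptions; check the
# DeepFilterNet README before running.
cargo install deep_filter            # assumed crate name
deep-filter -o enhanced/ noisy.wav   # assumed CLI: writes a denoised copy into enhanced/
```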
EDIT: Even more interesting, it looks like the OpenVINO plugin is just DeepFilterNet glued to Whisper.cpp and tied to Intel hardware.
https://github.com/intel/openvino-plugins-ai-audacity/tree/m...
OpenVINO is an Intel toolkit for deploying AI models. This particular project is an Intel project that uses the OpenVINO toolkit to package several existing models as Audacity plugins.
> an intel-only solution.
> OpenVino is just DeepFilterNet glued to Whisper.cpp and tied to Intel hardware.
Well, no.
When you want to run a model on a truly wide set of devices, you end up sort of wedged into one of ONNX, OpenVINO, TensorFlow Lite, or a few other frameworks.
They're all FOSS, and they're software libraries.
YMMV on which is best, of course, but broadly and widely: where are your users, mostly? Desktop? OpenVINO. Web? TensorFlow. Mobile and desktop? ONNX. This isn't entirely accurate because, e.g., I reach for ONNX every time simply because it's what I'm familiar with. All of them make an effort to reach every platform; e.g., OpenVINO supports ARM, and not in a trivial manner.
That all being said, TL;DR:
It is "not even wrong", in the Pauli sense, to imply OpenVINO is Intel-only, or to describe OpenVINO as "just glu[ing a model to inference code]".
You're describing 3 different components (a hardware-acceleration library, an inference library, and a model) and suggesting the hardware-accelerated inference library just glues together a model-specific inference library and a model. The matryoshka doll is inverted: whisper.cpp uses OpenVINO to accelerate its model-specific inference code.
The Noise Removal plugin takes a bit of getting used to, but I have had great results from it. I don't mean to blame operator error... it just has too many options for someone new to it to tune effectively.
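For context on what those options are tuning: Audacity-style noise reduction is, at its core, spectral gating — estimate a noise spectrum from a noise-only sample, then attenuate frequency bins that don't rise clearly above it. A minimal numpy sketch of the idea (illustrative only; the parameter names like `reduction_db` are my own, and real implementations add temporal smoothing, attack/release, frequency smoothing, and much more — which is exactly where all those knobs come from):

```python
import numpy as np

def spectral_gate(signal, noise_profile, frame=512, hop=256, reduction_db=12.0):
    """Crude spectral gating: attenuate frequency bins whose magnitude stays
    near the level estimated from a noise-only sample. Illustrative only."""
    window = np.hanning(frame)
    # Per-bin noise floor, averaged over frames of the noise-only sample.
    noise_frames = [noise_profile[i:i + frame] * window
                    for i in range(0, len(noise_profile) - frame, hop)]
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)
    gain_floor = 10 ** (-reduction_db / 20)  # how far gated bins are pushed down

    out = np.zeros(len(signal))
    norm = np.zeros(len(signal))
    for i in range(0, len(signal) - frame, hop):
        spec = np.fft.rfft(signal[i:i + frame] * window)
        # Bins well above the noise floor pass through; the rest are attenuated.
        mask = np.where(np.abs(spec) > 2 * noise_mag, 1.0, gain_floor)
        out[i:i + frame] += np.fft.irfft(spec * mask, n=frame) * window
        norm[i:i + frame] += window ** 2
    # Divide out the overlapped window energy for exact reconstruction.
    return out / np.maximum(norm, 1e-8)
```

The trade-off the plugin's options expose is visible even here: a higher threshold or deeper `reduction_db` kills more noise but starts gating out quiet parts of the voice, which is the "robot voice" failure mode.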