"There was one surprise when I revisited costs: OpenAI charges an unusually low $0.0001 / 1M tokens for batch inference on their latest embedding model. Even conservatively assuming I had 1 billion crawled pages, each with 1K tokens (abnormally long), it would only cost $100 to generate embeddings for all of them. By comparison, running my own inference, even with cheap Runpod spot GPUs, would be on the order of 100× more expensive, to say nothing of other APIs."
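The arithmetic in the quote checks out; here's a quick sanity check using only the figures the comment itself gives (the 100× self-hosted multiplier is the commenter's estimate, not a measured number):

```python
# All figures taken from the quoted comment above.
PRICE_PER_1M_TOKENS = 0.0001   # USD, quoted batch embedding rate
PAGES = 1_000_000_000          # 1 billion crawled pages (conservative)
TOKENS_PER_PAGE = 1_000        # "abnormally long" per-page estimate

total_tokens = PAGES * TOKENS_PER_PAGE              # 1e12 tokens
api_cost = total_tokens / 1_000_000 * PRICE_PER_1M_TOKENS
self_hosted_cost = api_cost * 100                   # commenter's ~100x estimate

print(f"API batch cost:    ${api_cost:,.2f}")        # $100.00
print(f"Self-hosted (est): ${self_hosted_cost:,.2f}") # $10,000.00
```

So even a trillion tokens lands at roughly $100 via the batch API versus ~$10,000 self-hosted, if the quoted rate and multiplier hold.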
I wonder if OpenAI uses this as a honeypot to get domain-specific source data into its training corpus that it might otherwise not have access to.
> OpenAI charges an unusually low $0.0001 / 1M tokens for batch inference on their latest embedding model.
Is this the drug dealer scheme? Get you hooked, then jack up prices later? After all, the alternative would be regenerating all your embeddings, no?
I don’t think OpenAI trains on data processed via the API, unless there’s an exception specifically for this.
Maybe I misunderstand, but I'm pretty sure they offer cheaper API costs (or maybe it's credits?) if you allow them to train on your API requests.
To your point, I'm pretty sure it's off by default, though.
Edit: From https://platform.openai.com/settings/organization/data-contr...
Share inputs and outputs with OpenAI
"Turn on sharing with OpenAI for inputs and outputs from your organization to help us develop and improve our services, including for improving and training our models. Only traffic sent after turning this setting on will be shared. You can change your settings at any time to disable sharing inputs and outputs."
And I am 'enrolled for complimentary daily tokens.'
Can you truly trust them though?
Yes, it would be disastrous for OpenAI if it got out they are training on B2B data despite saying they don’t.
We're both talking about the company whose entire business model is built on top of large scale copyright infringement, right?
Have they said they don't? (actually curious)
Yes, they have. [1]
> Your data is your data. As of March 1, 2023, data sent to the OpenAI API is not used to train or improve OpenAI models (unless you explicitly opt in to share data with us).
[1]: https://platform.openai.com/docs/guides/your-data
Yeah, so many companies have been completely ruined after similar PR disasters /s
I'd not rule out an approach where, instead of training directly on the data, they train on a very high-dimensional embedding of it (or some other similarly "anonymized", yet still semantically rich, representation of the data).
I am too lazy to ask OpenAI.
If that were the case, it'd be a way to put crap or poisoned data into their training data. I wouldn't rule it out.