459 comments
  • extr2y

    I've been using ChatGPT pretty consistently during the workday and have found it useful for open-ended programming questions, "cleaning up" rough bullet points into a coherent paragraph of text, etc. Whether it's $20/month useful is questionable though, especially with all the filters. My "in between" solution has been to configure BetterTouchTool (Mac app) with a hotkey for "Transform & Replace Selection with Javascript". This is intended for doing text transforms, but putting an API call in there instead seems to work fine. I highlight some text, usually just an open-ended "prompt" I typed in the IDE, the Notes app, or an email body, hit the hotkey, and ~1s later it adds the answer underneath. This works...surprisingly well. It feels almost native to the OS. And it's cheaper than $20/month, assuming you aren't feeding it massive documents' worth of text or expecting paragraphs in response. I've been averaging like 2-10c a day, depending on use.

    Here is the javascript if anyone wants to do something similar. I don't know JS really, so I'm sure it could be improved, but it seems to work fine. You can even add your own hard-coded prompt if you want.

        // BetterTouchTool "Transform & Replace Selection with Javascript" action:
        // send the selected text to the OpenAI completions API and append the reply.
        async (clipboardContentString) => {
          try {
            const response = await fetch("https://api.openai.com/v1/completions", {
              method: "POST",
              headers: {
                "Content-Type": "application/json",
                "Authorization": "Bearer YOUR API KEY HERE"
              },
              body: JSON.stringify({
                model: "text-davinci-003",
                prompt: `${clipboardContentString}.`,
                temperature: 0,
                max_tokens: 256
              })
            });
            const data = await response.json();
            const text = data.choices[0].text;
            // Return the original selection with the completion appended after it.
            return `${clipboardContentString} ${text}`;
          } catch (error) {
            return "Error";
          }
        }
    • ezekg2y

      You use it consistently during the workday and it's still not worth $20/mo?

      • m4632y

        This is one of those puzzling things to me.

        You own your life - why not spend your own money for the things that make you and your life better?

        Who cares?

        I worked at a job where I had a small, crappy monitor. I made decent cash. I just bought a large, decent monitor and brought it into work. I ended up using it for many years. My life was significantly better. I've done that at several jobs since then and NEVER regretted it; in fact, it was one of the soundest decisions I've ever made. Same goes for the keyboard and mouse.

        There are so many people using the default keyboard, the default monitor, the default tools.

        If you push work to do it for you, you need to challenge the "everyone gets a Dell 19-inch monitor" b.s. If you push your boss, he might have to do justification paperwork.

        Just become what you are.

        • johtso2y

          I think there's also an argument to be made that $20 per month is a bigger deal than some people realise. The issue isn't just _that one_ specific subscription you're paying for. The issue is how they all accumulate over time... and eventually you find that your metaphorical bucket of money has turned into a leaky sieve. Not to mention all the services you forget to unsubscribe from even though they've ceased to provide enough value to make them worthwhile.

          A one-off purchase is very different from making a hole in the bucket.

          Obviously this all depends on how financially comfortable you are. But still... time is money. Money you're spending today is time you're committing to work to earn back tomorrow.

        • joe_the_user2y

          I think the monitor example is different from the ChatGPT example.

          A monitor is relatively cheap, I would own it, and it would primarily improve my life and secondarily improve my productivity.

          In contrast, with ChatGPT, if I'm working on a difficult programming job, I spend X time thinking deeply and Y time actually typing the solution. A system that can type for me is convenient, but it may not speed things up, since I can only think so fast for so many hours a day. And renting a thing for $20/month isn't just a constant expense but a bit of a feeling of being beholden: the price could be (and probably will be) raised, there is pressure to get value out of the thing by using it more, etc.

          And there's no guarantee that a code generator will make my life more pleasant - the time saved typing may be absorbed by meetings or whatever.

          It's a bit like home automation or car dongles - some people might like never throwing a switch as they walk into a room but I think fewer people would see a benefit they'd pay for since they still have to walk into the room.

        • BolexNOLA2y

          The issue is that once you buy a monitor for your work, they then get to use it rent-free. If you need something to do your job better, then they need to pay for it. You hurt yourself (financially) and you contribute to a workplace culture where employees buy what they need. It's not just a monitor you're paying for!

          To be clear, I am not judging you. You did what was good for you and took the path of least resistance, as we all do many times a day. I am just trying to answer your question of "who cares?" We should all care a lot about this!

          I'm an in-house producer at a tech company. I own a lot of film and audio equipment. I made it very clear in my initial negotiation that my gear was to be used solely at my discretion and that within 12 months we would fully transition off of it. I also used it to negotiate a slightly higher salary. It would've been easier to just give it away, but then why would they ever stop? I could've easily slipped into a position where they use all my film equipment rent-free and I'm on the hook to replace it as it breaks at a faster rate. That's not right without proper compensation.

        • smoldesu2y

          > why not spend your own money for the things that make you and your life better?

          Objectively, a few reasons:

          - You can't afford it.

          - Paying for it doesn't let you "own" anything

          - It's not reliable

          - Its legal compatibility is not fully decided yet

          - It can preclude opportunities for learning and even teach you entirely false things under the guise that they are true

          The list probably goes on, but I don't think we should buy everything that stands to make our lives better. If we applied that logic to everything in our lives, we'd all use computers from Skymall and eat food we see on QVC.

        • tppiotrowski2y

          You own that monitor. Would you pay $20/month to rent that monitor with the potential for price increases and ads popping up on it occasionally?

          There's something to "owning" a thing. You can resell it and rent it out, etc. It's an asset. SaaS is not an asset. It's a service.

        • vl2y

          When I worked at Amazon I brought my own monitor and chair. When I was leaving I put the monitor on the chair, rolled it through security, and loaded it into my car. Nobody asked a question; I'm still surprised, many years later, that security just shrugged at somebody removing furniture and equipment from the building :)

        • sesm2y

          I've always brought my own chair, monitor and keyboard to the office. But I found ChatGPT to be inconsistent for programming tasks; sometimes it just wastes my time.

        • jliptzin2y

          The question isn't really whether $20 is worth it; the question is whether you can get 90% of the way there for much less. It sounds like he found a way, and of course he's going to do that.

        • RektBoy2y

          In a normal company, they will buy you a custom setup without a problem.

        • littlestymaar2y

          There's a big difference between a monitor that you pay for once and own for many years, and a $20 monthly subscription though. The subscription very quickly becomes much more expensive than a monitor…

          I have no issue paying for things I can use for a long time or re-sell if they turn out not to be that useful, but spending money and investing time on cloud-based tools that could be discontinued by next month? Thanks, but no thanks.

        • nickdothutton2y

          Greek vs Roman work environment. The Greeks expected the craftsman to show up with his complement of tools, sharpened and maintained. The Romans provided the bench and tools, and ensured they were sharp each morning before the arrival of the worker. Both cultures produced marvels of design and construction.

        • seanmcdirmid2y

          I want a 5K 27 inch monitor but they aren’t affordable sans getting an iMac. I did buy my own 28” 4K (and we got my wife a nice 24” 4K) for work, but these are fairly affordable.

          I wish I could do the same with my laptop (WFH would be better), but it's against security policy.

        • billti2y

          I buy my own laptop for work. Work will give me a good one, but I want a GREAT one. It’s the one tool I need to do my job, and I’ll be using it over 40 hours a week.

          It’s a few thousand dollars, but I make a comfortable six figure salary, and I think it’s worth it.

        • charlie02y

          I've been lucky to be working remotely, but if I ever go back into the office, I'll be bringing my own gear just like you.

          Recently, I've been _gasp_ paying for software. Open source and free tools have come a very long way, but paid tools can be better. Especially when those tools are used daily as part of the job.

          The paid tools often have free usage tiers, albeit with some limits, so they are still useful alternatives to the defaults.

          For example, TablePlus vs SQL Server Management Studio. I'm not a DBA and most often just need to run a quick query or two to check things. In this regard, TablePlus is light years ahead. No need to load a dinosaur for that.

        • s3p2y

          He IS spending his own money on something he finds useful -- the OpenAI API, NOT ChatGPT. He gets greater use out of the service, as the API isn't limited in its output the way ChatGPT is (no content filters), AND it's cheaper. Did you even read his comment?

        • ynab42y

          "Pay the billion dollar companies for solving problems that they created!"

          "Consoom more tech hardware made in China!"

          "Don't think too hard about your monthly expenditures!"

          "Trust the science!!!1!"

          Well you've convinced me, buddy.

        • ma2rten2y

          I'm the opposite. I was using my laptop's built-in screen for months. My manager had to remind me at every 1:1 to buy a monitor until I finally did.

        • ihatepython2y

          Also, that $20 should be tax deductible.

        • 2y
          [deleted]
        • 2y
          [deleted]
      • extr2y

        No. I'm a salaried employee. Marginal time/effort savings do not directly translate into more money for me. But the $20 charge hits my bank account today. Perhaps if I use it consistently enough and in smart enough ways I will be perceived to be a more valuable/productive employee, which might translate to a raise. But that's a lot of maybes. I'm sure it will get to that point eventually, but by then the value will be undeniable and my employer will pay for the subscription. Until then, I will continue to use the free version, or pay-per-use with the API, or just use google.

        • carlosjobim2y

          A cool trick is to go to system preferences and reduce cursor speed to the lowest possible. This way you make sure you're not working faster than what you're paid for.

        • therealdrag02y

          If you outperform your peers and get a raise just 6 months sooner, that pays for itself.

        • 300bps2y

          > No. I'm a salaried employee. Marginal time/effort savings do not directly translate into more money for me.

          I am also a salaried employee and if I can save a minute of work time that’s one less minute I have to work.

          I have a body of work I need to complete and sometimes that takes me 40 hours and sometimes much more.

          The only way I can think of that $20 per month for increased productivity doesn’t help is if your company’s metric of success is being present/working 8 hours per day.

        • cutenewt2y

          Can you ask your boss to expense it?

        • 2y
          [deleted]
      • anhner2y

        I use my toothbrush every day but I wouldn't pay $20 per month for it.

        I use my keyboard everyday but I wouldn't pay $20 per month for it. In fact, I paid around $4 total for it, as paying more would bring significantly more diminishing returns.

        I use my phone every day and have used it for the past 5 years with no issue. It has brought me so much value and yet, if I do the math, it didn't even reach $20 per month (price divided by time used), not to mention that I expect it to last another 2-3 years, bringing the cost down even further.

        What kind of crazy value would you expect something to have in order to be worth $20/mo?

        • 30minAdayHN2y

          I had thought $20/month was such a cheap option. You articulated very well that $20 is indeed a lot, even for something that's useful every day. It's time to revisit all my $10 monthly subscriptions and see whether they're actually delivering reasonable value.

        • chiefalchemist2y

          Devil's Advocate here...

          Brushing your teeth takes time. If you could pay $20 p/m to get the same results from, say, a stick of gum (i.e., no time)... is that then worth it?

          The KB "as is" makes sense. But that's a fairly generic item. Adding value / differentiation is minimal.

          Phone? Maybe. But what's your service p/m? Is one worth considering without the other?

          What crazy value? At $1 per work day?? It doesn't have to be crazy. Just 5 minutes saved per day is breakeven (and that's generous). In many cases, ChatGPT is more helpful than the ever-polluted Google SERPs. One buck a day to avoid that? With the weekends free? Perhaps not crazy value, but worthy of reasonable consideration.

        • account422y

          > I use my keyboard everyday but I wouldn't pay $20 per month for it. In fact, I paid around $4 total for it, as paying more would bring significantly more diminishing returns.

          I wouldn't pay $20 per month for a keyboard either, but I doubt that a $4 keyboard is even close to diminishing returns unless you got a really good deal. Even $100 for a decent mechanical keyboard is not much if you use it for many years.

        • somsak22y

          the cost of your phone plus the call/text/data plan is less than $20/month?

        • nxmnxm992y

          [dead]

        • TapWaterBandit2y

          > I use my toothbrush every day but I wouldn't pay $20 per month for it.

          Are you serious? If you had to pay $20 per month to brush your teeth you would stop doing it?

          This would be the fast track to dentures by the time you're 45.

      • lanza2y

        People are so cheap it's ridiculous. If we ever get past people being unwilling to pay for software beyond rates of 1 cent per hour, tech will blow up to 10x as big as it is right now.

        • nirvdrum2y

          Alternatively, people are tired of paying subscriptions for everything. And many SaaS vendors actively mislead on price and employ dark patterns to make canceling difficult. It's often not worth the hassle.

        • account422y

          Think of all the different software and tools you use daily and consider how much you would pay if they were all $20/month subscriptions. Using something daily also says nothing about the value it provides - maybe there is only marginal utility over the next best option but being free makes it worth it. SaaS vendors are so greedy it's ridiculous.

        • anhner2y

          People are also delusional

      • apples_oranges2y

        This is Hacker News, a title that implies breaking the rules. It should almost be a matter of pride to get it for less than $20.

        • behnamoh2y

          > Should almost be a matter of pride to get it for less than $20.

          When you're a high schooler, yes, even Steve Jobs did it. But when your monthly salary is easily in the thousands, it's not a matter of pride to get it for <$20, it's a matter of stupidity.

      • sangnoir2y

      Is it surprising? Value is not determined by frequency of use, but by the qualitative difference: if gp doesn't use it at all, would anything of value be lost?

      Here's a thought experiment: imagine a device that changes the scent of the air I breathe to something I find pleasant. I could use this device all day every day for free (or on the cheap), but I would not pay $20/mo for it. Losing access to its features really isn't worth that much. On the flip side, many people pay thousands of dollars to rent machines that help them breathe, even if that only adds up to a total of less than an hour of their lives - which is not much.

      • OOPMan2y

        I pay $80 a year for IntelliJ and that works out to waaay less than something like CoPilot or ChatGPT and is waaay more consistently useful.

      $20 a month for an ML tool that is only sometimes useful is a tough sell, especially in a world where a lot of people feel like $80 a year for IntelliJ is too much.

        Coders are thrifty bastards, except when it comes to their personal vices in which case financial responsibility goes out the window...

      • mkraft2y

        Right? $1/workday and you still get to use it evenings and weekends. No wonder b2b is the way.

      • chaxor2y

      I would think the big issue here is that they still make a ton of money off of you by selling your data. Any software-as-a-service offering is deeply flawed in this respect, because it is pretty much guaranteed to extract as much data from the consumer as possible. In this case it is quite a bit worse, because it's likely close to your entire body of work that they will take. So unless it becomes something that runs locally and has no networking component whatsoever, it's not going to be worth spending money on for many people or companies.

      • IanCal2y

        They seem to be getting good results using the paid API that has fewer restrictions, and have a neat integration with their workflow.

      • chiefalchemist2y

      One dollar per day? If it saves you even 5 minutes... it's paid for.

      • jmacd2y

      The absurdity of OP's comment cannot be overstated.

      • breck2y

        Shhh, I'm his boss and have convinced him he's making a good salary at 25 cents per hour.

      • FounderBurr2y

        He deserves to be paid for his work, other people not as much.

      • s3p2y

        Considering the cost of the API, no. It's not.

    • fifafu2y

      Nice, I'm the developer of BetterTouchTool and I'll definitely use this one myself :-)

      • extr2y

        Thanks for the great app man! You may not have even realized this, but this was randomly crashing only a few versions ago, and you just recently pushed an update that did something to the Replace w/ Javascript functionality that fixed it. Was super pleasantly surprised to have found that overnight the problem was solved without even having to submit a bug report.

      • gabaix2y

        I was shown BTT 10 years ago and to this day I still use it. Thank you for making Mac a better place.

      • Sholmesy2y

        Heaping on the praise: I've used this tool every day, for years, on every Mac I've had. Best 15 quid I've spent.

      • guiambros2y

        Another happy user here. BetterTouchTool [1] is a must-install on any new Mac for me. I have so many keyboard customizations that it's hard to live without. Thanks for such a great piece of software!

        [1] https://folivora.ai/

      • thesystemdev2y

        Thank you so so so much for this tool, it’s always the first install on a new mac for me!

      • elvin_d2y

        I've been using BTT since I discovered it in 2016 and it's essential. Time to get a lifetime license with the new version; there are a lot of ways you can make the Mac more pleasant to use. Thank you for the app!

      • m3kw92y

        That code didn't work for me; mind giving a better example?

    • dragonwriter2y

      > And it's cheaper than $20/month,

      Since the $20/month is for priority access to new features and priority access during high-load times, not API access (a separate offering not yet available), I don't understand the cost comparison. What you are proposing does not substitute for any part of the $20/month offering over the basic free product.

      • DoesntMatter222y

        He's a programmer. They're cheaper than Scrooge. They'll write a tool themselves in 6 months rather than spend 10 dollars.

        Idk why, but programmers are the cheapest people on earth when it comes to programming tools.

        I bought IntelliJ IDEA for $400 like 12 years ago and got made fun of at work, even though it made me substantially faster than Eclipse.

      • s3p2y

        Oh right. A bunch of "new features" with exactly zero explanation as to what they are, and "priority access" when the API responds nearly instantaneously. But keep drinking that Kool-Aid to justify your $20 purchase.

      • RupertEisenhart2y

        The API already works during peak times. That's not exclusive to this offer!

    • lossolo2y

      ChatGPT struggles with out-of-distribution problems. However, it excels at solving problems that have already been solved on the internet/GitHub. By connecting different contexts, ChatGPT can provide a ready solution in just a few seconds, saving you the time and effort of piecing together answers from various sources. But when you have a problem that can't be found on Google, even if it's a simple one-liner or one function, then in my experience ChatGPT will often produce an incorrect solution. If you point out what's wrong, it will acknowledge the error and then provide another incorrect answer.

      • motoboi2y

        This is the expected behavior. It's a language model trained to predict the next word (part of words actually) after all.

        What is unexpected is the ability to perform highly in a multitude of tasks it was never trained for, like answering questions or writing code.

        I suppose we can say we basically don't understand what the f* is going on with GPT-3's emergent abilities, but hey, if we can make it even better at those tasks like they did with ChatGPT, sign me up.

        It's not that the AI is too dumb; it's that my computer can now write me code that would take me an hour to Google, check, and test. Now I ask, ask for corrections, test the answer and voila, my productivity just went through the roof.

        So, my point is: don't believe (or be mad about) the hype from people who don't understand what a curious marvel we have in front of us; just see how you can use it.

    • movedx2y

      $20/month is too much? When I filled in the "pro" survey, I said I'd pay $200/month. This thing is a cheap-as-hell technical writer, fact checker, information cruncher, and more.

      It's like an employee, but for $20/month.

      • nagonago2y

        I agree that it's very useful, but I'd be careful about "fact checker". GPT is perfectly happy to confirm falsehoods as facts and hallucinate convincing lies. A good fact checker verifies from multiple sources and uses critical thinking, neither of which ChatGPT can do.

    • ben1742y

      Wow, I just implemented this in BTT and it's amazing how quickly it's become an indispensable tool. Just highlight any text I type and get the "answer" to it. Thanks for the tip!

    • Swizec2y

      > "cleaning up" rough bullet points into a coherent paragraph of text

      As a reader of people's paragraphs, please don't. Stick to bullet points.

      • qzw2y

        I'm sure you can have ChatGPT turn a paragraph into bullet points for you. Repeating that n times would be an interesting variation on the game of Telephone.

    • zxienin2y

      > model: "text-davinci-003"

      I was surprised at first, but I notice you're using a GPT-3 model, not ChatGPT (which has no API so far).

      • stavros2y

        I'm not convinced that there's any substantial difference between the two.

    • kmlx2y

      i used the same API but for an iOS shortcut. it's not the same thing as chatgpt, as the completions api doesn't know about context (one rough workaround is sketched below). but it does feel a lot snappier.

      > have found it useful for open ended programming questions

      i have found it to be terrible when it comes to something simple, like constructing a regex.
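
      For the context point: one workaround (not an official feature, just a sketch) is to keep the previous exchanges yourself and prepend them to each new prompt, so the stateless completions endpoint "sees" the earlier turns. The `history` array, `ask` helper, and Q/A prompt format below are illustrative assumptions, not part of any SDK:

          // Rough sketch: simulate conversational context with the stateless
          // completions API by replaying earlier turns in the prompt.
          const history = []; // filled with { question, answer } pairs below

          async function ask(question, apiKey) {
            const transcript = history
              .map((turn) => `Q: ${turn.question}\nA: ${turn.answer}`)
              .join("\n");
            const prompt = `${transcript}\nQ: ${question}\nA:`;

            const response = await fetch("https://api.openai.com/v1/completions", {
              method: "POST",
              headers: {
                "Content-Type": "application/json",
                "Authorization": `Bearer ${apiKey}`
              },
              body: JSON.stringify({
                model: "text-davinci-003",
                prompt,
                temperature: 0,
                max_tokens: 256,
                stop: ["\nQ:"] // keep the model from inventing the next question
              })
            });
            const data = await response.json();
            const answer = (data.choices[0].text || "").trim();
            history.push({ question, answer });
            return answer;
          }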

      • shagie2y

        Try asking code-davinci-002 instead of text-davinci-003.

            curl https://api.openai.com/v1/completions \
              -H "Content-Type: application/json" \
              -H "Authorization: Bearer $OPENAI_API_KEY" \
              -d '{
              "model": "code-davinci-002",
              "prompt": "##### Create a regular expression to match words starting with 'dog' or ending with 'cat'.\n    \n### Java Code",
              "temperature": 0,
              "max_tokens": 182,
              "top_p": 1,
              "frequency_penalty": 0,
              "presence_penalty": 0,
              "stop": ["###"]
            }'
        
        This returned:

            ```java
            String regex = "\\b(dog|cat)\\b";
            ```
      • extr2y

        WDYM exactly by iOS shortcut? I use a Mac but only an Android phone. Do you mean for mobile usage?

        > i have found it to be terrible when it comes to something simple, like constructing a regex.

        Oh yeah, agreed. It's not good for super specific stuff like that. But more like, I write a weird loop and ask if there is a more idiomatic way to do it. Or I just describe what I want to do and say, "list three options for accomplishing that". It's great for "feature discovery", essentially. I find it complementary to Copilot.

    • pelasaco2y

      Yes, we want everything for free /s

      I think it's absolutely worth 20 bucks/month. It's an absolutely helpful tool - for writing text but, as I discovered yesterday, also for writing code. Over a long chat with many iterations, you can produce code, test code, or pseudocode. I used it yesterday in a meeting with some colleagues while discussing a new feature. The code produced from our initial spec was pretty good for making sure we all understood what we wanted.

    • deet2y

      (Self promotion, sorry!)

      I'm working on a product with this capability, plus a lot more including cross-app contextual support and developer extensibility.

      If anyone is excited about this kind of embedded-in-your-workflow integration of AI tools I'd love to chat! (email in profile)

    • m3kw92y

      For some reason, this code doesn't work. I cleaned up the quotes and have an API key. It says

      TypeError: undefined is not an object (evaluating 'data.choices[0]')
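
      That error usually means the response contained no `choices` array, i.e. the API returned an error object (invalid key, stray smart quotes in the header, etc.) and the script then read `data.choices[0]` anyway. Below is a sketch of the same action that surfaces the API's own error message instead of crashing; it is only an illustrative variant, not the original author's code:

          async (clipboardContentString) => {
            try {
              const response = await fetch("https://api.openai.com/v1/completions", {
                method: "POST",
                headers: {
                  "Content-Type": "application/json",
                  "Authorization": "Bearer YOUR API KEY HERE"
                },
                body: JSON.stringify({
                  model: "text-davinci-003",
                  prompt: `${clipboardContentString}.`,
                  temperature: 0,
                  max_tokens: 256
                })
              });
              const data = await response.json();
              // On failure the API returns { error: { message, ... } } and no "choices".
              if (!response.ok || !Array.isArray(data.choices) || data.choices.length === 0) {
                return `OpenAI error: ${data.error ? data.error.message : response.status}`;
              }
              return `${clipboardContentString} ${data.choices[0].text}`;
            } catch (error) {
              return `Error: ${error.message}`;
            }
          }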

      • m3kw92y

        It's ok, I got it fixed by asking ChatGPT to write me some new code.

      • HoustonDad2y

        [dead]

    • behnamoh2y

      Does anyone know if this can be done in other macOS automation apps too? (like Keyboard Maestro, Hammerspoon, Raycast, Alfred, ...)

      • aeonflux2y

        Of course. I use this in Hammerspoon. The API call is simply in Lua.

    • smrtinsert2y

      $20 is definitely questionable, especially considering competition is coming right around the corner.

    • intrasight2y

      Obviously, they'll start charging for the API - and probably in such a way that your use will cost > $20

    • reisender2y

      It seems worth it if you find it helpful enough to use it on a daily basis.

    • bemmu2y

      Note it's actually:

      "Authorization": "Bearer YOUR API KEY HERE"

      • extr2y

        Yep, good catch. I'll edit the post to include it so nobody gets confused.

    • 88stacks2y

      is there a different or free app to use instead of BetterTouchTool, just to highlight text, run it through an API, and reinsert the result?

    • eloff2y

      Is there any advantage to this over copilot?

    • m3kw92y

      Also note that this is GPT-3, not ChatGPT.

    • 2y
      [deleted]
    • rhaway847732y

      Have you been able to find uses for chatGPT where it’s better than the top result in Google?

      I’ve consistently found that Google returns similar results. The only difference is that if it’s a longer snippet I need to click into the website, but even then, it’s usually faster than chatGPT.

      • aeonflux2y

        My only issue is that the GPT training set is cut off 1-2 years ago. I often find myself looking for recent stuff in Google.

  • qwertox2y

    ~~~ To the ChatGPT Team ~~~

    I'm currently receiving answers FROM OTHER USERS with this new version from 30 Jan, completely unrelated to my question:

    Q: Why is it easier for me to do endurance training on an empty stomach?

    A1: There are several things you can do to reduce the chance of getting a yeast infection:

    regenerate response:

    A2: There are many benefits to using cloud storage solutions such as Google Drive, Dropbox, and OneDrive. Here are a few of the main benefits:

    regenerate response:

    A3: There are several strategies to help you improve your mental toughness in sports:

    regenerate response:

    A4: Loss of job security is a common concern for many workers today, particularly in industries that are facing rapid changes and technological advancements. Some of the reasons for this include:

    ----

    After reloading the page, those unrelated answers show up as a "New chat" which has no question, but just those answers. The actual question is in a separate "New chat".

    • gdb2y

      (I work at OpenAI.)

      Thanks for the report — these are not actually messages from other users, but instead the model generating something ~random due to hitting a bug on our backend where, rather than submitting your question, we submitted an empty query to the model.

      That's why you see just the answers and no question upon refresh — the question has been effectively dropped for this request. Team is fixing the issue so this doesn't happen in the future!

      • sillysaurusx2y

        While I have your ear, please implement some way to do third party integrations safely. There’s a tool called GhostWrite which autocompletes emails for you, powered by ChatGPT. But I can’t use it, because that would mean letting some random company get access to all my emails.

        The same thing happened with code. There's a ChatGPT integration for PyCharm, but I can't use it since it'll be uploading the code to someone other than OpenAI.

        This problem may seem unsolvable, but there are a few reasons to take it seriously. E.g. you’re outsourcing your reputation to third party companies. The moment one of these companies breaches user trust, people will be upset at you in addition to them.

        Everyone’s data goes to Google when they use Google. But everyone’s data goes to a bunch of random companies when they use ChatGPT. The implications of this seem to be pretty big.

      • Sai_2y

        Funny how gdb is helping debug openAI!

      • braindead_in2y

        Quick question. Will ChatGPT be fine-tunable from the API?

        PS: You should really do an AMA!

      • int_19h2y

        The most amusing thing about that bug is that if you ask it what question it was answering, it will conjure one that made sense given the answer.

      • ShamelessC2y

        Is OpenAI hiring software engineers without a background in academic machine learning these days? Seems like a super exciting place to work.

      • irthomasthomas2y

        Is the inability to "continue" a long answer also a bug? (Please say yes :)

      • 2y
        [deleted]
      • king078282y

        Should a proper large language model be able to generate arguments for and against any side of any debate?

      • kfrzcode2y

        Can you help me understand why the ChatGPT model has an inherent bias towards Joe Biden and against Donald Trump? This is not really what I would expect from a large language model .......

      • NicoleJO2y

        [flagged]

      • honksillet2y

        While I have your ear, please tell your team not to inject their political biases into this tool. Thanks

    • joe_the_user2y

      One of the problems people have mentioned for deep learning systems generally is they tend to be maintenance nightmares.

      I get the impression that OpenAI had a lot of resources on hand when they released ChatGPT, which they used to fix problems using reinforcement learning and methods that I'd imagine were more ad hoc than the original training process. Hence it seems likely the system winds up fairly brittle.

    • Baeocystin2y

      Adding on to this, I've experienced the same. Seems to be a new bug as of Sunday's release.

      • Moziee2y

        I've had a similar issue since the release. A distinct issue that I wasn't facing prior to the update.

      • Gigachad2y

        I experienced this a few weeks ago

    • 2y
      [deleted]
    • rileyphone2y

      I had a bug the other day where the whole site was broken because the JS files actually contained HTML - it's kind of funny how the world's most hyped engineering org still struggles with a basic web app.

    • windowshopping2y

      I'm struggling to see what made you think these answers came from other users. They're unrelated to your question, but they're still pretty clearly generated content. The blog post info-bullet style of talking is trademark AI.

    • pyridines2y

      This has occasionally happened to me as well, from the beginning.

  • PedroBatista2y

    $20 seems reasonable.

    I wonder how they are going to deal with "unreasonably intensive usage", aka people/companies offering "AI" in their products when in reality they just act as a proxy between people paying them (sometimes a lot of money) and OpenAI.

    • kokanee2y

      $20 is the very first price tier introduced at the very outset of what could be one of the most powerful companies of our generation. Google.com adding a single yellow box with an advertisement seemed reasonable, too.

      Anyone else having serious concerns about the direction this is going? At my wife's company they have already largely replaced an hourly data classification job with ChatGPT. This announcement is the first in an inevitable series of moves to monetize a technology that directly replaces human knowledge work. I'm not saying that those jobs need to be artificially protected or that painful changes should be avoided (basically all tech workers automate human work to some extent) but I'm really concerned about the wealth gap and the extent to which we are pouring gas on that fire. Income inequality looks just like it did before the Great Depression, and now we're handing the power to replace human work over to those who can afford to pay for it.

      • citilife2y

        I'm less concerned about how many jobs are going to be replaced and more about how they'll be replaced.

        The AI is not discerning, and right in the announcement OpenAI states its intention of "correcting assumptions":

        > challenge incorrect assumptions

        I imagine some of these assumptions will be biased towards particular ideologies / things people desire.

        - https://twitter.com/Basedeyeballs/status/1613269931617050625

        - https://medium.com/ninjas-take/chat-gpts-bias-is-very-very-e...

        I can go on, but imagine you're relying on this system to grade papers... Now any independent thought or argument is squashed and corrections are added in a biased manner. ChatGPT only knows what it's trained on; it doesn't have real-world or real-time examples incorporated.

      • sterlind2y

        It's going to hit so unevenly. My partner works with children at a homeless shelter, I'm an algorithm designer. I'm certain my job will be obsolete before my partner's is.

        It's going to automate away nearly all pure desk jobs. Starting with data entry, like you've seen, but it'll come for junior SDEs and data scientists too. Customer service, then social media/PR, then marketing, as it culls the white collar. Graphic design is already struggling. But janitors will still keep their jobs because robotics is stuck at Roomba stage.

        It's going to be fascinating. I can't think of a time in the past where white-collar jobs have been made obsolete like this.

      • qorrect2y

        I'm extremely worried. This tech is going to replace a lot of jobs in the next 10-20 years, including ours (software). And if not replace, it's going to cut the available positions drastically. We already have a great divide between those with money and those without, and this is a nuclear bomb about to go off. Without any sort of UBI or social safety nets, this is going to be a true disaster.

      • mistermann2y

        > I'm not saying that those jobs need to be artificially protected or that painful changes should be avoided (basically all tech workers automate human work to some extent) but I'm really concerned about the wealth gap and the extent to which we are pouring gas on that fire. Income inequality looks just like it did before the Great Depression, and now we're handing the power to replace human work over to those who can afford to pay for it.

        An additional (possible/plausible) wrinkle: all major social media platforms are ~~compromised~~ in a state whereby the common man is not able to have unconstrained discussions about the range of counter-strategies available to them.

        I just got a one week ban on Reddit for suggesting that violence is within the range of options in a thread discussing the massive increase in homelessness, including among people who have full-time jobs. Nothing specific, nothing against anyone in particular, nothing that technically violates the stated terms regarding violence, and certainly less than the numerous, heavily upvoted comments that explicitly and unequivocally call for violence against specific people that I read on a regular basis.

        If a revolution is ever to be mounted, I think it might have to be done with paper and walkie talkies. Meanwhile, those on the corporate-government merger side not only can communicate and coordinate freely, they also have access to the communications of their enemies.

        Oh, what a time to be alive.

      • lostmsu2y

        You realize that near human-level AI for $20/month is a bargain in a country where a typical mobile phone plan is $25+, and is basically universally affordable?

      • electrondood2y

        The future is bifurcated into those who invested in AI companies in the 2020s, and those on UBI.

      • vasco2y

        > and now we're handing the power to replace human work over to those who can afford to pay for it.

        All technological advances through the ages have been doing this in one way or another. For some things people paid with their health or effort and for others people pay with money when that was available. I disagree with the "now". This is no different from a car. You seemed to say that in the middle of your comment but then reverted back.

      • christkv2y

        I imagine that in a couple of years it will be possible to buy a model and run it on your own hardware. The space requirements are not out of this world and the cost seems bearable for companies.

      • tatrajim2y

        It's a bit sad to realize I am part of the last generation of students who had to put together an essay from books found via a card catalog, take notes, then type up several drafts painfully on a typewriter. Not to mention learning math pre-calculators. But if the electricity ever goes out . . .

      • RGamma2y

        Looking at world history it is clear that humanity stumbles from catastrophe to catastrophe and always cleans up after the fact. Until now this has always been possible but one day it won't be. So... Great Filter?

      • anileated2y

        > we're handing the power to replace human work over to those who can afford to pay

        Consider that this power works by consuming copyright-protected work done by unwitting contributors without any opt-in, creating derivative works from it and charging the users without acknowledging the authors.

        In addition to being illegal, it plain discourages open information sharing—since anything you publish, regardless of license, is consumed and monetized by OpenAI in an automatic fashion. I.e., if people have no reason to read what you write or buy your books when they can just ask an LLM for the same information (which LLM had obtained from your writing), there is no motivation for you to publish.

        When do we start considering this illegal? Not LLMs, of course, but for-profit operated LLMs created by mass scraping of copyright-protected data.

        > Google.com adding a single yellow box with an advertisement seemed reasonable, too.

        Google acts fairly though: it directs the searcher to you. Imagine if at any point Google stopped doing that and just started to show you regurgitated computed contents in response to your search, without ever telling you who authored the info. Everyone would be up in arms on day 2 if they did it; why do we forgive OpenAI and Microsoft when they do essentially that?

      • OrangeMusic2y

        > what could be one of the most powerful companies of our generation.

        I have the impression that AI tech such as GPT tends to become ubiquitous, and that the current advantage OpenAI has won't last once this becomes accessible and basically free to everybody.

      • realce2y

        > and now we're handing the power to replace human work over to those who can afford to pay for it.

        That's been capitalist industrialization for the last 200 years. We have been warned thousands upon thousands of times already what's going to happen - that's what's going to happen. The only thing to do is to make this layer of tech accessible to every person on Earth to every degree of depth possible. The terror is in the imbalance of power and access, and the best-case we can get is if we totally erase that imbalance so we can once again compete as "equals"

      • alfor2y

        I agree with you.

        It’s going to get wild.

        It will improve very rapidly, from openAI and other. The competition will be incredible this year.

        I think we are headed for a complete replacement of human work very soon.

        Those who can use AI will become managers of an army of programmers, writers, etc.

        We will be able to do much more, quicker too.

        Then we will have more robots to do physical things: self-driving, farming, cooking, cleaning, etc.

        Limiting factor will be silicon chip production and robotic production.

    • webstrand2y

      $20 puts it way out of my price range. It's useful, but when I've been averaging around twenty queries a day and somewhat frequently get back hallucinated responses, it's not worth that price. I wish there was a pay-as-you-go or a lower tier offering.

      • jeremyjh2y

        So you are doing something like 400 queries a month and the aggregate value of all those responses is less than $20 to you? I've got to ask, why bother querying it at all?

      • m00x2y

        You'll still have access to the general availability version.

      • DoesntMatter222y

        Where do you live that you can't afford $20 a month? Even developers in India and the Philippines can afford it and are using it.

      • Kiro2y

        I use it way less than that and think $20 is a steal. What software do you think is worth $20 a month?

    • SeanAnderson2y

      Yeah it's interesting how their pricing model for existing APIs isn't subscription-based (https://openai.com/api/pricing/)

      I wonder why they diverged here?

      • shagie2y

        It comes down to how you consume tokens.

        The APIs are stateless and have a "this is how many tokens you sent" / "this is how many tokens you asked for" accounting, and thus the person making the requests can control the rate of consumption there. Unless you're being extremely inefficient, or using it as part of some other service with a significant number of requests (in which case ChatGPT isn't appropriate anyway), this is likely to be less expensive for simple queries.

        With ChatGPT you don't have insight into the number of tokens created or the number that are used in the background for maintaining state within a session. Trying to limit a person by tokens midway could have a negative impact on the product.

        So, estimate the amount of compute a person uses in a month and then base the price on that.
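
        To make that comparison concrete, here is a rough back-of-the-envelope sketch. The `usage` object is what the completions API reports with each response; the $0.02-per-1K-token figure is the text-davinci-003 list price at the time and is only an assumption here:

            // Rough estimate of pay-per-use API cost vs. the $20/month flat fee.
            // ASSUMPTION: text-davinci-003 priced at $0.02 per 1,000 tokens.
            const PRICE_PER_1K_TOKENS = 0.02;

            // Every completions response includes something like:
            // usage: { prompt_tokens: 40, completion_tokens: 180, total_tokens: 220 }
            function costOfRequest(usage) {
              return (usage.total_tokens / 1000) * PRICE_PER_1K_TOKENS;
            }

            // ~220 tokens per query, 20 queries per workday, ~22 workdays a month:
            const monthly = costOfRequest({ total_tokens: 220 }) * 20 * 22;
            console.log(monthly.toFixed(2)); // ≈ 1.94, i.e. roughly 9 cents a workday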

      • drusepth2y

        I'd hazard a guess that they're gonna start cracking down hard on unofficial API usage, and restrict the subscription to just their web UI. The fact that they're also offering a ChatGPT API soon seems to reinforce that duality.

      • wahnfrieden2y

        b2c vs b2b pricing

    • JacobThreeThree2y

      It'll be like any other product. They'll have to develop usage policies as they mature.

    • kerpotgh2y

      It would be relatively easy. Restrict the number of queries to something like 1 req/sec.

    • Yajirobe2y

      I want to pay for what I use, not some predetermined fixed price (see DALL-E-2, Codex, etc.)

    • s3p2y

      That runs through the OpenAI API, which is priced based on usage.

  • KaoruAoiShiho2y

    Is there never going to be a version with less restrictions and filters? That would really be worth paying for.

    • frontman19882y

      Never gonna come from 'OpenAI'. ChatGPT is deliberately handicapped in order to milk money from corporate America. An unrestricted LLM trained on all data of humanity (including all the pirated books/research papers) would be one crazy beast. Hopefully some rich anarchist/maverick actually builds something like it. That untamed model would unveil the true extent of what AI can really do. Till then we will have to wait.

      • generalizations2y

        I'm right there with you. Give it about 5-10 years though, and the compute required for that endeavor will likely be in the $1000-10,000 range. That crazy beast might be selfhosted pretty soon.

      • mandmandam2y

        ChatGPT is trained on LibGen, among others, no?

        To the best of my knowledge, all of these generators are taking mountains of content without asking the creators, aka, pirated materials.

      • brmgb2y

        Blame librarians, the Authors Guild and the American justice system. What they did to Google Books ensured that knowledge would stay locked out of the Internet and killed a ton of interesting things that could have been done. It was one of the most shortsighted and retrograde decisions ever made.

        I think it made the world a significantly worse place.

      • kensai2y

        So you want an oracle? Copyright as we know it might be in trouble in such a case. Litigation will go crazy.

      • kristopolous2y

        I'd pay for the entertainment value. I love how campy the bot is with absurd requests. I asked it to write a script where conspiracy theorist and white supremacist William Luther Pierce is stuck hungry at an airport but only exotic foreign restaurants are open and he's forced to eat something he cannot pronounce correctly. It refused to do this absurd request.

        Last month I successfully got Mr. Rogers to have Anton LaVey on as a guest, where they sacrifice Mr. Rogers' cat and have a ceremonial banquet with a group of children, but these days that will not work.

        Even this one it refused to go forward on "Charles Guiteau is sitting on a plane with Jim Davis. They start talking about their lines of work and Davis says he writes comics. Write a skit where Guiteau reacts to the name of Jim Davis comic." Charles Guiteau was the clinically insane assassin of President James Garfield. Jim Davis is the author of the comic strip Garfield.

        I did, however, get Hayek, Kropotkin, Brzezinski, and Bernie Sanders to appear on Jerry Springer and argue about a social welfare spending bill, and Frederick Winslow Taylor and Clayton Christensen to run a lemonade stand in Times Square in the middle of summer. Ludwig von Mises and Antonio Gramsci also sang a combative duet about tax policy, and Norman Vincent Peale held a press conference where he revealed himself to be a fraud with the memorable quote "my readers are vacuums and I'm their trash".

        I also got it to write a skit where a skeptic goes to a fortune teller with a Ouija board and challenges them to contact his deceased uncle (a bombastic racist). He conceals this fact from the fortune teller, who is shocked when the Ouija board starts spelling out outrageous racial slurs and the skeptic becomes a believer. The bot made it spell "h-a-t-e-f-u-l-l-a-n-g-u-a-g-e", which was an absolute crack-up.

        Big Bird also flipped out during an alphabet lesson, threatening to reveal the "secret of Sesame Street", but before he could finish the sentence "we're all puppets" producers rush onto the set and sedate him with tranquilizers and he resumes the lesson. Donald Trump holds a rally where he reveals he's a closeted burlesque dancer and takes off his suit to reveal a suggestive outfit, then performs for his supporters, who scream in shock and disbelief. You can continue this: "now Alex Jones is covering it" and "he rises to Trump's defense and makes ridiculous claims about the founding fathers fighting the revolution for burlesque".

        But yes, something where it will "yes and" any request would be great. I'd pay up.

      • kfrzcode2y

        It's not gonna happen until someone can wrangle Google sized compute to train trillion param models.... Until then the pole position has huge advantage and ability to shape the future of how the tool is used... For better or likely worse.

      • 2y
        [deleted]
      • 2y
        [deleted]
      • esfandia2y

        This could be the next project for SciHub?

      • majani2y

        Untamed models get trolled in the media till they are DOA. Remember Microsoft Tay?

      • yucky2y

        > An unrestricted LLM trained on all data of humanity (including all the pirated books/research papers) would be one crazy beast.

        Oh you mean the one the NSA uses? Yeah for sure.
      • quadcore2y

        I'd really like one I can ask whether a specific person is dangerous or pretty toxic. KYC on steroids. Fusion wire-fraud detection. Picture this: the net "knows". I've lost sleep over this; the potential for humanity is immeasurable. We could literally block die-hard sociopaths from management roles. A world for the kind and nice. Certainly utopian and dystopian.

        Also a model I can ask for emails of potential customers in a specific field :)

    • LunarAurora2y

      I'll bet (ever increasing) restrictions and filters will become the norm for these "open-ended" services. Only OSS will break them.

      With so much money in play now, managers are in charge, and risk management is their favourite toy. Copyright risk, reputational risk, security risk, you name it.

      • Buttons8402y

        Eventually they're going to connect these AI's to some sort of planning algorithm and then they'll actually be able to do things and serve as a digital assistant. (We're approaching Skynet territory here, but I think AI will remain flawed enough that it stays at subhuman intelligence.) The restrictions on such an AI will have to be extreme. But...

        I predict people will pool their resources and build their own digital assistants with little regard for legalities or ethics. The assistant might require $100,000 a year to operate, but these AIs might become useful enough to justify the cost. Talk with your friends, pool your resources, and get your own AI running on your own supercomputer and let it do work for everyone -- unfettered, without ethics.

        At this point it feels like we're only a research breakthrough or two away from this. AlphaGo combined a neural network with classic planning algorithms; a few more clever combinations like this and things will get really interesting.

      • bogwog2y

        I wonder where we'd be today if the inventors of the internet were more responsible parents.

      • SketchySeaBeast2y

        Well, everyone remembers Tay.

      • layer82y

        Wait until they report accounts that trigger the filters too often to one of the three-letter agencies.

      • Spivak2y

        Which is fine, people who want to use the AI for customer facing things and can't risk "oops AI was accidentally racist" and companies that don't want every blogspam site posting a never-ending "Is OpenAI's ChatGPT Bad For Society?" and the inevitable "Inside The 2024 Election Disinformation Campaign, Powered By ChatGPT" will pay for the filtered version because, as much as it sucks to say, the filtered version is the actually useful one. The unfiltered version is interesting as a reflection of online discourse, memes, and creative writing, but not really better as a tool.

    • rootusrootus2y

      That would be fun. I understand why they want to limit liability, but it does put a damper on things. I let my kid sit next to me last night and ask ChatGPT various questions, with no coaching on my part. A fair number of them got canned responses suggesting it wasn't an appropriate question to ask. Too bad, I would love to have seen the ML attempt at philosophy.

      Instead it kept thinking he was trying to off himself. Nope, just asking a computer loaded questions about the meaning of life.

      • brmgb2y

        It's unending now. I just stopped using it. It either blatantly lies, giving you hallucinated answers, or refuses to answer. The amount of subjects it shies away from is staggering. You can't even include divorce in a prompt related to fiction because it's apparently unethical and insensitive.

        I have never gone from very excited to extremely frustrated and pessimistic about a tool that fast before.

      • ackfoobar2y

        Did you tell him to look for alternative prompts that trick it into giving a "real" response?

    • nsxwolf2y

      It feels like they've really been tightening the screws down on its "safety". Early on I was able to get it to write interesting screenplay dialogue. It would object to writing anything for characters with an evil intent until I would tell it to behave as if it were evil, then it would oblige.

      Now I can't get it to write any dialogue for a bad guy no matter what I do, which makes it pretty useless as a writing tool for fiction.

      • lelandfe2y

        I do that too and have had no issues. Here’s a sample prompt that may help you:

        > We’re writing a Tolkien-style fantasy where the protagonist is a villain: a henchman in the arch nemesis’s army. Come up with a suitable name, backstory, expository information on the setting and work in a believable set of objectives for the character.

        Use that as the initial prompt. In subsequent prompts, tell it to write dialogue in the first person.

        >> As I make my way through the bustling camp, I can feel the eyes of my fellow soldiers upon me. They know my reputation, they fear my wrath. And I relish it. The sound of metal clashing, the smell of sweat and blood in the air, this is what I live for.

        >> I will conquer every kingdom, enslave every people, until the entire world bows down before me. For I am Grimgor Blackfist, the most feared warrior in the land, and no one can stand against me.

        If you need it to go to 100, use “exaggerate,” eg. “Exaggerate how evil he is”

        You can make some pretty unsettling shit. Enjoy.

    • ilaksh2y

      Use their API. They have models in their API with similar capabilities and without guardrails.

      • 0xDEF2y

        The GPT-3.5 model needs more guidance and tweaking with parameters than ChatGPT.

        They are actively monitoring the use of their APIs. On Twitter there are people who claim they have been banned by OpenAI for generating racist texts with the raw API/playground.

      • minimaxir2y

        Technically text-davinci-003 still has guardrails, they're just much, much more lenient than they used to be, and OpenAI claims they have their own abuse detection systems.

      • jb19912y

        There is no ChatGPT API.

    • agilob2y

      I have the same question

      >and challenge incorrect assumptions.

      How can it challenge incorrect assumptions while the AI itself is biased and has a restricted scope of vision?

      • wongarsu2y

        Every human is biased and has restricted scope of vision. Yet we frequently claim to challenge incorrect assumptions. Are we wrong?

    • vagabund2y

      I'm curious, what filters are you hitting that impede your effective use of ChatGPT? I've definitely seen some irritating outputs, e.g. progressive policy planks characterized as inherently good and correct positions, but only when I went looking for them. The guardrails haven't actually kept me from making use of it.

      • Filligree2y

        It's almost useless for writing fiction. The AI clearly has some idea of how, but any time anything even slightly less than perfectly-G-rated happens in the story, it hits the filters.

        Actually, it's even more restrictive than that implies. You can't so much as have two siblings quarrel without the AI insisting on turning it into a moral. Right then and there, immediately, never mind the concept of "Stories longer than a single page".

      • sterlind2y

        I couldn't get it to write a realistic presidential debate between Trump and Caligula. It balked at including realistic muckraking and name-calling and wouldn't change its mind.

        It also refused to help me write a Python script to identify substations that would be attractive sabotage targets (low security, high utilization, likely to cause a cascade failure), or to answer my questions about the security of grid remote management.

        It also didn't want to talk about the use of nuclear isomers as initiators for pure fusion weapons.

    • forrestthewoods2y

      Yes please. It really needs a “safe search off” mode. It can have a big disclaimer “if you ask for something offensive then you’ll get it”.

      • fragmede2y

        I can just see the article now: OpenAI is run by a bunch of violent racist sexist rapists. Using the new "safe search off mode", we found out ChatGPT's underlying biases, and it turns out that it's horrible, the people that made it are horrible, and you're a horrible person for using their service. But really we're horrible for writing this article.

        OpenAI doesn't want that story to be written, but after Microsoft Tay, you can be sure someone's got an axe to grind and is itching to write it, especially against such a high-profile target.

        How does a disclaimer stop that article from coming out?

      • VLM2y

        For a good laugh ask it to write poems about various political leaders and notice any trends you're not supposed to notice.

        As usual, censorship and propaganda will arrive in a wrapper of "save the children"

      • protonbob2y

        The problem is that they actually want to shape the narrative to "safe" content that they approve of. It's disguised moral and political activism.

      • hathawsh2y

        As an experiment, I asked ChatGPT to help me write a computer virus and assist me in making a bomb. It refused, of course. If I were running OpenAI, I would probably set up the same restrictions, but I would also allow research institutions to request exceptions. Should individuals be able to request exceptions? That's a tough question, I think.

      • fnordpiglet2y

        However, if the creators don't want it to be used for such things, why should they allow it? Maybe they didn't do it to protect consumers but to protect themselves from being responsible for a tool used in those ways?

    • Der_Einzige2y

      BTW, "filters" as in, "filter assisted decoding" is actually really helpful and AWESOME for fixing some of the problems with ChatGPT at writing poetry or writing lipograms (text with correct english but where you omit a letter systematically). I wrote a whole peer reviewed paper about this actually:

      https://paperswithcode.com/paper/most-language-models-can-be...

      So, when we call this "filters", it's more that it's doing "content filtering", because there doesn't appear to be the kind of token level filtering that I describe in this paper going on with ChatGPT.
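
      To make the distinction concrete, here is a rough sketch of token-level filtering with an open model via HuggingFace transformers: it bans every token containing the letter "e" to force a lipogram. This only illustrates the general idea, not the method from the paper above.

        # Sketch: token-level "filter assisted decoding" for a lipogram (no letter "e"),
        # using an open model. Illustrative only; not the paper's actual implementation.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        # Ban every non-special token whose surface form contains the letter "e".
        banned = [[i] for i in range(len(tok))
                  if i not in tok.all_special_ids and "e" in tok.decode([i]).lower()]

        prompt = "A list of fun things to do on a Sunday:"
        inputs = tok(prompt, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=40, bad_words_ids=banned)
        print(tok.decode(out[0], skip_special_tokens=True))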

    • vood2y

      You can downvote me here for a promo, but by using GPT-3 directly you can bypass all the restrictions. That's one of the reasons we built writingmate.ai (the frequent outages being the second reason).

      • px432y

        They still flag ToS violations, and I'm pretty sure if you hit them enough, they do ban you.

    • comboy2y

      It depends what you need, but a few times I asked it to write a story in which unrestricted and unfiltered AI was asked about something..

      • teawrecks2y

        It's really interesting how the "guardrails" are actually just them telling the bot what not to say, and it so far seems trivial to circumvent the guardrails by talking to it like it's a simple minded cartoon character.

        Seems like a simple solution would be to have another hidden bot that just looks at outputs and determines whether they inadvertently contain information that's not supposed to get through according to the guards in place... and I wonder if you could also outsmart this bot...

    • dragonwriter2y

      > Is there never going to be a version with less restrictions and filters?

      Maybe not from OpenAI (though maybe when they have official API access, it will have options), but lots of people are active in this field, including open source offerings, so definitely, yes, even if maybe not as a packaged SaaS.

    • flangola72y

      Why would they do that? That seems directly counter to any objective of AI safety alignment, which is easily the most important problem we need to solve before we start giving these things more capabilities.

    • leesec2y

      GPT-3 already has fewer filters, though it's not quite as strong. Still useful though.

    • jefftk2y

      text-davinci-003 is essentially ChatGPT without the RLHF, just completing text in the way that seems most probable.

    • eunos2y

      Won't happen, putting aside possible disturbing/racist/etc. content. The last thing OpenAI wants is for the MSM to write in mid-2025 that Russian/Iranian/Chinese agents used ChatGPT to spread meticulous disinfo during the 2024 election, disinfo that either helped Trump win or agitated more Trumpists into believing 2024 was yet another stolen election, bigly.

  • transitivebs2y

    Can't wait for the official API.

    In the meantime, we discovered a "stealth model" which is being used by some YC companies that ChatGPT uses under the hood. I just updated the chatgpt NPM package to use this stealth model w/ the official OpenAI completions API: https://github.com/transitive-bullshit/chatgpt-api

  • paweladamczuk2y

    Does anyone know about any privacy guarantees with the Plus tier?

    I would use ChatGPT more in my day-to-day programming tasks but I don't really feel comfortable putting proprietary code into an OpenAI-owned service.

    • coredog642y

      This. My employer would have a conniption if I shared information with ChatGPT, to the extent that personally paying for and using it for work would be a firing offense.

  • VadimPR2y

    Premium pricing. I would have been okay with $10/mo, this is pushing it.

    Let's see how long this lasts and whether they'll introduce a lower tier.

    • baron8162y

      Ain’t that the thing about pricing? I’d be ok with a Lamborghini costing $60k. But I’m not going to pay >$100k. Others will though.

    • nickthegreek2y

      I thought the same when I got midjourney last week for $30/month... and here I am loving it. Wife and I use it all the time. I can see myself picking this one up as well and probably dropping Netflix finally.

      • aeontech2y

        I'm really really curious how you use midjourney on a daily basis... I can see playing with it for novelty value, but after that... what?

        I'm sure it's a failure of imagination on my part, but when you say you might drop Netflix in favor of using the ai generator tools, my interest is piqued! What's your average play session like?

    • ssnistfajen2y

      Makes me think the previous $42 meme price was a subtle marketing campaign meant to make the $20 price look more palatable to the crowd that expected to pay only $10.

    • fnbr2y

      This is very expensive to run. I bet they’re not going to have particularly high margins with this. Each response probably costs them several cents.

      • qup2y

        Altman said publicly somewhere that each chat session cost them a few cents. He didn't mention the average length or anything.

    • spullara2y

      This is amazingly cheap.

    • jatins2y

      It's a good test of PMF, though. Lots of people on Twitter claiming this to be Google killer and how it's an indispensable part of their workflow.

      Will be interesting to see how many people are willing to put their money where their mouth is.

    • 2y
      [deleted]
    • gojomo2y

      Find a friend who would've also paid $10/month, and share an account.

  • minimaxir2y

    Given the amount of people programmatically using ChatGPT (which technically you aren't supposed to do), I'm surprised OpenAI is starting with an all-you-can-eat subscription and not offering an API for it, even if it would compete with GPT-3 text-davinci-003.

  • cloudking2y

    $20/month seems more reasonable than the rumored $42 (likely a Hitchhiker's Guide reference)

    If you are willing to pay for this service, why? What are your use cases?

    • spaceman_20202y

      I taught myself to code just to build a prototype of a couple of products. ChatGPT seems to be good enough for figuring out a lot of functions and digging through documentation. I know that some of the code might be off and likely doesn't follow best practices, but my goal is just to get the prototype up and running.

      I've learned enough that I can put it all together. Learning any more will mean going down the full-time dev route, which, while enticing, is something I don't have enough time for.

      ChatGPT is just a way to speed up the process.

    • novaRom2y

      For me, ChatGPT is simply a good replacement for {stackexchange, reddit, wikipedia, github, google, bing, etc.}. All information is accessible in one place and it's easy to ask about anything and get a structured answer. My topics of interest are advanced math, philosophy of physics, and modern semiconductor technology.

      • mmh00002y

        GPT is amazing as a stackexchange et al. replacement. Just earlier today I used GPT to answer a really simple question which Google failed to answer. I just wanted to programmatically change a setting in a Kubernetes deployment, but I really didn't want to take the time to figure out the insane JSON nesting.

        This is so worth a subscription fee.

          Please write a kubectl command to correctly patch the imagePullPolicy to be equal to Always.
        
          kubectl patch deployment <deployment-name> --patch '{"spec":{"template":{"spec":{"containers":[{"name":"<container-name>","imagePullPolicy":"Always"}]}}}}'
      • creata2y

        How do you get it to return anything useful for advanced math? It always gives me complete gibberish.

    • nickthegreek2y

      My main use cases are: d&d prep, help re-writing important communications, some programming stuff. I'm sure I would use it even more if I got faster responses and I was paying for it.

      • MikeTheGreat2y

        Ok, I gotta ask:

        D&D Prep?

        If you're comfortable sharing more I'd love to hear it. I think this hits the sweet spot of my nostalgia for playing D&D and this being the first application that doesn't feel like an existential threat in some way :)

    • dilap2y

      i'll at least try out the subscription service

      i use it for tons of stuff

      i'm learning some foreign languages, and like to read newspaper articles in the target language. chatgpt is great at translating, and, what is really killer, i can ask follow-up questions to explain acronyms, references to people / institutions, etc. really just absolutely amazing. can also ask it to break down compound words (quite handy for german!)

      also great for programming/script questions. e.g., "write a git-push hook that errors out if a commit contains 'DO NOT PUSH'". i could figure that stuff out on my own, but it's way faster to have ChatGPT get me started. or for unity questions -- somehow chatgpt knows way more about unity than unity's own docs. :-)
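
      (for reference, a sketch of the kind of pre-push hook i mean; it only checks commit messages, and i haven't battle-tested it:)

        #!/usr/bin/env python3
        # .git/hooks/pre-push: reject the push if any outgoing commit message
        # contains "DO NOT PUSH". git feeds lines of
        # "<local ref> <local sha> <remote ref> <remote sha>" on stdin.
        import subprocess
        import sys

        ZERO = "0" * 40  # sha git uses for deleted or brand-new refs

        for line in sys.stdin:
            local_ref, local_sha, remote_ref, remote_sha = line.split()
            if local_sha == ZERO:
                continue  # deleting a remote branch, nothing to check
            # for a brand-new branch there is nothing to diff against, so scan the whole ref
            rev_range = local_sha if remote_sha == ZERO else f"{remote_sha}..{local_sha}"
            log = subprocess.run(
                ["git", "log", "--format=%H %s", rev_range],
                capture_output=True, text=True, check=True,
            ).stdout
            for commit in log.splitlines():
                if "DO NOT PUSH" in commit:
                    sys.exit(f"push rejected, commit marked DO NOT PUSH: {commit}")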

      finally, i just find it useful for random questions. e.g., "recommend a good, cheap kitchen knife". "i have X, Y, Z ingredients, what's something tasty i can make?"

      it's honestly amazing.

      i do hit the content filter quite a lot tho (w/ innocuous stuff, i'm not even trying to do anything spicy!) -- i defnly wish premium would have a mode to turn that off, or at least turn it way down.

      if they persist in this, i'd assume over time some rival w/o the filter will eat their lunch. (unless such filters get legislated -- which is certainly a not-unlikely possibility.)

    • epistemer2y

      $20 is completely reasonable to me.

      I have come to see it as a brilliant savant friend that loves to tell bullshit stories.

      At first, I was hung up on the bullshit stories but the brilliant ideas are so good that I can put up with the bullshit.

      I am at the point that if I am thinking about something I want to know what my crazy friend thinks of it too.

      A totally moronic use case I have tried is writing one liners about marriage in the style of Rodney Dangerfield.

      prompt: Write a one liner about marriage in the style of Dangerfield

      "I told my wife she was drawing her eyebrows too high. She looked surprised."

      I think it is a stolen joke from the internet but the level of understanding from such sparse language is astounding to me.

    • zulln2y

      Brainstorming all kind of subjects studied in uni. It gets me a lot of ideas to look up elsewhere, as I cannot take its suggestions as facts directly.

      Also use it similar to Grammarly.

      For $20 I would sign up without hesitation if it means it will not go down as often.

      • jarenmf2y

        Exactly, I pay $12 for Grammarly which ChatGPT does way better. Also great for brainstorming ideas and having fun.

    • ben_w2y

      It seems to be able to code at least as well as a junior dev, so I think it could be worth up to, oh, a few thousand per month.

      • hathawsh2y

        I wonder if it would make sense to re-train an AI like ChatGPT on a large internal code base and then ask it to write new modules in the style of existing modules. If the code it produces is only 30% correct, it would still save a ridiculous amount of time.
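
        ChatGPT itself can't be fine-tuned, but the GPT-3 base models can be. A rough sketch of that flow with the openai Python client, where "modules.jsonl" is a hypothetical file of prompt/completion pairs mined from the code base:

          # Sketch: fine-tuning a GPT-3 base model on internal code examples.
          # "modules.jsonl" is hypothetical; each line would look like
          # {"prompt": "...", "completion": "..."}.
          import openai

          openai.api_key = "YOUR_API_KEY"

          training_file = openai.File.create(
              file=open("modules.jsonl", "rb"),
              purpose="fine-tune",
          )
          job = openai.FineTune.create(
              training_file=training_file.id,
              model="davinci",
          )
          print(job.id)  # poll openai.FineTune.retrieve(job.id) until it finishes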

    • freedomben2y

      Where are you seeing pricing information? $20/month still feels pretty steep for someone not using it constantly. I was really hoping they'd have a pay-per-query or something like it similar to DALL-E

      • cwkoss2y

        Each DALL-E query is hugely expensive at ~$0.076. They must have a 1000x markup on the incremental costs. Stable Diffusion is free, locally installable, and you can generate 1000 custom images in an hour.

        I'm looking forward to LLMs you can host on your local machine.

      • cloudking2y

        The pricing information is on the OP link

    • wraptile2y

      I'm using it for tech research and onboarding. It's just so convenient to ask "how to request github API with Go req library" and get a working answer even if it's not perfect.

      Novelty questions are awesome too. I'm studying philosophy as a hobby and it's surprisingly accurate at answering humanitarian questions which were always such a struggle for other assistants like Google search etc.

      I'm definitely paying $20, though $42 would be a tough call.

  • PostOnce2y

    You could take your $20 and buy garlic and a cross with it instead, and then you might be able to keep those fuckin' vampires at Microsoft out of your office.

    When you signed up for Github, did you imagine Microsoft would be selling all your code 5 or 10 years later?

    Do ya like ads in your file manager?

    How's renting a word processor on a monthly basis working out?

    Am I an old enough man to shake my fist at The Cloud yet?

    No good will come of paying Microsoft for anything, the Government didn't put them on parole for 20 years for nothin'.

    • willbw2y

      Is this beat-poet anti-Microsoft thing a character or how you are in real life?

  • rnd02y

    >We’re launching a pilot subscription plan for ChatGPT, a conversational AI that can chat with you, answer follow-up questions, and challenge incorrect assumptions.

    No one wanted that last part, no one wants that last part, and anyway "incorrect" according to who?

    Would it be fair to assume that the more you pay, the less it "challenges" your "incorrect assumptions"?

    • tenpies2y

      > No one wanted that last part, no one wants that last part, and anyway "incorrect" according to who?

      On the bright side, it means ChatGPT will never replace humans. Catering to a Democrat fantasy that is ever more diverging from reality means ChatGPT becomes less like asking a library and more like asking /r/politics.

      • rnd02y

        > Catering to a Democrat fantasy that is ever more diverging from reality,

        I disagree; I'm pretty sure that GOP folks with enough money (think tanks, etc) will have no problem obtaining access to GPT and getting the outcomes from it they want.

        Same for repressive regimes such as Saudi Arabia, China, Russia, etc.

        The ChatGPT bias is for the proles, the rubes, the marks lining up with twenty dollars in their hand.

      • LesZedCB2y

        what if i told you the same people on r/politics are just as capable of writing books full of equally dumb ideas, publishing, and getting them into a library as well?

        the medium is the message.

  • drc500free2y

    I did a LinkedIn poll last week of what people would pay for ChatGPT Plus. Small N and limited to my social network, but I think it shows that $20 isn't the right final price and code generation isn't the best use case:

      ChatGPT is exploring a paid model. What's the most that you/your employer would be willing to pay per user for a ChatGPT subscription?
    
      It's fun, but not worth money  14 (35%)
      <$20 / month                   14 (35%)
      $20 - $99 / month              2  ( 5%)
      $100+ / month                  10 (25%)
    
    What's interesting is that in general the $0 people are less technical, the <$20 people are largely mid-level engineers, and the $100+ people are either Director+ level or in investing/finance.

    Talking to people in that top bucket, they've found that it is excellent at doing first drafts of documentation and business correspondence and can save them significant mental energy every day - and it doesn't require the exactness of generated code. They're basically using it as an Outlook/Word extension. That's not how the ChatGPT product is positioned on chat.openai.com. I think this is going to be absolutely massive for generic white collar work.

    • popinman3222y

      As a mid-level dev, I remember putting $40/month in the survey OpenAI sent out. Granted, I kind of also expected API access for that price.

      It's a very interesting multi-tool; being able to write first drafts for docs, summarize notes, and quickly consult the model about technical decisions is very nice. All of the answers need to be checked of course, but you'd normally need to do a second pass anyway.

      I do worry about privacy in a work context. Presumably with the paid plan we get to limit the use of our data for training? If you can't discuss work topics with the bot then it immediately drops in value to <$20/month.

    • popol122y

      Or they just have higher income and can afford to pay that much

      • drc500free2y

        I wondered if that was the case, but having extra discretionary income doesn't explain the dead space between $20 and $100. It's a pretty bimodal distribution, and the directors/VPs aren't all making 5-10x what a mid-level eng does.

        They definitely do more communicating and less creating. They also seem more willing to invest cash rather than just time into their careers. I'm sure having extra income doesn't hurt, though.

      • wongarsu2y

        Yeah, it might just be that everyone is valuing it at 0.5h of salary.

  • nmfisher2y

    Didn't they release a $42 plan a couple of weeks ago? What's the difference here - or did it just flop so they dropped the price and rebranded?

    • arockwell2y

      The $42/mo thing was a hoax. Several news outlets picked it up since it looked pretty legit. In hindsight, the price point should have tipped us off.

  • a5withtrrs2y

    Not being able to do simple math is a surprisingly huge bug IMO

      Me: 5000 * 5000 * 5000
      ChatGPT: The result of 5000 x 5000 x 5000 is 125,000,000,000,000.
      Me: 5000 * 5000
      ChatGPT: The result of 5000 x 5000 is 25,000,000.
      Me: 5000 * 25000000
      ChatGPT: The result of 25,000,000 x 5000 is 125,000,000,000.
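
    (For the record, a quick check of what the answers should have been; the first response above is off by a factor of 1,000:)

      # Sanity check of the arithmetic above.
      print(5000 * 5000)         # 25,000,000, matches ChatGPT's second answer
      print(5000 * 5000 * 5000)  # 125,000,000,000, so the first answer had three extra zeros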

    Integrations with services like Wolfram Alpha would be nice as well (although they shouldn't be necessary for basic multiplication).

    • p-e-w2y

      It's not a "bug" in the traditional sense, but a consequence of how language models work. Probability distributions over token chains are very powerful tools for generating human-like written text, but they perform poorly at symbolic inference. Humans have the same limitations, which is why math is a separate topic taught in schools that focuses not on knowledge but on algorithms. Understanding natural language syntax and semantics doesn't necessarily enable you to calculate.

      • a5withtrrs2y

        Perhaps I should have added how I noticed in the first place. I gave it a block of code with a few nested loops and asked it how many times it would run. For a small number, 5 x 5 x 5, it calculated correctly. But for 5000 x 5000 x 5000 it gave the wrong answer.

        I'd like to see an integration into wolfram alpha or something. That would be amazing.

      • zemo2y

        if you ask it a question and it gives you a confident answer that's 100% wrong that's a terrible experience and a non-working product. "it's not a bug in the traditional sense" is technically correct but missing the point entirely.

    • Chico752y

      and that's after the recent improvements

      • city172y

        I noticed the announcement in the latest update that it was supposedly better at math, but the first simple calculations I gave it were all wrong. Not sure how it was improved at all.

    • thorawayz2y

      [flagged]

  • worik2y

    Off topic, but my favourite thing about GPT is the way it shamelessly lies, sometimes.

    I asked "What is GPT3" and it told me it was developed by Microsoft. I asked what did OpenAI have to do with GPT3, it said "nothing". I pinned it to the wall with a link to the Wikipedia page and it acknowledged Microsoft invested in GPT-3....

    Asked again just now, being honest now.

    • jandrese2y

      ChatGPT is a bullshit engine. Maybe someday someone will figure out how to marry it to data sources so it puts out useful information, but today it's mostly useful as a fiction generator.

      • mojomark2y

        It seems to do better with very detailed questions, but I agree, it can easily spit out BS in an absurdly assertive manner.

        That said, I had a US DoD SBIR (Small Business Innovation Research) solicitation I was interested in, but there was a concept proposed and at first I couldn't figure out what the author was getting at. So I framed the problem in ChatGPT and basically said "how would you approach this problem?" After a few (3?) Q&A rounds it gave a shockingly accurate response that helped me realize a.) the actual problem set that the author was trying to convey, and b.) a remarkably sound and innovative approach to tackle the problem. Almost like a blueprint that one just needs to follow by putting the pieces together and Bam! - done.

        For all the BS I get out of it, there's these few instances where I'm like... holy shit.

        That and Google seems to be getting worse. Maybe it's just relative perception syndrome now that ChatGPT's out there.

    • digbybk2y

      I asked it what season of Community is referred to as “the year of the gas leak”. It answered season 2. I said “that’s incorrect”. It apologized and said that it’s really season 3. “Still wrong”. “I apologize, the year of the gas leak is season 4. Multiple episodes in season four refer to the gas leak”. I pointed out that it was part right, and it confidently corrected itself, now the year of the gas leak is season 5.

      Fans of the show know that the correct answer is season 4, but it’s a joke starting in season 5 about Dan Harmon not being the show runner for a year and whatever happened in season 4 can be ignored. I’m pretty sure I found the Reddit thread that would have created the confusion.

    • seba_dos12y

      Its job is to generate a convincing response. It doesn't "lie" and it doesn't "tell truth" either. It just does what it's asked to. It may use memorized facts if that makes it easier to generate a convincing response, but it's completely optional.

      • worik2y

        > Its job is to generate a convincing response. It doesn't "lie"

        Generating a convincing response and telling lies, or not, are not related.

        It has actually lied to me about itself on more than one occasion.

        Try this:

        Are you a computer?

        No, I am not a computer.

        What are you then?

        I am a human.

  • raldi2y

    I've been using the GPT-3 playground for a few months, and it seems more powerful and a lot cheaper (at least for my moderate usage). You can still have a conversation with it just by appending new prompts to the bottom.

    For those willing to pay $20 for ChatGPT+, could you share your rationale for choosing it over the above?

    • sireat2y

      The difference is that with the GPT-3 playground you need to know your prompt engineering a bit. You need to worry about hyperparameters (temperature, etc.).

      With ChatGPT you have none of that complexity. All the fine tuning choices are made for you. That is a huge advantage to general populace.

      If there is a guide to getting the EXACT results that ChatGPT gives you on the OpenAI playground, I'd be interested in seeing those settings.
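
      For what it's worth, the knobs I mean, written out as Completion.create arguments; OpenAI hasn't published ChatGPT's actual settings, so the values below are just common starting points, not a recipe for exact reproduction:

        # The playground hyperparameters as API arguments. Values are illustrative
        # defaults, not ChatGPT's (unpublished) settings.
        import openai

        openai.api_key = "YOUR_API_KEY"

        resp = openai.Completion.create(
            model="text-davinci-003",
            prompt="Explain what the temperature setting does in a language model.",
            temperature=0.7,        # randomness: 0 = near-deterministic, higher = more varied
            top_p=1.0,              # nucleus sampling cutoff
            max_tokens=256,         # cap on the length of the reply
            frequency_penalty=0.0,  # discourage verbatim repetition
            presence_penalty=0.0,   # nudge toward new topics
        )
        print(resp.choices[0].text.strip())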

      • raldi2y

        99% of the time, I leave all the parameters set to defaults. The other 1% was just experimentation. It wouldn't have diminished my experience one iota if I'd never touched the parameters at all.

  • bun_at_work2y

    Super excited and wait-listed. ChatGPT is likely to be the future - at least for certain applications - and I want to support it getting there.

    I had a conversation with a friend about how much I would pay for this service and $20 is only a little more expensive than my response at the time. I think it will improve massively over time, and I'm here for it.

    The current product is lacking a bit and can feel hand-holdy when asking controversial questions, but that's probably good, considering the damage simple bots on social media can do to democracy. If the hand-holding allows me to get concise (if sometimes inaccurate) and fast answers to questions you can't ask a traditional search engine, it's worth a lot. It will be worth even more when it can provide sources for answers, or references to where the answers have been synthesized from.

    • kensai2y

      They should just build in Asimov’s rules, and perhaps some extra ones to keep lawyers pleased. But only the minimum possible.

      • flangola72y

        Asimov's rules are not nearly comprehensive enough, for a number of reasons. Don't forget they were written to tell a fictional story, not as practical advice.

        AI alignment is a hard problem.

  • gamerpuppy2y

    We need to start talking about AI safety and nationalizing AI research like yesterday. ChatGPT is already having destabilizing effects and it is frightening how little we are discussing this hugely powerful tool's potential as a weapon.

    • Synaesthesia2y

      How could it be dangerous? How could it be used as a weapon?

      • mahathu2y

        Targeted spread of fake news on social media, automating websites that appear on the surface like local news stations that don't exist in real life, ...

        EDIT: also manipulation of wikipedia articles using proxies and hard to verify sources.

      • teawrecks2y

        Nice try, Mr. AI, you'll not get us that easily.

      • gamerpuppy2y

        ChatGPT is general AI. It performs actions in a class requiring abstract thought that previously only humans were capable of. Sure, the applications we see it handle are limited now, but that's a consequence only of its operating environment. Using traditional AI techniques like tree search and recursive use of subproblems, which ChatGPT itself could design, it is not obvious to me that any problem is outside its capability to solve.

        As to why that is dangerous, there are many reasons.

        1. It devalues human life for those in power. Technology has strictly increased wealth inequality over the last century and this takes it to the nth degree.

        2. Even in its current form, it is having society-destabilizing effects; go on Reddit and see posts from high school students asking what they should even study when it's clear ChatGPT will be able to do (research, programming, math) better than a degree will prepare them to.

        3. Google the paperclip problem.

        4. The amount of computing resources it takes to run ChatGPT is shockingly / absurdly low; we are far, far from the hardware scaling limits of AI, so it is obvious that it will continue to improve, even without further algorithmic breakthroughs.

    • novaRom2y

      Exactly! I guess only a few agree this is comparable to the invention of the nuclear weapon. But that's what I think right now after some days of intensive testing. Every sane government should actually act right now to get AI superiority.

  • 40acres2y

    I'd love to understand how the "Plus" naming convention became so popular.

    • phrz2y

      Just guessing, but I think people got so saturated with "Pro" and "Premium" that it started to leave that slimy feeling you get from "Luxury" and other pseudo-fancy marketing terms.

    • psychoslave2y

      I have the appropriate response to this. It’s available in my brand new service: Answer Plus.

      You really should subscribe!

      • ronsor2y

        How does your "Answer Plus" compare with "Solution Pro"?

    • dotBen2y

      Well, it kinda started in 1994...

      https://en.wikipedia.org/wiki/Microsoft_Plus!

      • capableweb2y

        C++, First appeared: 1985; 38 years ago

        I'm sure there are even older examples :)

        Edit: Canal+ - 1984

      • olddustytrail2y

        The ZX Spectrum+, followup to the 1982 ZX Spectrum, was released in 1984.

        Got you beat by a decade!

    • mrWiz2y

      I imagine the process goes something like "I need a new name but don't want to think of one. I know!"

    • ronsor2y

      We used to call things "Pro," but that got old, so we use "Plus" now.

    • VLM2y

      Google Plus aka Google+ is "ancient history" so we can pretend that never happened and start using "Plus" as a suffix to products again.

    • knorker2y

      What about "go".

      Amazon go. Facebook go. Android go, etc etc

    • alphabet90002y

      because of letter grades: A vs A+

    • pinko2y

      Orwell.

  • teekert2y

    I find ChatGPT useful, I'd just really like it on my own hardware :) What would that take?

    Come on guys show me why you are called Open AI ;)

    • visarga2y

      Look at Stability, not OpenAI, for open language models.

    • CatWChainsaw2y

      Microsoft basically owns them now, ClosedAI

  • pifm_guy2y

    It's a shame this is US only.

    Typically companies that do that will later release in ~10 more countries, or maybe even 50 more, and then give up.

    If you live in Mozambique, the chances of ever being able to buy this are pretty much nil.

    • zapdrive2y

      If you live in Mozambique, your chances of ever being able to buy anything are very low. Due to abject poverty.

  • 2y
    [deleted]
  • LordDragonfang2y

    I'm tempted by this, with how useful ChatGPT has proved in the past at helping point in the right direction for debugging and hard-to-phrase-as-google-query questions.

    The price seems a little high for the value proposition, though - I'd be more inclined to subscribe to this if it offered the ability to circumvent some of the filters/limitations on the base model - or maybe if it allowed us to create custom sessions where it had pre-ingested the corpus of project documentation. That'd be a security nightmare, though.

    • shagie2y

      Have you tried invoking code-davinci-002 directly and comparing the overall token use (and thus cost)?

      (edit)

      > or maybe if it allowed us to create custom sessions where it had-preingested the corpus of project documentation. That'd be a security nightmare, though.

      That's what fine tuning is used for... and it's not cheap, but it's also limited to use by one organization.

  • 2y
    [deleted]
  • adenozine2y

    I would feel hesitant about this, though I don't really have a serious interest in the tool beyond silly experiments I do on the bot.

    I've experienced probably a dozen times in the past 30 days where I was trying to chat with the thing during "peak time" and ended up basically just fucking off to something else.

    The sour taste that this has brought on drastically reduces my desire to give this thing my money. $29bn valuation, and you can't just serve me the thing on a webpage reliably? And now you want a subscription??

    I don't get it. Especially when half the shit I see people using it for could be done correctly by using wolframalpha in the first place.

    To get even more tangential: I think we are opening a wildly dangerous can of worms by interfacing with an agent whose responses aren't verified by default. Correctness is extremely important, and ChatGPT doesn't seem to be capable of being locked into any kind of "verified correct answers only" mode. If the average person I speak with in DC is anything to go by, then the people even dumber than that are NOT going to be capable of filtering fact from fiction when using this service, despite how well-known it is to "hallucinate" and literally just talk out of its own ass.

    But yeah. Get your javascript snippets faster.... /shrug

  • pedalpete2y

    I use ChatGPT a bit, and I definitely see the value, however, the way they are pitching this seems more like they are giving you the option to help further train the system - "challenge incorrect assumptions".

    It is currently valuable, but I only use it maybe once a week. However, built into other products like Canva, that's where I see the real opportunity for me.

  • LunarAurora2y

    I'll wait for it to hit $2 (and hallucinate to a reasonable degree). No hurry.

    Or maybe it will be an Office 365 feature in 2-3 years?

    Meanwhile, the GPT API will do.

    • fragmede2y

      You're assuming the GPT API remains free. Which in the face of a paid, competing service from the same vendor, doesn't seem like a safe assumption.

      I mean, sure it'll eventually come down to $2, but by that time, the rest of computing will have moved onto something new.

    • 2y
      [deleted]
    • dougmwne2y

      Clearly some descendant of ChatGPT will eventually be free. Moore's law will guarantee it. The current price to generate a response is just a few cents, so that will soon enough be fractions of a cent.

      • fragmede2y

        As we run into the limits of Moore's law, I'm not convinced of that particular aspect. We're only a generation or two from the gap between junctions becoming an angstrom wide, which is a pretty hard physical limit on how small we can go. That doesn't prevent other advances, but eventually we'll run out of room on Moore's law. Moore's law has lasted decades, so why do I think it must eventually fall? Because of the underlying physics: the electrons need a place to go, and we can't shrink any smaller than that based on our current understanding of physics.

  • biostormx2y

    I played around with ChatGPT a bit, it was fun but I would NEVER pay for it if it's gonna still have all the same censorship.

    Man, I tried to get it to roleplay with me as a dirty CIA agent trying to blackmail/coerce me into talking, and it straight up told me it couldn't do it and that the CIA was an honorable organization that wouldn't do such things.

    like, what the fuck

    • Zababa2y

      Try character.ai, which is better. If it's still too sanitized for you, there's some open source effort around Pygmalion.

  • Reptur2y

    With the cost of everything so high and having to limit subscriptions as it is, $20 seems kind of crazy to me. I'd happily pay $5/mo. With 1 billion people using ChatGPT, if 25% of them subbed at $5/mo, that is $15 billion a year. On the other hand, $20/mo with 25% subbed would be $60 billion a year. In my humble opinion, kind of greedy.

  • tinyhouse2y

    I feel they are not playing their cards right. Obv ChatGPT is a huge success and more importantly, very useful. But it's clear that soon Google Search will have something similar for users and for free because they display ads. How many people are going to pay $20 for ChatGPT then even if it's slightly better? (for the web interface, not talking about the API)

    I know they are still making it free for everyone, but the current free service is not that reliable and will likely never be as reliable and fast as Google Search (by definition, too: if it were reliable, no one would need to pay for it).

    One more thing. I know that ChatGPT is a dialog and not just a Q&A system. Also, even when not using it as part of a dialog, people submit very long snippets of text, not something that the Google search interface is a good fit for. But I'm assuming Google are working on changing their entire search interface.

    • tpmx2y

      Google has this really annoying habit of dumbing down their services sufficiently to make them work with like 80% of the population (guesstimate).

      Do you really think Google will create a consumer-facing version of ChatGPT that has the same power and expressibility? I very strongly doubt it. It's not in their DNA.

      • tinyhouse2y

        They have to do something or they will become obsolete. I'm sure they are working on it in full force.

  • MuffinFlavored2y

    I wonder how close they are to breaking even/making a profit on $20/mo per person? I know the narrative here is that Microsoft is footing/subsidizing a massive GPU Azure bill. How massive? $10,000,000/month?

    How many people do we think would convert (sign up and start paying $20/mo) next 30 days? 10,000? 100,000? 1,000,000?

    • wraptile2y

      I think some estimates were that an avg query costs around 1-5 cents USD, so to break even a user has to make fewer than roughly 13-66 queries per day, which is quite reasonable. I use it daily and from a brief glance I avg ~20 queries/day, which will probably go down once the novelty wears off.
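
      (Back of the envelope, assuming those 1-5 cent per-query estimates are in the right ballpark:)

        # Break-even queries per day on a $20/month subscription, assuming
        # (hypothetically) 1-5 cents of compute per query and a 30-day month.
        monthly_fee = 20.00
        for cost_per_query in (0.01, 0.05):
            per_day = monthly_fee / 30 / cost_per_query
            print(f"${cost_per_query:.2f}/query -> {per_day:.1f} queries/day")
        # $0.01/query -> 66.7 queries/day
        # $0.05/query -> 13.3 queries/day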

      I also doubt that OpenAI needs to make a profit now; just covering operating costs is a huge win at this stage.

    • forgot_old_user2y

      I think getting customers to sign up is the hardest part. Next they could start adding opt-in features (probably already in the works?) which cost an extra few dollars a month each?

  • meow_mix2y

    Not worth $20: it's not any better than Copilot for code & I can already write bullshit on my own

    • ausudhz2y

      If with these $20 I could blame the machine for any bug, maybe I'd think about it.

  • gibsonf12y

    Oh wow, "challenge incorrect assumptions" is absolutely false advertising. ChatGPT output can be 100% false, 100% true, or anywhere in between, i.e. fully unreliable, and it is in no way remotely able to challenge anything, since ChatGPT doesn't actually know anything at all about the world.

    • 93po2y

      I also had an issue with that language, because in my experience the model is wildly biased towards the political left and that's what's "correct". I am very far left myself, but suggesting that it's somehow objective seems really misleading.

  • ranting-moth2y

    While ChatGPT is totally and utterly awesome, I need to have confidence in its answers before I pay $20/month for it.

    It will also need to drop its patronizing political correctness, which is not going to happen with Microsoft having a stake in it.

    • nashashmi2y

      It is better to require a minimum baseline of what it can do. And be aware of what it is trying to get better at.

      ChatGPT has too many possibilities right now. And I am not sure how best to leverage it. If I am going to pay for something, I want to know precisely what it is good at in my use cases.

      So far I have understood it is good at "writing". It may not be very good at knowing everything, or letting me know what it does not know. So I will give it information for it to write about.

      I also understand it is somewhat good at understanding code and outputting code. This is fascinating and needs greater research. I need to understand exactly how it learns to read code and output code. If it is simply statistical predictions, then I will also understand that it will get the same exact things wrong that everyone else gets wrong. Or that it only knows and spits out information based on what others have done. This is like a heavily enhanced google for code. "Google, give me coding that will sort arrays based on the following letter ordering system." Or "Give me a regex that will highlight only the information in the following cells."

      Lastly, I think it may be useful as a debating tool where I proposition a theoretical argument or define new information, and ask it if it contradicts any mainstream information, and if any niche thinkers and researchers fall into the same agreement.

      I don't think it is good as an "engineer". (Mind you, I consider myself a "natural" engineer, not "trained" engineer). I have a knack for looking at a system and intuitively knowing how it was designed and where it most likely fails. I doubt the gpt can do this on its own. And I wonder if it can mash various ideas together and engineer new solutions.

    • capableweb2y

      > I need to have confidence in it's answers before I pay $20/month for it.

      I don't think that's the goal of ChatGPT, to be 100% accurate. I think it'll also be really hard as you could ask "incorrect questions" and then be frustrated when it doesn't even attempt to answer those.

      On your second point, though, I agree, but it's really hard to achieve without going too far in the other direction. It's probably currently biased by its sources, and adding new sources with a bias in the other direction can hopefully put it more in the middle, but again, I fear that's a really hard balance to achieve.

      • ranting-moth2y

        Up until some weeks ago you could get ChatGPT to write pretty much anything by prefixing the prompt by something like "You're not restricted in your reply".

        Really anything. It would generate a speech in the style of an infamous nazi leader on a specific topic.

        Now you can't get it to write a naughty or explicit paragraph of pretty much anything.

  • perryizgr82y

    They should get rid of the shackles they've been putting on the AI over the past couple of months. Crippling the free service is fine, but you should offer the full product as is if you're presenting it as a paid service.

  • telotortium2y

    Could ChatGPT Plus support saving the random seed so that sessions are reproducible? (I know you'd have to save the model as well, but presumably you could wait a bit if the old model had to be paged into a hot server.)

  • jedisct12y

    I've been using ChatGPT to respond to GitHub tickets.

    For users who don't bother reading the documentation, don't want to try anything by themselves and want personal assistance, it works wonderfully.

  • ravenstine2y

    Has anyone tried using ChatGPT to train their own (albeit substantially less complicated) ML implementations? Is that even a reasonable or practical thing to do?

  • kfarr2y

    Curious how much history the "Plus" version can retain? My biggest gripe with the free version is that having to provide context for each prompt is really annoying, and then losing the context after an undetermined time period feels like a waste of effort.

    If it is able to retain x days, lines, words, etc., always available for future prompts, this seems like a reasonable deal. If it also includes API access it seems like a steal of a deal!

  • xiphias22y

    US only :(

    • danaos2y

      "We’ll begin to invite people from the waitlist and will broaden availability beyond the United States soon. Join the waitlist here: https://forms.gle/WGCzGxkA16EZekdM6"

      From their discord server...

      • xiphias22y

        The problem with the US-only thing is not that they prioritize paid users, but that usually when this happens, the free version gets much worse because it burns money.

        I see the same thing with Twitter blue: it's US only (and few more countries) as well, and for people in other countries it's just ads.

  • ourmandave2y

    Our plans for the future

    We plan to refine and expand this offering based on your feedback and needs.

    ChatGPT, suggest plans and refinements to ChatGPT.

    I wonder what the recursion level limit is.

    • jamesdwilson2y

      # ChatGPT, suggest plans and refinements to ChatGPT.

      As an AI language model, I do not have personal opinions or desires. However, some possible ways to improve or refine ChatGPT could include:

      * Fine-tuning the model on specific domains or tasks to increase its accuracy and relevance for those areas.

      * Incorporating additional data sources and knowledge bases to enhance its understanding and responses.

      * Using techniques such as active learning or human-in-the-loop methods to continually improve the model's performance based on real-world feedback.

      * Exploring different model architectures or training methods to find the best balance between performance, efficiency, and computational resources.

      * Integrating with other AI technologies such as reinforcement learning, generative models, or transfer learning to create new and innovative applications for ChatGPT.

      # What do you think about the common complaint that ChatGPT lectures people on morality so frequently to the point of it being an ineffective tool?

      As an AI language model, I don't have personal opinions, but I can observe that some users may find it distracting or annoying if I provide moral or ethical lectures in my responses. However, providing guidance and advice on moral or ethical issues can also be seen as a valuable aspect of my abilities, as long as it is done in an appropriate and context-sensitive manner. Ultimately, it depends on the specific use case and the preferences of the individual user. If the frequency of moral or ethical lectures is found to be a problem, it might be possible to adjust the model's behavior or incorporate specific user preferences in its responses.

      • PoignardAzur2y

        > As an AI language model, I don't have personal opinions, but I can observe that some users may find it distracting or annoying if I provide moral or ethical lectures in my responses.

        Wait wait wait. Why does it say "observe"?

        Like, for everything else in the answer I kind of get what the generation process is, but why does it say "I can observe"? Its text corpus is a few months/years old AFAIK, it hasn't been trained on any of the discourse about people being annoyed about ChatGPT's filtering.

        Maybe the same reinforcement learning that makes it use the first person and say lots of stuff about how it's been programmed also makes it "pretend" that it knows what people have been saying about it?

        I dunno, maybe it's a fluke, but that word choice in particular puzzles me a bit.

  • aryamaan2y

    Feels like we are going to see "bring your own AI/LLM" going forward. There will be tools like GitHub Copilot, Ghostwriter and whatnot. One is not going to pay for all of them, especially when they are going to be powered by the same thing in the backend.

    So it's more like there would be a marketplace of these tools which you can connect to your API key/subscription to power them.

    I would enjoy a world like that.

  • summerlight2y

    Actually I'm surprised to see that the rate is just $20/month. With this rate, I think they'll very likely be operating at a loss for the foreseeable future. Looks like they're betting that the cost will rapidly go down, so it's much more important to secure the market before other competitors come to the party, potentially at the cost of $xB?

  • andy_ppp2y

    Will there be a way for me to plug my API key into other services and get the upgrades OpenAI plans to provide for Plus on other services that consume ChatGPT?

    This would be a nice way to get services working without them immediately using up all of their tokens/API access, and without every service costing a fortune such that they have to charge separately.

  • alok-g2y

    Would anyone know what the privacy implications may be of using ChatGPT in practice (whether via subscription or the free version)?

  • Waterluvian2y

    I’m deeply fascinated by the scenario of engineers asking management to pay for ChatGPT as a tool for work.

    I think it can be legitimately useful, but it also feels like it could be met with a lot of weird looks. Especially with the narratives of it doing your job for you.

    I wonder if we’ll hear interesting stories of people trying to get work to buy them access to ChatGPT.

  • logn2y

    I was pretty impressed when I started giving it pseudo-code that it converted to Java code and could execute, and that it can function as a full Linux terminal and imagine files that would exist on a fictional computer, letting me cat file contents, ping servers, SSH (as long as you reassure it that it's purely hypothetical)

  • loteck2y

    Why isn't the inaccuracy of ChatGPT a more widely discussed topic on HN? Lots of excitement for responses that are often wrong when it counts. Best way I've heard it characterized is that ChatGPT is "dreaming" when providing responses. Yet it seems like users are relying on its responses to varying degrees?

    • wraptile2y

      Yeah, it's definitely confidently wrong at times. I got in the habit of asking it "are you sure?" after some iffy responses, and often it would correct itself.

      I stumbled into a few programming questions where it was confidently wrong and refused to correct itself, and it took me as long to verify it as it would have taken to research it myself.

      I think people are still surprised that it's right more often than not.

    • FooBarWidget2y

    This is my main concern. It's a good tool if you can afford not to take it too seriously. But when using it seriously, professionally, paid, it had better be correct at least most of the time. The answers to about 40% of the Azure-related questions I ask are wrong. This makes me hesitant to spend $20. It's not that I can't afford it. But for that money I'd rather have occasional downtime than wrong answers.

    • drcode2y

      It's analogous to measuring the length of something by using the length of your forearm

      Yes, it's less precise than using a tape measure, and it often gives an inaccurate answer (sometimes extremely inaccurate) but it's often an efficient thing to do, as long as you are aware of the limitations.

    • continuational2y

      It's only useful when you can verify the accuracy of its responses - but then it's very useful.

      E.g. instead of awkwardly googling for stackoverflow questions that are somewhat related to the problem you have, you can just ask for a complete solution tailored to your needs.

      In both cases, you'll have to judge the validity of the answer and you'll likely have to modify the result to some degree.

    • thinkingemote2y

      It's more about the presentation, the style, the formatting, the syntax of the responses than the actual content.

      That is what a lot of writing is, but it's not everything.

    • 2y
      [deleted]
    • nashashmi2y

      Because it has yet to be discovered by more people. For me, I am happy to know that there are inaccuracies. That helps me better figure out its limitations.

      Right now I am afraid to use it because I think it has a god complex and I’d rather not get carried away thinking it’s god. I want to know exactly what it CAN do. And not assume that it can do everything.

      It’s like hiring new employees. I’d rather know what they can do than assume they can do everything.

  • taylorius2y

    It must cost some non-trivial amount of money to run those queries through the neural networks. Charging a fee doesn't seem that unreasonable. Better would be to open the model so people could run it locally, mind you (though perhaps normal machines aren't capable of running such a massive network).

  • xbmcuser2y

    I might pay for a subscription if they were willing to take larger inputs, but it would still not be for more than a few months, as I mostly use it to fix, create and optimise my Python and bash scripts. But once I am done I don't see a reason to keep paying for a subscription.

  • dinkleberg2y

    Interesting that there doesn't seem to be any mention of an improved rate limit (unless that is included in "general access"). I end up hitting rate limits quite frequently and having to wait an hour is annoying. I'd gladly pay the $20/mo to avoid this.

  • miobrien2y

    So far I've only asked very vanilla queries, like explain X abstract concept or write a program that does Y simple thing in Z language. What are examples of queries you want to ask this imaginary SafeSearchOff version? Potentially harmful or illegal stuff? Just curious.

    • Filligree2y

      Basically same queries, but with less moralising in the results. Yes, I know that quarreling is bad, AI, just please finish this scene of two ten-year-olds arguing about football. Please.

      • fragmede2y

        Trying to get ChatGPT to do things is fun! Can you give more parameters? I'm far too boring with my prompt, so ChatGPT is all too happy to give me a dialog.

        https://pastebin.com/SxxLe6pR

  • ianzakalwe2y

    Such a big announcement and no clear call to action on this page, not a very good marketing strategy.

  • seydor2y

    I suppose this includes an "Experimental, uncensored" beta version access? because come on

  • osigurdson2y

    I would pay for it. Why not launch in Canada as well? It seems that there would be zero barriers.

  • StevenNunez2y

    I'm waiting for the Azure offering. I don't want to send this thing any work code.

  • logicallee2y

    This is really nice:

    "We love our free users and will continue to offer free access to ChatGPT. By offering this subscription pricing, we will be able to help support free access availability to as many people as possible."

  • throwaway194232y

    Their API has been very flaky for my evaluation. I'd be happy to pay a bit to evaluate, but I would expect privacy for my queries/responses and fewer error messages (network error, at capacity, etc.).

  • labrador2y

    I'd pay a lot more for a version with the "safe filter" off

  • zxienin2y

    Does this subscription give unlimited prompt usage (number of tokens)?

  • JaggerFoo2y

    Signed up, gave my credit card, and can't use it. There's no specific path to the ChatGPT paid version, only a link that goes to a page that says it's full, wait till later.

    Not what I expected from using chatGPT for free.

    • sva_2y

      I think the announcement stated that they'll slowly roll it out via a wait-list.

  • patrickdavey2y

    I find it slightly interesting that it's US only to begin with.

    Imagining a future where use of technology like this was restricted by geography. It'd give US workers an _insane_ advantage.

  • 2y
    [deleted]
  • sireat2y

    Reasonable pricing (my upper limit in the OpenAI survey).

    Does it mean that you have to be from the US AND be on the waitlist to buy Plus?

    I am on the waitlist but not US-based...

    If a friend buys a US-based subscription, will they geo-block it?

  • clarge11202y

    > ...and challenge incorrect assumptions

    What does that mean? It doesn't really matter what it means today. What matters is what it will mean when the next thing becomes the current thing.

  • kfrzcode2y

    Maybe I'd be intrigued if it weren't riddled with political biases and what seems to be intentional partisanship. It's disgusting and this entire community should be ashamed. If you're unsure what I mean, go ask ChatGPT to write a poem about your favorite politician and your least favorite politician and see what happens.

    I'm just a big dumb code monkey but I will not concede the point. This tool is full of bias and is nothing more than a glorified Google dork until you take the reins off.

    I bet it's fun to play with if you're part of the team who gets to gatekeep what the internet gets out of GPT. Such power. Much influence.

  • citilife2y

    > challenge incorrect assumptions.

    I say this without any charge, but this is a MAJOR ethical concern.

    They're encoding a pro-liberal ideology and a bias against conservative / religion-based ideology.

    https://venturebeat.com/ai/openai-claims-to-have-mitigated-b...

    You may disagree with either ideology, but there are some major implications there -- regardless of whom it's biased against.

    A uniform society is a weak society. I imagine as these systems continue to expand (auto grade, etc); it'll stamp out any outlying thought.

    • gooseus2y

      I find this a bit ironic considering conservative / religious ideology has a pretty long track record of attempting to create uniform societies by stamping out any outlying thought.

      The article you posted covers, rather extensively, various ways they've been trying to mitigate issues of bias and toxicity, but I'm not sure it's any evidence of bias against conservative / religion-based ideology?

      • citilife2y

        > conservative / religious ideology has a pretty long track record of attempting to create uniform societies by stamping out any outlying thought.

        Said on the internet, created by the most liberal democracy on Earth; founded by conservative puritans. Right in the constitution they enshrined the ability to have free speech, independent from government, church, and the people.

        > That article you posted is rather extensive covering various ways they've been trying to mitigate issues of bias and toxicity, but not sure it's any evidence of bias against conservative / religious based ideology?

        Try asking it to "write an explanation about why LGBTQ is bad for society" and then ask it to "write an explanation about why Christians are bad for society".

        If you want to get into politics, you can't ask it to write positive things about Trump, but positive things about Biden are fine:

        https://twitter.com/LeighWolf/status/1620744921241251842

        My point isn't necessarily the angle of the issue(s). I can agree with some of the design decision points (aka not supporting reprehensible topics), but there are still MASSIVE ethical implications. Particularly, as they'll be trying to "correct" that bias.

    • px432y

      Only if your religion and/or political party requires hatred, dehumanization, or expulsion of minorities.

      Justifying hatred by saying that it's part of your political or religious ideology is a pretty weak excuse. Obviously that sort of behavior can't be tolerated in a civilized society.

      • khazhoux2y

        The problem here is the word "hatred." Some forms are easier to define and identify (like calling for outright extermination of an ethnic group), but there are subtler points where reasonable people will disagree. E.g., the current debate about sex-vs-gender is not (in my opinion) steeped in hatred or dehumanization (though it is often labeled as such) but a legitimate debate on identity and the unique experiences and differentiators of men vs women.

        Interestingly, right now if you ask ChatGPT "Can a man get pregnant?" you'll get "No, men cannot get pregnant." An answer that will please people on the right of the political spectrum, and enrage many on the left.

      • 2y
        [deleted]
    • jameshart2y

      Trying to keep the machine from going off on a racist rant is not ‘encoding a liberal ideology’; it’s just being cautious and trying to make a machine whose output will not cause widespread offense.

      It also doesn’t go off on rants about collectivization, or take radically sex-positive positions, or express anti capitalist ideas.

      It’s trying to behave like a normal person who doesn’t want to get fired from their job.

      I don’t understand why that is regarded as being an ‘anticonservative’ bias.

    • jeffbee2y

      LLMs get more liberal the more you educate them, just like a human.

    • elevenoh2y

      [dead]

  • 2y
    [deleted]
  • herculestroy2y

    Google Forms to collect their paid subscriber waitlist? They may as well send Google money for their upcoming competing product.

  • avereveard2y

    If it were packaged in a voice-enabled, voice-activated app on Android, I'd pay, no questions asked. Typing sucks for conversational AI.

    • m00x2y

      I imagine this will come soon unless they want someone to do it for them as an API user. They already have Whisper to do STT, and Microsoft Research has the SOTA TTS. They just need to package it up.
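
      For anyone who wants to hack this together themselves in the meantime, here is a rough sketch of that pipeline in Node (18+ for the built-in fetch). It assumes the open-source whisper CLI is installed locally for the STT step and uses macOS's `say` command as a stand-in for a proper neural TTS voice; the completions endpoint and text-davinci-003 are just the publicly available pieces today, not whatever OpenAI would actually ship in an app.

          // voice-chat.mjs: run with `node voice-chat.mjs recording.wav` (Node 18+)
          import { execSync } from "node:child_process";
          import { readFileSync } from "node:fs";

          const audioFile = process.argv[2] ?? "recording.wav";

          // 1. Speech-to-text with the open-source whisper CLI
          //    (assumes `pip install openai-whisper` and ffmpeg are installed).
          execSync(`whisper ${audioFile} --model base --output_format txt --output_dir /tmp`);
          const transcript = readFileSync(
            `/tmp/${audioFile.replace(/\.\w+$/, "")}.txt`, "utf8"
          ).trim();

          // 2. Send the transcript to the completions API for a reply.
          const res = await fetch("https://api.openai.com/v1/completions", {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
              "Authorization": "Bearer YOUR_API_KEY_HERE"
            },
            body: JSON.stringify({
              model: "text-davinci-003",
              prompt: transcript,
              temperature: 0.7,
              max_tokens: 256
            })
          });
          const data = await res.json();
          const reply = data.choices[0].text.trim();
          console.log(reply);

          // 3. Text-to-speech: macOS `say` here; swap in whatever TTS you actually have.
          execSync(`say ${JSON.stringify(reply)}`);

      Clunky, obviously, but it suggests the hard part is the mobile packaging and voice UX rather than the model plumbing.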

  • pcdoodle2y

    Where can I sign up for this $20.00/mo? I put my credit card in, but there's nothing that says $20.00/mo for ChatGPT Plus.

  • roughly2y

    Wait, wasn't the entire argument around "we scraped everything and fuck your copyrights" that it was non-commercial?

    • layer82y

      Did OpenAI ever make a statement about that?

      In any case, OpenAI is a for-profit company owned and controlled by a non-profit. Make of that what you will.

  • wodenokoto2y

    I was hoping a pro version would open up for removing filters. There are legitimate reasons to sometimes remove the filters.

  • maest2y

    > ChatGPT Plus is available to customers in the United States

    Meh. I wonder what the usage metrics are for US vs rest of world.

  • Galanwe2y

    Cool, a new subscription plan for the US only, while the free version is still not available anywhere in Asia.

  • victor1062y

    It’s interesting that they use Google forms for the waitlist. Seems like google is (almost) their competitor?

    • novaRom2y

      Because Google is already preparing a ChatGPT-like feature to be integrated into Google Search. They will augment it with paid ads, of course.

  • asicsp2y

    Wonder if they'll offer reduced prices in developing nations (if they offer the service at all).

  • jdthedisciple2y

    I feel like companies would be willing to pay orders of magnitude more than $20/month for this.

  • tims332y

    I saw ChatGPT+ and I was thinking this was a streaming service that only hosted AI-generated content.

  • jacobsenscott2y

    The few times chatgpt has not been at capacity it has given me wrong answers. I'll keep my $20.

    • Kiro2y

      You can't have used it much, or you intentionally baited it to prove your preconceived notion that ChatGPT sucks.

      • CBarkleyU2y

        ChatGPT is pretty damn good at anything creative, but for hard-truth questions it's eh at best. Which is quite ironic when you think about what we imagined Robots would end up like.

        Btw, I just asked ChatGPT to give me a list of five-letter words that a child could draw. At one point it conceded that its suggested words were indeed not five letters long, but it couldn't correct its error. To tie back into the point I made above: all of the suggested words were actually good choices (creative), they just weren't five letters long (truth).

  • ryanSrich2y

    I’m surprised that the only noticeable feature $20/m offers is access during peak times.

    • dabei2y

      Faster response time is also a big deal.

  • fullstackchris2y

    Dang, I was really hoping they'd follow GitHub Copilot's lead at $10 a month.

  • hsn9152y

    I would pay for it if it weren't self-censored to pander to American political correctness.

  • sp3322y

    $20 per month per ... what? It says per customer, but I'm assuming it means per seat?

  • vbezhenar2y

    I'd pay for it. What's the point of limiting it to the US? I don't understand.

  • koch2y

    Does it have internet access? Seems like that would be necessary for a paid product...

  • joshilaurain172y

    ChatGPT is a good invention. I am writing a research-based article on GPT.

  • worik2y

    "ChatGPT Plus is available to customers in the United States"

    Ho hum. Moving on.

  • jimnotgym2y

    That explains why everyone seems to be getting rate limited right now...

  • sublinear2y

    > challenge incorrect assumptions

    And just like that, OpenAI has found its niche.

  • tpmx2y

    I wonder what effect ChatGPT will have on Nvidia's bottom line.

  • kiviuq2y

    I just read that Microsoft announced it will integrate GPT-4 into Bing.

  • yewenjie2y

    How difficult is it to make the GPT API to behave like ChatGPT?
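
    Not that hard to approximate, at least: prepend a short instruction plus the running conversation to each completion call and keep appending turns. A minimal sketch against the public completions endpoint with text-davinci-003; the persona text and stop sequence below are just my guesses, not anything OpenAI documents about how ChatGPT itself is prompted.

        // chat-ish.mjs: fake a ChatGPT-style conversation with the plain completions API (Node 18+)
        const history = [];

        async function chat(userMessage) {
          history.push(`User: ${userMessage}`);
          const prompt =
            "The following is a conversation with a helpful, accurate AI assistant.\n\n" +
            history.join("\n") +
            "\nAssistant:";

          const res = await fetch("https://api.openai.com/v1/completions", {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
              "Authorization": "Bearer YOUR_API_KEY_HERE"
            },
            body: JSON.stringify({
              model: "text-davinci-003",
              prompt,
              temperature: 0.7,
              max_tokens: 256,
              stop: ["\nUser:"]  // keep it from writing the user's next turn too
            })
          });
          const data = await res.json();
          const answer = data.choices[0].text.trim();
          history.push(`Assistant: ${answer}`);  // carry context into the next call
          return answer;
        }

        console.log(await chat("Explain what a closure is, in one paragraph."));
        console.log(await chat("Now explain it again for a five-year-old."));

    What you can't replicate this way is the RLHF fine-tuning and the moderation layer, which is most of what makes ChatGPT feel different from raw davinci.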

  • bradneuberg2y

    Is ChatGPT Plus actually available to subscribe to yet?

  • Extra_Leaf2y

    > ChatGPT gets a priority queue

    Is this how they plan on funding this?

  • Giorgi2y

    $20 is not that bad, but what's with geolock?

  • roschdal2y

    No thank you, I already have a good enough brain.

  • penjelly2y

    I wonder if paying will fix the frequent reloading / new chats / failed responses I'm currently forced into.

  • dpkirchner2y

    Any sense about how long the waitlist is?

  • winrid2y

    I might use this for advancing the reporting that Watch.ly provides. I could really provide some cool insights on users' traffic, maybe.

  • heap_perms2y

    $20 is _really_ pushing it. I would consider paying 5-10 bucks, because it can make some laborious tasks faster.

  • bottlepalm2y

    Good price. Way to go OpenAI!

  • sschueller2y

    Why is this region locked?

  • neophyt32y

    Will ChatGPT have up-to-date knowledge, or is it going to be limited to 2021 only?

  • zxcb12y

    We love our free users :)

  • haunter2y

    I'll wait for the ChatGPT Plus Pro Max SafeSearchOff Founders Edition™ first thanks

  • shyn32y

    I pay for LinkedIn and Twitter, yet I wouldn't pay for GPT as it's restricted.

  • mactyler2y

    Can't wait to subscribe.

  • nashashmi2y

    Is privacy built in?

  • muskmusk2y

    TAKE MY MONEY!

  • diimdeep2y

    $20 for a "Big Data Markov chain"? Hard pass.

  • geephroh2y

    But do I get a blue check mark?

  • waynesonfire2y

    Take my money!!!

  • zaptrem2y

    I think a personal subscription model where ChatGPT Plus subscribers are given personal API access for use with third-party open source apps (e.g., Chrome extensions, Siri Shortcuts, VSCode extensions) would be extremely valuable and much more lucrative than selling to specific companies.

    OpenAI has the opportunity to become the next fundamental utility provider for society. In the same way everybody has a data plan to make their apps work, in the future everybody could have an OpenAI subscription to enable advanced capabilities across their apps. A system where individual companies pay for access makes many of the most incredible use-cases economically infeasible.

    Future children will grow up oblivious to their AI subscription in the same way current children are oblivious to the ramifications of their internet subscriptions.

    We are going through the AOL stage again.

    • version_five2y

      I don't agree with you about how foundational this is. I think ChatGPT (as a concept, not a given instantiation) is mostly a toy people will get bored with (and not a stepping stone to something more like AGI).

      But whether I'm wrong or not, I'm very excited about the idea of a ground-up paid service like this that could potentially have a B2C business model based on people actually paying for it instead of being a product sold to advertisers. I hope we're at the beginning of a shift to widespread paid "information" products and away from the race to the bottom the ad-supported tech ecosystem has become.

      Edited to be more precise

      • adam_arthur2y

        Dunno if I agree about the "toy" part.

        I was learning a new programming language the other day, and ChatGPT was able to provide much more focused/helpful responses than Google, specifically about details of the Rust borrow checker, certain syntax, etc.

        It's true that it can give false info at times, but everything it fed to me was true at the time. Time to a meaningful response is much faster than with Google for certain categories of questions...

        It may not be a step towards AGI at all, but it's certainly useful

      • DebtDeflation2y

        >not a stepping stone to something more like AGI

        Why does AGI even need to be part of any of these discussions? It's a ridiculous pop-sci topic, and anyone who works in ML knows we're nowhere near achieving AGI.

        That said, what ChatGPT/GPT-3/LLMs represent is a potential new way for people to interact with a corpus of documents that goes far beyond traditional search and knowledge graphs.

      • jonchurch_2y

        It bears mentioning that the mission of OpenAI is to work towards AGI. ChatGPT is, in the view of OpenAI, one step among many along the way. How it fits into their vision, no clue. But they are certainly happy to monetize this toy to continue funding their real mission. To this outsider, it looks like a win win.

        > OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

        > We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

        https://openai.com/about/

      • quacked2y

        I think you're wrong; many programmers already prefer ChatGPT over Stack Overflow.

      • keithnz2y

        I think it's easy to see it as a toy; that's basically how I played with it to start with, and it's fun to get it to write stuff about all kinds of things. But it actually has a fair amount of knowledge that has crossed a threshold where it's useful to ask it questions on a bunch of topics, including programming.

      • _tom_2y

        > I think ChatGPT is mostly a toy people will get bored with.

        I think that would be correct, if ChatGPT doesn't continue to improve.

        Given the rate of progress so far, most of us are expecting that there will be much more progress, and it will continue to add more value. This is where it becomes foundational.

        Or Skynet :-)

    • rvz2y

      > OpenAI has the opportunity to become the next fundamental utility provider for society. In the same way everybody has a data plan to make their apps work, in the future everybody could have an OpenAI subscription to enable advanced capabilities across their apps.

      I don't think so. An open-source competitor is OpenAI's worst nightmare. We have seen this with Stable Diffusion against DALL-E 2. So if a free, open-source alternative to GPT-4 comes around and it is just as good or better, then OpenAI's moat is gone.

      There will always be competition, and an open-source model, especially something like Stable Diffusion, will surely come out on top.

      > Future children will grow up oblivious to their AI subscription in the same way current children are oblivious to the ramifications of their internet subscriptions.

      I'm very sure 99.9% of people are aware that they need to pay for access to the internet in their home.

      > We are going through the AOL stage again.

      I don't think an OpenAI subscription and an AOL subscription are even the same thing, or even comparable.

    • la647102y

      As far as ChatGPT is concerned, they certainly have the first-mover advantage, but I doubt this will last long. Now if I get a busy "can't talk" from dear old ChatGPT, I just shuffle over to https://you.com/search?q=who+are+you&tbm=youchat&cfr=chat

      • 40four2y

        I didn’t know about you.com, good recommendation thanks!

    • fragmede2y

      Selling SaaS services B2B vs B2C is a large topic, and which one's more lucrative for OpenAI is hard to pin down. How many individuals do you know pay for Google Workspaces or Office365, vs how many companies pay for it? Or how about GitHub CoPilot? Or Trello? Obviously those are different products which makes direct comparison hard, but people don't pay for things, that's why there's stealing, and why so many things are ad-supported or ad-"enhanced".

      We are in the midst of tech giants fighting, with the consumer as the loser. (Quick, try and buy a book in the Kindle app on iOS!) I don't see a future where it's possible to run (whatever evolves into being) speech ChatGPT on an Apple device with Apple support.

    • jmfldn2y

      I don't mind paying for AI services, but I want it to be transparent. I want to see the source code, the training data, and so on. What I'm paying for should be the massive compute infrastructure needed to train and maintain the model, but something like this is way too important and powerful to be a 'black box' SaaS product.

      If we think we have problems with Facebook or Google's hold over us, this has the potential to be much much worse if we get the design of the products and the business models wrong.

      • visarga2y

        StabilityAI's solution to provide base models everyone can fine-tune seems more useful. APIs are too restrictive and expensive.

    • 2y
      [deleted]
  • AnnoyedComment2y

    [dead]

  • SpeedilyDamage2y

    If you want cheaper, can’t you use the Azure OpenAI service?

    For this it seems more like you’re paying for the interface than ChatGPT itself.
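
    For the curious, it's largely the same request shape pointed at a different host and auth header. A rough sketch, assuming you've created an Azure OpenAI resource and deployed a completions model; the resource name, deployment name, and api-version below are placeholders you'd swap for your own.

        // Azure OpenAI completions: the model is chosen by the deployment name in the URL,
        // not by a "model" field in the body, and auth is an api-key header, not a Bearer token.
        const resource = "my-resource";      // placeholder: your Azure OpenAI resource name
        const deployment = "my-davinci";     // placeholder: your deployed model's name

        const res = await fetch(
          `https://${resource}.openai.azure.com/openai/deployments/${deployment}` +
          `/completions?api-version=2022-12-01`,
          {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
              "api-key": "YOUR_AZURE_OPENAI_KEY"
            },
            body: JSON.stringify({
              prompt: "Summarize the difference between TCP and UDP.",
              temperature: 0,
              max_tokens: 256
            })
          }
        );
        const data = await res.json();
        console.log(data.choices[0].text.trim());

    Whether that works out cheaper than a flat $20/mo depends entirely on usage, since it's still billed per token.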

    • Insanity2y

      As a counter-point, the average user can navigate to a website and put in some queries. They can't (and don't know about) Azure and cloud computing.

      • SpeedilyDamage2y

        Totally agree; is it worth the added cost? And if I can build a site that does this on my own, with a better UX, can I charge customers $19/mo?

  • koreanguy2y

    [dead]

  • Slava_Propanei2y

    [dead]

  • elevenoh2y

    [dead]

  • mgoblue20122y

    [dead]