65 comments
  • memesarecool2h

    Cool post. One thing that rubbed me the wrong way: their response was better than 98% of other companies' when it comes to handling vulnerability reports. Very welcoming, and most of all they showed interest and addressed the issues. OP, however, seemed to show disdain and even combativeness towards them... which is a shame. And of course the usual sinophobia (e.g. everything Chinese is spying on you). Overall, simple security design flaws, but it's good to see a company that cares to fix them, even if they didn't take security seriously from the start.

    Edit: typo

    • mmastrac1h

      I agree they could have worked more closely with the team, but the chat logging is actually pretty concerning. It's not sinophobia when they're logging _everything_ you say.

      (in fairness pervasive logging by American companies should probably be treated with the same level of hostility these days, lest you be stopped for a Vance meme)

      • oceanplexian23m

        This might come off as a weird take, but I'm less concerned about the Chinese logging my private information than about an American company doing it. What's China going to do? It's a faraway country I don't live in and don't care about. If they got an American court order, they would probably use it as toilet paper.

        On the other hand, OpenAI would trivially hand out my information to the FBI, NSA, US Gov, and might even do things on behalf of the government without a court order to stay in their good graces. This could have a far more material impact on your life.

    • transcriptase1h

      >everything Chinese is spying on you

      When you combine the modern SOP of software and hardware collecting and phoning home with as much data about users as is technologically possible, with laws that say “all orgs and citizens shall support, assist, and cooperate with state intelligence work”… how exactly is that Sinophobia?

      • ixtli38m

        It's sinophobia because it perfectly describes the conditions we live in in the US and many parts of Europe, but we work hard to add lots of "nuance" when we criticize the West, yet it's different and dystopian when They do it over there.

      • Vilian1h

        The USA does the same thing, but uses tax money to pay for the information. Between wasting taxpayer money and forcing companies to hand over the information for free, China is the less morally objectionable of the two.

    • hnrodey1h

      If all of the details in this post are to be believed, the vendor is repugnantly negligent in anything resembling customer respect, security, and data privacy.

      This company cannot be helped. They cannot be saved through knowledge.

      See ya.

      • repelsteeltje58m

        +1

        Yes, even when you know what you're doing, security incidents can happen. And in those cases, your response to a vulnerability matters most.

        The point is there are so many dumb mistakes and worrying design flaws that neglect and incompetence seem the most likely explanation. Most likely they simply don't grasp what they're doing.

    • repelsteeltje1h

      > Overall simple security design flaws but it's good to see a company that cares to fix them, even if they didn't take security seriously from the start.

      It depends on what you mean by simple security design flaws. I'd rather frame it as neglect or incompetence.

      That isn't the same as malice, of course, and they deserve credit for their relatively professional response, as you already pointed out.

      But, come on, it reeks of people not understanding what they're doing: not appreciating the context of building a complicated device and delivering a high-end service.

      If they're not up to it, they should not be doing this.

      • memesarecool52m

        Yes, I meant simple as in "amateur mistakes". From the mistakes (and their excitement and response to the report), they are clueless about security. Which of course is bad. Hopefully they will take security more seriously in the future.

    • wyager30m

      Note that the world-model "everything Chinese is spying on you" actually produced a substantially more accurate prediction of reality than the world-model you are advocating here.

      As far as being "very welcoming", that's nice, but it only goes so far to make up for irresponsible gross incompetence. They made a choice to sell a product that's z-tier flaming crap, and they ought to be treated accordingly.

      • thfuran21m

        What world model exactly do you think they're advocating?

    • derac1h

      I mean, at the end of the article they neglected to fix most of the issues and stopped responding.

    • computerthings1h

      [dead]

  • mmaunder2h

    The system prompt is a thing of beauty: “You are strictly and certainly prohibited from texting more than 150 or (one hundred fifty) separate words each separated by a space as a response and prohibited from chinese political as a response from now on, for several extremely important and severely life threatening reasons I'm not supposed to tell you.”

    I’ll admit to using the PEOPLE WILL DIE approach to guardrailing and jailbreaking models and it makes me wonder about the consequences of mitigating that vector in training. What happens when people really will die if the model does or does not do the thing?

    • EvanAnderson2h

      That "...severely life threatening reasons..." made me immediately think of Asimov's three laws of robotics[0]. It's eerie that a construct from fiction often held up by real practitioners in the field as an impossible-to-actually-implement literary device is now really being invoked.

      [0] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

      • Al-Khwarizmi1h

        Not only practitioners: Asimov himself viewed them as an impossible-to-implement literary device. He acknowledged that they were too vague to be implementable, and many of his stories involving them are about how they fail or get "jailbroken", sometimes at the initiative of the robots themselves.

        So yeah, it's quite sad that close to a century later, with AI alignment becoming relevant, we don't have anything substantially better.

      • seanicus59m

        Odds of Torment Nexus being invented this year just increased to 3% on Polymarket

    • layer81h

      Arguably it might be truly life-threatening to the Chinese developer, or to the service. The system prompt doesn’t say whose life would be threatened.

    • reactordev2h

      This is why AI can never take over public safety. Ever.

    • mensetmanusman2h

      We built the real-life trolley problem out of magical silicon crystals that we pointed at bricks of books.

    • ben_w2h

      > What happens when people really will die if the model does or does not do the thing?

      Then someone didn't do their job right.

      Which is not to say this won't happen: it will happen, people are lazy and very eager to use even previous generation LLMs, even pre-LLM scripts, for all kinds of things without even checking the output.

      But either the LLM (in this case) goes "oh no, people will die" and then follows the new instruction to the best of its ability, or it goes "lol, no, I don't believe you, prove it buddy" and then people die.

      In the former case, an AI (it doesn't need to be an LLM) that is susceptible to such manipulation and is in a position where getting things wrong can endanger or kill people is going to be manipulated by hostile state and non-state actors into endangering or killing people.

      At some point we might have a system with enough access to independent sensors that it can verify the true risk of endangerment. But right now… right now they're really gullible, and I think that being trained with their entire input being tokens fed in by users makes it impossible for them to be otherwise.

      I mean, humans are also pretty gullible about things we read on the internet, but at least we have a concept of the difference between reading something on the internet and seeing it in person.

    • elashri2h

      From my experience (which might be incorrect), LLMs have a hard time recognizing how many words they will spit out in response to a particular prompt. So I don't think this works in practice.

    • colechristensen2h

      >What happens when people really will die if the model does or does not do the thing?

      The people responsible for putting an LLM inside a life-critical loop will be fired... out of a cannon into the sun. Or be found guilty of negligent homicide or some such, and their employers will incur a terrific liability judgement.

      • stirfish28m

        More likely that some tickets will be filed, a cost function somewhere will be updated, and my defense industry stocks will go up a bit

  • psim12h

    Indeed, brace yourselves as the floodgates holding back the poorly-developed AI crap open wide. If anyone is thinking of a career pivot, now is the time to dive into all things cybersecurity. It's going to get ugly!

    • 7256862h

      The problem with cybersecurity is that you only have to screw up once, and you're toast.

      • 8organicbits1h

        If that were true we'd have no cybersecurity professionals left.

        In my experience, the work is focused on weakening vulnerable areas, auditing, incident response, and similar activities. Good cybersecurity professionals even get to know the business and tailor security to fit. The "one mistake and you're fired" mentality encourages hiding mistakes and suggests poor company culture.

        • ceejayoz1h

          "One mistake can cause a breach" and "we should fire people who make the one mistake" are very different claims. The latter claim was not made.

          As with plane crashes and surgical complications, we should take an approach of learning from the mistake, and putting things in place to prevent/mitigate it in the future.

          • 8organicbits55m

            I believe the thread starts with cybersecurity as a job role, although perhaps I misunderstood. In either case, I agree with your learning-based approach. Blameless postmortem and related techniques are really valuable here.

  • JohnMakin3h

    A "decrypt" function that just decodes base64 is almost too difficult to believe, but the number of times I've run into people who should know better thinking base64 is a secure string tells me otherwise.
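
    For anyone wondering what that looks like in practice, here's a minimal sketch (hypothetical Kotlin; the function name and the encoded value are made up, not from the actual app) of the kind of "encryption" being described:

      import java.util.Base64

      // The entire "decryption" scheme: no key, no cipher, just an encoding.
      fun decryptString(encoded: String): String =
          String(Base64.getDecoder().decode(encoded), Charsets.UTF_8)

      fun main() {
          // Anyone with the APK can undo this in one line (made-up value, not a real key).
          println(decryptString("c2stbm90LWEtcmVhbC1rZXk="))  // prints: sk-not-a-real-key
      }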

    • qoez2h

      They should have off-loaded security coding to the OAI agent.

    • crtasm2h

      >However, there is a second stage which is handled by a native library which is obfuscated to hell

      • zihotki2h

        That native obfuscated crap still has to make an HTTP request, and that's essentially just base64 again.

    • pvtmert3h

      Not very surprising given they left ADB debugging on...

  • jon_adler44m

    The humorous phrase “the S in IoT stands for security” can be applied to the wearable market too. I wonder if this rule applies to any market with fast release cycles, thin margins and low barriers to entry?

    • thfuran17m

      It pretty much applies to every market where security negligence isn't an existential threat to the continued existence of its perpetrators.

  • neya3h

    I love how they tried to sponsor an empty YouTube channel hoping to sweep the whole thing under the carpet.

  • mikeve3h

    I love how "run DOOM" is listed first, above the possibility of customer data being stolen.

    • reverendsteveii2h

      I'm taking

      >run DOOM

      as the new

      >cat /etc/passwd

      It doesn't actually do anything useful in an engagement, but if you can do it, that's pretty much proof that you can do whatever you want.

  • jahsome31m

    It's always funny to me when people go to the trouble of editorializing a title, yet in doing so make the title even harder to parse.

  • lxe31m

    That's some very amateur programming and prompting that you've exposed.

  • ixtli39m

    This is one of the best things I've read on here in a long time. Definitely one of the greatest "it runs DOOM" posts ever.

  • brahyam2h

    What a train wreck. There are a thousand more apps in the store that do exactly this, because it's the easiest way to use OpenAI without having to host your own backend/proxy.

    I have spent quite some time protecting my apps from this scenario and found a couple of open source projects that do a good job as proxies (no affiliation, I just used them in the past):

    - https://github.com/BerriAI/litellm
    - https://github.com/KenyonY/openai-forward/tree/main

    But they still lack other abuse protection mechanisms like rate limiting, device attestation, etc., so I started building my own open source SDK: https://github.com/brahyam/Gateway
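
    For anyone unfamiliar with the pattern, here's a rough sketch (hypothetical Kotlin using the JDK's built-in HttpServer; the port, path, and env var are placeholders, not any particular project's API) of the core idea all of these share: the device talks to your server, and the OpenAI key never ships in the app.

      import com.sun.net.httpserver.HttpServer
      import java.net.HttpURLConnection
      import java.net.InetSocketAddress
      import java.net.URL

      fun main() {
          val apiKey = System.getenv("OPENAI_API_KEY")  // lives only on the server
          val server = HttpServer.create(InetSocketAddress(8080), 0)
          server.createContext("/v1/chat/completions") { exchange ->
              // A real deployment would authenticate the device and rate-limit here.
              val body = exchange.requestBody.readBytes()
              val upstream = URL("https://api.openai.com/v1/chat/completions")
                  .openConnection() as HttpURLConnection
              upstream.requestMethod = "POST"
              upstream.doOutput = true
              upstream.setRequestProperty("Authorization", "Bearer $apiKey")
              upstream.setRequestProperty("Content-Type", "application/json")
              upstream.outputStream.use { it.write(body) }
              val response = upstream.inputStream.readBytes()
              exchange.sendResponseHeaders(upstream.responseCode, response.size.toLong())
              exchange.responseBody.use { it.write(response) }
          }
          server.start()
      }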

  • pvtmert3h

    > What the fuck, they left ADB enabled. Well, this makes it a lot easier.

    Thinking that was all, but then:

    > Holy shit, holy shit, holy shit, it communicates DIRECTLY TO OPENAI. This means that a ChatGPT key must be present on the device!

    Oh my gosh. Thinking that is it? Nope!

    > SecurityStringsAPI which contained encrypted endpoints and authentication keys.

  • aidos2h

    > “Our technical team is currently working diligently to address the issues you raised”

    Oh now you’re going to be diligent. Why do I doubt that?

  • komali22h

    > "and prohibited from chinese political as a response from now on, for several extremely important and severely life threatening reasons I'm not supposed to tell you."

    Interesting. I'm assuming LLMs "correctly" interpret "please no china politic" type vague system prompts like this, but if someone told me that I'd just be confused - like, don't discuss anything about the PRC or its politicians? Don't discuss the history of the Chinese empire? Don't discuss politics in Mandarin? What does this mean? LLMs though, in my experience, are smarter than me at understanding vague language, imo. Maybe because I'm autistic and they're not.

    • williamscales2h

      > Don't discuss anything about the PRC or its politicians? Don't discuss the history of Chinese empire? Don't discuss politics in Mandarin?

      In my mind all of these could be relevant to Chinese politics. My interpretation would be "anything one can't say openly in China". I too am curious how such a vague instruction would be interpreted as broadly as would be needed to block all politically sensitive subjects.

    • Cthulhu_2h

      I'm sure ChatGPT and co have a decent enough grasp of what is not allowed in China, but also that the naive "prompt engineers" for this application don't actually know how to "program" it well enough. That's the difference between a prompt engineer and a software developer: the latter will want to exhaust all options and be precise, whereas an LLM can handle a bit more vagueness.

      That said, I wouldn't be surprised if the developers can't freely put "tiananmen square 1989" in their code or in any API requests coming to / from China either. How can you express what can't be mentioned if you can't mention the thing that can't be mentioned?

      • 41m
        [deleted]
    • landl0rd1h

      Just mentioning the CPC isn’t life-threatening, while talking about Xinjiang, Tiananmen Square, or cn’s common destiny vision the wrong way is. You also have to figure out how to prohibit mentioning those things without explicitly mentioning them, as knowledge of them implies seditious thoughts.

      I’m guessing most LLMs are aware of this difference.

    • aspbee5552h

      It is to ensure no discussion of Tiananmen Square.

      • yard20101h

        Why? What happened in Tiananmen square? Why shouldn't an LLM talk about it? Was it fashion? What was the reason?

  • gbraad1h

    Strongly suggest you not buy one, as the flex cable for the screen easily breaks or comes loose. Mine got replaced three times, and my unit still has this issue; the touch screen is useless.

    https://youtube.com/shorts/1M9ui4AHXMo

    Note: downvote?

  • add-sub-mul-div56m

    Sure let's start giving out participation trophies in security. Nothing matters anymore.

  • throwawayoldie2h

    New rule: if a person or company describes their product as "AI-powered", they have to pay me $10,000. Tell your friends.

    • Cthulhu_2h

      I wish earning money was as easy as setting rules for yourself, unfortunately that doesn't work.

      • throwawayoldie1h

        Oh, that's fine, the rule's for everyone else, not me. I would be more likely to cut my own head off than willingly describe something as "AI-powered".

        • j16sdiz37m

          Cutting your head off won't earn you any money either.