65 comments
  • Sytten3d

    And again, the impact would not be so bad if github finally pushed their immutable actions [1]. I sound like a broken record since I keep repeating that this would solve 70%+ of the scope of attacks on gha today. You would think that the weekly disasters they have would finally make them launch it.

    [1] https://github.com/features/preview/immutable-actions

    • thund3d

      They probably have good reasons if it's still in preview: serious bugs, security gaps, potential breaking changes that would cause more harm than good if rushed, etc.

      • 1oooqooq3d

        the only reason any company does or doesn't do anything: it's not required for sales.

        in 2019 i saw a fortune500 tech company put in place their own internal vulnerability scanner application which included this feature for our enterprise github repos. the tool was built and deployed on an old Linux docker image that was never updated, making it itself a target of the very attacks it was meant to prevent... they never vetted the random version they started with either. i guess one can still use a zip bomb or even the xz backdoor for extra irony points when attacking that system.

        anyway, the people signing github checks also get promoted by pretending to implement that feature internally.

      • intelVISA3d

        Too much stakeholder alignment?

        • tanepiper3d

          More like last year they laid off a whole bunch of people. We've been waiting for several open tickets on GitHub to be picked up; some were, but seem to have been abandoned since, and others were just ignored.

  • nyrikki3d

    No mention of why this temp token had rights to do things like create new deployments and generate artifact attestations?

    For their fix, they disabled debug logs... but didn't answer whether they changed the temp token's permissions to something more appropriate for a code analysis engine.

    • declan_roberts3d

      I think we all know this old story. The engineer building it was getting permission denied, so they gave it all the permissions and never came back to right-size them.

      • setr3d

        Does any RBAC system actually tell you the missing permissions required to access the object in question? It's like they're designed to create this behavior.

        • Normal_gaussian3d

          Yes. Most auth systems do tell the developer - GCP & AWS IAM give particularly detailed errors, and nearly every feature/permission system I have implemented did. However, it wouldn't be unusual for the full error to be wrapped or swallowed by some lazy error handling. It's a bit of a PITA, but well worth it, to translate that into a safe and informative user-facing error.
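
          For example (paraphrasing from memory, so the exact wording varies by service), a GCP API error will typically name the missing permission outright:

              Permission 'storage.objects.get' denied on resource (or it may not exist).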

          As a nit: RBAC is applied on top of an object-based permissions system rather than being one. Put simply, RBAC is a simplification of permission management in an underlying auth system.

          • 8note3d

            i've never seen aws give a useful error where i could tell which resources need a handshake of permissions, or which one of the two needs the permission granted, or which permission needs to be granted.

            • donavanm3d

              This is intentional. You, the caller, get a generic HTTP 400 “resource does not exist or you are not authorized” response and message. Providing additional information about resource existence or permissions opens an entire category of information disclosure, resource discovery, attribute enumeration, and policy enumeration problems.

              The IAM admin persona is the one who gets a bunch of additional information. That's accessible through the AWS IAM policy builder, access logs, etc.

              And no, it's not feasible to determine whether the initial caller is an appropriate IAM admin persona and vary the initial response.

              • the84723d

                Even AWS itself does better than this, but only on some services. They send an encrypted error which you can then decrypt with admin permissions to get those details.

              • Atotalnoob2d

                Just add this to the end of the error message: “If this resource exists, you will need to add permission X.”

                • donavanm17h

                  A late reply, but that's not how AWS IAM (or most advanced authz systems) work. AWS IAM is a “capability” system with dynamic policies; it's nothing so simple as “role”-based authorization, contrary to some product naming. To wit, every authz evaluation is a dynamic evaluation of policy and context. Each check uses one or more policies, with one or more policy statements, that are combined with some boolean logic and predicate rules. The policies may be associated (sourced) with the particular request based on the calling principal, principal attributes, the target resource, a related resource, or even other metadata like AWS Org membership. That's combined with the point-in-time context from the request (e.g. action name, parameters), request metadata (e.g. time), principal (id, tags, etc.), resource (arn, attributes, tags), and some more system-specific context variables. You (and the authorizing service) need ALL of that information to perform an authz evaluation.

                  This is complicated by dynamic data, like time or source address or caller principal tag values, so even identical requests may have different results. There are also complications like DENY statements and “unless” predicates that entirely defeat a simple “resource x requires y” approach.

                  Even if you solve all of those challenges via magic, you end up back at information disclosure, where your adversary is now capable of rapidly enumerating and testing all your authz policies!
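
                  To make the dynamic-context point concrete, here is a minimal sketch of a policy (the bucket name and CIDR are invented) where two otherwise identical requests get different answers depending on the caller's source IP at evaluation time, so there is no static “you need permission X” response the service could safely return:

                      {
                        "Version": "2012-10-17",
                        "Statement": [
                          {
                            "Sid": "AllowReads",
                            "Effect": "Allow",
                            "Action": "s3:GetObject",
                            "Resource": "arn:aws:s3:::example-bucket/*"
                          },
                          {
                            "Sid": "DenyOffNetwork",
                            "Effect": "Deny",
                            "Action": "s3:*",
                            "Resource": "*",
                            "Condition": {
                              "NotIpAddress": { "aws:SourceIp": "203.0.113.0/24" }
                            }
                          }
                        ]
                      }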

            • milch3d

              AWS throws errors that look like `arn:aws:iam:... is not authorized to call "$SERVICE_NAME:$API_NAME" on resource arn:aws:$SERVICE_NAME:...`. I think it's more complicated when you go cross-account and the receiving account doesn't have permissions set up (if the calling account doesn't have it set up, you get the same error). In any case, you would still find that information in the CloudTrail logs of the receiving account.

              • hobs3d

                Right, you can go to CloudTrail and probably get it, but I have definitely run into things like the service saying you do not have access to a resource or it does not exist, where randomly granting the account some other tangentially related permission magically fixes it. I've found that sometimes the UI and the API will give different errors to help, and neither is particularly more useful than the other.

                • bshacklett3d

                  Assuming you can get the CloudTrail message, sometimes there's more information that you can decode using the STS service:

                  https://docs.aws.amazon.com/cli/latest/reference/sts/decode-...
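
                  For reference, that's the `decode-authorization-message` subcommand; a minimal sketch, assuming the opaque "Encoded authorization failure message" blob is in $MSG and the caller has the sts:DecodeAuthorizationMessage permission:

                      # prints the decoded failure details (principal, action, resource) as JSON
                      aws sts decode-authorization-message --encoded-message "$MSG"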

                • donavanm3d

                  Look into the AWS IAM “service description files”, aka SDF. That's exposed via the console Policy Builder or Policy Evaluator logic. The SDF _should_ encode all the context (e.g. resource attributes, principal metadata) that goes into the authz decision. The most common opaque issue you'll see is where one action has other required resources/actions. E.g. a single action attaching an EBS volume requires permission on both the instance and the volume, and _maybe_ a KMS key, with permissions across those services.

                • 3d
                  [deleted]
          • winwang3d

            Slightly disagree at least for GCP. It will error with a detailed permission, but you're not just going to add that -- you're going to add a role (standard, unless you have custom roles), which you technically have to map back to the permission you need. But also, those (standard) roles have many permissions in one, so you likely overprovision (though presumably by just a bit).

            ...If only we could do something like: dry run and surface all the required permissions, then grant them in one fell (granular) sweep.

            • valenterry3d

              > you're going to add a role (standard, unless you have custom roles), which you technically have to map back to the permission you need

              Which is terrible, btw. You don't "technically" have to do that; you really cannot add roles to custom roles, you can only add permissions to custom roles. Which makes it really hard to maintain the correctness of custom roles, since their permissions can and do change.

              > ...If only we could do something like: dry run and surface all the required permissions, then grant them in one fell (granular) sweep.

              GCP even has something like that, but I honestly think that standard roles are usually fine. Sometimes making things too fine grained is not good either. Semantics matter.

            • da_chicken3d

              > ...If only we could do something like: dry run and surface all the required permissions, then grant them in one fell (granular) sweep.

              The problem with that is that it can be difficult to know what you need, and it may be impossible to simulate in any practical sense. Like, sure, I can stand up a pair of test systems and fabricate every scenario I can possibly imagine, but my employer does want me to do other things this month. And what happens when one of the systems involves a third party?

              Really, the need is to be able to provision access after the relationship is established. It's weird that you need a completely new secret to change access. Imagine if this were Linux and, in order to access a directory, you had to provision a new user to do it. How narrow do you really think user security access would be in practical terms then?

              • winwang3d

                > the need is to be able to provision access after the relationship is established

                Could you go into more detail? At a base level interpretation, this is how it works already (you need a principal to provision access for...), but you presumably mean something more interesting?

                • da_chicken2d

                  With token-based access, you typically assign the role when the token is created. The access level the token has is typically locked at that point. If you're generating an API access token, you might specify the token is read-only. If you later decide that read/write access is needed, you need to generate a new token with the new access level and replace the token id and value in the client system.

                  It's not difficult, but it's a much bigger pain in the ass than just changing access or changing role on a user.

          • raverbashing3d

            But obviously the security people will then raise a ruckus about any attempt to tell you what is wrong.

            (Which, OK, for an external-facing system is fair enough.)

            I can bet the huge prevalence of "system says no, and nothing tells you why" helps a lot with creating vulnerable systems.

            Systems need a "let X person do Action" flow instead of having people wade through 10 options like SystemAdminActionAllow that don't mean anything to an end user.

        • Uvix3d

          Azure's RBAC system usually tells you this, at least when accessing the Azure management APIs. (Other APIs using RBAC, like the Azure Storage or Key Vault ones, usually aren't so accommodating. At least, by their nature, there's usually only a handful of possible permissions to choose from.)

        • levkk3d

          Not usually; that's considered a potential attack vector, I believe. You're looking to minimize information leakage.

        • UltraSane3d

          AWS has a neat feature to analyze CloudTrail logs to determine needed permissions.

      • azemetre3d

        What's the over/under that said engineer could solve two medium leetcodes in under an hour?

    • Pathogen-David2d

      If the GitHub Actions temporary token does not have a workflow-defined permissions scope, it defaults to either a permissive or a restricted default scope based on the repository's settings [0]. This setting can also be configured at the organization level to restrict all repos owned by the org.

      Historically the only choice was permissive by default, so this is unfortunately the setting used by older organizations and repos.

      When a new repo is created, the default is inherited from the parent organization, so this insecure default tends to stick around if nobody bothers to change it. (There is no user-wide setting, so new repos owned by a user will use the restricted default. I believe newly created orgs use the better default.)

      [0]: https://docs.github.com/en/actions/security-for-github-actio...
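
      As an illustrative sketch (the workflow, job, and step names here are made up), declaring an explicit permissions block in the workflow overrides that default and drops every unlisted scope to no access, which is roughly the least-privilege shape you'd want for a code analysis job:

          # hypothetical CodeQL-style workflow with a least-privilege token
          name: codeql
          on: [push]

          permissions:
            contents: read          # enough to check out the code
            security-events: write  # needed to upload code scanning results

          jobs:
            analyze:
              runs-on: ubuntu-latest
              steps:
                - uses: actions/checkout@v4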

    • beaugunderson2d

      Temporary action tokens have full write by default; you have to explicitly opt for a read-only version.

          > Read and write permissions
          > Workflows have read and write permissions in the repository for all scopes.
      
      If you read this line of the documentation (https://docs.github.com/en/actions/security-for-github-actio...) you might think otherwise:

          > If the default permissions for the GITHUB_TOKEN are restrictive, you may have to elevate the permissions to allow some actions and commands to run successfully.
      
      But I can confirm that in our GitHub organization "Read and write permissions" was the default, and thus that line of documentation makes no sense.

    • Elucalidavah3d

      > For their fix, they disabled debug logs

      For their quick fix, hopefully not for their final fix.

    • arccy3d

      just goes to show how lax microsoft is about their security. nobody should trust them.

    • stogot2d

      The 2023 Microsoft hack (for which CISA completely called them out over poor security) was also similar to this. The blog post that tried to explain what happened left so many unanswered questions.

  • ashishb3d

    I am getting more and more convinced that CI and CD should be completely separate environments. Compromise of CI should not lead to token leaks related to CD.

    • mdaniel3d

      This area is near and dear to my heart, and I would offer that the solution isn't to decouple CD over into its own special little thing, but rather to make the CD "multi-factor", in that it must be "sub":"repo:octo-org/octo-repo:environment:prod"[1], and feel free to sprinkle in any other [fun claims][] you'd like to harden that system.

      1: https://docs.github.com/en/actions/security-for-github-actio...

      fun claims: https://github.com/github/actions-oidc-debugger#readme
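
      To sketch the workflow side of that (the environment name and deploy step are illustrative), a job bound to a protected environment and allowed to mint an OIDC token; the cloud side would then hold a trust policy that only honors tokens whose sub claim matches repo:octo-org/octo-repo:environment:prod:

          jobs:
            deploy:
              runs-on: ubuntu-latest
              environment: prod     # stamps the OIDC sub claim with environment:prod
              permissions:
                id-token: write     # required for the job to request an OIDC token
                contents: read
              steps:
                - run: ./deploy.sh  # hypothetical deploy step gated by the environment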

      • ashishb3d

        Doable, but I would prefer complete isolation for simplicity.

        • thund3d

          There are ways to isolate code from CI and CI from CD; it's just not as easy as setting up the classic repo. One can use multiple repos, for example, or run CI and CD with different products.

    • nrvn2d

      This is essentially what separation of duties (and concerns) looks like, and it is how some good example projects work. The specific techniques, tooling, and exact boundaries between CI and CD vary depending on the nature of the end product, but conceptually you are absolutely right.

  • junto3d

    They weren’t kidding on the response time. Very impressive from GitHub.

    • belter3d

      Not very impressive to have an exposed public token with full write credentials...

      • toomuchtodo3d

        Perfect security does not exist. Their security system (people, tech) operated as expected with an impressive response time. Room for improvement, certainly, but there always is.

        Edit: Success is not the absence of vulnerability, but introduction, detection, and response trends.

        (Github enterprise comes out of my budget and I am responsible for appsec training and code IR, thoughts and opinions always my own)

        • timewizard3d

          > Perfect security does not exist.

          Having your CI/CD pipeline and your git repository service be so tightly bound creates security implications that do not need to exist.

            Further, half the point of physical security is tamper evidence, something entirely lost here.

          • Aeolun3d

            I find that this is always easy to say from the perspective of the security team. Sure, it would be more secure to develop like that, but also tons more painful for both dev and user.

            • timewizard3d

              I don't code anymore. I like making devs suffer. And this is all good for the user. ;)

        • belter3d

          > Their security system (people, tech) operated as expected

          You mean not finding the vulnerability in the first place?

          This would allow an attacker to:

          - Compromise intellectual property by exfiltrating the source code of all private repositories using CodeQL.

          - Steal credentials within GitHub Actions secrets of any workflow job using CodeQL, and leverage those secrets to execute further supply chain attacks.

          - Execute code on internal infrastructure running CodeQL workflows.

          - Compromise GitHub Actions secrets of any workflow using the GitHub Actions Cache within a repo that uses CodeQL.

          >> Success is not the absence of vulnerability, but introduction, detection, and response trends.

          This isn’t a philosophy, it’s PR spin to reframe failure as progress...

          • toomuchtodo3d

            This is not great based on the potential exposure, but also not the end of the world. You’re free to your opinion of course wrt severity and impact, but folks aren’t going to leave GitHub over this in any material fashion imho. They had a failure, they will recover from it and move on. It’s certainly not PR from me, I don’t work for nor have any financial interest in GH or MS. I am a security person though, these are my opinions based on doing this for ~10 years (I am consistently exposed to security gore in my work), and we likely have an expectations disconnect.

            As a customer, I’m not going to lose sleep over it. I’m going to document for any audits or other governance processes and carry on. I operate within "commercially reasonable" context for this work. Security is just very hard in a Sisyphus sort of way. We cannot not do it, but we also cannot be perfect, so there is always going to be vigorous debate over what enough is.

        • koolba3d

          > Success is not the absence of vulnerability, but introduction, detection, and response trends.

          Don’t forget limitation of blast radius.

          When shit hits the proverbial fan, it’s helpful to limit the size of the room.

          • toomuchtodo3d

            Yeah, I agree: compartmentalization, least privilege, and sound architecture decisions are components of reducing the pain when you get popped. It's never if, just when.

      • 1a527dd53d

        Trying my best not to break the no-snark rule [1], but I'm sure your code is 100% bulletproof against all current and yet-to-be-invented attacks.

        [1] _and failing_.

        • atoav3d

          Nobody is immune to mistakes, but a certain class of mistakes¹ should never happen to anyone who should know better. And that, in my book, is anybody whose code is used by more people than themselves. I am not saying devs aren't allowed to make stupid mistakes, but if we let civil engineers have their bridges collapse with a "shit happens" attitude, trust in civil engineering would be questionable at best. So yeah, shit happens to us devs, but we should be ashamed if it was preventable by simply knowing the basics.

          So my opinion is anybody who writes code that is used by others should feel a certain danger-tingle whenever a secret or real user data is put literally anywhere.

          To all beginners, this just means that when handling secrets, instead of pressing on, you should pause and make an exhaustive list of who would have read/write access to the secret under which conditions, and whether that is intended. And with things that are world-readable, like a public repo, this is especially crucial.

          Another one may or may not be your shell's history, the contents of your environment variables, whatever you copy-paste into the browser search bar/application/LLM/chat/comment section of your choice, etc.

          If you absolutely have to store secrets/private user data in files within a repo it is a good idea to add the following to your .gitignore:

            *.private
            *.private.*
           
          And then every such file has to have ".private." within the filename (e.g. credentials.private.json); this not only marks it for yourself, it also prevents you from mixing up critical and mundane configuration.

          But it is better to spend a day thinking about where secrets/user data really should be stored and how to manage them properly.

          ¹: a non-exhaustive list of other such mistakes: mistaking XOR for encryption, storing passwords in plaintext, using hardcoded credentials, relying on obscurity for security, sending data unencrypted over HTTP, not hashing passwords, using weak hash functions like MD5 or SHA-1, no input validation of stuff that goes into your database, trusting user input blindly, buffer overflows due to unchecked input, lack of access control, no user authentication, using default admin credentials, running all code as administrator/root without dropping privileges, relying on client-side validation for security, using self-rolled cryptographic algorithms, mixing authentication and authorization logic, no session expiration or timeout, predictable session IDs, no patch management or updates, wide-open network shares, exposing internal services to the internet, trusting data from cookies or query strings without verification, etc.

          • immibis3d

            > no input validation of stuff that goes into your database

            I'd put "conflating input validation with escaping" on this list, and then the list fails the list because the list conflates input validation with escaping.

            • atoav3d

              Good point. As I mentioned, this is a non-exhaustive list; input validation and related topics like encodings, escaping, etc. could fill a list single-handedly.

          • sieabahlpark3d

            [dead]

        • belter3d

          [flagged]

  • helsinki3d

    As someone with the last name Prater—derived from Praetorian—I really wish I owned praetorian.com.

    • ratg133d

      You would have had to have this thought prior to the release of the movie “The Net” in 1995

    • smoyer3d

      Their gokart project was awesome!

  • udev40963d

    Using public GitHub Actions is just asking for trouble, even more so without analyzing the workflow's procedure. Instead, just host one yourself using Woodpecker or one of countless other great CI builders (Circle, Travis, GitLab, etc.).

  • ryao3d

    I put CodeQL to use in OpenZFS PRs. This is not an issue for OpenZFS. None of our code is secret. :)

    • asmosoinio3d

      I don't think this is a good take: Even if your code is not secret, the attack could add anything to your code or release artifacts.

      Luckily it was quickly remedied at least.

    • 3d
      [deleted]
  • atxtechbro3d

    Is this fixed?

    • lsllc3d

      It's in the article (and the comments here) -- yes, it was remediated within 3 hours of being reported back in January by GitHub.

  • bloqs3d

    This site's performance is so bad I can barely scroll.