JIT access considered harmful

Some security practitioners preach the value of just-in-time (JIT) access grants, where employees request and receive limited access to production data and systems, usually for a limited period of time. JIT access is no good. It distracts an organization from more meaningful security investments.

To start with, let’s enumerate some things that aren’t JIT access:

  1. Re-entering your password every morning before starting work.
  2. Being prompted to touch a Yubikey when doing certain sensitive actions.
  3. A new hire who starts with no access but gains it after completing the requisite training for specific tasks.
  4. Tiered credentials, where a single individual has multiple accounts, with each account set aside for specific tasks.
  5. Being required to record a ticket number associated with a given operation, or to provide some other kind of justification.
  6. Automatically granting access to a team’s active oncall members, and revoking it when they go off-call.

Some of these practices may be questionable in their own right, but none of them are JIT access. For the purposes of this post, JIT access involves a separate human in the loop who makes a “do I trust this person” decision on a case-by-case basis. Usually the requests are of the form “X employee is requesting to read this dataset” or “Y employee wants access to that admin panel.” JIT access is typically granted for a limited period of time.

JIT access is bad because it is highly demanding of an organization’s scarcest resources: its employees’ productivity and its human judgment. The very nature of a JIT access decision is that it is evaluated just in time, when the requestor is either already blocked or imminently going to become blocked. Relatedly, the majority of JIT access decisions are going to be obvious “yes”es, which produce decision fatigue.

JIT access is a highly effortful process, but not a highly effective one. It is exactly the process you would choose if you wanted to do information security performatively, rather than meaningfully.

Authorization strategy

Before we continue, let’s consider what makes a good authorization strategy. Good authorization produces:

  1. consent: each component of the system being operated on has an identifiable owner. The owner of the system or their chosen delegate has consented to the authority being exercised.
  2. intentionality: every exercise of authority occurs because a person understood what was going to happen and chose to proceed. Failure in this case looks like “oh I accidentally deleted the wrong database”, or “I ran this script that I didn’t fully understand, and it added a new IAM role to my account that gives Vendor A’s software access to our entire AWS account.”
  3. contextual understanding: people who make decisions have imbibed all of the organization’s available knowledge of the likely outcomes of their decisions. Failure in this case looks like “I copied that dataset to a third party vendor for marketing purposes, and I really wish someone had told me that it was subject to a HIPAA compliance obligation.” The decision was made intentionally in this case, but afterward the actor learned a fact that would have changed their mind.
  4. auditability: it is possible for others to look at the authority that was exercised, and work backwards to determine the who/what/when/why.
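To make the auditability principle concrete, here is a minimal sketch (all names hypothetical) of a grant record that captures the who/what/when/why at the moment authority is exercised, so others can work backwards later:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AccessGrant:
    """One auditable exercise of authority: the who/what/when/why."""
    who: str         # identity of the actor
    what: str        # the resource or action authorized
    when: datetime   # timestamp of the grant
    why: str         # machine-readable rationale, e.g. "oncall:team-a"
    granted_by: str  # the owner (or delegate) who consented

# Example record: a grant produced by a mechanical oncall policy.
grant = AccessGrant(
    who="alice",
    what="dataset:customer-events:read",
    when=datetime.now(timezone.utc),
    why="oncall:team-a",
    granted_by="policy:team-a-oncall",
)
```

Because the record is immutable and names both the actor and the consenting owner, it serves the consent and auditability principles at once.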

Mechanical policy

Decisions about the security of a system should be made calmly, and well in advance of the circumstances that will test that system. The owner of the system should be able to sit down and enumerate the invariants they want to preserve, and build an access policy that can be evaluated mechanically.

An example of a good policy would be “members of this oncall rotation will complete XYZ training, and then will be granted access”. As a tweak, you can add a caveat of “when they are actively oncall” to reduce the number of people who can be easily tricked, bribed, or otherwise coerced at any given moment.
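As a sketch of what “evaluated mechanically” means (the policy inputs here are hypothetical), the whole decision can be a pure function of facts the organization already tracks, with no human judgment at request time:

```python
def may_access(person, rotation_members, completed_training, active_oncall):
    """Mechanically evaluable policy, decided in advance by the owner:
    members of the oncall rotation who have completed XYZ training get
    access, but only while they are actively oncall."""
    return (
        person in rotation_members
        and person in completed_training
        and person in active_oncall
    )

# Example: alice is trained and actively oncall; bob is trained but off-call.
rotation = {"alice", "bob"}
trained = {"alice", "bob"}
oncall_now = {"alice"}
```

The “when they are actively oncall” caveat is just the third conjunct: flipping someone’s oncall status flips their access, with no approval queue involved.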

Approval of actions is better than approval of access

In almost every case, it is better to review the action being taken rather than granting JIT access. Rather than saying “let’s grant X person access to Y dataset”, it would be better to say “let’s run XYZ batch job against Y dataset and store the output in Z. Team members A and B, whose team owns the data in question, have already code-reviewed the batch job and designated Y and Z as allowed parameters”. The latter approach fully ensures that our authorization principles are respected, assuming the code in question doesn’t have bugs (always a dangerous assumption).
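One way to picture this (job and parameter names are hypothetical): the reviewed artifact is the pair of code plus allowed parameters, not a person’s access, so the only thing that can run is exactly what the data owners signed off on:

```python
# Tuples of (job, input dataset, output location) that team members A and B,
# who own the data, have already code-reviewed and approved.
ALLOWED_RUNS = {
    ("xyz_batch_job", "dataset_y", "location_z"),
}

def run_approved(job_name, input_dataset, output_location):
    """Execute a job only if this exact parameterization was pre-approved."""
    if (job_name, input_dataset, output_location) not in ALLOWED_RUNS:
        raise PermissionError("this run was not pre-approved by the data owners")
    # ... execute the reviewed job with exactly these parameters ...
    return f"ran {job_name}({input_dataset}) -> {output_location}"
```

No human makes a trust decision at execution time; the approval already happened, calmly, in code review.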

This means that, especially for a system that the organization hires software engineers to maintain (i.e. first party systems), those software engineers should be phrasing authorization decisions in terms of peer review for specific actions, not access. This will make intuitive sense to anyone who has ever done a code review: you’re approving your colleague’s specific work, rather than making an overall decision of whether to trust them to modify the codebase.

Despite that fact, most organizations will always have some long-tail of third party systems where non-specific access is required. Even in those situations, it is possible to engineer a system in which (for instance) two different parties must be connected to the same remote desktop session in order to perform certain actions. Whether you’d want to accept this kind of productivity hit depends on your threat model.

The most common approach I see taken is one of “scoped privilege escalation”. The organization gives powerful credentials over an external service to some internal system that they have developed. The internal system is responsible for enforcing any invariants that matter to the organization, and then uses its powers to act on behalf of internal requestors.
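A minimal sketch of that shape (the broker, policy, and vendor client are all hypothetical): requestors never touch the powerful credential; the internal system checks its invariants and then acts on their behalf:

```python
class VendorClient:
    """Stand-in for a client authenticated with the powerful credential."""
    def do(self, op, target):
        return f"{op} {target}: ok"

class ScopedBroker:
    """Internal system that holds the vendor credential and enforces
    the organization's invariants before acting for requestors."""
    def __init__(self, vendor_client, policy):
        self._vendor = vendor_client  # only the broker sees the credential
        self._policy = policy         # requestor -> set of allowed (op, target)

    def perform(self, requestor, op, target):
        if (op, target) not in self._policy.get(requestor, set()):
            raise PermissionError(f"{requestor} may not {op} {target}")
        return self._vendor.do(op, target)  # broker acts with its own powers

broker = ScopedBroker(VendorClient(), {"alice": {("restart", "service-a")}})
```

The invariants live in one place, decided in advance, rather than being re-litigated per request.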

Nearness

Access decisions should be made by people “nearby” each other in the organization, who already understand the context at play for that decision. For instance, a tech lead who has just finished training a new hire to be oncall for System A is ideally placed to grant access: they are obviously an expert in the system, and they have just given the new hire the training needed to make useful decisions.

This nearness principle implies that a project lead should be able to transitively justify the access of any given person to the project. The overall project may have 400 contributors, and the project lead has no hope of understanding how each of the 400 people contributes. Instead, they can say “Person B was invited by the manager of Team C, and I have direct knowledge that Team C is working on XYZ aspect of the project.”
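The transitive justification above can be sketched as a tree walk (the names are the hypothetical ones from the example): each person’s access is justified by whoever invited them, all the way up to the project lead at the root:

```python
# Who invited whom: edges pointing toward the root of the project.
INVITED_BY = {
    "person_b": "manager_c",       # invited by the manager of Team C
    "manager_c": "project_lead",   # Team C works on XYZ aspect of the project
}

def justification_chain(person):
    """Walk upward until we reach the root (someone nobody invited)."""
    chain = [person]
    while chain[-1] in INVITED_BY:
        chain.append(INVITED_BY[chain[-1]])
    return chain
```

Each node in the chain only needs to understand its neighbors, which is exactly the nearness property.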

The hypothetical tech lead for System A is also ideally placed to recognize situations where access should be revoked, like “Person X has just left the oncall rotation”. The tech lead in this case might benefit from a “decision support tool” that sends them a helpful email reminder, but of course the decision is still up to them.

The failure case here occurs when people must make access “requests”, and then provide a justification to the decision maker. In any reasonable context, the situation should be reversed: the decision maker proactively grants access because they already anticipated the need and its rationale, rather than because they received a request that they hadn’t anticipated.

Phrasing access as a “request” also inflicts a kind of psychological damage on the organization, by implying that access is something that can be argued over. The rationale for access shouldn’t be a negotiation, like a project that you’re attempting to fund. Instead, it should be a factual recognition that the project is already funded and some person or team has an important role to play in making it succeed.

Conclusion

At least as far as software engineering ICs are concerned, access should be granted on the basis of participation in a given project or membership in a certain oncall rotation. When access lists grow large, they should be structured as hierarchical trees, such that each node retains an understanding of the activities of its neighbors. An organization should provide decision support tools to assist area owners in choosing when to revoke access.