> (Side note, it's wild that npm, bun, and pnpm have all decided to use different time units for this configuration.)

First day with javascript?

reply
You mean first 86,400 seconds?
reply
You have to admire the person who designed the flexibility to have 87239 seconds not be old enough, but 87240 to be fine.
reply
Probably went with the simplest implementation, if starting from the current “seconds since epoch” value. Let the user do any calculations needed to translate three days into that measurement.

It also efficiently annoys the most people at once: those who want hours will complain if it's set in days, those who want days will complain if hours are used. By using minutes or seconds you can wind up both camps while not offending those who rightly don't care because they can cope with a little arithmetic :)

Though doing what sleep(1) does would be my preference: default to seconds but allow m/h/d to be added to change that.
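A sleep(1)-style parser is tiny; a sketch (the function name and exact grammar here are my own, not any package manager's API):

```javascript
// Parse "90", "90s", "15m", "3h", "7d" into seconds.
// A bare number defaults to seconds, like sleep(1).
function parseDuration(str) {
  const match = /^(\d+(?:\.\d+)?)([smhd]?)$/.exec(str.trim());
  if (!match) throw new Error(`invalid duration: ${str}`);
  const [, value, unit] = match;
  const multipliers = { "": 1, s: 1, m: 60, h: 3600, d: 86400 };
  return Number(value) * multipliers[unit];
}

console.log(parseDuration("7d")); // 604800
```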

reply
I'm old enough to remember computers being pitched as devices that can do tedious math for us. Now we have to do tedious math for them apparently.
reply
Hence the way I would do it (and have for other purposes), as stated in my final sentence. Have the human state the intent and convert to your own internally preferred units as needed.
reply
I'm sure you would like to memorize all kinds of APIs instead of having something idiot-proof and straightforward
reply
As if `minimumReleaseAge` in `[install]` section of `.bunfig.toml` doesn't require the same kind of memorization.
reply
No no no, see now we just say "computer! do tedious math!", and it will do some slightly different math for us and compliment us on having asked it to do so.
reply
Hey that's a great joke, you made me spill my morning home-brewed kombucha.

I'm going to steal that one for my JavaScript monthly developers meetup.

Is it ok if I attribute it to "Xirdus on Hacker News"?

reply
Lol sure.
reply
The one true unit of time is hexadecimal encoded nanoseconds since the unix epoch. (I'm only half joking because I actually have authored code that used that before.)
reply
I actually think it is not too bad a design, because seconds are the SI base unit for time. Putting something like "x days" requires additional parsing steps and therefore complexity in the implementation. Either knowing or calculating how many seconds there are in a day can be expected of anyone touching a project or configuration at this level of detail.
reply
Seconds are also unambiguous. Depending on your chosen definition, "X days" may or may not be influenced by leap seconds and DST changes.

I doubt anyone cares about an hour more or less in this context. But if you want multiple implementations to agree, talking about seconds on a monotonic timer is a lot simpler.

reply
Could you explain what you mean re: ambiguity? I understand why “calendar units” like months are ambiguous, but minutes, hours, days, and weeks all have fixed durations (which is why APIs like Python’s `timedelta` allow them).
reply
The minute between December 31, 2016 23:59 and January 1st 2017 is 61 seconds, not 60 seconds. The hour that contains that minute is 3601 seconds, the day that contains that hour is 86401 seconds, etc. If you assume a fixed duration and simply multiply by 86400, your math will be wrong compared to the rest of the world.

Daylight saving time makes a day take 23 hours or 25 hours. That makes a week take 601200 seconds or 608400 seconds. Etc.
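For concreteness, here is that arithmetic worked through (my own sanity check; a "week" here means 6 normal days plus one DST-shifted day):

```javascript
// Naive fixed-length units vs. calendar reality.
const SECONDS_PER_DAY = 24 * 60 * 60; // 86400

// A minute containing the 2016-12-31 leap second has 61 seconds,
// so the surrounding hour and day each gain one second too.
const leapDay = SECONDS_PER_DAY + 1; // 86401

// A DST transition makes one local day 23 or 25 hours long.
const shortWeek = 6 * SECONDS_PER_DAY + 23 * 3600; // 601200
const longWeek  = 6 * SECONDS_PER_DAY + 25 * 3600; // 608400

console.log(leapDay, shortWeek, longWeek);
```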

reply
That’s what I mean by calendar units. These aren’t issues if you don’t try to apply durations to the “real” calendar.

(This is all in the context of cooldowns, where I’m not convinced there’s any real ambiguity risk in allowing the user to specify a duration in day or hour units rather than seconds. In that context a day is exactly 24 hours, regardless of what your local daylight saving rules are.)

reply
"exactly 24 hours" could still be anywhere between 86399 and 86401 seconds, depending on leap seconds. At least if by an hour you mean an interval of 60 minutes, because a minute that contains a leap second will have either 59 or 61 seconds.

You could specify that for the purposes of cooldowns you want "hour" to mean an interval of 3600 seconds. But that you have to specify that should illustrate how ambiguous the concept of an hour is. It's not a useless concept by any means and I far prefer to specify duration in hours and days, but you have to spend a sentence or two on defining which definition of hours and days you are using. Or you don't and just hope nobody cares enough about the exact cooldown duration

reply
Leap seconds are their own nightmare. UNIX time ignores them, btw, so a unix timestamp is 86400 * the number of days since 1/1/1970, plus the number of seconds since midnight. The behavior at the instant of a leap second is undefined.
reply
That's a good way of describing that. It's far too easy to pretend UNIX timestamps would correspond to a stopwatch counting from 1/1/1970.
reply
Right. Currently epoch time is off the stopwatch time by 27 seconds.
reply
Undefined behavior is worse than complicated defined behavior imo.
reply
In the UK last Sunday was 23 hours long because we switched to BST, and occasionally leap seconds will result in a minute being something other than 60 seconds.
reply
No it wasn't. The country instantaneously changed timezones from UTC+0 to UTC+1 (called something else locally); it was no different from any other timezone change, e.g. physically moving into another timezone.
reply
exploiting the ambiguity in date formats by releasing a package during a leap second
reply
I came here to argue the opposite. Expressing it in seconds takes away questions about time zones and DST.

I think you're incorrect to say that seconds are also ambiguous. Maybe what you mean is that days are more practical, but that seems very much a personal preference.

reply
I understand the [flawed] reasoning behind "x seconds from now is going to be roughly now() + x on this particular system", but how does defining the cooldown from an external timestamp save you from dealing with DST and other time shenanigans? In the end you are comparing two timestamps and that comparison is erroneous without considering time shenanigans
reply
I think you misread the comment you're replying to.
reply
[flagged]
reply
> seconds are the SI base unit for time

True. But seconds are not the base unit for package compromises coming to light. The appropriate unit for that is almost certainly days.

reply
that kind of complexity is always worth it. Every single time. It's user time that you're saving, and it also makes the config clearer for readers and cuts down on "too many/too few zeroes by accident" errors

It's just a library for handling time that 98% of the time your app will be using for something else.

reply
I find it best when I need a calculator to understand security settings. 604800 here we come
reply
This is the difference between thinking about the user experience and thinking just about the technical aspect
reply
Well, you have 1000000 microseconds in between. That's a big threshold.
reply
wait what if we start on a day DST starts or ends????
reply
OP should be glad a new time unit wasn't invented
reply
Workdays! Think about it, if you set the delay in regular days/seconds the updated dependency can get pulled in on a weekend with only someone maybe on-call.

(Hope your timezones and tzdata correctly identify Easter bank holidays as non-workdays)

reply
> Workdays!

This is javascript, not Java.

In JavaScript something entirely new would be invented, to solve a problem that has long been solved and is documented in 20+ year old books on common design patterns. So we can all copy-paste `{ or: [{ days: 42, months: 2, hours: "DEFAULT", minutes: "IGNORE", seconds: null, timezone: "defer-by-ip" }, { timestamp: 17749453211*1000, unit: "ms"}]` without any clue as to what we are defining.

In Java, a 6000LoC+ ecosystem of classes, abstractions, dependency-injectables and probably a new DSL would be invented so we can all say "over 4 Malaysian workdays"

reply
But you know that Java solution will continue working even after we stop using the Gregorian calendar, after the collapse and annexation of Malaysia by some foreign power, and after we finally switch to a 4-day work week; so it'd be worth it.
reply
It probably won’t work correctly from the get go. But it can be debugged everywhere so that’s good.
reply
... and since it was architected to allow runtime injection-patching of events before they hit the enterprise-service-bus, everyone using this library must first set fourteen ENV vars in their profile, and provide a /etc/java/springtime/enterprise-workday-handling/parse-event-mismatch.jar.patch. Which should fix the bug for you.

You can find the patch files for your OSs by registering at Oracle with a J3EE8.4-PatchLibID (note, the older J3EE16-PatchLib-ids aren't compatible), attainable from your regional Oracle account-manager.

reply
At least one of those environment variables can contain template strings that are expanded with arguments from request headers when run under popular enterprise Java frameworks, and by way of the injection patching could hot-load arbitrary code at runtime.

A joke should be funny though, not just a dry description of real life, so let's leave it at that. We've already taken it too far.

reply
This isn’t even remotely funny.
reply
I am laughing. I'm not even near the end of this thread.
reply
In before someone thinks it's a joke: the most commonly used logging library in Java had LDAP support in format strings enabled by default (which resulted, of course, in a CVE)
reply
JavaScript Temporal. Not sure knowing what a "workday" is in each timezone is in its scope, but it's the much-needed and improved JS date API (granted, with limited support to date)

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

reply
There's an extra digit in your timestamp.
reply
When I worked in Finance our internal Date extension did actually have Workdays that took into account Stock Market and Bank Holidays.
reply
…now imagine a list of instruments, some of which have durations specified in days/weeks/months (problems already with the latter) and some in workdays, and the user just told your app to display it sorted by duration.
reply
I tried to write this function in Power Query (Excel hell). Gave up after an hour or so.
reply
Me too, it was just a constant filled with bank holidays for the next 6 years
reply
Why would it get pulled in over the weekend? What automatic deployments are you running if there also isn't a human working to get it out?

Do you run automatic dependency updates over the weekend? Wouldn't you rather do that during fully-staffed hours?

reply
And we also need localization. Each country can have their own holidays
reply
And we need groups of locales for teams that are split across multiple locations; e.g.:

  new_date = add_workdays(
    workdays=1.5,
    start=datetime.now(),
    regions=["es", "mx", "nl", "us"],
  )
reply
Hopefully "es" will have Siesta support too.
reply
[dead]
reply
Might be better to calculate them separately for each locale and then tie-break with your own approach (min/max/avg/median/etc.)
reply
Don't forget about regional holidays, which might follow arbitrary borders that don't match any of the official subdivisions of the country. Or may even depend on the chosen faith of the worker
reply
Pulaski day in Illinois. Or Reds Opening Day in Cincinnati.
reply
deleted
reply
Nah, use working hours and make the global assumption of 0900-1230/1330-1730, M-F, with an overly convoluted way to specify what working hours actually are in the relevant location(s).
reply
If we're taking suggestions, I'd like to propose "parsec" (not to be confused with the unit of distance of the same name)

That way Han Solo can make sense in the infamous quote.

EDIT: even Gemini gets this wrong:

> In Star Wars, a parsec is a unit of distance, not time, representing approximately 3.26 light-years

reply
> That way Han Solo can make sense in the infamous quote.

They explained it in the Solo movie.

https://www.reddit.com/r/MovieDetails/comments/ah3ptm/solo_a...

reply
Making a whole movie just to retcon the parsec misuse in Ep IV was a choice
reply
They made a movie to make money. I doubt anyone holding the purse strings cared one iota if that bit were corrected or not. It’s not really a retcon either because they didn’t change anything.
reply
That had more or less been the explanation in the books for decades, and even in George Lucas' notes from 1977:

> It's a very simple ship, very economical ship, although the modifications he made to it are rather extensive – mostly to the navigation system to get through hyperspace in the shortest possible distance (parsecs).

reply
It was already fine, because it’s a metric defined on a submanifold of relativistic spacetime.
reply
Parallax arc-second -> distance.

For Star Wars, they retconned it to mean he found the shortest possible route through dangerous space, so even for Han Solo's quote, it's still distance.

reply
N multiplications of dozen-second
reply
To me it sounds safer to have different big infra providers with different delays, otherwise you still hit everyone at the same time when something does inevitably go undetected.

And the chances of staying undetected are higher if nobody is installing until the delay time elapses.

It's the same as not scheduling all cronjobs to midnight.

reply
deleted
reply
About the use of different units: next time you choose a property name in a config file, include the unit in the name. So not “timeout” but “timeoutMinutes”.
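For example (a sketch with made-up field names):

```javascript
// Ambiguous: seconds? minutes? milliseconds?
const bad = { timeout: 30 };

// Self-documenting: the unit is part of the name.
const good = { timeoutMinutes: 30 };

// Internal code converts once, at the boundary.
const timeoutMs = good.timeoutMinutes * 60 * 1000;
console.log(timeoutMs); // 1800000
```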
reply
Yes!! This goes for any time you declare a time interval variable. The number of times I've seen code changes with a comment like "Turns out the delay arg to function foo is in milliseconds, not seconds".
reply
Or require the value to specify a unit.
reply
At that point, you're making all your configuration fields strings and adding another parsing step after the json/toml/yaml parser is done with it. That's not ideal either: either you write a bunch of parsing code (not terribly difficult, but not something I wanna do when I can just not), or you use some time library to parse a duration string, in which case the programming language and time library you happen to use suddenly become part of your config file specification, and you have to exactly re-implement your old time library's duration parser if you ever want to switch to a new one or re-implement the tool in another language.

I don't think there are great solutions here. Arguably, units should be supported by the config file format, but existing config file formats don't do that.
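For what it's worth, the "bunch of parsing code" for a simple number-plus-unit value fits in a few lines, and the grammar can be written down independently of any library, which helps with the re-implementation worry (a sketch; real formats would still need a written spec):

```javascript
// Grammar, specifiable without reference to any library:
//   duration := integer ("s" | "m" | "h" | "d")
const DURATION = /^([0-9]+)(s|m|h|d)$/;

function durationToSeconds(text) {
  const m = DURATION.exec(text);
  if (!m) throw new Error(`not a duration: ${text}`);
  return Number(m[1]) * { s: 1, m: 60, h: 3600, d: 86400 }[m[2]];
}

console.log(durationToSeconds("3d")); // 259200
```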

reply
TOML has a datetime type (both with or without tz), as well as plain date and plain time:

  start_at = 2026-05-27T07:32:00Z  # RFC 3339
  start_at = 2026-05-27 07:32:00Z  # readable
We should extend it with durations:

  timeout = PT15S  # ISO 8601
And like for datetimes, we should have a readable variant:

  timeout = 15s   # can omit "P" and "T" if not ambiguous, can use lowercase specifiers
Edit: discussed in detail here: https://github.com/toml-lang/toml/issues/514
reply
great, now attackers can also target all the libraries that enable all that complexity in npm too.
reply
> adding another parsing step after the json/toml/yaml parser is done with it. That's not ideal either

I'd argue that it is ideal, in the sense that it's the sweet spot for a general config file format to limit itself to simple, widely reusable building blocks. Supporting more advanced types can get in the way of this.

Programs need their own validation and/or parsing anyway, since correctness depends on program-specific semantics and usually only a subset of the values of a more simply expressed type is valid. That same logic applies across inputs: config may come from files, CLI args, legacy formats, or databases, often in different shapes. A single normalization and validation path simplifies this.

General formats must also work across many languages with different type systems. More complex types introduce more possible representations and therefore trade-offs. Even if a file parser implements them correctly (and consistently with other such parsers), it must choose an internal form that may not match what a program needs, forcing extra, less standard transformation and adding complexity on both sides for little gain.

Because acceptable values are defined by the program, not the file, a general format cannot fully specify them and shouldn’t try. Its role is to be a medium and provide simple, human-usable (for textual formats), widely supported types, avoid forcing unnecessary choices, and get out of the way.

All in all, I think it can be more appropriate for a program to pick a parsing library for a more complex type, than to add one consistently to all parsers of a given file format.

reply
Another parsing step is the common case. Few parameters represent untyped strings where all characters and values are valid. For numbers as well, you often have a limited admissible range that you have to validate for. In the present case, you wouldn’t allow negative numbers, and maybe wouldn’t allow fractional numbers. Checking for a valid number isn’t inherently different from checking for a regex match. A number plus unit suffix is a straightforward regex.
reply
deleted
reply
timeoutMs is shorter ;)

You guys can't appreciate a bad joke

reply
Megaseconds are about the right timescale anyway
reply
What megaseconds? They clearly meant the Microsoft-defined timeout.
reply
Well megaseconds has the nice property that it's about about equal to a Scaramucci so it can be used across domains.
reply
timoutμs is even better. People will learn how to type great symbols.
reply
They wouldn't have to, if the file format accepted floats in proper exponential format.
reply
Yes timout indeed!
reply
not timeout at all is even shorter.
reply
deleted
reply
Pnpm did this first but I’m glad to see all the others follow suit

For anyone wondering, you need to be on npm >= 11.10.0 in order to use it. It just became available Feb 11 2026

https://github.com/npm/cli/releases/tag/v11.10.0

reply
> PSA: npm/bun/pnpm/uv now all support setting a minimum release age for packages.

The solution is not moar toolz. That's the problem—this crazy mindset that the problems endemic to bad tooling have a solution in the form of complementing them with another layer, rather than fewer.

Git and every sane SCM already allow you to manage your source tree without jumping through a bunch of hoops to go along with wacky overlay version control systems like the one that the npmjs.com crew designed, centering around package.json as a way to do an end-run around Git. You don't need to install and deploy anything containing never-before-seen updates just because the NodeJS influencer–developers say that lockfiles are the Right Way to do things. (It's not.)

Opting in to being vulnerable to supply chain attacks is a choice.

<https://news.ycombinator.com/item?id=46006471>

<https://news.ycombinator.com/item?id=46360308>

reply
Is there a way to do that per repo for these tools? We all know how user-side configuration works for users (they usually clean it whenever it goes against what they want to do, instead of wondering why it blocks their changes :))
reply
At least with npm, you can have a .npmrc per-repo
reply
pnpm does global + per-repo
reply
And when you actually need a super hot fix for a 0-day, you will need to revert this and keep it that way for some time to then go back to minimum age.

While this works, we still need a permanent solution, which requires a sort of vetting process rather than blindly letting everything through.

reply
pnpm since v10.19.0 allows excluding specific dependencies from minReleaseAge by version.
reply
Who will do the vetting process?
reply
I think my vetting would settle for a repo diff against the previous version, confirming the only difference was the security fix (though that doesn't cover all the bases).
reply
Jia Tan
reply
A min release age of 7 days on patch releases exposes you to the other side of the coin: you have an open 7-day window on zero-day exploits that might be fixed in a security release
reply
The packages that are actually compromised are yanked, but I assume you're talking about a scenario more like log4shell. In that case, you can just disable the config to install the update, then re-enable in 7 days. Given that compromised packages are uploaded all the time and zero-day vulnerabilities are comparatively less common, I'd say it's the right call.
reply
`uv` has per-package overrides, I imagine there may be similar in other managers
reply
I haven't checked, but it would be surprising if min-release-age applied to npm audit and equivalent commands
reply
At least with pnpm, you can specify minimumReleaseAgeExclude, temporarily until the time passes. I imagine the other package managers have similar options.

[1]: https://pnpm.io/settings#minimumreleaseageexclude

reply
Not really an issue though, right? Because virtually none of these have lasted more than 1-2 days before being discovered.
reply
Out of the frying pan and into the fryer.....
reply
Exactly what I thought too when I read this...

Urgent fix, patch released, invisible to dev team cause they put in a 7 day wait. Now our app is vulnerable for up to 7 days longer than needed (assuming daily deploys. If less often, pad accordingly). Not a great excuse as to why the company shipped an "updated" version of the app with a standing CVE in it. "Sorry, we were blinded to the critical fix because we set an arbitrary local setting to ignore updates until they are 7 days old". I wouldn't fire people over that, but we'd definitely be doing some internal training.

reply
It's wild that none of these are set by default.

I know 90% of people I've worked with will never know these options exist.

reply
That would likely mean the same amount of people get the vulnerability, just 7 days later.
reply
The compromised packages were removed from the registry within hours.
reply
Because everyone got updates immediately. If the default was 7 days, almost no one would get updates immediately; they'd get them after 7 days, and someone would only find out about a compromise after 7 days. Unless there is a poor soul checking packages as they are published who can alert the registry before the 7 days pass, though I imagine very few do that, and hence a dedicated attacker could influence them to not look too hard.
reply
If I remember correctly, in all the recent cases it was picked up by automated scanning tools in a few hours, not because someone updated the dependency, checked the code and found the issue.

So it looks like even if no one actually updates, the vast majority of the cases will be caught by automated tools. You just need to give them a bit of time.

reply
If everyone or a majority of people sets these options, then I think issues will simply be discovered later. So if other people run into them first, better for us, because then the issues have a chance of being fixed once our acceptable package/version age is reached.
reply
mise has an option as well (note the caveats though):

https://mise.jdx.dev/configuration/settings.html#install_bef...

And homebrew has discussed it, kinda sorta:

https://github.com/Homebrew/brew/issues/21129

reply
and for yarn berry

    ~/.yarnrc.yml
    npmMinimalAgeGate: "3d"
reply
If everyone avoids using packages released within the last 7 days, malicious code is more likely to remain dormant for 7 days.
reply
What do you base that on? Threat researchers (and their automated agents) will still keep analyzing new releases as soon as they’re published.
reply
Their analysis was triggered by open source projects upgrading en-masse and revealing a new anomalous endpoint, so, it does require some pioneers to take the arrows. They didn't spot the problem entirely via static analysis, although with hindsight they could have done (missing GitHub attestation).
reply
A security company could set up a honeypot machine that installs new releases of everything automatically and have a separate machine scan its network traffic for suspicious outbound connections.
reply
The problem is what counts as suspicious. StepSecurity are quite clear in their post that they decide what counts as anomalous by comparing lots of open source runs against prior data, so they can't figure it out on their own.
reply
The fact that threat researchers, and especially their automated agents, are not all that good at their jobs
reply
Those threat researchers and their autonomous agents caught this axios release.
reply
deleted
reply
> What do you base that on?

The entire history of malware lol

reply
Can you elaborate? Why do you believe that motivated threat hunters won’t continue to analyze and find threats in new versions of open source software in the first week after release?
reply
Attackers going "low and slow" when they know they're being monitored is just standard practice.

> Why do you believe that motivated threat hunters won’t continue to analyze and find threats in new versions of open source software in the first week after release?

I'm sure they will, but attackers will adapt. And I'm really unconvinced that these delays are really going to help in the real world. Imagine you rely on `popular-dependency` and it gets compromised. You have a cooldown, but I, the attacker, issue "CVE-1234" for `popular-dependency`. If you're at a company you now likely have a compliance obligation to patch that CVE within a strict timeline. I can very, very easily pressure you into this sort of thing.

I'm just unconvinced by the whole idea. It's fine, more time is nice, but it's not a good solution imo.

reply
What, in your view, is a better solution?
reply
There are many options. Here's a post just briefly listing a few of the ones that would be handled by package managers and registries, but there are also many things that would be best done in CI pipelines as well.

https://news.ycombinator.com/item?id=47586241

reply
that's why people are telling others to use 7 days but using 8 days themselves :)
reply
You don't have to be faster than the bear, you just have to be faster than the other guy.
reply
brb, switching everything to 9 days
reply
That is 3D chess level type shit. xD
reply
Worth noting this attack was caught because people noticed anomalous network traffic to a new endpoint. The 7-day delay doesn't just give scanners time, it gives the community time to notice weird behavior from early adopters who didn't have the delay set.

It's herd immunity, not personal protection. You benefit from the people who DO install immediately and raise the alarm

reply
But wouldn't the type of people who notice anomalous network activity be exactly the type of people who add a 7-day delay because they're security conscious?
reply
And I’ll bet a chunk of already-compromised vibe coders are feeling really on-top-of-shit because they just put that in their config, locking in that compromised version for a week.
reply
[dead]
reply
I suspect most packages will keep a mix of people at 7 days and those with no limit. That being said, adding jitter by default to these features would be good.
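Something like this, say (hypothetical; no package manager exposes jitter today as far as I know):

```javascript
// Spread cooldowns across e.g. 7-10 days so installs don't all
// become vulnerable (or safe) at the same instant.
function jitteredCooldownDays(baseDays = 7, spreadDays = 3) {
  return baseDays + Math.random() * spreadDays;
}

console.log(jitteredCooldownDays().toFixed(2)); // somewhere in [7, 10)
```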
reply
>adding jitter by default would be good

This became evident, what, perhaps a few years ago? Probably since childhood for some users here, but I'm just wondering what the holdup is. Lots of bad press could be avoided, or at least a little.

reply
They’re usually picked up by scanners by then.
reply
Most people won’t.

7 days gives ample time for security scanning, too.

reply
This highly depends on the detection mechanism.
reply
> If everyone avoids using packages released within the last 7 days

Which will never even come close to happening, unless npm decides to make it the default, which they won't.

reply
[dead]
reply
I think npm doesn't support end-of-line comments, so

  ~/.npmrc
  min-release-age=7 # days 
actually doesn't set it at all, please edit your comment.

EDIT: Actually maybe it does? But it's weird because

`npm config list -l` shows: `min-release-age = null` with, and without the comment. so who knows ¯\_(ツ)_/¯

reply
ok, it works, only the list function shows it as null...
reply
Where in the pnpm documentation does it say that it ignores scripts by default?

From https://pnpm.io/cli/install#--ignore-scripts:

> Default: *false*

reply
Weird. The config also appears to default to `false`

https://pnpm.io/settings#ignorescripts

reply
This page describes the behavior, "disables the automatic execution of postinstall scripts in dependencies":

https://pnpm.io/supply-chain-security

While this explicitly calls out "postinstall", I'm pretty sure it affects other such lifecycle scripts like preinstall in dependencies.

The --ignore-scripts option will ignore lifecycle scripts in the project itself, not just dependencies. And it will ignore scripts that you have previously allowed (using the "allowBuilds" feature).

reply
Run npm/pnpm/bun/uv inside a sandbox.

There is no reason to let random packages have full access to your machine

reply
Sandboxing by default would be really nice. One of the things I really appreciate about Claude Code is its permissions model.
reply
Props to uv for actually using the correct config path. jfc, what is “bunfig”?
reply
Silly portmanteau of "bun" and "config"
reply
A trendy sandwich
reply
Everyone has forgotten standard ISO 8601 durations and invented their own syntax.
reply
deleted
reply
The config for uv won't work. uv only supports a full timestamp for this config, and no rolling window day option afaik. Am I crazy or is this llm slop?
reply
https://docs.astral.sh/uv/concepts/resolution/#dependency-co...

> Define a dependency cooldown by specifying a duration instead of an absolute value. Either a "friendly" duration (e.g., 24 hours, 1 week, 30 days) or an ISO 8601 duration (e.g., PT24H, P7D, P30D) can be used.

reply
My bad. This works for per project configuration, but not for global user configuration.
reply
It should work for global configuration too, please file an issue if you’re observing otherwise.

(Make sure you’re on a version that actually supports relative times, please!)

reply
This is what tripped me up. I added that config and then got this error:

  error: Failed to parse: `.config/uv/uv.toml`
  Caused by: TOML parse error at line 1, column 17
    |
  1 | exclude-newer = "7 days"
    |                 ^^^^^^^^
  failed to parse year in date "7 days": failed to parse "7 da" as year (a four digit integer): invalid digit, expected 0-9 but got

I was on version 0.7.20, so I removed that line, ran "uv self update" and upgraded to 0.11.2 and then re-added the config and it works fine now.

reply
Yeah, that error message isn’t ideal on older versions, but unfortunately there’s no way to really address that. But I’m glad it’s working for you on newer versions.
reply
For what it's worth the error made sense enough to me that I figured I needed to upgrade. :-)
reply
I think it should work at the user config level too:

> If project-, user-, and system-level configuration files are found, the settings will be merged, with project-level configuration taking precedence over the user-level configuration, and user-level configuration taking precedence over the system-level configuration.

https://docs.astral.sh/uv/concepts/configuration-files/

reply
deleted
reply
Good luck with any `npm audit` in a pipeline. Sometimes you have to pull the latest release because the previous one had a critical vulnerability.
reply
npm is claiming this doesn’t exist
reply
Make sure you're on version 11.10 or later?
reply
[dead]
reply
[dead]
reply
[dead]
reply