Someone at my org used their main company email address for the root user on an account we just closed, and a second company email for our current account. We are past the window in which AWS allows reverting the account deletion.
This now means that he isn’t allowed to use SSO via our external IdP because the email address he would use is forever attached to the deleted AWS account root user!
AWS support was rather terrible in providing help.
We've tried talking to everyone we can: opening tickets, starting chats, going through their assigned account rep, etc. No one can remove the MFA. So right now, luckily, they have other admin accounts, but we straight up can't access their root account. We might have to nuke the entire environment and create a new account, which is VERY lame considering they have a complicated and well-established AWS account.
They treat it like the organization is attempting to commandeer someone else's account, so all the privacy protections you'd expect for your own stuff are applied, no matter how much you can prove it is not some other individual's account.
The best part is the billing issues that arise from that. In your example, if the previous engineer logged into that account (because they can) and racked up huge costs, and that account is billed to or can be tied to your client, Amazon will demand your client pay for them, while at the same time refusing to assist in getting access to the account because it's "someone else's." They hold you responsible, but leave you unable to act in a responsible manner.
I know that that's not ideal, but as a practical matter perhaps it would be easier than creating a new account, if you can get the engineer to agree to it?
In a nutshell: if a past signatory was a regular employee, it just takes any other signatory to remove them. If there was no other signatory, or if the past signatory was an officer, it takes a current officer (as set forth in the company's AOI or corporate minutes). Usually only the latter 2 situations of the 3 above require an in-person visit to the local branch office, and that only requires a few minutes.
In a past life, we printed the MFA QR code and the head of finance put it into a safe.
I checked the documentation but I couldn't find anything to show this to be a problem other than that the practice is discouraged.
It's really nice when you have to hire someone new for the position. You add them to the DL and they're automatically in control of all those accounts.
I have no idea why more companies don't do this.
It's still on Amazon to clearly tell people it works this way so they can properly plan for it, but employees' email addresses really shouldn't be used for the root account.
And on the flip side, I can easily see why not allowing email addresses to be reused is a reasonable security stance: email addresses are treated as immutable identifiers, so limiting each one to a single identity seems logical.
Sounds quite frustrating for this user of course but I guess it sounds a bit silly to me.
Have you ever worked in a company of any size or complexity before?
1. Multiple accounts at the same company, spun up by different teams (either different departments, regions, operating divisions, or whatever) and eventually they want to consolidate
2. Acquisitions: Company A buys Company B, an admin at Company A takes over AWS account for Company B, then they eventually work on consolidating it down to one account
I'm not arguing that it was impossible to know the long term outcome here, but it doesn't mean it isn't frustrating. If you've spent any length of time working in AWS, you know that documentation can be difficult to find and parse.
I can certainly understand why the policy exists. What I think should be possible is in these situations to provide proof of ownership of the old email address so it can be released and reused somehow.
1. Use "admin@domain.com"
2. Let the domain registration lapse
3. Someone else registers the domain and now can't create an AWS account.
Rare but not impossible.
AWS has been around for quite a while now. It’s also not impossible that there are companies out there that moved from AWS to GCP or something, and maybe it’s time to move back.
When I started, AWS was in its infancy and I was just some guy working on a special project.
Now that same account is bound into an AWS Organization.
AWS changed. My company changed. The policies change out from under you.
If they are actually deleting the account in the background, and so no longer have a record of that e-mail address, then they must allow the address to be used again through the normal sign-up process.
The author probably misunderstood what "account name" is in Azure Storage's context, as it's pretty much the equivalent of S3's bucket name, and is definitely still a large concern.
A single pool of unique names for storage accounts across all customers has been a very large source of frustration, especially with the really short name limit of only 24 characters.
I hope Microsoft follows suit and introduces a unique namespace per customer as well.
I've never really understood S3's determination not to have a v2 API. Yes, v1 would need to stick around for a long time, but there are ways to encourage a migration, such as putting all future value-add on v2, and maybe eventually making marginal increases to v1 API prices to cover the dev work involved in maintaining the legacy API. Instead they've just let themselves, and their customers, deal with avoidable pain.
And sure, v1 is forever, but between getting to the point where new accounts can’t use it without a special request (or grandfathered in sweetheart rates, though that might be a PR disaster) and incentivizing migration off for existing users could absolutely get s3v1 to the point where it could be staffed for maintenance mode rather than staffed as a flagship feature.
It’d take years, but is totally possible. Amazon knows this. If they’re not doing it, it’s because the costs don’t make sense for them.
Storage accounts are one of the worst offenders here. I would really like to know what kind of internal shenanigans are going on there that prevent dashes from being used within storage account names.
I’ve lost track of servers in Azure because the name suddenly changed to all uppercase, and their search is case sensitive while whatever back-end isn't.
And with no meaningful separator characters available! No dashes, underscores, or dots; numbers and lowercase letters only. At least S3 and GCS allow dashes, so you can put a little organization prefix on them or something and not look like complete gibberish.
This approach goes a long way toward democratizing the name space, since nobody can "own" the tag prefix. (10000 people can all share it). This can also be used to prevent squatting and reuse attacks - just burn the full account name if the corresponding user account is ever shut down. And it prevents early users from being able to snap up all the good names.
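To make the shared-prefix idea concrete, here's a toy Python sketch (the class, handle format, and limits are all hypothetical, not any platform's real implementation): many users register the same display name, each gets a unique random discriminator, and a shut-down account's full handle is burned so it can never be squatted or reused:

```python
import random

class Directory:
    """Toy name directory: display names are shared, full handles are unique."""

    def __init__(self):
        self.taken = set()    # full handles currently assigned
        self.burned = set()   # handles retired when an account was shut down

    def register(self, name: str) -> str:
        # Try random 4-digit discriminators until a free handle turns up.
        for _ in range(20_000):
            handle = f"{name}#{random.randint(0, 9999):04d}"
            if handle not in self.taken and handle not in self.burned:
                self.taken.add(handle)
                return handle
        raise RuntimeError(f"all discriminators for {name!r} are exhausted")

    def shut_down(self, handle: str) -> None:
        # Burn the full handle forever, preventing squatting and reuse attacks.
        self.taken.discard(handle)
        self.burned.add(handle)
```

Since nobody owns the bare name, early users can't snap up all the good ones; at most they exhaust one name's 10,000 discriminators.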
Their stated reason[1] for doing so being:
> This lets you have the same username as someone else as long as you have different discriminators or different case letters. However, this also means you have to remember a set of 4-digit numbers and account for case sensitivity to connect with your friends.
[1]: https://support.discord.com/hc/en-us/articles/12620128861463...
Not saying that wasn't ONE of the reasons but the main reason was really that a large chunk of users had no idea that they even had a discriminator, as it was added on top of your chosen username. "add me on discord, my username is slashink" didn't work as people expected and caused more confusion than it was solving. This wasn't universally true either, if you come from a platform like Blizzard's Battle.net that has had discriminators since Battlenet 2.0 came out in 2009 it was a natural part of your identity. End of the day there were more users that expected usernames to be unique the way they are today than expected discriminators.
Addressing that tension was the core reason we made this change. We are almost 3 years past this decision ( https://discord.com/blog/usernames ) and I personally think this change was a positive one.
> Starting March 4, 2024, Discord will begin assigning new usernames to users who have not chosen one themselves. If your username still has a discriminator (username*#0000*), Discord will begin assigning you a new, unique username as soon as March 4, 2024. We will try to assign you a unique username that is similar to your current username.
Just some days ago I received a warning from Discord that they'll delete my account since I haven't logged in for two years.
> Your Discord account has been inactive for over 2 years, and is scheduled to be deleted on $DATE. But don’t worry! Dust off the cobwebs and prevent your account from being deleted just by logging in.
Imagine trying to connect with your friends... by telephone.
For buckets I thought easy-to-use names were a key feature in most cases. Otherwise why not assign randomly generated single-use names? But now that they're adding a namespace that incorporates the account name - an unwieldy numeric ID - I don't understand.
In the case of buckets isn't it better to use your own domain anyway?
Also, if you have a bunch of accounts, it's far easier for troubleshooting that the accountId is in the name: "I can't access bucket 'foo'" vs. "I can't access bucket 'foo-12345678901'"
I think for a larger public service it would make sense to expose some sort of internal id (or a hash of it). Which Bob am I talking to? People share the same name all the time; it's strange that we can't in our online communities.
For particularly high risk activities if circumstances permit you can sidestep the entire issue by adding a layer of verification using a preshared public key. As an arbitrary example, on android installing an app with the same name but different signing key won't work. It essentially implements a TOFU model to verify the developer.
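A minimal sketch of that TOFU idea in Python (a simplification; Android's real signature verification is considerably more involved): pin the publisher's key fingerprint on first install, and reject anything later signed with a different key:

```python
import hashlib

class TofuVerifier:
    """Trust-on-first-use: pin a publisher's key fingerprint at first install."""

    def __init__(self):
        self.pins = {}  # app name -> pinned key fingerprint

    def verify(self, app: str, public_key: bytes) -> bool:
        fingerprint = hashlib.sha256(public_key).hexdigest()
        if app not in self.pins:
            self.pins[app] = fingerprint  # first contact: trust and pin it
            return True
        # Later installs must present the same key, regardless of the name.
        return self.pins[app] == fingerprint
```

The name alone proves nothing; it's the continuity of the key that establishes identity across installs.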
It won't surprise you the scheme never caught on and has been decommissioned (you can now register any available domain as an individual as well). The difference is probably few people use a personal TLD, but many use a name on some social media.
I'm excited for IaC tools like Terraform to incorporate this as their default behavior soon! Terraform and co already commonly add a random hash suffix to the end of the bucket name to prevent such collisions. That becoming standard practice has in itself saved me days of convincing others to use such strategies before automating.
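For illustration, the suffixing strategy boils down to something like this (a Python sketch; the helper name and suffix length are made up):

```python
import secrets

def suffixed_bucket_name(base: str, suffix_len: int = 8) -> str:
    """Append a random hex suffix, the way many IaC modules avoid collisions
    in S3's (historically) global bucket namespace."""
    suffix = secrets.token_hex(suffix_len // 2)  # suffix_len hex characters
    name = f"{base}-{suffix}"
    if len(name) > 63:  # S3 bucket names are limited to 63 characters
        raise ValueError(f"{name!r} exceeds the 63-character bucket name limit")
    return name
```

The random suffix also makes names hard to predict, which blunts pre-registration squatting.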
[1] https://aws.amazon.com/blogs/aws/introducing-account-regiona...
GCP, however, has done this to itself multiple times because they rely so heavily on project-id, most recently just this February: https://www.sentinelone.com/vulnerability-database/cve-2026-...
Once they are not renewed, they eventually become available again. Then anyone can re-register them, set up an MX record, and start receiving any emails still being sent to recipients in that domain. This could include password reset authentications for other services, etc.
See also their recent innovation of letting you be logged into the console with up to five (???) of the many accounts their bizarre IAM system requires, implemented with a clunky system of redirections and magic URL prefixes. As opposed to GCP just having a sensible system in the first place of projects with permissions, and letting you switch between any of them at will using the same user account.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/Virtua...
When a name becomes free and somebody else uses it, it points to another thing. What that means for consumers of the name depends on the context, most likely it means not to use it. If you yourself reassign the name you can decide that the new thing will be considered to be identical to the old thing.
“While account IDs, like any identifying information, should be used and shared carefully, they are not considered secret, sensitive, or confidential information.” https://docs.aws.amazon.com/accounts/latest/reference/manage...
But probably best to not advertise it too much.
https://medium.com/@TalBeerySec/a-short-note-on-aws-key-id-f...
* Backwards compatible
* Keeps readability
* Solves problem
My pet conspiracy theory: this article was written by bucket squatters who want to claim old bucket names after AI agents read this and blindly follow.
myapp-123456789012-us-west-2-an
vs myapp.123456789012.us-west-2.s3.amazonaws.com
The manipulations I will need to do to fit into the 63 char limit will be atrocious.
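For what it's worth, the manipulation usually ends up looking like this (a Python sketch with a hypothetical helper): keep the account ID and region intact, truncate the app part, and tack on a short hash so truncated names can't collide:

```python
import hashlib

MAX_BUCKET_NAME = 63  # S3's bucket name length limit

def fit_bucket_name(app: str, account_id: str, region: str) -> str:
    """Compose app-accountid-region; if too long, truncate the app part and
    append a short hash of the full name so the result stays unique."""
    full = f"{app}-{account_id}-{region}"
    if len(full) <= MAX_BUCKET_NAME:
        return full
    digest = hashlib.sha256(full.encode()).hexdigest()[:8]
    fixed = f"-{account_id}-{region}-{digest}"
    return app[: MAX_BUCKET_NAME - len(fixed)] + fixed
```

Atrocious, as promised, but at least deterministic, so IaC can re-derive the same name on every run.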
This is where IaC shines.
Edit: crossout incorrect info
In either case, the subdomains you use in DNS requests are not private. Attackers can collect them from passive DNS logs or in other ways.
"Leak" is maybe a bit over-exaggerated, although if someone MitM'd you they'd definitely be able to see it. But "leak" makes it seem like it's broadcast somehow, which obviously it isn't.
You'd need to check the privacy policy of your DNS provider to know if they share the data with anyone else. I've commonly seen the source IP address considered PII, but not the content of the query. Cloudflare's DNS, for example, shares queries with APNIC for research purposes. https://developers.cloudflare.com/1.1.1.1/privacy/public-dns... Other providers share much more broadly.
How does one execute this "passive DNS" without quite literally being on the receiving end, or at least sitting in between the sending and receiving end? You're quite literally describing what I'm saying, which makes it less of a "leak" and more like "others might collect your data, even your ISP", which I'd say would be more accurate than "your DNS leaks".
> Passive DNS is a historical database of how domains have resolved to IP addresses over time, collected from recursive DNS servers around the world. It has been an industry-standard tool for more than a decade.
> Spamhaus’ Passive DNS cluster handles more than 200 million DNS records per hour and stores hundreds of billions of records per month, providing you with access to a vast lake of threat intelligence data.
https://www.spamhaus.com/resource-center/what-is-passive-dns...
Yes, of course, because those DNS servers are literally receiving the queries, eg "receiving the data".
Again, there is nothing "leaking" here; that's like saying you leak what HTTP path you're requesting when you send an HTTP request to that server. Of course, that's how the protocol works!
Putting a secret subdomain in a DNS query shares it with the recursive resolver, whose privacy policy may permit them to share it with others. This is a common practice, and attackers have access to the aggregated datasets. You are correct that third-party web servers or CDNs could share your HTTP path, but I am not aware of any examples, and most privacy policies should prohibit them from doing so. If your web server provider or CDN does this, change providers. DNS recursive resolvers are chosen client side, so you can't always choose which one handles the query. Even privacy-focused DNS recursive resolvers share anonymized query data. They remove the source IP address, since it's PII, but still "leak" the secret subdomain.
Any time you send secret data such that it travels to an attacker visible dataset it is vulnerable to attack. I call that a leak but we can use a different term.
What gave you that idea? Maybe because my initial comment started with:
> "Leak" is maybe a bit over-exaggerated...
And continues with why I think so?
I raised this sub-thread specifically because I got hung up on "leak"; that's the entire point of the conversation in my mind.
If anyone wants them to be user-facing resources, then treat them as such, ensure they're secure, and don't store sensitive info on them. Otherwise, put a service in front of them and have the user go through it.
The S3 protocol was meant to make the lives of programmers easier, not end users.
[0] https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket...
Namespaces are annoying but at least let you reorganize or fix mistakes. If you want to prevent squatting, rate limiting creation and deletion or using a quarantine window is more practical. No recovery path just rewards trolls and messes with anyone whose processes aren't perfect.
Not to mention the ergonomics would suck - suddenly your terraform destroy/apply loop breaks if there’s a bucket involved
a) AWS would need to maintain a database of all historical bucket names to know what to disallow. This is hard per region and even harder globally. It's easier to know what is currently in use than what has ever been used historically.
b) Even if they maintained a database of all historically used bucket names, then the latency to query if something exists in it may be large enough to be annoying during bucket creation process. Knowing AWS, they'll charge you for every 1000 requests for "checking if bucket name exists" :p
c) AWS builds many of its own services on S3 (as indicated in the article) and I can imagine there may be many of their internal services that just rely on existing behaviour i.e. allowing for re-creating the same bucket name.
As for c), I assume it's not just AWS relying on this behaviour. https://xkcd.com/1172/
I think that's an important defense that AWS should implement for existing buckets, to complement account-scoped buckets.
This is not me criticising you. I totally understand the urge to say it. We're all thinking the thing you're thinking of. It takes effort not to give into it ;)
The reason I personally would refrain from making such comments is that they have the potential to end up as highest ranked comment. That would be a shame. Topic of S3 bucketsquatting is rather important and very interesting.
I wasn't but I sure am now.
If you mean to use a "secret" prefix (i.e. a pepper), then that would generate effectively globally unique (and unpredictable) names each time, but you can't change the pepper, and it's only a matter of time before it leaks.
The public/private distinction seems moot here, too: the salt is a throwaway since you just need the bucket name.
Even if you do need to keep track of the salt, it should be safe for the attacker to know, at least with respect to this attack, because you already own the bucket which the attacker would otherwise hoard.
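Sketching the pepper idea in Python (hypothetical helper; the tag length is arbitrary): the derived name is deterministic for anyone holding the pepper, but unguessable to a would-be squatter:

```python
import hashlib
import hmac

def peppered_bucket_name(pepper: bytes, logical_name: str) -> str:
    """Derive an unpredictable bucket name from a logical name and a secret
    pepper. Deterministic, so the owner can always re-derive the same name."""
    tag = hmac.new(pepper, logical_name.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{logical_name}-{tag}"
```

And as noted above, even if the pepper leaks, the attacker gains nothing for existing buckets: you already own the name they would otherwise have hoarded.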
1. You set up an aws bucket with some name (any name whatsoever).
2. You have code that reads and/or writes data to the bucket.
3. You delete the bucket at some later date, but miss some script/process somewhere that is still attempting to use the bucket. For the time being, that process lies around, silently failing to access the bucket.
4. The bucket name is recycled and someone else makes a bucket with the same name. Perhaps it's an accident, or perhaps it's because by some means an attacker became aware of the bucket name, discovers that the name is available, and decided to "squat" the name.
5. That overlooked script or service is happy to see the bucket it's been trying to access all this time is available again.
You now have something potentially writing out private data, or potentially reading data and performing actions as a result, that is talking to attacker-owned infrastructure.
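The whole failure mode fits in a few lines of Python (a toy registry, not real S3; the defensive check at the end mirrors the idea behind S3's actual ExpectedBucketOwner request parameter):

```python
class BucketRegistry:
    """Toy global bucket namespace: deleted names are immediately reusable."""

    def __init__(self):
        self.owners = {}  # bucket name -> owning account id

    def create(self, name: str, account: str) -> None:
        if name in self.owners:
            raise ValueError(f"bucket name {name!r} is already taken")
        self.owners[name] = account

    def delete(self, name: str) -> None:
        del self.owners[name]  # name is instantly free for anyone (step 4)

def write_report(registry: BucketRegistry, name: str,
                 expected_owner: str, data: bytes) -> str:
    """A defensive writer: refuse to write if the bucket changed hands."""
    owner = registry.owners.get(name)
    if owner != expected_owner:
        raise PermissionError(f"bucket {name!r} is not owned by {expected_owner}")
    return f"wrote {len(data)} bytes to {name}"
```

Without the ownership check, the overlooked script in step 5 would happily hand its data to the squatter's bucket.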