What's the engineering used to distinguish a weak startup from a growing company, you ask? Well... Googlers again use arbitrary numbers rather than logic (a human-interaction-avoidance firewall) and set the floor at $30M in "publicly declared investment capital". So what happens when you're the GCP architect consultant hired to help a successful startup productionize their GCP infra, but their last round was private? Google tells the soon-to-be-$100M company they aren't real yet... so they go get their virtual CPU, RAM, and disk from AWS, who knows how to treat customers right by letting them talk to humans: account managers who pick up the phone and invite you to lunch to talk about your successful startup growing on AWS. Googlers are the biggest risk factor to the otherwise far superior GCP infrastructure for any business, startup or Fortune 10.
We add support when we want to do something new, like MediaTailor + SSAI. At that point we're exploring and trying to get our heads around how things work. Once it works there's no real point in support.
That said, you need to ask your account manager about (1) discounts in exchange for spend commitments, and (2) technical assistance. In general we have a talk with our AM when we're doing something new, and they rope in SMEs from the various products for us.
We're not that big, and I haven't worked for large companies, and it's always been a mystery to me why people have problems dealing with AWS. I've always found them to be super responsive and easy to get ahold of. OTOH we actually know what we're doing technically.
Google Cloud, OTOH, is super fucked up. I mean seriously, I doubt anyone there has any idea WTF is happening or how anything works anymore. There's no real cohesion, or at least there wasn't the last time I was abused by GCP.
Depending what precisely you mean by the second one, you may not even need an AM/support for that.
They won't help me use the platform, but they will still address issues with the platform. If you run into bugs, things not behaving how they're documented, or something that simply isn't exposed/available to customers they seem to be pretty good about getting it resolved regardless of your spend or support level.
(On my personal account with minimal spend, no AM, and no support... I've had engineers from the relevant teams email me directly after submitting a ticket for issues.)
So yeah, "if you know what you're doing" you probably don't even need the paid-for support.
Hard disagree. I have to engage with AWS support almost once every 6 months. A lot of them end up being bugs identified in their services. Premium support is extremely valuable when your production services are down and you need to get them back up asap.
Both times they were serious production bugs that took at least a week to resolve, though I only had the lowest tier of support package.
GCP’s architecture seems clearly better to me, especially if you’re looking to be global.
Every organization I’ve ever witnessed eventually ends up with some kind of struggle with AWS’ insane organizations and accounts nightmare.
GCP’s use of folders makes way more sense.
GCP having global VPCs is also potentially a huge benefit if you want your users to hit servers that are physically close to them. On AWS you have to architect your own solution with Global Accelerator, which becomes even more insane if you need to cross accounts, which you’ll probably have to do eventually because of the aforementioned insanity of AWS account/organization best practices.
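To make concrete what "global VPC" means in practice, here's a minimal sketch using the google-cloud-compute Python client: one network, with regional subnets simply added to it, no peering or accelerator layer in between. The project, network, and subnet names are all hypothetical.

```python
from google.cloud import compute_v1

PROJECT = "my-project"  # hypothetical project ID
NETWORK = "projects/my-project/global/networks/app-vpc"  # one global VPC

def add_regional_subnet(name: str, region: str, cidr: str) -> None:
    """Attach a regional subnet to the single global VPC."""
    subnet = compute_v1.Subnetwork(
        name=name,
        ip_cidr_range=cidr,
        network=NETWORK,
        region=region,
    )
    client = compute_v1.SubnetworksClient()
    op = client.insert(project=PROJECT, region=region, subnetwork_resource=subnet)
    op.result()  # block until the operation finishes

# Instances in these subnets can reach each other over internal IPs
# with no peering, VPN, or cross-account plumbing in between.
add_regional_subnet("app-us", "us-central1", "10.0.0.0/20")
add_regional_subnet("app-asia", "asia-northeast1", "10.1.0.0/20")
```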
Know how you find all the permissions a single user in GCP has? You have to make 9+ API calls, then filter and merge all the results. They finally added a web tool that tries to "discover" the permissions for a user... you sit there and watch it spin while it madly calls backend APIs to try to figure it out. Permissions for a single user can be granted at the org, folder, project, or resource level (and more I forget), to the user directly or through groups, and there's inheritance to make it more complex. It can take all day to track down every single place a permission could be set for a single user in a single hierarchical organization, or where something is blocking some permission. The complexity only increases as you add more GCP projects, folders, and orgs. But, of course, if you don't do all this, GCP will fight you every step of the way.
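For a sense of what that loop looks like, here's a rough sketch against the Cloud Resource Manager API via google-api-python-client. Note it only covers the org/folder/project levels; per-resource policies (buckets, datasets, ...) and group memberships each need their own calls. All IDs are made up.

```python
from googleapiclient import discovery

MEMBER = "user:alice@example.com"  # hypothetical user

crm_v1 = discovery.build("cloudresourcemanager", "v1")
crm_v2 = discovery.build("cloudresourcemanager", "v2")  # folders live in v2

def roles_for(policy: dict) -> list[str]:
    """Filter a getIamPolicy response down to bindings naming our user."""
    return [b["role"] for b in policy.get("bindings", [])
            if MEMBER in b.get("members", [])]

# One call per level of the hierarchy -- and this still misses groups the
# user belongs to and policies set directly on individual resources.
org_policy = crm_v1.organizations().getIamPolicy(
    resource="organizations/123456789", body={}).execute()
folder_policy = crm_v2.folders().getIamPolicy(
    resource="folders/987654321", body={}).execute()
project_policy = crm_v1.projects().getIamPolicy(
    resource="my-project", body={}).execute()

for level, policy in [("org", org_policy), ("folder", folder_policy),
                      ("project", project_policy)]:
    print(level, roles_for(policy))
```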
Compare that to AWS, where you just click a user, and you see what's assigned to it. They engineered it specifically so it wouldn't be a pain in the ass.
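The boto3 equivalent really is that direct: everything hangs off the user. A minimal sketch (the user name is hypothetical):

```python
import boto3

iam = boto3.client("iam")
user = "alice"  # hypothetical user name

# Everything attached to the user is reachable from the user itself.
attached = iam.list_attached_user_policies(UserName=user)["AttachedPolicies"]
inline = iam.list_user_policies(UserName=user)["PolicyNames"]
groups = iam.list_groups_for_user(UserName=user)["Groups"]

print("managed:", [p["PolicyName"] for p in attached])
print("inline:", inline)
# Group-attached policies take one more call per group, but the groups
# themselves are enumerable straight from the user.
print("groups:", [g["GroupName"] for g in groups])
```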
> Every organization I’ve ever witnessed eventually ends up with some kind of struggle with AWS’ insane organizations and accounts nightmare.
This was an issue in the early days, but it's well solved now with newer integrations/services. Follow their Well Architected Framework (https://docs.aws.amazon.com/wellarchitected/latest/framework...), ask customer support for advice, implement it. I'm not exaggerating when I say this is the best description of the best information systems engineering practice in the world, and it's achievable by startups. It just takes a long time to read. If you want to become an excellent systems engineer/engineering manager/CTO/etc, this is your bible. (Note: you have to read the entire thing, especially the appendixes; you can't skim it like StackOverflow)
The problem is that no company I’ve ever worked for implemented the well architected framework with their AWS environment, and not one company will ever invest the time to make their environment match that level of quality.
I think the web tool you describe for discovering user permissions sounds a lot like the AWS VPC Reachability Analyzer, which I had to live in for quite a while: figuring out where my traffic was getting blocked between an endless array of AWS accounts and cross-region transit gateways was a nightmare that wouldn’t exist with GCP’s global VPCs and project/folder-based permissions.
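For reference, driving Reachability Analyzer from boto3 looks roughly like this; the ENI IDs are made up, and each analysis runs asynchronously, so in practice you poll until the status leaves "running".

```python
import boto3

ec2 = boto3.client("ec2")

# Define the path to test: source and destination ENIs are hypothetical.
path = ec2.create_network_insights_path(
    Source="eni-0123456789abcdef0",
    Destination="eni-0fedcba9876543210",
    Protocol="tcp",
    DestinationPort=443,
)["NetworkInsightsPath"]

# Kick off an analysis; it runs asynchronously on AWS's side.
analysis = ec2.start_network_insights_analysis(
    NetworkInsightsPathId=path["NetworkInsightsPathId"],
)["NetworkInsightsAnalysis"]

# Poll for the verdict; if unreachable, the response names the
# component (SG, NACL, route table, TGW attachment...) that blocked it.
result = ec2.describe_network_insights_analyses(
    NetworkInsightsAnalysisIds=[analysis["NetworkInsightsAnalysisId"]],
)["NetworkInsightsAnalyses"][0]
print(result["Status"], result.get("NetworkPathFound"))
```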
I don’t like the GCP console, but I also wouldn’t consider a lot of the AWS console to be top tier software. Slow/buggy/inconsistent are words I would use with the AWS console. I can concede that AWS has better documentation, but I don’t think it’s a standout, either.
Architecturally I'd go with GCP in a heartbeat. BigQuery was also one of the biggest wins in my previous role. It completely changed our business for almost everyone, vs Redshift, which cost us a lot of money to learn that it sucked.
You could say I'm biased since I work at Google (though not on any of this), but for me it was definitely the other way around: I joined Google in part because of the experience of using GCP and migrating AWS workloads to it.
What are these struggles? The product I work on uses AWS and we have ~5 accounts (I hear there used to be more, TBF), but nowadays all the infrastructure is on one of them and the others are for some niche stuff (tech support?). I could see how going overboard with many accounts could be an issue, but I don't really see issues with having everything on one account.
The way to automate provisioning of new AWS accounts requires you to engage with Control Tower in some way, like the author did with Account Factory for Terraform.
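Under the hood, Control Tower and AFT ultimately drive the Organizations API; the bare primitive looks like this in boto3 (email and account name are hypothetical), though on its own it skips all the baselining and guardrails Control Tower layers on top.

```python
import boto3

org = boto3.client("organizations")

# The bare account-creation primitive; Control Tower / AFT wrap this
# with baselining, SSO setup, guardrails, etc.
resp = org.create_account(
    Email="aws+newapp@example.com",   # hypothetical root email
    AccountName="newapp-prod",        # hypothetical account name
)
status_id = resp["CreateAccountStatus"]["Id"]

# Account creation is asynchronous; poll the request status.
status = org.describe_create_account_status(CreateAccountRequestId=status_id)
print(status["CreateAccountStatus"]["State"])  # IN_PROGRESS / SUCCEEDED / FAILED
```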
Just before they announced that, I was working on creating org accounts specifically to contain S3 buckets, then permitting the primary app to use those accounts just for their bucket allocation.
AWS themselves recommend an account per developer, IIRC.
It's as you say: some policy or limitation might require lots of accounts, and lots of accounts can be pretty challenging to manage.
I have almost 40 AWS accounts on my login portal.
Two accounts per product, one for development environments and one for production environments, every new company acquisition has their own accounts, then we have accounts that solely exist to help traverse accounts or host other ops stuff.
Maybe you don’t see issues with everything in one account but my company would.
I don’t really think they’re following current best practices but that’s a political issue that I have no control over, and I think if you went back enough years you’d find that we followed AWS’ advice at the time.
Undersea cable failures are probably more likely than a Google core networking failure.
In AWS a lot of "global" things are actually just hosted in us-east-1.
Guessing that's similar on the other clouds.
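A concrete example of the us-east-1 coupling: CloudFront, a "global" service, only accepts ACM certificates issued in us-east-1, so even a fully multi-region setup ends up with a hard dependency there. A minimal boto3 sketch (the domain is hypothetical):

```python
import boto3

# CloudFront only accepts ACM certificates from us-east-1, regardless of
# where the rest of your infrastructure lives.
acm = boto3.client("acm", region_name="us-east-1")

cert = acm.request_certificate(
    DomainName="www.example.com",   # hypothetical domain
    ValidationMethod="DNS",
)
print(cert["CertificateArn"])  # arn:aws:acm:us-east-1:...
```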
The routing isn’t centralized, it’s distributed. The VPCs are a logical abstraction, not a centralized dependency.
If you have a region/AZ going down in your global VPC, the other ones are still available.
I think it’s also not that much of an advantage for AWS to be able to say its outages are confined to a region. That doesn’t help you very much if the architecture makes building global services more difficult in the first place. You’re just playing region roulette, hoping that your region isn’t affected, and outages frequently impact all or multiple AZs.