It should be illegal for these companies, just like utilities, to deny service to any person or entity in good standing on their payments.
There is little hope for getting this through in the US where most politicians of any stripe hate the public, and the ones that don't have hardly any power. But it might be possible to do this in the EU.
Then, we non-EU folks need to apply for Estonian e-residency [1] which may get us EU regulatory coverage.
More regulation won't help here, because the regulation-maker is itself the hostile party.
What would help is full control over the supply chain. Hardware that you own, free and open-source operating systems where no single person is the bottleneck to distribution, and free software that again has no single person who is a failure point and no way to control its distribution.
It's easy to paint big government as the bad guy, but this is a case where, unfortunately, the populace seems to be in agreement with the big bad gov. While most US citizens (around 76%) support encryption, a majority (around 63%) also favor government "backdoor" access for national security reasons.
I guess either we believe in democracy or we don't. It could be said that if VeraCrypt isn't/can't be backdoored, perhaps the gov is simply implementing the will of the people :( via Microsoft.
We're in an interesting spot here and the tension is tangible.
WASHINGTON, DC—Assuming that there must be a good reason for the order, U.S. citizens lined up at elementary schools and community centers across the nation Monday for government-mandated fingerprinting. “I’m not exactly sure what this is all about,” said Ft. Smith, AR, resident Meredith Lovell while waiting in line. “But given all the crazy stuff that’s going on these days, I’m sure the government has a very good reason.” Said Amos Hawkins, a Rockford, IL, delivery driver: “I guess this is another thing they have to do to ensure our freedom.”
(source: The Onion, October 9, 2002[1])
[1] https://theonion.com/american-people-shrug-line-up-for-finge...
There are legitimate reasons for governments to intercept information, with the correct oversight -- enforced legally in a "checks and balances" manner. The fact that there is a breakdown of trust between government and people won't be solved with more encryption.
If, in a democratic society, the majority agrees that government should have backdoors (with the correct oversight), then it follows that VeraCrypt should be illegal, as its use is not in alignment with the will of the majority.
I personally don't agree with the majority here but can you fault the logic?
In the U.S. in particular, there's strong respect for individual rights enshrined in the Constitution, and a key role of the judicial branch is ensuring that those rights are respected regardless of what the majority thinks. The majority cannot enslave the minority, for example, regardless of what the legislature votes. Nor can it deprive it of speech or free assembly, or guns, or a right to trial by jury.
if only it were so simple
aka leave it to the experts because the majority isn't qualified to make such decisions.
Don't do math that way! That math is illegal! Good boys and girls don't keep secrets!
These people sound ridiculous
Could this be the one exceptional case where people agree with the direction of policymaking? Sure. Is that likely? No, not really.
Also, “there is no appeal possible” should be plainly illegal.
There’s no apparent mechanism to do so. Support was clueless. The privacy email address responded weeks later with “not our department”.
"I'm doing it wrong and it doesn't work" means you're doing it wrong, not that it doesn't work.
And https://www.facebook.com/help/contact/178402648024363 doesn't work either. Black hole, as far as I can determine.
Their chatbot, when asked, sends you to https://help.meta.com/support/privacy/ and says:
> To submit a GDPR objection request on Facebook, you can use the Privacy Rights Request channel.
> Select Facebook as the product you want to submit an objection about.
> Choose the option "How can I object to the use of my information" and follow the instructions.
But that option doesn't exist.
"In addition to the information referred to in paragraph 1, the controller shall, at the time when personal data are obtained, provide the data subject with the following further information necessary to ensure fair and transparent processing: the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject."
EDPB Guidelines on automated decision making: https://ec.europa.eu/newsroom/article29/items/612053 especially page 25 is relevant
C‑634/21 is also somewhat relevant for understanding how courts have applied ADM in the general context of credit reporting https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A... though it didn't specify what information actually needs to be provided for 13(2)(f).
I don’t know the number. But personally, I think using these services only where their disappearance wouldn’t be catastrophic, and paying a low price (or nothing) while they work, isn’t too bad a trade-off.
Admittedly that’s a big ‘if.’
If this requirement were in place, they would be a bit more careful about terminating accounts, because the cost equation would incentivize care. Maybe they would be more careful with their automation, or require more than one level of human review before cutting off access.
These companies are gatekeepers for their platform. It isn’t crazy to require them to act more responsibly.
Start worrying about the erosion of your rights as a consumer.
For instance, I don't think it is possible to this day to operate a Mastodon server and be compliant with both the GDPR and the UK Online Safety Act. There was the famous case of the LFGSS forum, about to shut down due to the latter; the forum was more or less saved by a group of individuals willing to take the risk, but the founder stepped down out of fear of legal liability.
No homes have been raided and no servers or personal computers seized yet, but that doesn't mean it can't happen. Technically, any EU or UK volunteer hosting a forum or open-source social media that isn't GDPR or Online Safety Act compliant could be at risk. For most, I believe it's not that they don't want to be compliant; they aren't aware of the obligations and/or don't have the technical means without further development of the software they are using. And despite these sites not fully honoring their own users' rights, most of their users would be sadder to see them shut down than to continue the current status of not obeying the law.
It wouldn't. For example, before Gmail, email was often free or nearly free (bundled with your internet service), but in most cases, you could talk to a human if you had issues with the service.
What we couldn't do is turn these business models into planetary-scale behemoths that rake in hundreds of billions of dollars in revenue. In essence, you couldn't have Google or Facebook with good customer support. I'm not here to argue that Google or Facebook are a net negative, but the trade-offs here are different from what you describe.
The contrasting approach, where one designs a platform that remains secure even if the owner is allowed to run whatever software they like, may be more complex but is overall much better. There aren’t many personal-use systems like this, but systems like AWS take this approach and generally do quite well with it.
There's a lot one can gripe about Amazon as a company, but credit where credit is due -- their inversion of responsibility is game-changing.
You see this around the company, back to their "Accept returns without question" days of mail order.
Most critically, this inversion turns customer experience problems (it's the customer's problem) into Amazon problems.
Which turns fixing them into Amazon's responsibility.
Want return rates to go down because the blanket approval is costing the company too much money? Amazon should fix that problem.
Too often companies (coughGoogleMicrosoftMetacough) set up feedback loops where the company is insulated from customer pain... and then everyone is surprised when the company doesn't allocate resources to fix the underlying issue.
If false-positive account bans were required to be remediated manually by the same team that owned automated banning, we'd likely see a different corporate response.
"Financially, it was a year of record performance. Revenue was $281.7 billion, up 15 percent. Operating income grew 17 percent to $128.5 billion." https://www.microsoft.com/investor/reports/ar25/index.html
So don't be so naive as to tell us that 1-2 additional people to handle the appeal process are anything but a rounding error on their balance sheet.
Do not discount complete, total, utter, profound fucking incompetence as the driving reason behind this.
Getting the business verification was an astounding shitshow, even with a registered C corp and everything: massively unclear instructions, UI nestled in a partner site with tons of dead ends. And then, even after all the docs, it took another week because -- in an action that nobody could possibly have foreseen -- we had two different Microsoft accounts, due to a cofounder buying ONE LICENSE of O365 for Excel and doing domain verification because it suggested it.
<Tin foil hat on> Microsoft doesn't want to allow software that would allow the user to shield themselves, either by totally encrypting a drive, or by encrypting their network traffic! </Tin foil hat on>
I don't think Microsoft cares (about anything besides making mo' money), but there are plenty of (state) actors that can influence the decision-making at Microsoft when it comes to these issues.
No tinfoil needed.
That's what Big Tinfoil wants you to believe!
https://www.tiktok.com/@etong_winter_palikir/video/739554877...
Microsoft the corporation may only care about making money, but a lot of very high ranking folks within MS Security aren't just friendly to intelligence agencies, they take genuine pride in helping intelligence agencies. They're the kinds of people who saw nothing wrong or objectionable with PRISM whatsoever, they were just mad they got caught, and that the end user (who they believe had no right to even know about it) found out anyway. The kind of people who openly defend the legitimacy of the FISA court.
These aren't baseless accusations; this comes from first-hand experience interacting with and talking to several of them. Charlie Bell literally kept a CIA mug on a shelf behind him, prominently visible during Teams calls, as if to brag.
Remember - Microsoft was the very first company on the NSA's own internal slide deck depicting a timeline of PRISM collection capabilities by platform, started all the way back in 2007. All companies on that slide may have been compelled to assist with national security letters. Some were just more eager than others to betray the privacy and trust of their own customers and end-users.
I was always convinced that Skype was bought by Microsoft so that CIA/US intelligence agencies could have listening capabilities.
The first thing Microsoft did after the Skype purchase was to make it easier to tap into calls, by removing p2p calling and routing calls through centralized servers.
The catch is, views like those must be kept to a fairly modest level by the people who hold them. Discussing them with ideologically aligned colleagues may be fine, but for example, when someone makes statements or asks questions with such pro-privacy framing on stage directly to security leadership at internal company conferences, that is a quick way to a severance package not only for the person on stage, but also for dozens of folks in the audience who clapped a little too enthusiastically at the onstage remarks.
If Microsoft amounts to a sentient entity (i.e. is able to care about things), we have a bigger problem.
If we put the wall of metaphor between us and that interpretation, it still remains likely that "users shielding themselves" is primarily a concern for Microsoft's bottom line.
At least it reached its goal if it entertained you
It also reminds me of the case of the entire family who lost all of their payment-linked individual accounts including business data and an academic dissertation because the son allegedly behaved inappropriately with a bot. Collective punishment on top of technofeudal instant banishment.
Microsoft even supports Wireguard in Azure Kubernetes Service.
https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...
?
They've since moved on to the SSS strategy: Ship, Slip, Slop.
Who cares if it's OSI-approved or not, a line saying "M$, Google, and the like need written permission for every use case" would help to make those leeches honest. Just learn from the JSLint example.
plus n-word dot com hosts information about the plus n-word license, which purports to require that:
- The software will not be used or hosted by western corporations that promote censorship
- The software will not be used or hosted by compromised individuals that promote censorship
- Users of the software will be immune to attacks that would result in censorship of others
That would be both hilarious and horrifying if the only thing stopping the corporate dystopia is that Microsoft doesn't want to say the N word.
Valkey is better because all of the new development work happens on Valkey, not because of the license. If the actual developer changed the license, that would be a different situation.
In digital services there's no such thing. There's only a damned corporation employing idiots who don't care about community.
But yes, there's a lot of critical single maintainer projects.
It's outrageous. MS is simply enforcing some Government crackdown on encryption software that would interfere with backdoors.
This is the same thing that's happened every time I've tried to have a Microsoft account. I don't think Microsoft wants to have customers who aren't rich.
Nothing in the Apple site or phone stuff would even clue the user in to what was happening, much less how to resolve it.
60 days, long enough for the US to exploit the vulnerabilities discovered by Claude Mythos, short enough to plausibly be bureaucratic corporate awfulness by Microsoft when all is said and done. Basically freezing you and other security software out of protecting the bad guys they particularly want to get at until after the bad guys get got, then everything goes back to normal and Microsoft says "oops, here, we fixed your access."
> Effective October 16, 2025, Microsoft will initiate mandatory account verification for all partners in the Windows Hardware Program who have not completed account verification since April 2024.
> Partners who fail to complete Account Verification by the deadline, or who do not meet the requirements, will have their status set to Rejected and will be suspended from the program.
This is stupid. If Microsoft wants people to stop writing kernel drivers, that's potentially doable (we just need sufficient user mode driver equivalents...) but not doing that and also shortening the list of who can sign kernel drivers down to some elite group of grandfathered companies and individuals is the worst possible outcome.
But at this point I almost wish they didn't fix it, just to drive home the point harder to users how little they really own their computer and OS anymore.
https://github.com/rustdesk/rustdesk/discussions/13025 https://github.com/microsoft/winget-pkgs/pull/345601
tl;dr: ESET Antivirus flags RustDesk as a "Potentially Unsafe Application" because it is a remote administration tool (despite not flagging similar commercial products the same way), and the WinGet Community repo policy is to block anything flagged as such. Since they were unable to update the repo, the RustDesk team requested that the older versions be removed, to prevent users from unknowingly installing old versions that could become a security issue in the future. Apparently this has been an issue for a lot of applications, especially in the VPN and remote-control categories.
There is a discussion about how best to handle these sorts of situations where legitimate and desirable applications get flagged as "potentially unsafe" or "potentially unwanted" but so far it's just been a discussion with no actual changes proposed yet.
They always just tell me to ask copilot, then they open a case using copilot, and then they tell me to ask copilot again. I said I wanted to prove that the code didn't contain malicious code, and they still told me to ask copilot...
This account has been suspended because the code you submitted contains malware or potential vulnerabilities. If you believe your account was suspended in error and can demonstrate that the code you submitted does not contain malware or vulnerabilities, please follow the below steps and contact us: 1. Go here: http://aka.ms/hardwaresupport 2. Click Contact Us 3. Make sure you are signed in with a user associated with the HDC account in Partner Center 4. Select Ask Copilot to receive email support.
Windows users are in a tough spot, but with the dawn of Copilot, nobody should be surprised. Frankly, those who remain with Windows after this latest betrayal have chosen their fate.
Ah. So almost every single business in the world… suckers?
because most managers I know in my professional life go with the vendor that buys them dinner or slips them tickets for box seats.
I wouldn’t be surprised if NSA already had a list of these applications and the strategies on how to cripple them or worse, compromise them.
No one is calling an executive meeting to discuss banning an OSS dev’s account.
"Currently undergoing some sort of 60 days appeals process, but who knows."
.. and the op said:
"I have tried to contact Microsoft through various channels but I have only received automated replies and bots. I was unable to reach a human."
... which is a roundabout way of saying you did not spend lawyer hours and you did not contact them through channels that they cannot ignore: registered, physical mail, from a lawyer.
I'm sorry for these difficulties, truly, but don't tell me you can't reach a human when you most definitely can reach a human. From my own experience with an organization at least as calloused and indifferent as MS[1], as soon as I sent a real, legal communication I had real live humans lining up to talk to me.
[1] Pacific Gas and Electric
Sometimes, it's both incompetence AND malice.
Honestly, anyone still using Windows probably deserves it.