You'll just have to wait. It's a volunteer-powered system, and while the ops are silent and terse in their mails, they're nice people.

Their plate is already quite full and they operate a whole universe of services, so cut them some slack.

It's not an ordinary internet-facing service trying to turn a profit. They run SDF itself, two Mastodon instances, a mail server, a Git server, and are trying to salvage and keep alive a living computer museum (SDF Vintage Systems), etc. etc.

reply
I tried signing up for their Mastodon three times and never received the acceptance email. It's a shame, because I wanted to be part of their community.
reply
Unless things have changed recently (in the last year or so), the SDF Mastodon servers are really slow and terrible at federating. They even had an incident where the servers failed, everyone lost their posts, and had to start over. The downtime was terrible.

SDF welcomed everyone openly during the initial Mastodon waves, so it was all very Eternal September.

If you're joining to make a spare account to participate with SDF people, awesome! But if you want it as your identity for all of Fedi, I think that would be a bad experience. I ended up getting my own MastoHost account for a while and it was a vastly better experience, until I burned out on Fedi.

SDF is a super fun place to experiment with Gopher though. I absolutely recommend getting your own Gopherhole on SDF. It's like the old Geocities days but in ASCII. (And make sure you grab Lagrange as your GUI Gopher / Gemini client. I liked Phetch as my terminal Gopher client.)

reply
The performance hit was due to database work they were doing on the instance. Now it's a lot faster. The latest announcement reads as follows:

    We've completed our first phase of database clean up, thank you for your patience.  The impact on performance was heavy, but it was a necessary step.  All active users and their posts, profile, connections and media will be migrated to the new servers.  Once that has been completed, any remaining data will stay online for further migration and clean up.  Our instance is nearly 10 years old of constant daily operation, but we ran into a migration wall which held us back on 4.1.x.  Now that it is deprecated, we will do our best to jump to the latest version rather than migrate through.  Your support and patience has been greatly appreciated.
reply
Which one? There are two instances, one for members, and one for everyone.
reply
I didn't know there were two, so probably the one for everyone. Maybe I should join and try the other one.
reply
I get that it's a volunteer system, but having donated for two years to help support their Lemmy instance, it's frustrating that it's been down for two weeks without much of an update, just a hint that "there's a good chance" it will come back. To me that seems like a lack of transparency, not terseness. How much disk space is it using? Maybe others in the community could help? How can they, if the ops don't respond to emails? It was a nice thing while it lasted, but for federated social media, that kind of downtime hurts communities the most.
reply
Don't publish. You already notified them, your shell escape isn't a big deal, publishing it will only be a pain for the volunteers running the service.
reply
> your shell escape isn't a big deal

You can't have it both ways: if it's not a big deal, then he can publish it.

If you say "Don't publish", then you acknowledge that it's a big deal.

I say to GP: "Congrats on finding a shell escape; it's always a big deal. But don't publish it... yet."

Give them a chance to fix it. But if they don't even answer the emails, even just to say "thx, we're busy, we can't fix it right now but will do", then at some point you just publish.

It doesn't take long to answer an email saying "thanks, we'll fix it eventually".

reply
"We'll fix it eventually" is not good enough. If a human can find a flaw, then a bot can find the same flaw, and the bots are always watching and always testing. If someone can't commit to immediate security response when running a public-facing internet service then they should not be running that service, because the rest of the internet will not forgive them when their machine gets popped and becomes everyone else's problem.

If they can't commit to a hard timeline of less than a few days, then publish. What happens next is not your fault - it was inevitable anyway.

Edit for clarity: This is just in general, not specifically SDF or small orgs or large orgs. The internet does not care about the difference. The internet just does not care period. Nobody is going to give anyone else any breaks, and especially not a botnet.

reply
Definitely wait at least a few months if you've not already. There are legal risks with these kinds of things and some orgs move slowly.
reply
I think you should create some visible but harmless nuisance using this shell escape, so that it's likely to get noticed, but doesn't damage anyone's valuable data.

Perhaps just run "bash -c 'stress --cpu 64; echo fix your shell escape'" or something like that.

reply
Creating a nuisance is not a good way to go about it.

Some security practices sometimes feel like someone stabbing you just to prove you could be stabbed. Then they point at the wound and say: "See? You should be more careful."

Yes, the risk is real, but creating harm to demonstrate it isn't the same as protecting people.

reply
Well, ruining everyone's day on that particular host is not a nice way to "bring this to attention".

If I ever experienced something like that, I'd be banning the person (or limiting their resources drastically) for 60 to 90 days to bring the impact of this matter to their attention.

Anything affecting users on a system is not harmless.

reply
I did it too, but TBH, since I only used small tools such as tcc, jimsh, eforth+muxleq, sacc, smu, catpoint+pointtools, and the compilers from https://t3x.org... I didn't care much about the rest; I'm pretty happy with my current account.

You can do a lot with S9 Scheme and the Unix API/syscalls it supports.

reply