Mossy Feathers (They/Them)

  • 18 Posts
  • 1.11K Comments
Joined 1 year ago
Cake day: July 20th, 2023

  • I think you’re thinking of Slayers X: Terminal Aftermath: Vengance of the Slayer, which is not a shitpost but the greatest fps game ever made, you turd!

    (It’s a spinoff of Hypnospace Outlaw, which I highly recommend, especially if you’re old enough to remember the internet pre-Y2K, because it’s basically a very high-quality pre-Y2K internet simulator. Also, it doesn’t use the same “S”, but it is a parallel-universe version of it.)

  • That chart kinda confuses me.

    For one thing, I feel like cars shouldn’t be included because that’s people being idiots in cars. I guess if the park is designed in such a way that cars can be a real hazard (like trails that cross roads), then it would make the park more dangerous, but idk; seems weird to include them when people get run over all the time, regardless of whether they’re in a national park or not.

    I guess what I’m trying to say is that it’s not a hazard unique to the area, so unless the park is poorly designed in a way that notably amplifies the danger of vehicles, it probably shouldn’t be included.

    (Edit: I just realized it’s also counting suicides, which, again, people can commit anywhere. Why is that being counted against the park?)

    Another thing is that there are multiple “dangerous” parks that have few or no fatalities and very few SAR operations. I know it’s based on per million visitors, but when you have fewer than 5~10 of each, to me that’s a fluke. The park could be extremely safe and have had one visitor group or family that did something dumbfoundingly stupid and got themselves lost or killed.

    If I understand the chart, SAR incidents are per million visitors in 2023, while fatalities per million are counted over the 2007-2023 period. The former is reasonable given that the listed visitor count is for 2023, but fatalities per million is going to get fucky when you have fewer than a million visitors a year and the count spans a 16yr period (rough sketch with made-up numbers below). It’d be more reasonable to either list fatalities per million for 2023 alone, or use the average visitor count and SAR incidents per million over the 2007-2023 period.

    It’s an interesting chart, but I think the methodology might be flawed. I’m curious if anyone else feels the same way.
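
    To make the mismatch concrete, here’s a rough Python sketch with entirely made-up numbers (a hypothetical small park; nothing below comes from the actual chart):

    ```python
    # Hypothetical small park: ~500k visitors per year, 8 fatalities total
    # over the chart's 2007-2023 window (treated as 16 years, per the comment
    # above). All numbers are invented purely to show the arithmetic.
    visitors_2023 = 500_000
    years = 16
    total_fatalities = 8

    # What the chart appears to do: divide 16 years of fatalities
    # by a single year's visitor count.
    rate_mixed = total_fatalities / visitors_2023 * 1_000_000

    # Apples-to-apples alternative: divide by cumulative visitors over the
    # same window (assuming roughly flat visitation for simplicity).
    cumulative_visitors = visitors_2023 * years
    rate_consistent = total_fatalities / cumulative_visitors * 1_000_000

    print(f"mixed-period rate:      {rate_mixed:.1f} per million")       # 16.0
    print(f"consistent-period rate: {rate_consistent:.2f} per million")  # 1.00
    ```

    With those made-up numbers, the mixed method inflates the rate sixteen-fold, which is exactly the kind of distortion that would make a small park look far more dangerous than it is.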

  • I’m… honestly kinda okay with it crashing. It’d suck, because AI has a lot of potential outside of generative tasks, like science and medicine. However, we don’t really have the corporate ethics or morals for it, nor do we have the economic structure for it.

    AI at our current stage is guaranteed to cause problems even when used responsibly, because its entire goal is to do human tasks better than a human can. No matter how hard you try to avoid it, even if you do your best to think carefully and hire humans whenever possible, AI will end up replacing human jobs. What’s the point in hiring a bunch of people with a hyper-specialized understanding of a specific scientific field if an AI can do their work faster and better? If I’m not mistaken, normally having some form of hyper-specialization would be advantageous for the scientist because it means they can demand more for their expertise (so long as it’s paired with a general understanding of other fields).

    However, if you have to choose between 5 hyper-specialized and potentially expensive human scientists, and an AI designed to do the hyper-specialized task plus 2~3 human generalists to design the input and interpret the output, which do you go with?

    So long as the output is the same or similar, the no-brainer would be to go with the 2~3 generalists and the AI; it would require less funding and possibly less equipment - and that’s before considering that, from what I’ve seen, AI tends to outperform human scientists at hyper-specialized tasks (though you still need scientists to design the input and parse the output). As such, you’re basically guaranteed to replace humans with AI.

    We just don’t have the society for that. We should be moving in that direction, but we’re not even close to being there yet. So, again, as much potential as AI has, I’m kinda okay if it crashes. There aren’t enough people who possess a brain capable of handling an AI-dominated world yet. There are too many people who see things like money, government, economics, etc. as some kind of magical force of nature and not as human-made systems that only exist because we let them.