Ooh never heard that but it kinda makes sense
No doubt. I think an easy way to counter that is to put a “deliberation” time on legislation. I’m spitballing but maybe require two votes 3 months apart, and they must both agree (otherwise there’s a third tiebreaker vote another 3 months later)? That would help kill off the flash fire effect that a viral meme can create and focus more on fixing problems that occur over a longer period of time.
I mean I’m no political scientist so I’d love to hear more about what methods are proven for direct democracy.
And that’s why everybody gets to be “Dr. B.” to me. There’s no way I can pronounce that foreign name!
Honestly with the way the internet exists now, we might feasibly be able to do something closer to direct democracy.
But good luck convincing the people in charge to lay down their power.
I mean with only 300 billion, we could make 300,000 new millionaires. If you took that from Musk, he’d still have over 100 billion dollars all to himself.
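A quick back-of-the-envelope check of the numbers above (the $300 billion figure and the "over 100 billion left" remainder are from the comment; the exact net worth is assumed, not verified):

```python
# Sketch of the arithmetic in the comment above.
taken = 300_000_000_000        # $300 billion redistributed
grant = 1_000_000              # $1 million per person

# How many new millionaires that much money could mint:
new_millionaires = taken // grant
print(new_millionaires)        # 300000, i.e. 300,000 people

# Assumed starting net worth of ~$400B+ implies a remainder over $100B:
assumed_net_worth = 400_000_000_000
print(assumed_net_worth - taken)  # 100000000000
```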
The hell are you talking about? It’s right there in the article. But maybe you didn’t read it?
Ad hominem attacks like the ones you’re using are a sign you don’t have anything useful to say.
Stupid users send private keys and other secrets to their AIs all the time. This is a big fucking threat to US global imperialism.
The US trusts OpenAI (even if they shouldn’t) to not send hackers after US companies. They definitely don’t trust Chinese companies to have the same restraints.
Nah, I’m speaking from the perspective of the US, since the article is about US policy. The decision making is obvious when you’re thinking at a national protectionist level.
Obviously privacy violations are bad for the user regardless. Never trust your corporations or government!
Well yeah, it’s obviously more of a risk to send directly to your rival than internally. Both are risky but one is much, much worse.
Not really, on virtually any poll with enough people, something like 10-20% of people will always be contrarian, no matter the question. You could ask if kicking puppies is evil and 15% would answer “it’s okay sometimes, as a treat”.
Fair enough, it’s not source code, so open source doesn’t apply.
Training code created by the community always pops up shortly after release. It has happened for every major model so far. Additionally you have never needed the original training dataset to continue training a model.
No, but I do call a CC-licensed PNG file open source even if the author didn’t share the original layered Photoshop file.
Model weights are data, not code.
It’s just AI haters trying to find any way to disparage AI. They’re trying to be “holier than thou”.
The model weights are data, not code. It’s perfectly fine to call it open source even though you don’t have the means to reproduce the data from scratch. You are allowed to modify and distribute said modifications so it’s functionally free (as in freedom) anyway.
Interesting conclusion. LLMs are inherently 1D in nature (they process flat token sequences), and ARC is a 2D task. LLMs can emulate 2D reasoning for sufficiently small tasks, but suffer greatly as the size of the task increases. It’s like asking humans to solve 4D problems.
This is probably a fundamental limitation in LLM architecture and will need to be solved someday, presumably by something completely different.
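A minimal sketch of the 1D-vs-2D point above: a grid like an ARC puzzle gets flattened into a single token sequence before an LLM sees it, so vertically adjacent cells end up far apart in the input. (This is a toy row-major serialization for illustration, not any specific model’s actual tokenization.)

```python
# Toy illustration: flattening a 2D grid into the 1D sequence an LLM sees.
grid = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]

# Row-major flattening, a common way to serialize a grid into tokens.
flat = [cell for row in grid for cell in row]

# Horizontally adjacent cells (2 and 3) stay 1 position apart...
print(flat.index(3) - flat.index(2))  # 1

# ...but vertically adjacent cells (2 and 5) are a full row-width apart,
# and that gap grows with grid size — the 2D structure is only implicit.
print(flat.index(5) - flat.index(2))  # 3
```

The larger the grid, the further apart vertical neighbors drift, which is one plausible reason performance degrades as task size grows.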
I think the explanation might be even simpler - right wing content is the lowest common denominator, and mindlessly watching every recommended short drives you downward in quality.
I suspect we’ll have another Rosa Parks moment for the history books sometime in the next four years or so.
Hmmm, I don’t think I will.
Pubkit.net looks quite useful. I may be testing that out soon.
I fail to see how that’s different from the way it currently works, except you get the tyranny of the far-right minority instead of the tyranny of the majority.
Or another way to look at it, with your analogy, instead of two wolves, you have one professional career wolf who is far more effective at his job.