I wish more guys just said they didn’t know something instead of clearly not knowing what they’re talking about and running their mouth based on vibes
I sort of agree, but I think it depends on effort.
Type one word in and try to sell the first generated image? Low value.
But crafting the right prompts to generate assets for something larger than the model could produce on its own? That's more valuable.
Criticizing AI or artists that leverage AI is like criticizing an artist for using a printer instead of drawing by hand
Or saying someone’s digital work is inferior because they used a tool to help make their image…
On that note, when working on a large project, is an AI artist as pretentious as the artist in the comic because they got some help generating the project from an AI instead of from another human? Is someone's work ethic less credible because they Googled something instead of asking a person? Are works of art only valuable when they're entirely original and uninfluenced by anything but the artist themselves? Because by that metric no artists are valuable, since nothing is entirely original anyway.
25% of Reddit comments are ChatGPT trash, if not worse. It used to be an excellent open-source intelligence tool, but now it's just a bunch of fake-supportive and/or politically biased bots.
I will miss Reddit's extremely niche communities, but I believe Lemmy has hit the inflection point where it will eventually develop the same level of niche communities.
Don’t tell him, if too many people get ad blockers they’re just going to keep evolving
Meanwhile: NixOS
538's model was a good estimator that year too; they leaned towards Hillary (and to be fair, she did win the popular vote) but certainly kept a Trump win in the swing states within the margin of error.
270toWin is another good site.
I'll look into LN more. I'm familiar with the centralization concerns (though I still think they can be mitigated until more upgrades land), but I'm not familiar with the costs you're bringing up. Fee estimators notoriously round up; I've never spent more than a dollar, but that's anecdotal.
BCH is still an attempt at centralization from Bitmain, a company which literally installed kill switches in their miners without telling anyone and ran botting attacks in /r/Bitcoin and /r/BTC during that fiasco. The hard fork they created is absolutely more centralized than Bitcoin.
There will be a time when something as risky as a hard fork for a block size upgrade makes sense, but taking on that much risk for the sake of a single upgrade doesn't make sense to me. If a hard fork must happen, it might as well include other BIPs that necessitate a hard fork, like drivechain.
In the meantime, soft fork upgrades that enable more efficient schemes like Schnorr and SegWit have scaled TPS without wasting block space. BCH is cheap because there's no demand or usage.
Fiat makes itself obsolete
Bitcoin Cash was an attempt at centralized control by Jihan Wu. Just because the block size is bigger doesn't mean it's better for decentralization. In fact, the increased cost of maintaining a node just makes it harder for people in (typically poorer) oppressive countries to self-verify.
They are still increasing the TPS; the Lightning Network isn't perfect, but it can scale beyond Visa until more upgrades are implemented.
Ollama (plus a web UI if you like, but ollama serve running in the background followed by ollama run is all you need), then compare and contrast the various models.
I’ve had luck with Mistral for example
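Something like this is all it takes to line models up against the same prompt, by the way; just a rough sketch assuming Ollama is serving on its default port 11434 and that you've already pulled the models listed (the model names and prompt here are only examples):

```python
# Rough sketch: compare two local Ollama models on the same prompt.
# Assumes `ollama serve` is running on the default port and the models
# have already been pulled (e.g. `ollama pull mistral`).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def generate(model: str, prompt: str) -> str:
    """Send a non-streaming generate request and return the model's reply."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    prompt = "Summarize the trade-offs of running LLMs locally in two sentences."
    for model in ("mistral", "llama2"):  # swap in whatever models you've pulled
        print(f"--- {model} ---")
        print(generate(model, prompt))
```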
Russia (allegedly) has elections too however
I've tried a few IDEs, mainly Microsoft ones recently, but I still prefer my SpaceVim-on-Neovim setup. Microsoft has a very nice debugger and other useful features for navigating large software projects, but even on my 3080 / 12th-gen i7 rig with 32GB of RAM, the plugins I use end up slowing things down. Plus, a similar debugger interface can normally be found as an init.toml layer.
With SpaceVim, I can specify which plugins get loaded for which file types, so my LaTeX plugins don't interfere with my Python plugins, for example.
Also, the macro language locks me into vim; I even installed Vimium keybinds for my browser. SpaceVim is nice because you can see the whole tree of available keybind options by pressing Space.
I mentioned SpaceVim/Spacemacs because your post focused on emacs/vim; if you do choose either to build an IDE in, I'd imagine Spacemacs/SpaceVim is already a little closer to an IDE than a plain text editor.
SpaceVim is nice because it will auto-install the plugins declared in init.toml; with vanilla vim or neovim you sometimes need to install a plugin manager separately.
I like SpaceVim a lot (it's inspired by Spacemacs), and you can use neovim as the underlying vim package as well. Then update init.toml with whatever layers/plugins you want.
Hmm true that is a concern
I'm just speculating here, but I remember way back when reddit was just a bunch of shitty HTML, CSS, and blue links. People would joke it would weed some types of people out.
Maybe the complicated nature of federated web apps will drive away a similar crowd
buut iiim laaazyyy
Hmm yea I don’t like it much either, however, I remember /r/technology got progressively worse and the alternative was just a shittier subreddit with a slightly different name.
Unison would be nice, but it’s not so different from reddit come to think of it
Thanks for the feedback! I also asked a similar question on the AI Stack Exchange and got some helpful feedback there.
It was a great project for brushing up on seq2seq modeling, but I decided to shelve it since someone released a polished website doing the same thing.
The idea was that the vocabulary of music composition is chords, while the sentences and paragraphs are measures: a measure is a sequence of chords, and a piece is a sequence of measures.
I think it's a great project because the vocab size and max sequence length are much smaller than what's typical for transformers applied to LLM tasks like digesting novels, for example. So on consumer-grade hardware (12GB VRAM) it's feasible to train a couple of different model architectures in tandem.
Additionally, nothing "sounds bad" in music composition; it's up to the musician to find a creative way to make it sound good. So even if the model is poorly trained, as long as it doesn't output EOS immediately after BOS and the sequences are unique enough, it's pretty hard for it to produce something that doesn't still work.
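To give a rough idea of what the chord-level tokenization boils down to (none of this is from the repo; the chord symbols, special tokens, and sizes are just illustrative):

```python
# Hypothetical sketch of chord-level tokenization for a seq2seq model.
# The chord symbols, special tokens, and sizes are illustrative assumptions,
# not taken from pyRealFakeProducer.
BOS, EOS, PAD = "<bos>", "<eos>", "<pad>"
chords = ["Cmaj7", "Dm7", "G7", "Am7", "Fmaj7", "E7", "A7", "D7"]  # tiny vocab
vocab = {tok: i for i, tok in enumerate([PAD, BOS, EOS] + chords)}
inv_vocab = {i: tok for tok, i in vocab.items()}

def encode(measure, max_len=16):
    """Turn a measure (list of chord symbols) into padded token ids."""
    ids = [vocab[BOS]] + [vocab[c] for c in measure] + [vocab[EOS]]
    return ids + [vocab[PAD]] * (max_len - len(ids))

def decode(ids):
    """Drop special tokens and map ids back to chord symbols."""
    specials = {vocab[PAD], vocab[BOS], vocab[EOS]}
    return [inv_vocab[i] for i in ids if i not in specials]

if __name__ == "__main__":
    measure = ["Dm7", "G7", "Cmaj7"]      # a ii-V-I in C
    ids = encode(measure)
    print(ids)          # [1, 4, 5, 3, 2, 0, 0, ...]
    print(decode(ids))  # ['Dm7', 'G7', 'Cmaj7']
```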
It’s also fairly easy to gather data from a site like iRealPro
The repo is still disorganized, but if you’re curious the main script is scrape.py
https://github.com/Yanall-Boutros/pyRealFakeProducer
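This isn't the repo's actual scrape.py, just a hand-wavy sketch of the idea, assuming the charts are shared as irealb:// or irealbook:// links embedded in a forum page (the URL and regex are placeholders):

```python
# Hypothetical sketch, not the repo's scrape.py: pull iReal Pro share links
# out of a web page. The page URL and the link format are assumptions.
import re
import urllib.request

PAGE_URL = "https://example.com/some-ireal-playlist-page"  # replace with a real playlist page

def fetch_chart_links(url):
    """Return any irealb:// or irealbook:// share links found in the page HTML."""
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    return re.findall(r'href="(irealb(?:ook)?://[^"]+)"', html)

if __name__ == "__main__":
    for link in fetch_chart_links(PAGE_URL):
        print(link)
```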