• 0 Posts
  • 44 Comments
Joined 18 days ago
Cake day: February 10th, 2025

  • I used 3.7 on a project yesterday (refactoring to use a different library). I provided the documentation and examples in the initial context and it refactored the code correctly. It took the agent about 20 minutes to complete the rewrite and it took me about 2 hours to review the changes. It would have taken me the entire day to do the changes manually. The cost was about $10.

    It was less successful when I attempted to YOLO the rest of my API credits by giving it a large project (using langchain to create an input device that uses local AI to dictate as if it were a keyboard). Some parts of the code are correct; the langchain stuff is set up as I would expect. Other parts are simply incorrect and unworkable: it assumed it could bind global hotkeys in Wayland, configuration required editing Python files instead of pulling from a configuration file, it created install scripts instead of PKGBUILDs, etc.

    I liken it to having an eager newbie. It doesn’t know much, makes simple mistakes, but it can handle some busy work provided that it is supervised.

    I’m less worried about AI taking my job than about my job turning into being a middle-manager for AI teams.

  • When we’re talking about legal issues, the terms are important.

    Copyright violation isn’t stealing. It is, at worst, a civil matter where one party can show how they’ve been harmed and recover damages. In addition, copyright law allows use of a copyrighted work without the author’s permission in some circumstances.

    You’re simply stating that ‘AI is stealing’ when that just isn’t true. And, assuming you mean a violation of copyright: if it were a civil violation, exactly how much would the model’s creators owe in damages for any given piece of art? This kind of case would have to be litigated as a class action lawsuit and, if your “AI is stealing / committing mass copyright violation” theory is correct, then there should be a case where this has been successfully litigated, right?

    There are a lot of dismissed class action lawsuits on the topic, but you can’t find any major cases where this issue has been resolved according to your “AI is stealing” claim. On the other hand, there ARE plenty of cases where Machine Learning (the field of which generative AI is a subset) using copyrighted data was ruled as fair use:

    (from https://www.cjr.org/the_media_today/an-ai-engine-scans-a-book-is-that-copyright-infringement-or-fair-use.php )

    > Google has won two important copyright cases that seem relevant to the AI debate. In 2006, the company was sued by Perfect 10, an adult entertainment site that claimed Google had infringed its copyright by generating thumbnail photos of its content; the court ruled that providing images in a search index was “fundamentally different” from simply creating a copy, and that in doing so, Google had provided “a significant benefit to the public.” In the other case, the Authors’ Guild, a professional organization that represents the interests of writers, sued Google for scanning more than twenty million books and showing short snippets of text when people searched for them. In 2013, a judge in that case ruled that Google’s conduct constituted fair use because it was transformative.

    Creating a generative model is fundamentally different from copying artwork, and it also provides a significant benefit to the public. The AI models are not providing users with copies of the copyrighted work. They’re, literally, transformative.

    This isn’t a simple matter of it being automatically wrong and illegal if copyrighted work was used to create the models. Copyright law, and law in general, is more complex than a social media meme like ‘AI is stealing’.

  • I’m not quite sure I’m following.

    Are you saying that AI trained on the output of humans is unethical, unless those humans are programmers?

    Or, as a professional programmer, you understand the limitations of AI in your field so you don’t feel threatened by it while simultaneously assuming, on behalf of another profession, that AI in “artistic” fields is somehow far more capable and an actual threat?

    Terrible programmers don’t become professional programmers because they subscribe to Copilot. It provides a crutch to absolute beginners, allowing even the least skilled individual to create some low quality output. For professionals, AI allows for some aspects of existing tools to perform slightly better but cannot replace the knowledge, experience and practice of a human when it comes to applying those skills in novel and interesting ways.

    Terrible artists don’t become professional artists because they subscribe to Midjourney. It provides a crutch to absolute beginners, allowing even the least skilled individual to create some low quality output. For professionals, AI allows for some aspects of existing tools to perform slightly better but cannot replace the knowledge, experience and practice of a human when it comes to applying those skills in novel and interesting ways.


  • Also, screw the independent developer who doesn’t have artists to lay off nor the budget to hire them.

    If they want to make a game then they should spend decades learning to program and decades learning to create art and decades learning to create music.

    If they use AI to make code or assets then it completely invalidates their work and the fun that I’m having with their game is just fake fun.

    The only Real Games are those made by giant corporations with the capital to hire artists, programmers and musicians that can lovingly hand craft the loot boxes for the next major children’s casino.

    e: Honestly, it’s embarrassing that I have to add a /s for people to understand

  • “I am pretty sure that if asked, the serverside protections can be circumvented”

    No, they literally cannot. The entire protocol is open sourced and has been audited many times over.

    One of the fundamental things you assume when designing a cryptosystem is that the communication link between two parties is monitored. The server mostly exists as a tool to frustrate efforts by attackers that have network dominance (i.e. secret police in oppressive regimes) by not allowing signals intelligence to extract a social graph. All this hypothetical attacker can see is that everyone talks to a server so they can’t know which two people are communicating.

    The previous iteration, TextSecure, used SMS. Your cellular provider could easily know WHO you were talking to and WHEN each message was sent. So SMS was replaced with a server and the protocol was amended so that even the server has no way of gaining access to that information.

    The sealed sender feature is something that the client does. It was best-effort because, at the time, Signal still supported older clients and needed backwards compatibility. This is no longer the case.
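
    The idea can be shown in a toy sketch. This is NOT the real Signal protocol: the XOR “cipher” is an insecure stand-in for real end-to-end encryption, and every name and field here is invented. It only illustrates the structural point above: the routing envelope names the recipient, while the sender's identity rides inside the encrypted payload.

```python
# Toy illustration of the sealed-sender idea, NOT the real Signal
# protocol. XOR stands in for real E2E encryption and is not secure;
# names and envelope structure are invented for this sketch.
import json

def e2e_encrypt(key: bytes, data: bytes) -> bytes:
    """Stand-in for real E2E encryption (XOR is its own inverse)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def seal(sender: str, recipient: str, body: str, key: bytes) -> dict:
    """Build an envelope the server can route without learning the sender."""
    inner = json.dumps({"from": sender, "body": body}).encode()
    return {"to": recipient, "payload": e2e_encrypt(key, inner)}

def open_sealed(envelope: dict, key: bytes) -> dict:
    """The recipient decrypts the payload to learn who the message is from."""
    return json.loads(e2e_encrypt(key, envelope["payload"]))
```

    The server (and anyone watching the wire) sees only `to` and an opaque `payload`, so an observer with network dominance learns that people talk to the server, not who talks to whom.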


  • This is definitely a topic where a vast majority of people have been “informed” of their opinions by social media memes instead of through a reasoned examination of the situation.

    Many of them are probably too young to have lived through earlier major technology breakthroughs.

    This same “debate” always happens. When digital cameras were being developed, their users were seen as posers encroaching on the turf of “Real Photographers”.

    You’d hear “Now just anybody can take pictures and call themselves a photographer?”

    Or “It takes no skill to take a digital photograph, you can just manipulate the image in Photoshop to create a fake image that Real Photographers have to work years developing the skills to capture”

    Computers were things that some people, reluctantly, had to use for business but could never be useful to the average person. Smartphones were ridiculous toys for out of touch tech nerds. Social Media was an oxymoron because social people don’t use the Internet. GPS is just a toy for hikers and people that are too dumb to own paper maps. Etc, etc, etc

    It’s the same neo-luddite gatekeeping that’s happening towards AI. Any technology that puts capabilities in the hands of regular people is viewed by some people as fundamentally stealing from professionals.

    And, since the predictable response is to make some arcane copyright claim and declare training “stealing”: Not all AI is trained on copyrighted materials.