I’ve been doing two dangerous things for well over a year: using AI software and then reading about the people, processes, and principles behind it.
This is dangerous because I like the results. When I tell a bot to generate a graphic based on my parameters, it generally works. When I ask an LLM to explain something, it usually does a decent job.
It’s the story behind the results that alarms me.
Have you listened to a long-form interview with any of the key players in AI development and research? You should.
Have you considered the liberties they are taking with the creative and intellectual property of people who publish online?
AI software is “trained” on art, photography, news reporting, commentary, academic research, and other technical or creative materials. LLMs know nothing. Generative models have no talent. They only repackage the work that intelligent, productive people (created in the image of God) have already published.
They also repackage the items that liars and frauds have published, but that’s a discussion for another day. I want to concentrate on the good stuff here, because that’s where the danger lies.
Plaintiffs suing the AI companies allege copyright infringement. Many argue that these models copy copyrighted files from the owners’ servers, store them on their own, and use them as the basis for their results, much of it done without permission.
While I appreciate these results as a user, I am not pragmatic about them. That is, I don’t believe that “if it works, we should use it.” There are deeper ethical questions to answer.
The software providers may count on their user base simply accepting the material benefits of their activities and moving on with life. As one of them, I am determined to ask more questions. I have already jettisoned one of my accounts after reading the founder’s philosophy and response to criticism.
In the future, I will try to find this information before signing up. I hope many other users will do the same.