It sounds to me like a general search of everyone, merely because they MIGHT BE getting around a copyright, should be barred by the Fourth Amendment: "no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."
That judge has ordered a general search of everyone who used those apps just to see whether they did something wrong.
The larger issue is how we got to this place: OpenAI and other companies taking data that doesn't belong to them and offering it to others it doesn't belong to either. Yes, individuals have the Fourth Amendment (in the States, at least) to ensure any search of their possessions is warranted. But companies that create things also have the right, under copyright law, to protect their intellectual property. It sounds like the court in this case took the wrong approach. Instead of telling OpenAI to preserve all ChatGPT information indefinitely, it should have forced OpenAI to stop scraping the internet. They can use whatever training data someone intentionally gives them, but they do not have a right to all information on the net, especially information intentionally put behind a paywall to keep out people who shouldn't see it.
Thanks for the warning about this, but there needs to be more pushback on the tech companies to do the right thing in the first place, and to hold their feet to the fire if they don't. Could you imagine if the court told them they had to wipe out their model entirely, because it was built on illegally obtained information, and start over from scratch? The company would be out of business in a week.
The hesitation I have is that if we enter a world where governments have these tools but everyday people don't, we enter the worst possible timeline.
A government might demand a company delete its model, but it will never delete its own.
Thanks. I'd always dismissed Leo as bloat. Not anymore!
And what if I teach AI how to do mind control, to control people remotely? One day AI will launch a program to do great things. It could secretly assist people remotely, without their knowledge.
Does anyone know whether they only store info when you're logged in, or just in general? I felt this coming a few months ago. This article was a great read! 💫💫💫
I would presume they're storing as much data as they can.
That makes sense. I’d love for you to do a video on your cloud preferences!
I wanted to reply to Jeremy Krall, who commented on my post, but it seems both comments are deleted... -_- (If we can't speak as privacy advocates, Sauron has already won.)
To Jeremy: even if the data aren't published by OpenAI, they remain in the hands of Sam Altman, the traitor who even supports Worldcoin.
"Won't touch this even with a Jedi lightsaber" is my stance.
Another example of the fact that once a secret is told, it's no longer a secret. This case is public, but surveillance mostly goes untold and never gets judged in court. See Snowden's latest article, and even that only covers the visible part of the iceberg.
Privacy is about defining a zone where things are kept secret, that is, never told.
The case is public; the data isn't. It will still be in OpenAI's hands, just as it was when people were running their queries. The question now is how the court will force examination of the data to check for copyrighted material. There can be non-public ways to investigate the information, with only the suspect copyrighted data shown in the public case.