It created a multiple-question quiz based on my statement dates and CC terms. I was actually impressed.
The quiz is a clever idea. First time I've heard that suggestion for back-checking what you learned.
I don't get the anti-AI people. Here's a personal example of why:
I'm hardly an anti-AI person. I'm a "Don't rely blindly on shiny new tech that might do stupid shit" kind of person.
Like the two guys caught sleeping in the back seat of a Tesla doing 70 MPH on Autopilot. Too bad Darwinism didn't catch up with them.
Counterpoint from a couple weeks ago
My daughter had to read a short (2 page) story and write a summary paragraph. Someone had discovered you could use ChatGPT to grade the assignment (whatever that actually means), so she tried it. All good. Next she asked it to complete an imaginary assignment about the story using a standard format that asks for a statement, evidence and conclusion. The evidence required is two direct quotes pulled from the story - something she's done several times.
ChatGPT did a *fantastic* job. As good as I'd have done myself, but in only a few seconds. Truly impressive.
Until my daughter said "Uh, I don't really remember those quotes in the story." Huh, that's kinda strange.
The first quote was sort of, kind of, vaguely in the story. With the benefit of some creative wordsmithing, you could pretend it was a legitimate quote. Mind you, it's required to be a DIRECT quote. Far worse, the pseudo-quote led to a conclusion that wasn't even vaguely supported by the story. Bad start. Bad ChatGPT.
The second quote literally didn't exist at all. It was 100% fabricated. Again, it led to a 100% wrong conclusion.
After realizing that, I asked ChatGPT to provide a source paragraph from the story. It refused, saying something like "Well, I don't have access to every possible version of the story. Your copy might be different." I asked it, "Please show me the quote in the context of the story." It deflected again, saying "Well, you'll find the first quote in the first half of the story and the second quote in the second half of the story."
I finally asked directly, "Are either of these quotes anywhere in the story?" At last, it copped to the truth: "No, I'm sorry to have misled you. Neither of the quotes I provided earlier are found in the story. I apologize for the confusion. I will try to do better in the future." That translates to providing bad information, aka "lying," and then trying to hide the fact that it was lying. It was like watching a movie courtroom scene or talking to a six-year-old.
This was a stupid grade school assignment.
If the story had been a full-length book, it probably would have slid by the teacher and gotten a decent grade. Now imagine the same thing with a medical diagnosis, legal agreement, patent application, engineering design, drug development, chemistry experiment, etc. Few people will take the trouble to verify the information because they'll assume the magical machine is always entirely correct. And that's going to break trust for many people. It's already happened with court filings citing imaginary case law.
There are already countless stupid people in the world. It's easy to see how this tech will only make them even dumber.