“ Students are permitted to use AI assistants for all homework and programming assignments (especially as a reference for understanding any topics that seem confusing), but we strongly encourage you to complete your final submitted version of your assignment without AI. You cannot use any such assistants, or any external materials, during in-class evaluations (both the homework quizzes and the midterms and final).
The rationale behind this policy is a simple one: AI can be extremely helpful as a learning tool (and to be clear, as an actual implementation tool), but over-reliance on these systems can currently be a detriment to learning in many cases. You absolutely need to learn how to code and do other tasks using AI tools, but turning in AI-generated solutions for the relatively short assignments we give you can (at least in our current experience) ultimately lead to substantially less understanding of the material. The choice is yours on assignments, but we believe that you will ultimately perform much better on the in-class quizzes and exams if you do work through your final submitted homework solutions yourself.”
It feels downstream of CMU's "reasonable person principle". They know that people are going to use AI on their homework, but they trust that students want to learn and improve their skills -- and this is good advice for doing so.
I'm somewhat biased because I was involved in a previous, related course. The important takeaways aren't really about gritty debugging of (possibly) large homework assignments, but the high-level overview you get in the process. AI assistance means you could cover more content and build larger, more realistic systems.
An issue in the first iteration of Deep Learning Systems was that every homework built on the previous one, and errors could accumulate in subtle ways that we didn't anticipate. I spent a lot of office-hours time bisecting code to find these errors. It would have been just as educational for students to diagnose those errors with an LLM. Then they could spend more time implementing cool stuff in CUDA instead of hunting down a subtle bug in their 2D conv backward pass under time pressure... But I think the breadth and depth of the course was phenomenal, and if courses can go further with AI assistance, that's great.
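For what it's worth, the kind of subtle backward-pass bug described above is usually catchable without an LLM or an office-hours bisection session. Below is a minimal, hypothetical sketch (not the course's actual starter code) of a finite-difference gradient check for a single-channel, valid-padding 2D conv: compare the analytical input gradient against a numerical estimate.

```python
import numpy as np

def conv2d(x, w):
    """Valid-mode 2D cross-correlation of single-channel input x with kernel w."""
    H, W = x.shape
    kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kH, j:j + kW] * w)
    return out

def conv2d_backward_x(grad_out, x, w):
    """Analytical gradient w.r.t. x: scatter each output gradient back through w."""
    grad_x = np.zeros_like(x)
    kH, kW = w.shape
    for i in range(grad_out.shape[0]):
        for j in range(grad_out.shape[1]):
            grad_x[i:i + kH, j:j + kW] += grad_out[i, j] * w
    return grad_x

def numerical_grad(f, x, eps=1e-5):
    """Central finite differences of a scalar function f at x."""
    g = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        old = x[idx]
        x[idx] = old + eps; hi = f(x)
        x[idx] = old - eps; lo = f(x)
        x[idx] = old
        g[idx] = (hi - lo) / (2 * eps)
    return g

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))
w = rng.standard_normal((3, 3))
grad_out = np.ones_like(conv2d(x, w))  # d(sum)/d(out) = 1 everywhere
analytical = conv2d_backward_x(grad_out, x, w)
numerical = numerical_grad(lambda x: conv2d(x, w).sum(), x)
print(np.allclose(analytical, numerical, atol=1e-6))
```

If the two disagree, the backward pass is wrong, and the mismatch pattern often points at the exact indexing error.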
This new class looks really cool, and Zico is a great teacher.
I'm old, but cumulative assignments are nothing new (the build-an-OS class, the build-a-compiler class, etc.), and my recollection is that after you submitted an assignment, the instructor would release a correct version you could swap in for yours. So any bugs in previous modules (that the TA/grader didn't catch) couldn't hold up the current assignment.
I don't think the final evaluation is to "cement the understanding" so much as _verify_ that students have taken accountability for their own learning process.
This is what a student, who truly wants to learn rather than simply complete a course / certification, would do... Use AI tools to explain + learn, but not outsource the learning process itself to the tools.
> What’s your hypothesis of how AI can accelerate how your brain understands something?
What are your beliefs/hypotheses about how having a human teacher can help you understand something?
AI explanations are no longer terrible garbage. The LLM might not be doing original research, but it has definitely read the textbook. :/ And 1000 related works.
You shouldn't believe the LLM when it tells you how to micro-optimize your code, but you can take suggestions as a starting point and verify them.
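A hedged sketch of that "take it as a starting point and verify" workflow: if an LLM suggests replacing repeated string `+=` with `str.join` as a micro-optimization, first confirm the suggestion preserves behavior, then measure both versions yourself with the standard-library `timeit` module rather than trusting the claim. (The function names here are made up for illustration.)

```python
import timeit

def concat_plus(n):
    """Build a string with repeated += (the 'before' version)."""
    s = ""
    for _ in range(n):
        s += "x"
    return s

def concat_join(n):
    """Build the same string with str.join (the suggested version)."""
    return "".join("x" for _ in range(n))

# Step 1: the optimized version must be behavior-preserving.
assert concat_plus(1000) == concat_join(1000)

# Step 2: measure, don't assume; results vary by interpreter and input size.
for fn in (concat_plus, concat_join):
    t = timeit.timeit(lambda: fn(10_000), number=50)
    print(f"{fn.__name__}: {t:.4f}s")
```

Only after seeing the numbers on your own workload do you accept or reject the suggestion.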
This is a bad case of whataboutism (I hate this word, but it describes the answer you gave). What do you mean by accelerating understanding? Maybe they are good as suggestion engines, but it is far too early to claim that they accelerate understanding.
They're a lot more than "suggestion engines". They can reason with you, show you examples, tell you how to dig deeper and verify what they're saying, etc.
I have some success with this method: I try to write an explanation of something, then ask the LLM to find problems with the explanation. Sometimes its response leads me to shore up my understanding. Other times its answer doesn’t make sense to me and we dig into why. Whether or not the LLM is correct, it helps me clarify my own learning. It’s basically rubber duck debugging for my brain.
Way back when I was a student I had a professor who had a policy that if your homework scores differed substantially from your exam scores, the homework portion of your grade would be disregarded and your final grade would be determined solely by the midterm and the final. It was a harsh policy, and at the time I hated it, but in retrospect it was fair. Seems even more relevant today.
Apropos of nothing in particular: back in the previous millennium, UNIX (and Linux) had the `fortune` command. It would spew out a "cookie" -- a pithy one-/two-liner, usually funny, often thought-provoking, and sometimes offensive (when invoked with `fortune -o`).
I had added it to my `motd`; it would give me a chuckle every time I logged in.
One of the cookies I recall:
A nuclear war can ruin your whole day. [1]
And that's what I think of when I see this absurd new war.
As somebody who JUST got the UK ETA recently (~2 weeks ago), I can talk about my experience.
Basically, as a US Citizen, even though I will only be transiting via the shthole of an airport (LHR, obviously), I need this ETA.
The process *seemed* painless as described, but is rather painful. Essentially, they WANT you to use the mobile app. They do everything to make that happen (unless you are applying for someone else, in which case you may use your PC/laptop).
So I downloaded the iOS app; you have to take a selfie (so, obviously, a well-lit place, neutral background, etc.). The selfie itself took a few tries. Then you pay GBP 16 (~USD 21).
Then, the worst experience was matching the NFC-enabled US passport with the app, so that it reads the stored info from the passport chip. My US passport is recent (renewed within the last 6 months). Try as I might, I just couldn't get the app to "read" the NFC-stored info (on the back cover of the passport). I tried 15 times, with the passport held at various angles, touching the iPhone here and there. It worked on the 16th try (= the passport back cover has to be held EXACTLY halfway down).
"You are holding it wrong" x 10000
I almost gave up halfway through this extremely frustrating @#$@@!!!! experience. Even as I write this I am cursing the app developers.
I can only imagine how somebody else -- say a senior citizen, who may not know tech well enough, or whose fingers are not nimble enough, etc. -- could easily give up on this process after just a couple of tries. The usability is just plain shitty. Think about the consequences.
I hope the app developers are reading this.
I'm just glad I don't have to do this for 2 more years.
But then, I'm not on social media. [1] I gave up on those things way before social media fatigue became en vogue.
I was just frustrated with the whole app experience. Some people had a smoother experience, which is fine. Mine was terrible, and I'm sure my parents would not have been able to use this at all.
[1] I am still on LI... which is something I want to give up next.
It's too funny not to laugh at the absurdity of "safety and alignment" researchers blindly trusting agents like Claw without fully understanding them. Or maybe that was the research.
Every word of hers is dripping with wisdom, and I feel not enough people are paying attention to her. She talks of "artificial intimacy" and "pretend empathy" and how people are addicted to ChatGPT and its ilk primarily because of the pretension / sycophancy, and choosing that over the real-life friction, disagreements and negotiation required and necessary for healthy relationships IRL. And how social media is a gateway drug to chatbots.
Thanks for writing this up. I didn't realize the privacy rot went so deep.
Aside from their AI-slopped newsfeed (F@#$!!!), which should have died long ago, this is atrocious. "Enshittification" was coined just for this.
Sorry, I got sidetracked.
But tbh, it'll more likely be repairing those burger flippin' robots